Well, it’s going to vary from system to system, but because you have a dual-socket system, half the memory is attached to each socket. Many multi-socket systems are set up out of the box to interleave memory between the sockets for compatibility with non-NUMA-aware workloads. In your BIOS, there should be an option to disable memory interleaving (or enable NUMA). Then when you boot into Linux and run numactl --show, you should see two NUMA nodes.
(Note for anybody reading this with newer systems: some architectures are “sub-NUMA” and can present multiple NUMA nodes per socket to the operating system.)
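For reference, here’s the sort of thing numactl --show prints on a two-node box once interleaving is off (illustrative output, not copied from a real machine; your CPU list will differ):

$ numactl --show
policy: default
preferred node: current
physcpubind: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
cpubind: 0 1
nodebind: 0 1
membind: 0 1

Two entries on the nodebind/membind lines means the OS sees both nodes.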
Now what you can do is use numactl to confine each madMAx instance to one socket and the memory attached to it. In theory this performs better because no traffic has to cross QPI, i.e. pass between the sockets. Also, when you create the two RAMdisks you can pass a mount option (tmpfs’s mpol=) that binds each one to a specific NUMA node. And you can use lstopo to figure out which NVMe is attached to which socket so that traffic stays local as well. There’s a sketch of all three steps below.
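Something like the following should do it — untested (see disclaimer below), and the mount points, sizes, and paths are placeholders you’d adapt to your setup; the -t/-2/-d flags are madMAx’s usual tmpdir/tmpdir2/finaldir arguments:

# Create one RAMdisk per node; mpol=bind pins its pages to that node
$ sudo mkdir -p /mnt/ram0 /mnt/ram1
$ sudo mount -t tmpfs -o size=110G,mpol=bind:0 tmpfs /mnt/ram0
$ sudo mount -t tmpfs -o size=110G,mpol=bind:1 tmpfs /mnt/ram1

# Dump the topology as text; each NVMe shows up as Block(Disk) "nvmeXn1"
# underneath the package/NUMA node its PCIe lanes belong to
$ lstopo-no-graphics

# One instance per socket, with CPUs and memory both pinned to the local node
$ numactl --cpunodebind=0 --membind=0 ./chia_plot -t /mnt/nvme0/ -2 /mnt/ram0/ -d /plots/ &
$ numactl --cpunodebind=1 --membind=1 ./chia_plot -t /mnt/nvme1/ -2 /mnt/ram1/ -d /plots/ &

Binding the tmpfs explicitly with mpol= means the RAMdisk stays on the right node no matter which process touches it first.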
Full disclosure: I haven’t actually tried this myself because I don’t have any dual-socket systems available for personal use.
EDIT: Here’s an example from a PowerEdge R630 with two E5-2637v3 processors and 64GB RAM:
$ numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 2 4 6 8 10 12 14
node 0 size: 32092 MB
node 0 free: 2355 MB
node 1 cpus: 1 3 5 7 9 11 13 15
node 1 size: 32229 MB
node 1 free: 1137 MB
node distances:
node   0   1
  0:  10  21
  1:  21  10
(The distances are relative costs from the ACPI SLIT table: 10 is local access, 21 means reaching the other socket’s memory costs roughly twice as much.)
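And once the two pinned instances are running, numastat (from the same package as numactl) can confirm the memory actually stayed local; it takes a PID or a process-name pattern:

# Per-node memory breakdown; nearly all pages should sit on each instance's bound node
$ numastat -p chia_plot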