jyvet

Reputation: 2201

Binding MPI processes on the second CPU socket with MVAPICH2.2

I'm using NUMA compute nodes where the network adapter (a Mellanox InfiniBand HCA) is attached to the second CPU socket (and NUMA node). Is there an environment variable that simply binds all MPI processes to the second CPU socket with MVAPICH2.2?
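For reference, one way to confirm that the HCA really sits on the second socket is to query sysfs or hwloc. This is only a sketch: the device name mlx5_0 is an assumption, so check /sys/class/infiniband for the actual name on your nodes.

# Prints the NUMA node the HCA's PCI device is attached to (1 = second socket here)
cat /sys/class/infiniband/mlx5_0/device/numa_node

# Or inspect the whole topology, including PCI devices, with hwloc
lstopo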

The MV2_CPU_BINDING_LEVEL=socket MV2_CPU_BINDING_POLICY=bunch combination does not work, since it groups processes on the first CPU socket instead.

I usually end up using something like -genv MV2_CPU_MAPPING 10:11:12:13:14:15:16:17:18:19:30:31:32:33:34:35:36:37:38:39 (all SMT threads of the second 10-core socket), but this is ugly and dependent on the number of cores.
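A hedged workaround for the hard-coded list is to build the mapping string from lscpu at launch time, so it adapts to the core count. The socket index 1, the rank count, and the executable name ./app below are assumptions; the colon-separated string matches the per-rank format already used in the command above.

# Collect all logical CPUs that lscpu reports on socket 1, joined with ':'
MAPPING=$(lscpu -p=CPU,SOCKET | awk -F, '!/^#/ && $2 == 1 {print $1}' | paste -sd: -)

# Launch with one rank per listed CPU (adjust -np to the number of entries)
mpiexec -np 20 -genv MV2_CPU_MAPPING "$MAPPING" ./app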

Upvotes: 1

Views: 289

Answers (1)

Rakurai

Reputation: 1016

This isn't an environment variable, but if you're able to modify /etc/default/grub on your systems, then you can isolate the cores on package 0 from the scheduler. Example for your 10-core (hyper-threaded) CPUs:

GRUB_CMDLINE_LINUX_DEFAULT="$GRUB_CMDLINE_LINUX_DEFAULT isolcpus=0-19"
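After editing the file you still need to regenerate the GRUB configuration and reboot; the exact command is distro-specific (the Debian/Ubuntu and RHEL variants are shown as examples), and the sysfs check afterwards is a quick way to verify that the isolation took effect.

sudo update-grub                                  # Debian/Ubuntu
# sudo grub2-mkconfig -o /boot/grub2/grub.cfg     # RHEL/CentOS
sudo reboot

# After reboot, the isolated CPU range should be reported here:
cat /sys/devices/system/cpu/isolated              # expected output: 0-19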

Upvotes: 0
