Reputation: 43
You can find more details about the issue in this GitHub issue: https://github.com/Mellanox/libvma/issues/1080
I'm using Mellanox's libvma to optimize my application's networking performance. However, I'm facing an issue with the VMA_INTERNAL_THREAD_AFFINITY setting. Despite specifying the core for the VMA internal thread, it continues to run on the same core as the application thread, leading to context switches and latency issues.
Has anyone faced a similar issue, or does anyone know of a fix? Thanks in advance for the help. I can provide more details if needed.
I tried running with:
LD_PRELOAD=libvma.so VMA_SPEC=latency VMA_INTERNAL_THREAD_AFFINITY=2 ./app
VMA_INTERNAL_THREAD_AFFINITY=2 has no effect on the core affinity of the VMA internal thread. It keeps running on the same core as the application thread, causing context switches and stalls that hurt latency.
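For reference, this is how I checked which core each thread actually runs on (a rough check, assuming the binary is named app; the psr column is the processor a thread last ran on, and the thread ID is a placeholder to fill in):

# list every thread of the process with the CPU it last ran on
ps -T -p "$(pidof app)" -o tid,psr,comm

# show the affinity mask of one thread (replace <tid> with a TID from the output above)
taskset -cp <tid>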
Expected behavior: The recommended configuration is to run the VMA internal thread on a different core than the application, but on the same NUMA node. To achieve this, VMA_INTERNAL_THREAD_AFFINITY should pin the VMA internal thread to the core we specify.
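To pick a core on the right NUMA node, I used the following sanity check (a sketch, assuming the NIC interface is named eth0; adjust the interface name for your setup):

# NUMA node the NIC is attached to (-1 means no NUMA locality info is exposed)
cat /sys/class/net/eth0/device/numa_node

# CPUs belonging to each NUMA node, to choose a value for VMA_INTERNAL_THREAD_AFFINITY
lscpu | grep -i "numa node"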
Upvotes: 0
Views: 25