Reputation: 409
I'm writing a C++ program that is meant to solve PDEs and algebraic equations on networks. The Eigen library shoulders the biggest part of the work by solving many sparse linear systems with LU decomposition.
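For reference, the solves in question look roughly like this minimal Eigen SparseLU sketch (a toy 1D Laplacian, not my actual network code):
#include <Eigen/Dense>
#include <Eigen/Sparse>
#include <iostream>
#include <vector>

int main() {
    // Assemble a toy 1D Laplacian A and right-hand side b from triplets.
    const int n = 5;
    std::vector<Eigen::Triplet<double>> triplets;
    for (int i = 0; i < n; ++i) {
        triplets.emplace_back(i, i, 2.0);
        if (i > 0)     triplets.emplace_back(i, i - 1, -1.0);
        if (i < n - 1) triplets.emplace_back(i, i + 1, -1.0);
    }
    Eigen::SparseMatrix<double> A(n, n);
    A.setFromTriplets(triplets.begin(), triplets.end());
    Eigen::VectorXd b = Eigen::VectorXd::Ones(n);

    // Sparse LU factorization and solve; the real program does this many times.
    Eigen::SparseLU<Eigen::SparseMatrix<double>> solver;
    solver.compute(A);
    if (solver.info() != Eigen::Success) return 1;
    Eigen::VectorXd x = solver.solve(b);
    std::cout << x.transpose() << std::endl;
}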
As performance is always nice, I played around with compiler options. I'm using
g++ -O3 -DNDEBUG -flto -fno-fat-lto-objects -std=c++17
as performance-related compiler options. I then added the -march=native option and found that
execution time increased on average by approximately 6% (measured with GNU time over about 10 runs per configuration; there was almost no variance for either setting).
What are possible (or preferably likely) reasons for such an observation?
I guess the output of lscpu might be useful here, so this is it:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 2
Core(s) per socket: 2
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 78
Model name: Intel(R) Core(TM) i7-6500U CPU @ 2.50GHz
Stepping: 3
CPU MHz: 800.026
CPU max MHz: 3100.0000
CPU min MHz: 400.0000
BogoMIPS: 5199.98
Virtualization: VT-x
L1d cache: 64 KiB
L1i cache: 64 KiB
L2 cache: 512 KiB
L3 cache: 4 MiB
Edit: As requested, here are the CPU flags:
vendor_id : GenuineIntel
cpu family : 6
model name : Intel(R) Core(TM) i7-6500U CPU @ 2.50GHz
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d
vmx flags : vnmi preemption_timer invvpid ept_x_only ept_ad ept_1gb flexpriority tsc_offset vtpr mtf vapic ept vpid unrestricted_guest ple pml
bogomips : 5199.98
clflush size : 64
cache_alignment : 64
address sizes : 39 bits physical, 48 bits virtual
Upvotes: 2
Views: 1462
Reputation: 50808
There are plenty of reasons why code can be slower with -march=native, although this is quite exceptional.
That being said, in your specific context, one possible scenario is the use of slower SIMD instructions, or more precisely, different SIMD instructions that end up making the program slower. Indeed, with -O3 GCC vectorizes loops using the SSE instruction set on x86 processors such as yours (for backward compatibility). With -march=native, GCC will likely vectorize loops using the more advanced and more recent AVX instruction set (supported by your processor but not by many older x86 processors). While the use of AVX instructions should speed your program up, that is not always true: there are a few pathological cases (less efficient code generated by compiler heuristics, loops too small to benefit from AVX, SSE instructions whose AVX counterparts are missing or slower, alignment issues, SSE/AVX transition penalties, energy/frequency impact, etc.).
My guess is that your program is memory-bound, and thus AVX instructions do not make it faster.
You can confirm this hypothesis by enabling AVX manually with -mavx -mavx2 rather than -march=native and checking whether the performance issue is still there. I also advise you to carefully benchmark your application with a tool like perf.
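If you want a quick in-process measurement in addition to GNU time or perf, a minimal std::chrono harness looks like this (the summation loop below is only a stand-in workload; replace it with your actual solver call):
#include <chrono>
#include <iostream>
#include <vector>

int main() {
    using clock = std::chrono::steady_clock;
    constexpr int repetitions = 10;

    // Stand-in workload; substitute the solver call you want to time.
    std::vector<double> data(1 << 20, 1.0);

    auto start = clock::now();
    double checksum = 0.0;
    for (int r = 0; r < repetitions; ++r)
        for (double v : data)
            checksum += v;
    auto stop = clock::now();

    const double ms = std::chrono::duration<double, std::milli>(stop - start).count();
    std::cout << "checksum " << checksum << ", "
              << ms / repetitions << " ms per repetition\n";
}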
Upvotes: 6