Reputation: 5800
How should I interpret the output given by GCC's -fmem-report flag?
What information can I retrieve from the table and subsequent statistics?
I've tried retrieving the peak memory consumption during compilation and thought, intuitively, that the last line of the table (Total) gives me that value. But those numbers are far from the ones I've seen in top.
While compiling my project, the highest peak from gcc processes was around 1.7 GB, but the biggest value I can find in the build log is around 750 MB.
What other GCC flags can help me monitor those ~1.7 GB? Or do I need to wrap make inside a script that monitors the gcc and ld processes?
Given the following output, what values are the most important and most informative?
Memory still allocated at the end of the compilation process
Size Allocated Used Overhead
8 40k 38k 1200
16 104k 100k 2288
32 296k 295k 5328
64 20k 16k 320
128 4096 384 56
256 48k 45k 672
512 188k 187k 2632
1024 888k 887k 12k
2048 156k 154k 2184
4096 188k 188k 2632
8192 56k 48k 392
16384 16k 16k 56
32768 32k 0 56
65536 64k 0 56
131072 128k 128k 56
24 236k 232k 4248
40 36k 33k 576
48 12k 8496 192
56 4096 2016 64
72 12k 8136 168
80 4096 480 56
88 1448k 1429k 19k
96 12k 10k 168
112 4096 1568 56
120 8192 5040 112
184 16k 14k 224
160 4096 960 56
168 36k 35k 504
152 56k 51k 784
104 4096 416 56
352 516k 486k 7224
136 4096 408 56
Total 4640k 4424k 63k
String pool
entries 16631
identifiers 16631 (100.00%)
slots 32768
deleted 0
bytes 252k (17592186044415M overhead)
table size 256k
coll/search 0.4904
ins/search 0.0345
avg. entry 15.55 bytes (+/- 9.75)
longest entry 66
??? tree nodes created
(No per-node statistics)
Type hash: size 1021, 27 elements, 0.140351 collisions
DECL_DEBUG_EXPR hash: size 1021, 0 elements, 0.000000 collisions
DECL_VALUE_EXPR hash: size 1021, 0 elements, 0.000000 collisions
no search statistics
decl_specializations: size 61, 0 elements, 0.000000 collisions
type_specializations: size 61, 0 elements, 0.000000 collisions
No gimple statistics
Alias oracle query stats:
refs_may_alias_p: 0 disambiguations, 0 queries
ref_maybe_used_by_call_p: 0 disambiguations, 0 queries
call_may_clobber_ref_p: 0 disambiguations, 0 queries
PTA query stats:
pt_solution_includes: 0 disambiguations, 0 queries
pt_solutions_intersect: 0 disambiguations, 0 queries
Upvotes: 1
Views: 1169
Reputation: 9061
fmem-report is defined in the common.opt file in the GCC source code. You can use ctags and cscope to find the file that actually sets the fmem-report flag, and then look at the code that checks this flag. If you don't find it, let me know and I will look it up.
Upvotes: 0
Reputation: 2036
The output shows what memory was used during the compilation. GCC/G++ allocates memory in variously sized chunks, based on need.
Take the first entry, for example:
Size Allocated Used Overhead
8 40k 38k 1200
This shows that 40K of memory was allocated in 8-byte chunks; of that 40K, 38K was actually used by the compiler, and 1200 bytes were 'accounting overhead'. malloc(3) doesn't always return exactly what you ask for: there are usually a few bytes recording how big the chunk is, various accounting data (who owns this chunk), and, if things need to be aligned, there may be unused padding bytes too.
Basically, this information is just accounting notes.
The hash table stuff near the end shows how well the hash routines worked in letting GCC/G++ find things in its tables: how many collisions occurred (same hash value) and needed to be handled, and so forth.
I do like the 'bytes' entry in the 'String Pool':
bytes 252k (17592186044415M overhead)
HOW much memory does it take to store strings? And that's MEGAbytes. {Grin} Almost certainly a bug: 17592186044415 is 2^44 − 1, which smells like an unsigned value wrapping around below zero.
Overall, GCC/G++ used 1.7 GB during your compilation because that much was available. Consider also: did you use multiple/parallel compilation (the -j switch)? That adds up usage, since the parallel compiler processes don't talk to each other. Compiling the same project with only 512 MB of RAM available would just take longer, since GCC/G++ would have to stop and clean up more often to keep enough RAM available.
If you want to see how it reacts under smaller memory constraints, have a look at the ulimit command, especially the -d, -v, -m and maybe -l limits. Remember to use the -S (soft limits) switch too, or you'll have to close the terminal/console/konsole to regain unlimited limits. (Sounds like a marketing plan there.)
Upvotes: 4