Reputation: 418
The sampling rate can be set for the `perf record` command using `-F`. I want to know what the sampling rate is for the intel_pt event, i.e., for the command
`perf record -e intel_pt// -- ./a.out`
With `-F` in user mode the maximum sampling rate allowed is 8000. While it is possible that `perf record` stores the trace a few thousand times per second, the trace events recorded using `perf record -e intel_pt//` have a much higher frequency.
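For reference, a quick way to check this limit (a rough sketch, assuming a typical Linux setup; the kernel may also lower the value automatically):

```
# upper bound on the sampling frequency enforced by the kernel
cat /proc/sys/kernel/perf_event_max_sample_rate

# an ordinary sampling record at that rate, for comparison
perf record -F 8000 -e cycles -- ./a.out
```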
In other words, with the intel_pt event a trace of the application's execution is collected. Does `perf record` work differently when recording with the intel_pt event, i.e., in some non-sampling mode?
Upvotes: 1
Views: 696
Reputation: 94175
Yes, the intel_pt mode of `perf record` is different; it is not the same as sampling (statistical) profiling with software (cpu-clock) or hardware (cycles) events. Sampling gives around 4000 samples of the current EIP per second and provides a basic, inexact view of code execution. intel_pt is a hardware-based tracing technique which generates a lot of data about every control flow instruction (in the default perf intel_pt mode), allowing the full control flow to be reconstructed, but it has a bigger overhead. So the "frequency" of Intel PT is the same as the number of calls, branches and returns executed per second by the program code (hundreds of millions).
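To make the contrast concrete, the two recording modes look like this (a minimal sketch, reusing the ./a.out workload from the question):

```
# statistical sampling: a few thousand EIP samples per second of execution
perf record -F 4000 -e cycles -- ./a.out

# hardware tracing: packets describing (almost) every branch, call and return
perf record -e intel_pt// -- ./a.out
```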
With sampling on hardware events, `perf record` will ask the hardware PMU to count some event like CPU cycles and to generate an overflow interrupt after, for example, 2 million of such events. On such an interrupt the perf_events subsystem in the kernel records the current OS timestamp, the pid/tid of the current thread and the EIP instruction pointer into a ring buffer, and resets the PMU counter to a new value. The perf subsystem limits the maximum frequency of interrupts by autotuning the value, and the `-F` option can be used to change the desired interrupt frequency. When the ring buffer (around several megabytes in size) is filled, the `perf` user-space tool dumps its contents into the perf.data file; you can view the raw data with `perf script` or `perf script -D`, or just make histograms with `perf report` (sorting EIPs by how often there was an interrupt at that EIP instruction address, which is proportional to the time taken by that code). This mode produces around 4 thousand samples per second of thread execution (`perf report --header | grep sample_freq`), at 48 bytes per sample, or about 192 kilobytes per second. The overhead is low enough, but the sampling is not exact.
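To check the frequency actually used and redo the data-rate estimate above (a sketch; header field names can differ slightly between perf versions):

```
# kernel-wide ceiling that perf autotunes against
cat /proc/sys/kernel/perf_event_max_sample_rate

# sampling frequency stored in the perf.data header
perf report --header | grep sample_freq

# back-of-envelope data rate: 4000 samples/s * 48 bytes = ~192 KB per second of thread execution
echo $((4000 * 48)) bytes per second
```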
The perf wiki has a separate page for Intel Processor Trace (intel_pt): https://perf.wiki.kernel.org/index.php/Perf_tools_support_for_Intel%C2%AE_Processor_Trace
Control flow tracing is different from other kinds of performance analysis and debugging. It provides fine-grained information on branches taken in a program, but that means there can be a vast amount of trace data. Such an enormous amount of trace data creates a number of challenges, but it raises the central question: how to reduce the amount of trace data that needs to be captured. That inverts the way performance analysis is normally done. Instead of taking a test case and creating a trace of it, you need first to create a test case that is suitable for tracing.
So, intel_pt is a tracing (logging) module integrated into the CPU hardware, and when armed it will generate "hundreds of megabytes of trace data per CPU per second", depending on the settings used. With some settings it may even generate trace data (the packet log) faster than it can be written to disk or even to RAM ("overflow packets"). According to the https://lwn.net/Articles/648154/ article, perf_events (kernel-mode) in intel_pt mode just saves the full packet log into a separate (bigger?) ring buffer, and the perf tool (user-space) periodically saves the data from that ring buffer into the file for offline filtering, parsing and decoding. (The period of saving the aux or ring mmap into the file is not the same as the overflow interrupt frequency option `-F`.) The PT decoder is then used to reconstruct the PT packet log into perf-compatible samples. The log data volume is huge, and the overhead is 1% - 5% - 10% or more depending on the branch frequency of the executed code.
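If the trace overflows, the AUX area buffer that receives the PT packet log can be enlarged at record time; a hedged sketch using the -m option of perf record (the value after the comma sets the AUX buffer size, 128M here is an arbitrary choice):

```
# default data buffer, but a 128 MiB AUX area buffer for the PT packet log
perf record -e intel_pt// -m,128M -- ./a.out

# the header of the resulting file shows how much AUX data was captured
perf report --header-only -i perf.data
```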
The documentation of intel_pt is the manpage `man perf-intel-pt` and a long text stored inside the Linux kernel source code at
https://github.com/torvalds/linux/blob/master/tools/perf/Documentation/perf-intel-pt.txt
Intel PT is first supported in Intel Core M and 5th generation Intel Core processors that are based on the Intel micro-architecture code name Broadwell. Trace data is collected by 'perf record' and stored within the perf.data file. ... Trace data must be 'decoded' which involves walking the object code and matching the trace data packets. ... Decoding is done on-the-fly. The decoder outputs samples in the same format as samples output by perf hardware events, for example as though the "instructions" or "branches" events had been recorded. Presently 3 tools support this: 'perf script', 'perf report' and 'perf inject'. ... The main distinguishing feature of Intel PT is that the decoder can determine the exact flow of software execution. Intel PT can be used to understand why and how did software get to a certain point, or behave a certain way. ... A limitation of Intel PT is that it produces huge amounts of trace data (hundreds of megabytes per second per core) which takes a long time to decode
By default `perf record -e intel_pt//` is the same as `-e intel_pt/tsc=1,noretcomp=0/`. The config terms section of the manpage `man perf-intel-pt` describes the default settings (the explicit form is sketched right after this list):
tsc
Always supported. Produces TSC timestamp packets to provide timing information. In some cases it is possible to decode without timing information, for example a per-thread context that does not overlap executable memory maps.
noretcomp
Always supported. Disables "return compression" so a TIP packet is produced when a function returns. Causes more packets to be produced but might make decoding more reliable.
pt
Specifies pass-through which enables the branch config term.
branch
Enable branch tracing. Branch tracing is enabled by default. To represent software control flow, "branches" samples are produced. By default a branch sample is synthesized for every single branch.
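Spelling those defaults out explicitly gives the same recording as the plain event (a small sketch based on the config terms above):

```
# equivalent to perf record -e intel_pt// -- ./a.out
perf record -e intel_pt/tsc=1,noretcomp=0/ -- ./a.out
```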
As it says, intel_pt in its default mode is used to produce a control flow log, by asking the hardware to generate log packets for every control flow instruction like call, branch and return, and to add timestamps to synchronize the PT log with some service perf samples (like exec or mmap, to find the actual code being loaded into memory). It tries not to generate too much data, for example [a single bit is used per conditional branch (tnt)](https://conference.hitb.org/hitbsecconf2017ams/materials/D1T1 - Richard Johnson - Harnessing Intel Processor Trace on Windows for Vulnerability Discovery.pdf#page=12) and several bytes per indirect branch, but there are hundreds of millions of branches per second for many programs.
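A rough back-of-envelope estimate of the resulting packet-log rate (the branch rates below are illustrative assumptions, not measurements):

```
# assume ~200M conditional branches/s (1 bit each) and ~20M indirect branches/s (~4 bytes each)
echo "TNT packets: $((200000000 / 8 / 1000000)) MB/s"
echo "TIP packets: $((20000000 * 4 / 1000000)) MB/s"
```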
Some useful and short slides on perf + intel_pt:
Update: While the Intel PT trace log contains the full trace (there are packets inside for every branch/call/return), `perf report` runs a conversion from the PT log into a sample set like in a classic perf.data, and there is a sampling rate for that sample set. This is configured with the `--itrace` option of `perf report` (iNNTT, where NN is the amount and TT is the type - i/t/us/ns), as described in the man page of perf-report:
--itrace
  Options for decoding instruction tracing data. The options are:

    i    synthesize instructions events
    g    synthesize a call chain (use with i or x)

  The default is all events i.e. the same as --itrace=ibxwpe,
  In addition, the period (default 100000, ...) for instructions events can be specified in units of:

    i    instructions
    t    ticks
    ms   milliseconds
    us   microseconds
    ns   nanoseconds (default)
So it seems that by default `perf report` will convert the full trace log into instruction samples at a sampling rate of 100000 instructions (1 perf sample generated per 100 thousand instructions). It can be changed to a higher rate, but the processing time will increase.
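For example, to synthesize samples more often than the default when decoding (a sketch; the smaller the period, the longer the processing takes):

```
# one "instructions" sample per 1000 instructions instead of per 100000
perf report --itrace=i1000i

# or sample by time: one sample per 10 microseconds of trace
perf script --itrace=i10us
```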
The manpage of perf-intel-pt gives more examples of --itrace option usage:
Because samples are synthesized after-the-fact, the sampling period can be selected for reporting. e.g. sample every microsecond:

  sudo perf report pt_ls --itrace=i1usge

See the sections below for more information about the --itrace option. Beware the smaller the period, the more samples that are produced, and the longer it takes to process them. Also note that the coarseness of Intel PT timing information will start to distort the statistical value of the sampling as the sampling period becomes smaller. To see every possible IPC value, "instructions" events can be used e.g. --itrace=i0ns

  --itrace=i10us

sets the period to 10us i.e. one instruction sample is synthesized for each 10 microseconds of trace. Alternatives to "us" are "ms" (milliseconds), "ns" (nanoseconds), "t" (TSC ticks) or "i" (instructions). For Intel PT, the default period is 100us. Setting it to a zero period means "as often as possible". In the case of Intel PT that is the same as a period of 1 and a unit of instructions (i.e. --itrace=i1i).
http://halobates.de/blog/p/410 has some additional examples of complex conversions:
perf script --ns --itrace=cr
Record program execution and display function call graph.
perf script by default “samples” the data (only dumps a sample every 100us). This can be configured using the --itrace option (see reference below)
perf script --itrace=i0ns --ns -F time,pid,comm,sym,symoff,insn,ip | xed -F insn: -S /proc/kallsyms -64
Show every assembly instruction executed with disassembler.
perf report --itrace=g32l64i100us --branch-history
Print hot paths every 100us as call graph histograms
perf script --itrace=i100usg | stackcollapse-perf.pl > workload.folded
flamegraph.pl workload.folded > workload.svg
google-chrome workload.svg
Generate flame graph from execution, sampled every 100us
Upvotes: 2