Kennerd

Reputation: 73

Interaction of QEMU and KVM from a perspective of Instruction Replacement

Currently I am doing some research on dynamic instruction replacement for the x86 Instruction Set Architecture (ISA). Until now, I have only done that for RISC-V based processor architectures. Since there are no public-domain x86 implementations available that could be synthesized to an FPGA, I have to stick with virtualization for now.

My experimental setup is as follows: a guest application (compiled with gcc, no external libraries) is run in qemu-user-mode. (I found this post very helpful: QEMU - Code Flow [ Instruction cache and TCG]) The entire system runs on a Fedora 25 Linux operating system, with QEMU built from the latest git sources.

According to my own code analysis, some instructions are internally dispatched to

static AddressParts gen_lea_modrm_0(CPUX86State *env, DisasContext *s, int modrm)

From there, I can't tell what happens to this class of instructions. The calling method is:

gen_nop_modrm(env, s, modrm); (translate.c:8108)

My primary objective is to inject additional instructions after an instruction is recognized, in order to delay consecutive executions of that same instruction.

I read about how KVM-based QEMU execution works. Obviously, some kind of hypervisor-based introspection is possible (even for USB transactions: https://www.blackhat.com/docs/eu-14/materials/eu-14-Schumilo-Dont-Trust-Your-USB-How-To-Find-Bugs-In-USB-Device-Drivers-wp.pdf). The architecture, although very complex, is so far straightforward.

I am interested in:

  1. How are instructions that are caught by these gen_lea_modrm methods handled?
  2. Can instructions that are natively passed through via KVM be observed?
  3. The translation buffer (tb) is chunked (as far as I understand); can I extend the buffer to inject instructions?
  4. Are there any built-in facilities for instruction profiling?

I searched SO thoroughly with the search terms I had. Any hints, tips, or suggestions would be helpful and appreciated.

Best Regards.

Upvotes: 3

Views: 694

Answers (1)

Peter Maydell

Reputation: 11523

TCG and KVM are entirely separate modes of operation for QEMU. If you're using KVM (via -enable-kvm on the command line) then all guest instructions are either natively executed by the host CPU or (for a few instructions mostly doing I/O to emulated devices) emulated inside the host kernel; QEMU's TCG instruction emulation (which is the code you're referring to above) is never used at all. Conversely, if you use QEMU in TCG mode (the default) then we are a pure emulator in userspace and make no use of the host CPU's hypervisor functionality. qemu-user-mode is always TCG emulation, and never KVM.
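To make the distinction concrete, here is a hedged sketch of the three invocations described above. The image and binary names (`disk.img`, `./guest_binary`) are placeholders, not files from the question:

```shell
# TCG mode (the default): QEMU is a pure userspace emulator;
# gen_lea_modrm_0() and friends in translate.c are in the code path.
qemu-system-x86_64 -m 1G -drive file=disk.img,format=raw

# KVM mode: guest instructions run natively on the host CPU (or are
# emulated in the host kernel); QEMU's TCG translator is never used.
qemu-system-x86_64 -enable-kvm -m 1G -drive file=disk.img,format=raw

# User-mode emulation, as in the question's setup, is always TCG:
qemu-x86_64 ./guest_binary
```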

To answer your question about the TCG code: gen_lea_modrm_0() is not handling a particular class of instructions entirely. It is just dealing with the decode part of instructions of that form -- it looks at the modrm byte of the instruction, loads some further bytes from the instruction stream, and returns a structure indicating the details of the addressing mode which the instruction is using. It also makes sure that the PC has been advanced over the whole instruction, including that immediate data. The code which calls gen_lea_modrm_0() then uses the addressing mode information as part of emitting TCG IR operations to do the work.

gen_nop_modrm() is a special case, because it is called for instructions which are NOPs of one form or another. So there is no "real work" to do, and the only thing that the call to gen_lea_modrm_0() achieves is to make sure we've advanced the PC past any immediate data that the insn encodes. We emit no TCG IR operations, and then when the generated code is run, nothing happens, which is exactly what you want for a NOP...

Upvotes: 4

Related Questions