How to configure IOMMU protection for my hypervisor?


I'm developing my own bare-metal hypervisor on top of Intel VT-x technology.

My goal is to make it inaccessible in any way to the OS running on top of my hypervisor, so I configured EPT tables to protect against CPU memory accesses. I believe I'm still missing protection from devices with DMA access.

How do I prevent all PCI devices from accessing my hypervisor's memory area? Code examples would be perfect.

BTW: I test my project in a QEMU environment, in case that affects the answer.


There are 2 answers

AudioBubble

EPT limits accesses from the CPU only, so you are right: you are missing protection against DMA accesses.
To operate the IOMMU, you need to search the ACPI tables for a structure with signature DMAR (Intel VT-d) or IVRS (AMD-Vi).
You will be configuring page tables that have almost the same structure as long-mode page tables.
Therefore, in addition to the Intel VT-d and AMD-Vi specifications, you should also read the ACPI specification to learn how to look up ACPI tables.
Note that Intel VT-d is not tied to Intel VT-x, nor is AMD-Vi tied to AMD-V.

You can find Intel VT-d specification in Intel's website: https://www.intel.com/content/www/us/en/developer/articles/technical/intel-sdm.html
You can find AMD-Vi specification in AMD's website: https://developer.amd.com/resources/developer-guides-manuals/
You can find ACPI specification in UEFI Forum's website: https://uefi.org/specifications
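
To give an idea of what the ACPI lookup looks like, here is a rough sketch in C of walking the XSDT for the DMAR table. It assumes the firmware has already handed you the XSDT's physical address (e.g. via the RSDP) and that the ACPI tables are identity-mapped; the structure layout follows the ACPI spec:

    /* Rough sketch: locate an ACPI table (e.g. "DMAR" for VT-d, "IVRS" for
     * AMD-Vi) by walking the XSDT.  Assumes ACPI tables are identity-mapped. */
    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    struct acpi_table_header {
        char     signature[4];   /* e.g. "XSDT", "DMAR", "IVRS" */
        uint32_t length;         /* length of the whole table in bytes */
        uint8_t  revision;
        uint8_t  checksum;
        char     oem_id[6];
        char     oem_table_id[8];
        uint32_t oem_revision;
        uint32_t creator_id;
        uint32_t creator_revision;
    } __attribute__((packed));

    /* Walk the XSDT's array of 64-bit table pointers and return the first
     * table whose signature matches. */
    static struct acpi_table_header *acpi_find_table(struct acpi_table_header *xsdt,
                                                     const char sig[4])
    {
        uint64_t *entries = (uint64_t *)((uint8_t *)xsdt + sizeof(*xsdt));
        size_t count = (xsdt->length - sizeof(*xsdt)) / sizeof(uint64_t);

        for (size_t i = 0; i < count; i++) {
            struct acpi_table_header *t =
                (struct acpi_table_header *)(uintptr_t)entries[i];
            if (memcmp(t->signature, sig, 4) == 0)
                return t;
        }
        return NULL; /* table not present: no hardware IOMMU reported */
    }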

Brendan

I believe I'm still missing protection from devices with DMA access.

I'm not sure that you are missing protection. Either:

a) You allow guests to access a PCI device directly (e.g. by letting the guest access the device's PCI configuration space, memory-mapped IO areas, IO ports and/or IRQs); and therefore you also need to worry about the guest asking the device to perform bus mastering or DMA.

b) You do not allow guests to access a PCI device directly (e.g. you only provide access to emulated devices, possibly via some kind of "virtio" interface); and therefore the guest can't ask a real device to perform bus mastering or DMA in the first place.

How do I prevent all PCI devices from accessing my hypervisor's memory area?

It doesn't work like that.

In general I find it much easier to think of it (buses, etc.) more like networking, where everything (devices, IOMMU, CPU, memory controller) sends and receives packets. E.g. a device sends a packet saying "read N bytes from this address for me" and waits for a packet containing a reply (the requested bytes), or a device sends a packet saying "write these N bytes to this address for me". The IOMMU intercepts these packets. It only knows the contents of the packet (what sent it, whether it's a read or a write, the address, etc.) and does not know whether the device is working for the host or working for a guest. If you prevent all PCI devices from accessing your hypervisor's memory area, then you also prevent your hypervisor from using any devices.

Instead, the IOMMU uses "which device (or which group of devices)" as the basis for its decisions. It's almost like each device (or group of devices) has its own set of page tables to convert "addresses from the device" into "addresses to the host". E.g. the IOMMU receives a packet from a device saying "read N bytes from this address for me", and the IOMMU says "Hey, for that device I need to use these tables to convert the addresses" and may end up sending a "read N bytes from this other completely different address" request to the memory controller.
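
As a rough sketch of that per-device setup (legacy-mode VT-d layout: a root table indexed by PCI bus, each root entry pointing to a context table indexed by device/function, each context entry pointing to that device's second-level page tables), something like the following. Field positions should be verified against the VT-d spec for your hardware, and identity mapping is assumed so virtual addresses double as physical ones:

    /* Rough sketch: point one PCI device (bus:dev.fn) at its own set of
     * second-level translation tables (legacy-mode VT-d layout). */
    #include <stdint.h>

    #define RE_PRESENT   (1ULL << 0)
    #define CE_PRESENT   (1ULL << 0)
    #define CE_AW_48BIT  (2ULL << 0)   /* address-width field in the high qword */

    struct root_entry    { uint64_t lo, hi; };   /* 256 per root table    */
    struct context_entry { uint64_t lo, hi; };   /* 256 per context table */

    static void iommu_map_device(struct root_entry *root_table,
                                 struct context_entry *ctx_table_for_bus,
                                 uint8_t bus, uint8_t dev, uint8_t fn,
                                 uint64_t slpt_phys,   /* second-level PT root */
                                 uint16_t domain_id)
    {
        /* Root entry for this bus -> physical address of its context table
         * (identity mapping assumed, so the pointer is also the physical address). */
        root_table[bus].lo = (uint64_t)(uintptr_t)ctx_table_for_bus | RE_PRESENT;
        root_table[bus].hi = 0;

        /* Context entry for this device -> its second-level page tables,
         * with translation type 00 (untranslated requests go through them). */
        struct context_entry *ce = &ctx_table_for_bus[(dev << 3) | fn];
        ce->lo = slpt_phys | CE_PRESENT;
        ce->hi = ((uint64_t)domain_id << 8) | CE_AW_48BIT;
    }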

In other words, if you allow a guest to have direct access to a device (and allow the guest to ask the device to do bus mastering or DMA), then the host (your hypervisor) should not attempt to access that device at all, and the IOMMU would need to be set up to convert "guest physical addresses from the device" into "host physical addresses" (which will be like a mirror of the EPT that the CPU uses to convert guest physical addresses into host physical addresses).
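
A conceptual sketch of that mirroring: for every guest-physical page the EPT maps, create the same mapping in the device's second-level tables, and simply never map the hypervisor's own pages, so device DMA to them has no translation and faults. This assumes an identity-mapped hypervisor and a hypothetical page allocator; the permission-bit positions should be checked against the VT-d spec:

    /* Conceptual sketch: mirror an EPT-style gpa -> hpa mapping into a
     * device's second-level (IOMMU) page tables.  Second-level entries use
     * bit 0 for read and bit 1 for write permission, with the page-frame
     * address in bits 12+, much like long-mode page tables. */
    #include <stdint.h>

    #define SL_READ      (1ULL << 0)
    #define SL_WRITE     (1ULL << 1)
    #define SL_ADDR_MASK 0x000FFFFFFFFFF000ULL

    extern uint64_t *alloc_zeroed_page(void);  /* hypothetical: returns a 4 KiB page */

    /* Map one 4 KiB guest-physical page to a host-physical page for DMA.
     * Hypervisor-owned pages are simply never passed to this function. */
    static void slpt_map_4k(uint64_t *pml4, uint64_t gpa, uint64_t hpa)
    {
        uint64_t *table = pml4;
        for (int level = 3; level > 0; level--) {
            unsigned idx = (gpa >> (12 + 9 * level)) & 0x1FF;
            if (!(table[idx] & (SL_READ | SL_WRITE))) {
                /* Allocate the next-level table on first use. */
                uint64_t *next = alloc_zeroed_page();
                table[idx] = ((uint64_t)(uintptr_t)next & SL_ADDR_MASK)
                             | SL_READ | SL_WRITE;
            }
            table = (uint64_t *)(uintptr_t)(table[idx] & SL_ADDR_MASK);
        }
        table[(gpa >> 12) & 0x1FF] = (hpa & SL_ADDR_MASK) | SL_READ | SL_WRITE;
    }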

Of course it's more complex than that - you'd also have to deal with things like IRQs sent from the device (and emulating PCI configuration space).

For how to configure the IOMMU, there is far too much to describe here. You want to start with a document called the "Intel Virtualization Technology for Directed I/O Architecture Specification" (it should be easy to find with a web search).
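
As a taste of what that spec leads to, here is a rough sketch of the very last step: programming the root table address into the remapping hardware (whose MMIO base comes from a DRHD entry in the DMAR table) and enabling translation. The register offsets and bit positions below are taken from the VT-d spec, but verify them against the revision you are targeting:

    /* Rough sketch: latch the root table pointer and enable DMA translation.
     * Offsets: GCMD 0x18, GSTS 0x1C, RTADDR 0x20 (per the VT-d spec). */
    #include <stdint.h>

    #define DMAR_GCMD_REG   0x18
    #define DMAR_GSTS_REG   0x1C
    #define DMAR_RTADDR_REG 0x20
    #define GCMD_SRTP       (1U << 30)   /* latch the root table pointer */
    #define GCMD_TE         (1U << 31)   /* enable DMA translation       */

    static inline void     mmio_write64(volatile void *a, uint64_t v) { *(volatile uint64_t *)a = v; }
    static inline void     mmio_write32(volatile void *a, uint32_t v) { *(volatile uint32_t *)a = v; }
    static inline uint32_t mmio_read32(volatile void *a)              { return *(volatile uint32_t *)a; }

    static void iommu_enable(volatile uint8_t *drhd_mmio_base, uint64_t root_table_phys)
    {
        /* 1. Tell the IOMMU where the root table lives, then latch it.
         *    The status bits in GSTS mirror the command bit positions. */
        mmio_write64(drhd_mmio_base + DMAR_RTADDR_REG, root_table_phys);
        mmio_write32(drhd_mmio_base + DMAR_GCMD_REG, GCMD_SRTP);
        while (!(mmio_read32(drhd_mmio_base + DMAR_GSTS_REG) & GCMD_SRTP))
            ;   /* wait for RTPS */

        /* 2. Enable translation; from now on device DMA goes through the tables. */
        mmio_write32(drhd_mmio_base + DMAR_GCMD_REG, GCMD_TE);
        while (!(mmio_read32(drhd_mmio_base + DMAR_GSTS_REG) & GCMD_TE))
            ;   /* wait for TES */
    }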