Most large workloads, such as SAP HANA, require special, highly optimized configuration to run in a virtual machine. Virtual resources such as memory and CPU must be carefully configured to ensure optimum performance of the virtual machine workload. Default VM configurations created by tools such as virt-install are not optimized and often result in poor performance of large workloads, due to memory access latencies and to incorrect or incomplete information available to the VM's task scheduler.

Currently, users deploying large workloads must manually optimize virtual CPU and memory resources, which is error-prone and, if done improperly, can actually degrade performance. This project aims to create a tool that produces a suggested vCPU and vNUMA configuration based on a VM configuration template and the capabilities of the target virtual machine host, e.g. something along the lines of

virsh cpu-topology-generate large-vm.xml host-caps.xml

where large-vm.xml contains the desired vCPU and memory amounts, e.g.

<domain>
  ...
  <memory unit='GiB'>512</memory>
  <vcpu placement='static'>128</vcpu>
  ...
</domain>

and host-caps.xml contains the output of 'virsh capabilities' from the target virtual machine host. cpu-topology-generate produces libvirt domXML with optimized vCPU, vNUMA, and memory configuration for the target host. A third option might be useful to control the optimization level, e.g. optimize=performance or optimize=compatibility. The latter is for use cases where performance is desirable, but compatibility (migratability) is required.
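For illustration, the generated domXML might contain vNUMA cells, vCPU pinning, and memory placement along these lines. This is a hypothetical sketch for a two-node host: the cell ids, cpusets, and node sizes are assumptions, not actual tool output, though the elements themselves (cputune/vcpupin, cpu/numa/cell, numatune/memnode) are standard libvirt domain XML.

```xml
<domain>
  ...
  <vcpu placement='static'>128</vcpu>
  <cputune>
    <!-- one vcpupin element per vCPU, pinned 1-to-1 to pCPUs -->
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
    ...
  </cputune>
  <cpu>
    <numa>
      <!-- vNUMA cells mirroring the physical topology (values assumed) -->
      <cell id='0' cpus='0-63' memory='256' unit='GiB'/>
      <cell id='1' cpus='64-127' memory='256' unit='GiB'/>
    </numa>
  </cpu>
  <numatune>
    <!-- bind each vNUMA cell's memory to its physical node -->
    <memnode cellid='0' mode='strict' nodeset='0'/>
    <memnode cellid='1' mode='strict' nodeset='1'/>
  </numatune>
  ...
</domain>
```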


Comments

  • jfehlig
    almost 2 years ago by jfehlig

    Since hackweek I have not had time to work on 'virt-xml-tune', but I was able to make some good progress during hackweek. Given a minimal VM XML file containing desired vCPUs and memory plus a host capabilities description from the target host, virt-xml-tune will produce XML tuned for the target host. Currently the heuristics used for the tuning are simplistic. vCPUs are mapped 1-to-1 to pCPUs. When crossing physical NUMA node boundaries, vCPUs are placed in vNUMA nodes which are mapped to the physical NUMA topology. The same is true for memory tuning.
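    The simple first-fit heuristic described above could be sketched as follows. This is a hypothetical illustration (the function name and dict layout are mine, and a homogeneous host is assumed), not virt-xml-tune's actual code: each physical NUMA node's CPUs and memory are filled in order, spilling into a new vNUMA node when a boundary is crossed.

```python
def first_fit_vnuma(vcpus, mem_gib, node_cpus, node_mem_gib):
    """Assign vCPUs/memory to vNUMA nodes by filling physical nodes in order.

    Assumes a homogeneous host: every physical NUMA node has node_cpus CPUs
    and node_mem_gib GiB of memory.
    """
    nodes = []
    cpu_left, mem_left = vcpus, mem_gib
    while cpu_left > 0 or mem_left > 0:
        # Take as much of this physical node as the VM still needs.
        cpus = min(cpu_left, node_cpus)
        mem = min(mem_left, node_mem_gib)
        nodes.append({"vcpus": cpus, "mem_gib": mem})
        cpu_left -= cpus
        mem_left -= mem
    return nodes
```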

    As an example, consider a VM with 32 vCPUs and 20G of memory being tuned for a host with 2 NUMA nodes, each with 30 CPUs and 16G of memory. virt-xml-tune would produce XML with 2 vNUMA nodes. vNUMA node 0 would contain 30 vCPUs and 16G of memory. The remaining 2 vCPUs and 4G of memory would be contained in vNUMA node 1. Although all vCPUs would be pinned to pCPUs in the respective NUMA nodes, one might object to this being called "tuned" XML :-). Perhaps a better strategy in this case would be to equally distribute the vCPUs and memory across vNUMA nodes.
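    The equal-distribution alternative could be sketched like this (again a hypothetical illustration, assuming a homogeneous host): compute the minimum number of vNUMA nodes that the vCPU and memory demands force, then spread both resources as evenly as possible across them, with remainders going to the earlier nodes.

```python
import math

def balanced_vnuma(vcpus, mem_gib, node_cpus, node_mem_gib):
    """Split vCPUs/memory evenly across the minimum number of vNUMA nodes."""
    # Number of physical nodes forced by either CPU count or memory size.
    n = max(math.ceil(vcpus / node_cpus), math.ceil(mem_gib / node_mem_gib))
    # Distribute evenly; the first (vcpus % n) nodes get one extra vCPU,
    # and likewise for memory.
    return [
        {"vcpus": vcpus // n + (1 if i < vcpus % n else 0),
         "mem_gib": mem_gib // n + (1 if i < mem_gib % n else 0)}
        for i in range(n)
    ]
```

    For the 32 vCPU / 20G example on the 2-node host above, this yields two vNUMA nodes of 16 vCPUs and 10G each, rather than a 30/2 and 16G/4G split.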

    Needless to say, it takes more than a (slightly shortened) hackweek to create a tool such as virt-xml-tune. But good progress has been made and I look forward to continuing with it next hackweek, if not before then.

Similar Projects

This project is one of a kind!