
KVM vCPU Pinning with virsh

CPU allocation to virtual guests with the KVM hypervisor can be done in different ways. In my example, vCPU placement is static but no cpuset is specified, so the domain process may run on all available physical CPUs. I want to change this and pin each vCPU to a specific physical CPU.

Note: The following are only examples, so try them in a test environment first and substitute your own values.

# virsh dumpxml test_kvm01
<domain type='kvm' id='18'>
  <name>test_kvm01</name>
  ..
  ..
  <vcpu placement='static'>4</vcpu>
  ..
  ..
</domain>
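
For reference, the same pinning can also be expressed directly in the domain XML. A minimal sketch of what the relevant elements could look like once pinning is in place (the cpuset values match the ones used later in this post; substitute your own):

<!-- illustrative values; adjust to your host -->
<vcpu placement='static' cpuset='10-13'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='10'/>
  <vcpupin vcpu='1' cpuset='11'/>
  <vcpupin vcpu='2' cpuset='12'/>
  <vcpupin vcpu='3' cpuset='13'/>
</cputune>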


vCPU info can be listed with the virsh vcpuinfo command. Each character in the CPU Affinity string corresponds to one physical CPU, and a 'y' means the vCPU is allowed to run on that CPU:
# virsh vcpuinfo test_kvm01
VCPU: 0
CPU: 11
State: running
CPU time: 247454.2s
CPU Affinity: yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy

VCPU: 1
CPU: 11
State: running
CPU time: 845257.6s
CPU Affinity: yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy

VCPU: 2
CPU: 53
State: running
CPU time: 237962.2s
CPU Affinity: yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy

VCPU: 3
CPU: 51
State: running
CPU time: 226221.4s
CPU Affinity: yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
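
Before choosing target CPUs, it helps to check the host topology first. One quick way (assuming a standard libvirt installation) is virsh nodeinfo, which reports the CPU count, cores per socket, threads per core, and NUMA cells:

# virsh nodeinfo

On NUMA hosts, keeping a guest's vCPUs within a single NUMA cell usually gives better memory locality.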


The current vCPU pinning configuration can be obtained with the following command. The 0-79 range shows that each vCPU may currently float across all of the host's physical CPUs:
# virsh vcpupin test_kvm01
VCPU: CPU Affinity
----------------------------------
0: 0-79
1: 0-79
2: 0-79
3: 0-79


With the same command, each vCPU can be pinned to a specific physical CPU:
# virsh vcpupin test_kvm01 0 10
# virsh vcpupin test_kvm01 1 11
# virsh vcpupin test_kvm01 2 12
# virsh vcpupin test_kvm01 3 13
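
By default, these changes apply only to the running domain and are lost when the guest is restarted. virsh vcpupin also accepts the --config flag to persist the pinning in the domain XML (and --live to apply it to the running guest at the same time); a sketch for the first vCPU:

# virsh vcpupin test_kvm01 0 10 --live --config

After pinning, running virsh vcpupin test_kvm01 again should list each vCPU with its single target CPU, and virsh vcpuinfo should show an affinity string with a single 'y' at the pinned position.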

