External Snapshot of raw images

When an external snapshot of a raw image is taken, the original image becomes the read-only backing file and the delta (all new writes) is captured in a qcow2 overlay file.

virsh # list
Id    Name                           State
----------------------------------------------------
4     cbtool                         running
6     master                         running

virsh # snapshot-create-as master snap1-master "snap1" --diskspec vda,file=/home/snap1.qcow2 --disk-only --atomic

Domain snapshot snap1-master created

Snapshot tree:

virsh # snapshot-list master --tree
snap1-master

virsh # snapshot-create-as master snap2-master "snap2" --diskspec vda,file=/home/snap2.qcow2 --disk-only --atomic
Domain snapshot snap2-master created
virsh # snapshot-list master --tree
snap1-master
|
+- snap2-master

Image info:

qemu-img info  /home/snap2.qcow2
image: /home/snap2.qcow2
file format: qcow2
virtual size: 10G (10737418240 bytes)
disk size: 196K
cluster_size: 65536
backing file: /home/snap1.qcow2
backing file format: qcow2
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false
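
To see the whole backing chain at once (qcow2 overlays down to the raw base), qemu-img can print it directly; a quick sketch using the paths from the example above:

qemu-img info --backing-chain /home/snap2.qcow2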

How to Delete:

virsh # snapshot-list master
Name                 Creation Time             State
------------------------------------------------------------
snap2-master         2016-01-07 03:38:10 -0500 disk-snapshot

virsh # snapshot-delete master snap2-master --metadata
Domain snapshot snap2-master deleted
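
Note that --metadata only removes libvirt's record of the snapshot; the overlay file on disk is untouched. One way to actually merge an external snapshot back into its backing image is virsh blockcommit; a sketch, assuming the domain and disk names from the example above:

virsh blockcommit master vda --active --pivot --verbose

After the pivot, guest writes go to the base image again and the now-unused qcow2 overlay can be deleted.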

 

 

 


Number of io requests for each io_submit

Linux native asynchronous IO (aio=native in QEMU) submits requests through the io_submit() system call. Here is how to find, for a KVM VM, how many IO requests are batched into each io_submit call; the numbers below come from a 4k sequential write run on an SSD. While capturing IOPS, trace the io_submit perf events shown below. sys_enter_io_submit and sys_exit_io_submit are the ones that matter; each io_submit is eventually paired with io_getevents calls, which are not relevant to the present topic.

 -e syscalls:sys_enter_io_submit -e syscalls:sys_exit_io_submit -e syscalls:sys_enter_io_getevents -e syscalls:sys_exit_io_getevents
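
For example, a complete capture could look like the sketch below; the qemu-kvm PID lookup and the 30-second window are placeholders for your own run, and -g is what produces the call stacks shown in the samples further down:

perf record -g -e syscalls:sys_enter_io_submit -e syscalls:sys_exit_io_submit -e syscalls:sys_enter_io_getevents -e syscalls:sys_exit_io_getevents -p $(pidof qemu-kvm) -- sleep 30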

Get the IOPS for this run:

write-4KiBIOPS 76971.9

Get the number of io_submit calls from the captured perf.data. Counting the enter events is enough; there will of course be the same number of exit events.

[root@perf io-submit-write-4k]# perf script | grep io_submit | grep enter | wc -l
493370
[root@perf io-submit-write-4k]#
Get the timestamps of the first and last io_submit events.
First: 
qemu-kvm  3693 [025]  1914.589390: syscalls:sys_enter_io_submit: ctx_id: 0x7f3f18a61000, nr: 0x000000d1, iocbpp:
                     697 io_submit (/usr/lib64/libaio.so.1.0.1)
                       8 [unknown] ([unknown])
                       0 [unknown] ([unknown])

Last: 

qemu-kvm  3693 [001]  1949.737723: syscalls:sys_enter_io_submit: ctx_id: 0x7f3f18a61000, nr: 0x000000d1, iocbpp: 0x7ffd4e50b7b0
                 697 io_submit (/usr/lib64/libaio.so.1.0.1)
        7f3f1c6c9250 [unknown] ([unknown])
                   0 [unknown] ([unknown])

Number of submits per second:
Timestamp diff: 1949.737723 - 1914.589390 = 35.15 s
Number of submits: 493370

Submits/sec = 493370 / 35.15 = 14036.13

The IOPS metric is requests/second, and we now have submits/second, so:

requests/submit = (requests/sec) / (submits/sec)
                = 76971.9 / 14036.13
                = 5.48

So for this 4k write run, each io_submit call carries about 5.48 requests on average.
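
The same calculation can be scripted straight from perf.data; a rough sketch, with the measured IOPS (76971.9 above) passed in by hand and the timestamp taken from the fourth perf script column:

perf script | awk -v iops=76971.9 '
    /sys_enter_io_submit/ { gsub(":", "", $4);            # strip trailing colon from the timestamp
                            if (first == "") first = $4;
                            last = $4; n++ }
    END { span = last - first; sps = n / span;
          printf "submits=%d  span=%.2fs  submits/sec=%.1f  requests/submit=%.2f\n",
                 n, span, sps, iops / sps }'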

iostat analysis: time spent for each IO request

These are results from one of the 4K write runs to disk vdb (an LVM volume on SSD, which is irrelevant to the present discussion).

"await – The average time (in milliseconds) for I/O requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them."

Wait_Time-vdb-write=0.042727

Throughput:

Throughput-vdb-write=65.902727

"svctm – The average service time (in milliseconds) for I/O requests that were issued to the device."

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
vda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
vdb               0.00     0.00 49409.00    0.00   193.00     0.00     8.00     7.04    0.14    0.14    0.00   0.02 100.00

Disk Utilization

Utilization-vdb=74.195455

Frame size: 4K

await: 0.0427 ms = 42.7 microseconds, i.e. 42.7 microseconds per request including queueing.

Throughput is ~65 MB/s, so 65 * 1024 / 4 = 16640 requests/s.

1,000,000 microseconds/s / 16640 requests/s ≈ 60 microseconds/request of wall-clock time.

60 microseconds/request * 0.75 disk utilization (the device was ~74% utilized) ≈ 45 microseconds/request.

So the time the disk actually spends on each IO request is about 45 microseconds.
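
The same arithmetic, redone from the measured values; a sketch using the throughput and utilization figures reported above and assuming 4 KiB requests:

awk 'BEGIN {
    tput_mb = 65.902727            # Throughput-vdb-write, MB/s
    util    = 74.195455 / 100      # Utilization-vdb, as a fraction
    reqs    = tput_mb * 1024 / 4   # 4 KiB requests per second
    printf "requests/s = %.0f, time spent per request = %.1f us\n", reqs, util * 1e6 / reqs
}'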

Thanks to Stefan

 

 

Memory hotplug support in PowerKVM

Bharata B Rao's Blog

Introduction
Pre requisites
Basic hotplug operation
More options
Driving via libvirt
Debugging aids
Internal details
Future

Introduction

Memory hotplug is a technique or a feature that can be used to dynamically increase or decrease the amount of physical RAM available in the system. In order for the dynamically added memory to become available to the applications, memory hotplug should be supported appropriately at multiple layers like in the firmware and operating system. This blog post mainly looks at the emerging support for memory hotplug in KVM virtualization for PowerPC sPAPR virtual machines (pseries guests). In case of virtual machines, memory hotplug is typically used to vertically scale up or scale down the guest’s physical memory at runtime based on the requirements. This feature is expected to be useful for supporting vertical scaling of PowerPC guests in KVM Cloud environments.

In KVM virtualization, an alternative way to dynamically increase or decrease…


Change password of VM image using guestfish

[root@psuriset ~]# guestfish --rw -a /var/lib/libvirt/images/trusty-server-cloudimg-amd64-disk1.img

Welcome to guestfish, the guest filesystem shell for
editing virtual machine filesystems and disk images.

Type: 'help' for help on commands
'man' to read the manual
'quit' to quit the shell

><fs>
><fs> ls
ls should have 1 parameter
type ‘help ls’ for help on ls
><fs> run
100% ⟦▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒⟧ 00:00
><fs> list-filesystems
/dev/sda1: ext4
><fs> mount /dev/sda1 /
><fs> vi /etc/sh
/etc/shadow   /etc/shadow-  /etc/shells
><fs> vi /etc/shadow
><fs> vi /etc/sudoers

---- here edit the password entry in /etc/shadow and add the following line to /etc/sudoers, then save and quit ----

psuriset ALL=(ALL) NOPASSWD: ALL

><fs> quit
[root@psuriset ~]#

[root@psuriset ~]# virsh
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
'quit' to quit
virsh # start cbtool
Domain cbtool started

virsh # list
Id    Name                           State
----------------------------------------------------
15    cbtool                         running

Edit VM image and make passwordless

This can be done either with guestfish or with virt-edit:

virt-edit -d centos /etc/passwd -e 's/^root:.*?:/root::/'
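
To double-check the edit before booting, the modified entry can be read back out of the image; a quick sketch with virt-cat, run while the guest is still shut off:

virt-cat -d centos /etc/passwd | grep '^root'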

virsh # start centos
Domain centos started

virsh # console centos
Connected to domain centos
Escape character is ^]

CentOS Linux 7 (Core)
Kernel 3.10.0-229.el7.x86_64 on an x86_64

localhost login: root
Last failed login: Thu Sep 10 09:41:52 UTC 2015 on ttyS0
[root@localhost ~]#

Enable virtio-blk data plane in libvirt for high performance

Usage:

The previous post was based on the old method; the new format is documented here:

https://libvirt.org/formatdomain.html#elementsIOThreadsAllocation
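
A minimal sketch of the new IOThreads-based layout, following the libvirt domain XML documentation linked above; the single iothread and the example disk path/target are placeholders:

<domain type='kvm'>
  ...
  <iothreads>1</iothreads>
  <devices>
    <disk type='file' device='disk'>
      <!-- pin this virtio-blk disk to IOThread 1; cache/io settings are illustrative -->
      <driver name='qemu' type='raw' cache='none' io='native' iothread='1'/>
      <source file='/var/lib/libvirt/images/disk.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    ...
  </devices>
</domain>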

Will share performance results with data-plane soon.

 

Old Post.

Replace <domain type='kvm'>

with

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>

And add the qemu:commandline options before the closing </domain> tag:

</devices>
<qemu:commandline>
<qemu:arg value='-set'/>
<qemu:arg value='device.virtio-disk0.scsi=off'/>
<qemu:arg value='-set'/>
<qemu:arg value='device.virtio-disk0.config-wce=off'/>
<qemu:arg value='-set'/>
<qemu:arg value='device.virtio-disk0.x-data-plane=on'/>
</qemu:commandline>
</domain>