Configure Memory Balloon for PowerKVM

Memory ballooning allows the host to change a guest's memory allocation dynamically, depending on the amount of free memory available.

To enable ballooning, edit the guest's XML definition (for example with virsh edit <domain>) and make sure a memballoon device is present:

<devices>
  <memballoon model='virtio'/>
</devices>
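
For context (an addition, not from the original post), the balloon operates between the guest's current allocation and its maximum memory, so a guest matching the transcripts below might also carry definitions like these (values in KiB: 2048 MB current, 4096 MB maximum):

<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>2097152</currentMemory>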

virsh # list
Id    Name                           State
----------------------------------------------------
2     rhel65                         running

virsh # ^C
[root@powerkvm ~]# virsh qemu-monitor-command --domain 2 --hmp 'info balloon'
balloon: actual=2048

To change the balloon size (the value is in MB):

[root@powerkvm ~]# virsh qemu-monitor-command --domain 2 --hmp 'balloon 4096'
[root@powerkvm ~]# virsh qemu-monitor-command --domain 2 --hmp 'info balloon'
balloon: actual=4096
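
As a side note (not in the original post), the same adjustment can usually be made with virsh setmem, which takes the size in KiB (so 4096 MB = 4194304 KiB):

virsh setmem 2 4194304 --live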

 

Enable Kernel Samepage Merging (KSM) on PowerKVM

Kernel Same-page Merging (KSM) is a kernel feature that allows VMs to share identical memory pages. By sharing pages, the combined memory usage of the guests is reduced. The savings are especially significant when multiple guests run similar base operating system images.

1) Check whether KSM is installed.

[root@powerkvm ~]# rpm -qa|grep ksm
ksm-2.0.0-2.1.pkvm2_1_1.20.38.ppc64
[root@powerkvm ~]#

2) Check the ksmtuned status (ksmtuned controls and tunes the KSM service).

[root@powerkvm ~]# service ksmtuned status
Redirecting to /bin/systemctl status  ksmtuned.service
ksmtuned.service - Kernel Samepage Merging (KSM) Tuning Daemon
Loaded: loaded (/usr/lib/systemd/system/ksmtuned.service; enabled)
Active: active (running) since Wed 2014-10-22 01:41:11 PDT; 2 days ago
Process: 14123 ExecStart=/usr/sbin/ksmtuned (code=exited, status=0/SUCCESS)
Main PID: 14172 (ksmtuned)
CGroup: name=systemd:/system/ksmtuned.service
├─ 14172 /bin/bash /usr/sbin/ksmtuned
└─195606 sleep 60

[root@powerkvm4 ~]#

To stop or start ksmtuned:

service ksmtuned start

service ksmtuned stop
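
Since the service command just redirects to systemd (as the status output above shows), the equivalent systemctl commands also work:

systemctl start ksmtuned
systemctl stop ksmtuned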

3) KSM Service

KSM support is built into the PowerKVM kernel, as shown in /boot/config-3.10.42-2017.1.pkvm2_1_1.44.ppc64:

CONFIG_KSM=y

By default, KSM is not started. To check whether it is running:

[root@powerkvm ~]# cat /sys/kernel/mm/ksm/run
0

If the output is a 0, then enable ksm by running this command:

echo 1 > /sys/kernel/mm/ksm/run
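
Conversely (not covered in the original post), writing 0 to the same file stops the KSM scanner, and writing 2 stops it and also unmerges all currently shared pages:

echo 0 > /sys/kernel/mm/ksm/run
echo 2 > /sys/kernel/mm/ksm/run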

4) Customize the ksmtuned configuration

Options for ksmtuned can be set in the /etc/ksmtuned.conf file:

# Configuration file for ksmtuned.

# How long ksmtuned should sleep between tuning adjustments
# KSM_MONITOR_INTERVAL=60

# Millisecond sleep between ksm scans for 16Gb server.
# Smaller servers sleep more, bigger sleep less.
# KSM_SLEEP_MSEC=10

# KSM_NPAGES_BOOST=300
# KSM_NPAGES_DECAY=-50
# KSM_NPAGES_MIN=64
# KSM_NPAGES_MAX=1250

# KSM_THRES_COEF=20
# KSM_THRES_CONST=2048

# uncomment the following if you want ksmtuned debug info

# LOGFILE=/var/log/ksmtuned
# DEBUG=1

5) Monitor KSM

cd /sys/kernel/mm/ksm

pages_shared
The number of shared pages in use.
pages_sharing
The number of virtual pages that are sharing those pages.
pages_unshared
The number of pages that are candidates for sharing but are not currently shared.
pages_volatile
The number of pages that are candidates for sharing but change too frequently; these pages are not merged.
full_scans
The number of times KSM has scanned all mergeable memory for duplicate content.
merge_across_nodes
Controls whether pages may be merged across NUMA nodes.
pages_to_scan
The number of pages to scan in each pass. Setting this too high can impact performance.
sleep_millisecs
The amount of time ksmd sleeps between scans.
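
As a quick illustration (not part of the original post), all of these counters and tunables can be dumped at once:

grep -H . /sys/kernel/mm/ksm/*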

Configure NUMA on a PowerKVM VM

To improve memory access performance, restrict the guest to allocate memory from a specific set of NUMA nodes, and pin the guest vCPUs to cores located on that same set of nodes.

NUMA configuration on Host:

[root@powerkvm ~]# numactl --hardware
available: 4 nodes (0-1,16-17)
node 0 cpus: 0 8 16 24 32 40
node 0 size: 32768 MB
node 0 free: 31116 MB
node 1 cpus: 48 56 64 72 80 88
node 1 size: 65536 MB
node 1 free: 63061 MB
node 16 cpus: 96 104 112 120 128 136
node 16 size: 32768 MB
node 16 free: 31770 MB
node 17 cpus: 144 152 160 168 176 184
node 17 size: 65536 MB
node 17 free: 64234 MB
node distances:
node    0    1   16   17
   0:  10   20   40   40
   1:  20   10   40   40
  16:  40   40   10   20
  17:  40   40   20   10
[root@powerkvm ~]#

To configure NUMA, edit the domain XML configuration file for the guest. For example, to restrict a guest to NUMA node 0, edit the XML configuration file to include the following markup.

<numatune>
  <memory nodeset='0'/>
</numatune>
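
The paragraph above also mentions pinning the guest's vCPUs to the same node; a minimal sketch (the core list matches node 0 from the numactl output above, and the vCPU count of 4 is only an illustration) is to add a cpuset to the vcpu element alongside the numatune block:

<vcpu placement='static' cpuset='0,8,16,24,32,40'>4</vcpu>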

Per-NUMA-node memory usage for a given VM (rhel65_qcow2 in this example) can be read from its memory cgroup:

[root@powerkvm ~]# cat /sys/fs/cgroup/memory/machine/rhel65_qcow2.libvirt-qemu/memory.numa_stat
total=15773 N0=4254 N1=9711 N16=861 N17=947
file=147 N0=0 N1=142 N16=0 N17=5
anon=15626 N0=4254 N1=9569 N16=861 N17=942
unevictable=0 N0=0 N1=0 N16=0 N17=0
[root@powerkvm ~]#

Upgrade PowerKVM to the latest release

You might currently be running powerkvm-2.1.0.2-32 on a Power8 server and want to update to the latest release, now that PowerKVM 2.1.1 is out.

1) Shut down all guests.

Do this either from a management layer (Kimchi, CMO, OpenStack, etc.) or simply with virsh shutdown or virsh destroy.

2) Copy the latest PowerKVM ISO (2.1.1 or later) to the PowerKVM machine.

3) Run the update tool, pointing it at the ISO:

ibm-update-system --upgrade --iso-path=

4) Update all packages and reboot.

Upgrading the virtual machines themselves is the same as any other Linux distribution upgrade (yum, zypper, etc.).
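
For a RHEL-based guest, for example, that is simply:

yum update -y
reboot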

Configure the FSP IP on a PowerKVM machine

Power8/PowerKVM machines ship from the factory with default FSP IP addresses. Power8 machines come with two FSP ports on the back panel. You will typically want to change these defaults to your own IP addresses.

1) Where do you get the default IP?

The default IPs are listed in the user manual, shown on the control panel, or given below 🙂

eth0: 169.254.2.147

eth1: 169.254.3.147

2) What is the default password?

The default password is provided in the user manuals.

3) How do you change the IP?

a) Connect your laptop to an FSP port with a cable and assign your laptop an IP in the range of the default FSP IPs.

For example, use 169.254.2.150 for your laptop, with subnet mask 255.255.255.0.

b) Open ASM from your laptop using the default IP.

c) Go to Network Services -> Network Configuration.

d) Choose the active FSP port (whichever one the cable is connected to) and change the IP.

e) Once the IP is changed, you will lose connectivity to the FSP; that means the change was successful.

Change Hypervisor mode on Power8 Server (PowerVM to PowerKVM)

A Power8 server is capable of running in either PowerVM or PowerKVM hypervisor mode. To switch modes, you change the firmware type on the machine; PowerKVM runs on OPAL firmware.

1) Open ASM

2) Power off the machine before changing the hypervisor mode.

3) Go to System Configuration -> Firmware Configuration. Change the firmware type to OPAL to run in PowerKVM mode, or to PowerVM to run in PowerVM mode.

4) Power on the machine and start using it in PowerKVM mode.

Set up a PowerKVM server

This post helps you install PowerKVM on a machine that has just shipped from the factory. Make sure to get the right OPAL firmware for the hypervisor version you plan to run (customers can get it from PartnerWorld).

1) Power on the machine. On the front of the box you will find a control panel; with the right key sequence from the manual, it displays the default FSP IP.
2) Remove all HMC connections.
3) Connect your laptop to an FSP port with a cable and configure the laptop with an IP in the range of the default FSP IP you read from the control panel.
4) You should now be able to ping the default FSP IP from your laptop.
5) Open ASM using the default FSP IP and power off the machine. Then go to the hypervisor mode settings and change the mode to PowerKVM (the default is PowerVM).
6) Change the FSP IP under Network Services to your desired IP.
7) Open an IPMI console, or connect a monitor to the Power8 box, to access the console (see the ipmitool sketch after this list).
8) Installation can be done in various ways: DVD, PXE, HTTPS, etc.
9) Insert the DVD and start the install. The installation proceeds like any other Linux box.
10) During network configuration, make sure to select a bridge on the active NIC and assign a static IP or use DHCP.
11) Installation is complete. 🙂
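
For step 7, a common way to reach the console over IPMI serial-over-LAN (assuming ipmitool is available on your workstation; <fsp-ip>, <user>, and <password> are placeholders for your FSP address and credentials) looks like:

ipmitool -I lanplus -H <fsp-ip> -U <user> -P <password> sol activate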

Deploy PowerKVM using xCAT

This section shows how to install PowerKVM and manage the PowerKVM node using xCAT. With the increasing need for dynamic re-provisioning, a management tool is needed to install PowerKVM on Power8 servers.

Note: PowerKVM support was added in xCAT 2.9 (currently a development version).

Use http://sourceforge.net/projects/xcat/files/yum/devel/core-rpms-snap.tar.bz2 and also download the latest xcat-dep package from sourceforge.net/projects/xcat/files/xcat-dep/2.x_Linux/.

1. Use copycds to copy the PowerKVM ISO into the xCAT install directory.

[root@xcat]# copycds -n pkvm2.1 ibm-powerkvm-201410101243.iso

Copying media to /install/pkvm2.1/ppc64
Media copy operation successful

[root@xcat]

2. Verify copycds using lsdef, which lists data object definitions.

[root@xcat] lsdef -t osimage

pkvm2.1-ppc64-install-compute (osimage)

ubuntu14.04-ppc64-install-compute (osimage)

ubuntu14.04-ppc64-install-kvm (osimage)

[root@xcat]

3. Define a new node "n1" (mkdef creates a definition; chdef, used below, creates or modifies one).

[root@xcat] chdef n1 groups=all,kvm cons=ipmi mgt=ipmi

1 object definitions have been created or modified.

4. Configure the IPMI IP and password by setting "bmc" and "bmcpassword".

[root@xcat] chdef n1 bmc=10.10.10.11 bmcpassword=pkvm1234

1 object definitions have been created or modified.

5. Configure the MAC address for node "n1" by setting "mac".

[root@xcat] chdef n1 mac=6c:ae:8b:6a:d7:a0

1 object definitions have been created or modified.

6. Configure the TFTP, console, and NFS server IPs by setting "tftpserver", "conserver", and "nfsserver".

[root@xcat] chdef n1 tftpserver=10.10.10.12 conserver=10.10.10.12 nfsserver=10.10.10.12

1 object definitions have been created or modified.

7. Configure the DNS domain by setting "domain" in the site table.

[root@xcat] chdef -t site domain=example.com

1 object definitions have been created or modified.

8. Configure the IP for node "n1" by setting "ip".

[root@xcat] chdef n1 ip=10.10.10.14

1 object definitions have been created or modified.
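
At this point (an extra check not shown in the original post) you can review everything configured on the node so far with lsdef:

[root@xcat] lsdef n1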

9. Set up /etc/hosts from the xCAT hosts table using "makehosts".

[root@xcat] makehosts n1

10. Set the network boot type to "petitboot" by setting "netboot".

[root@xcat] chdef n1 netboot=petitboot

1 object definitions have been created or modified.

11. Use the "nodeset" command so that node "n1" installs the OS image the next time it boots.

[root@xcat] nodeset n1 osimage=pkvm2.1-ppc64-install-compute

n1: install pkvm2.1-ppc64-compute

12. Reboot node "n1" using "rpower".

[root@xcat] rpower n1 reset

n1: reset

13. Monitor the installation through the IPMI serial console using "rcons".

[root@xcat] rcons n1

[Enter '^Ec?' for help]

[SOL session operational. Use ~? for help]

The Petitboot bootloader automatically boots the "xCAT" entry obtained from the DHCP server.

Figure 2: Petitboot

14. nodestat can be used to get the status of the node; the status changes to sshd once the install is complete.

Commands can then be run on the installed node using psh:

[root@xcat] nodestat n1

n1: sshd

[root@xcat] psh n1 uptime

n1: 6c:ae:8b:6a:d7:a0, 1 user, load average: 0.14, 0.07, 0.0.6

Set up Chef in a standalone PowerKVM environment

1) What is it?

Chef simplifies infrastructure automation and configuration management tasks. A standalone installation of Chef creates a working installation on a single server. The setup below runs standalone on the PowerKVM node itself; no separate Chef server is configured.

2) How to install it on a PowerKVM node

yum search chef
============================================================================== N/S matched: chef ===============================================================================
chef.ppc64 : The full stack of chef

yum install chef.ppc64

3) Configuration files:

web.json: the run list, which points to the recipe(s) you want to run.

solo.rb: sets just two paths for Chef Solo.

default.rb: a sample recipe. You keep adding recipes like this as your requirements grow.

3.1) Example cookbook that installs Kimchi and starts the kimchid service on PowerKVM.

cat /root/chef-repo/cookbooks/demo/recipes/default.rb

#
# Cookbook Name:: demo
# Recipe:: default
#
# Copyright 2014, YOUR_COMPANY_NAME
#
# All rights reserved - Do Not Redistribute
#

package 'kimchi' do
  action :install
end

service 'kimchid' do
  action [:enable, :start]
end

3.2) solo.rb

cat chef-repo/solo.rb
file_cache_path "/root/chef-solo"
cookbook_path "/root/chef-repo/cookbooks"

3.3) web.json

cat web.json
{
  "run_list": [ "recipe[demo]" ]
}
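
If you later add more recipes, the run list simply grows; for example, with a hypothetical demo::users recipe it would look like:

{
  "run_list": [ "recipe[demo]", "recipe[demo::users]" ]
}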

4) How to run it

chef-solo -c chef-repo/solo.rb -j web.json
[2014-09-16T13:42:49+01:00] WARN:
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
SSL validation of HTTPS requests is disabled. HTTPS connections are still
encrypted, but chef is not able to detect forged replies or man in the middle
attacks.

To fix this issue add an entry like this to your configuration file:

```
# Verify all HTTPS connections (recommended)
ssl_verify_mode :verify_peer

# OR, Verify only connections to chef-server
verify_api_cert true
```

To check your SSL configuration, or troubleshoot errors, you can use the
`knife ssl check` command like so:

```
knife ssl check -c chef-repo/solo.rb
```

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

Starting Chef Client, version 11.12.8
Compiling Cookbooks…
Converging 2 resources
Recipe: demo::default
* package[kimchi] action install (up to date)
* service[kimchid] action enable
   - enable service service[kimchid]

* service[kimchid] action start (up to date)

Running handlers:
Running handlers complete

Chef Client finished, 1/3 resources updated in 4.874421085 seconds