OS Virtualization Workshop

Prerequisites

  • Use --help with a command whenever you're in trouble.
  • Run all setup commands as root.

Network configuration

Please execute the following setup on your working machines:

brctl addbr br0              # create the bridge br0
ifconfig eth0 0.0.0.0 up     # drop the IP address from eth0
ifconfig br0 up
brctl addif br0 eth0         # enslave eth0 to the bridge
dhclient br0                 # request a lease on the bridge instead
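On hosts where brctl and ifconfig are unavailable (both are deprecated in favor of iproute2), an equivalent bridge setup might look like the following sketch, assuming eth0 is your uplink interface:

```shell
# create the bridge and bring it up
ip link add name br0 type bridge
ip link set dev br0 up

# drop the address from eth0 and enslave it to the bridge
ip addr flush dev eth0
ip link set dev eth0 master br0
ip link set dev eth0 up

# request a lease on the bridge instead of the physical interface
dhclient br0
```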

LXC

In this setup we will investigate the features of the LXC tools.

NOTE:

  • Most container commands require a -n <CONTAINER_NAME> parameter.

BusyBox combines tiny versions of many common UNIX utilities into a single small executable. It provides replacements for most of the utilities you usually find in GNU fileutils, shellutils, etc. The utilities in BusyBox generally have fewer options than their full-featured GNU cousins; however, the options that are included provide the expected functionality and behave very much like their GNU counterparts. BusyBox provides a fairly complete environment for any small or embedded system.

BusyBox has been written with size-optimization and limited resources in mind. It is also extremely modular so you can easily include or exclude commands (or features) at compile time. This makes it easy to customize your embedded systems. To create a working system, just add some device nodes in /dev, a few configuration files in /etc, and a Linux kernel.

LXC contains an upstream template that builds a BusyBox powered container. The advantage of this container is that it provides most of the Linux command line utilities, while having a very small footprint (~2MB). You can find this LXC template on your host at /share/lxc/templates/lxc-busybox.

  • Check that your host machine is capable of running containers - hint: lxc-checkconfig.

  • Create the container - we will not focus on the container's configuration file at this point.

lxc-create -n foo -t busybox -f /share/doc/lxc/examples/lxc-veth.conf
  • Check that the container has been created - hint: lxc-ls. The default path where the container rootfs resides is /var/lib/lxc. What does this path contain? What container-specific data is kept?
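As a starting point for answering the questions above, you can inspect the container's storage directly; the exact layout may differ between LXC versions, so treat the comments below as a sketch:

```shell
lxc-ls                        # should list the new container: foo
ls /var/lib/lxc/foo           # typically holds the container config and its rootfs/
ls /var/lib/lxc/foo/rootfs    # the BusyBox filesystem tree: bin/ dev/ etc/ ...
```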

  • Check the status of the container - hint: lxc-info.

  • Start the container - lxc-start -n <CONTAINER_NAME> starts the container and attaches you to its primary console:
lxc-start -n foo
udhcpc (v1.21.0) started
Sending discover...
Sending select for 10.171.74.97...
Lease of 10.171.74.97 obtained, lease time 345600
 
Please press Enter to activate this console.
 
/ #

Although this is the default console of the container and you need no authentication credentials to access it, you cannot escape it unless you stop the container from another terminal. It is therefore preferable to start the container as a daemon and connect to it with lxc-console, which also requires us to supply credentials.

lxc-start -n foo -d
lxc-console -n foo
 
Type <Ctrl+a q> to exit the console, <Ctrl+a Ctrl+a> to enter Ctrl+a itself
 
foo login: root
Password:
~ #

What is the status of the container now? (lxc-info)

  • List the processes in the container - hint: lxc-ps. Do this both from your host machine and from the container. What's the difference?

  • Restrict container resource usage - start 2 CPU consuming processes in the container:

~ # (while true; do true; done) &
~ # (while true; do true; done) &

These will fill up both host cores (you can check with top). To control the CPUs assigned to a container, use the lxc-cgroup command:

lxc-cgroup -n foo cpuset.cpus
0-1
lxc-cgroup -n foo cpuset.cpus 0

This restricts the container's execution to core 0. Check the CPU load with top again.
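lxc-cgroup is essentially a wrapper over the cgroup filesystem, so the same value can be read and written directly. This is a sketch assuming a cgroup v1 hierarchy mounted under /sys/fs/cgroup with the container's group named after the container; the exact path may differ on your system:

```shell
# read the cpuset (equivalent to: lxc-cgroup -n foo cpuset.cpus)
cat /sys/fs/cgroup/cpuset/lxc/foo/cpuset.cpus

# pin the container to core 0 by writing the same file
echo 0 > /sys/fs/cgroup/cpuset/lxc/foo/cpuset.cpus
```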

  • Stop the container - hint: lxc-stop.

  • Destroy the container - to destroy the container, run the lxc-destroy command. This deletes the container's rootfs and configuration file from the host. However, we will not destroy the container just yet.

Libvirt

In this setup we will explore the configuration options of the Libvirt container driver. libvirt_lxc is a different container implementation from LXC, but it is based on the same underlying kernel features.

Libvirt virtual machines are called domains. These are defined from XML files. The complete working cycle with Libvirt domains is:

  • defining a domain - from the XML file
  • starting a domain
  • connecting to a domain
  • stopping a domain
  • undefining a domain
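The cycle above maps onto one virsh subcommand per step; the -c lxc:/// argument selects the libvirt_lxc driver:

```shell
virsh -c lxc:/// define foo.xml   # defining - register the domain from its XML
virsh -c lxc:/// start foo        # starting
virsh -c lxc:/// console foo      # connecting (escape with Ctrl+])
virsh -c lxc:/// destroy foo      # stopping - hard-stops the domain, does not delete it
virsh -c lxc:/// undefine foo     # undefining - remove the domain definition
```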

In our scenario, we will start from a basic libvirt domain and add some features to it. For each added feature, you must undefine the domain, and define it again from the updated XML.

NOTE: please check that the libvirtd hypervisor is running (the libvirtd process). If it's not present, start it with libvirtd -d.

$ ps axf | grep libvirtd
10914 pts/1    S+     0:00  |       \_ grep libvirtd
 5328 ?        Sl     0:00 /usr/sbin/libvirtd -d

Basic libvirt_lxc domain

This is created from a minimal XML file and starts a shell in an isolated environment.

  • Create the domain XML:
$ cat foo.xml
<domain type='lxc'>
  <name>foo</name>
  <memory>500000</memory>
 
  <os>
    <type>exe</type>
    <init>/bin/sh</init>
  </os>
 
  <devices>
    <console type='pty'/>
  </devices>
</domain>
  • Define the domain:
$ virsh -c lxc:/// define foo.xml
Domain foo defined from foo.xml
  • Check that the domain is present:
$ virsh -c lxc:/// list --all
 Id    Name                           State
----------------------------------------------------
 -     foo                            shut off
  • Start the domain:
$ virsh -c lxc:/// start foo
Domain foo started
  • Connect to the domain:
$ virsh -c lxc:/// console foo
Connected to domain foo
Escape character is ^]
# ls /bin | grep bash
bash
rbash
# ps
  PID TTY          TIME CMD
    1 pts/3    00:00:00 sh
    9 pts/3    00:00:00 ps
# ps --help
 
Usage:
 ps [options]
 
 Try 'ps --help <simple|list|output|threads|misc|all>'
  or 'ps --help <s|l|o|t|m|a>'
 for additional help text.
 
For more details see ps(1).
  • Stop the domain:
$ virsh -c lxc:/// destroy foo
Domain foo destroyed
  • Undefine the domain:
$ virsh -c lxc:/// undefine foo
Domain foo has been undefined

Custom domain rootfs

In this setup we will use a custom rootfs for our domain - the one created with LXC in the previous section (we haven't destroyed the container, remember?). This way we can have the BusyBox binaries in a Libvirt container.

  • Add the following filesystem element in the domain XML definition file, under the devices tag:
    <domain type='lxc'>
      [ ... ]
      <devices>
        [ ... ]
        <filesystem type='mount'>
          <source dir='/var/lib/lxc/foo/rootfs'/>
          <target dir='/'/>
        </filesystem>
      </devices>
    </domain>

  • Define and start the domain

  • Connect to the domain
# virsh -c lxc:/// console foo
Connected to domain foo
Escape character is ^]
/ #
/ #
/ #
/ # ps --help
BusyBox v1.21.0 (2013-06-14 04:32:50 EDT) multi-call binary.
 
Usage: ps [-o COL1,COL2=HEADER] [-T]
 
Show list of processes
 
        -o COL1,COL2=HEADER     Select columns for display
        -T                      Show threads
 
/ # ps
PID   USER     TIME   COMMAND
    1 root       0:00 /bin/sh
    5 root       0:00 ps
 

Notice that ps is now provided by BusyBox, which means that we are running in the container rootfs.

  • Stop and undefine the domain

BusyBox powered domain

Now we will run the BusyBox init as the container's init process.

  • Edit the init process in the domain XML file to point to /sbin/init in the rootfs (the path is relative to the container rootfs):
# cat foo.xml | grep init
    <init>/sbin/init</init>
  • Edit the inittab in the container rootfs to start a single getty process:
# cat /var/lib/lxc/foo/rootfs/etc/inittab
tty1::respawn:/bin/getty -L tty1 115200 vt100

We need to make this change because Libvirt uses a different terminal setup than LXC.

  • Define and start the domain

  • Connect to the domain - you will notice that you must input the root / root credentials, since you are now connecting to a getty process. You will also notice that your parent process is now init, as opposed to the previous sh.

# virsh -c lxc:/// console foo
Connected to domain foo
Escape character is ^]
 
smackdab login: root
Password:
login[2]: root login on 'pts/0'
~ # ps
PID   USER     TIME   COMMAND
    1 root       0:00 init
    2 root       0:00 -sh
    3 root       0:00 ps
~ #
  • Stop and undefine the domain

Creating an ArchLinux container

In this setup we will configure and start an ArchLinux container on Debian.

This requires some tools to be available on the host:

  • pacman - the ArchLinux package manager (installed from source)
  • arch-install-scripts

They have both been installed on your working machines.

We will create the ArchLinux container by using the upstream lxc-archlinux template, available on the host at /share/lxc/templates.

  • Check that pacman works properly on your machine:
# pacman -Syu
:: Synchronizing package databases...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  103k  100  103k    0     0  67361      0  0:00:01  0:00:01 --:--:--  113k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 1505k  100 1505k    0     0   468k      0  0:00:03  0:00:03 --:--:--  500k
:: Starting full system upgrade...
 there is nothing to do
  • Create the container - we will pass an additional argument to the template, -P dhclient, so that the dhclient package is available in the container (arguments after -- are forwarded to the template).
# lxc-create -n arch -t archlinux -f /share/doc/lxc/examples/lxc-veth.conf -- -P dhclient

Enjoy! :)

  • Start and connect to the container:
# lxc-start -n arch
[ ... ]
arch login: root
Last login: Mon Jun 17 11:12:51 on console
[root@arch ~]#

You're now inside your ArchLinux container.

  • Get an IP for your interface:
[root@arch ~]# dhclient eth0
[root@arch ~]# ip addr sh dev eth0
23: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether a6:3a:4a:a8:55:8a brd ff:ff:ff:ff:ff:ff
    inet 10.171.74.133/23 brd 10.171.75.255 scope global eth0
    inet6 fe80::a43a:4aff:fea8:558a/64 scope link
       valid_lft forever preferred_lft forever
  • Configure pacman - edit /etc/pacman.conf to have the following options activated, in this order:
[options]
HoldPkg     = pacman glibc
XferCommand = /usr/bin/curl -C - -f %u > %o
CheckSpace
SigLevel    = Never
 
[core]
Include = /etc/pacman.d/mirrorlist
 
[extra]
Include = /etc/pacman.d/mirrorlist
 
[community]
Include = /etc/pacman.d/mirrorlist
  • Activate a mirror in /etc/pacman.d/mirrorlist - the first one will do.

  • Check that pacman works by issuing pacman -Syu.

  • Install the vim package - pacman -Sy vim.

In a similar manner, you may install other packages to fit your needs.

  • After you are done, you may stop the container by running halt from inside.
sesiuni/virtualization-networking/session1.txt · Last modified: 2013/06/26 23:55 by laura