Download the archive and unpack it


The archive contains the following files

  • ubuntu.img - the virtual machine's disk
  • - the script for setting up the virtual machine
  • - launches multiple streams using iperf
  • - listens on multiple sockets using iperf

Install the necessary packages

sudo ./ prepare

Boot up the virtual machine

sudo ./ start

Open SSH consoles to the virtual machine

ssh root@



Run a simple netperf test

netperf -H
  • What is the bandwidth of the virtual link?
  • How does netperf run? On the virtual machine, check the TCP listening ports
    netstat -tlpn

    Which options are available to the netperf users?

    netperf --help

    The user can choose from a series of tests. Which are they?

    man netperf

    Run a TCP stream test. Now, the virtual machine should be the sender and the host the receiver. Compare the CPU load of the sender and the receiver.

    netperf -H -t TCP_MAERTS -cC

    Find out more about netperf at

Measure bandwidth and delay on links towards California and Spain



Ping the following hosts

Compare the latencies and verify the number of hops along each way using traceroute.


Run a 2-second UDP request-response test and display the CPU usage for both the host and the virtual machine.

netperf -H -t UDP_RR -l 2 -cC

How many request-responses have been issued?
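netperf's UDP_RR output reports a transaction rate in transactions per second; the total follows from multiplying by the test length. A quick sketch with a hypothetical rate (substitute the value your netperf run actually reports):

```shell
#!/bin/sh
# Hypothetical transaction rate taken from the netperf UDP_RR output
rate=4500      # transactions per second (replace with the reported value)
length=2       # test length in seconds, matching -l 2
echo "total request-responses: $((rate * length))"
```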

DNS latency

Browse the Web for a few random pages. Do you find the response time acceptable? What does a browser do when you type a web page's address? Which DNS server are you using? Try resolving a random hostname. What is the query time?


Edit /etc/resolv.conf to use Google's DNS instead of your ISP's DNS. Add the following line at the top of the file.
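Google's public resolver answers at 8.8.8.8, so the line to add is:

```
nameserver 8.8.8.8
```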


Try resolving again. Is the query time better? Surf the web. Is the lag significant? Revert your nameserver changes.


What is the MTU of the br0 link?

ip link sh dev br0

Display the offloading capabilities of the NICs

ethtool -k br0
ethtool -k eth0


Measure the UDP bandwidth using the default send size (MTU). Display the UDP statistics before and after running the test for the host interface.

netperf -H -t UDP_STREAM -l 1 -cC
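The system-wide UDP counters can be read straight from /proc/net/snmp (the same data `netstat -su` pretty-prints); snapshot the two `Udp:` lines before and after the run and compare the datagram and error counts:

```shell
#!/bin/sh
# Dump the kernel's UDP counters: a header line naming the fields,
# followed by a line with the current values
grep '^Udp:' /proc/net/snmp
```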

Repeat the test sending 64-byte packets.

netperf -H -t UDP_STREAM -l 1 -cC -- -m 64

The throughput is clearly worse and the CPU load is higher. Why? What are the throughput and the CPU load if jumbograms (frames larger than the MTU) are sent on the link?

netperf -H -t UDP_STREAM -l 1 -cC -- -m 1600
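For the small-packet case, per-packet header overhead is a large part of the answer: every datagram carries the same Ethernet + IP + UDP headers regardless of payload size, and the kernel's per-packet processing cost is roughly constant. A rough wire-efficiency sketch (Ethernet 14 B + IPv4 20 B + UDP 8 B headers, ignoring preamble and FCS):

```shell
#!/bin/sh
headers=$((14 + 20 + 8))   # Ethernet + IPv4 + UDP header bytes per packet
for payload in 64 1472; do
  # percentage of on-wire bytes that are payload (integer math)
  echo "$payload-byte payload: $((100 * payload / (payload + headers)))% payload"
done
```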

Alter the MTU of the interfaces on both the host

ifconfig br0 mtu 256

and the virtual guest

ifconfig eth0 mtu 256

Measure the UDP bandwidth and CPU load again, using the default send size.

netperf -H -t UDP_STREAM -l 1 -cC

Revert the changes

ifconfig br0 mtu 1500
ifconfig eth0 mtu 1500



Run a TCP_STREAM test with the default MTU and with a 64-byte send size.

netperf -H -t TCP_STREAM -l 1 -cC
netperf -H -t TCP_STREAM -l 1 -cC -- -m 64

Why does the bandwidth not decrease?

Packets in flight

What is the maximum window size for TCP on the hosts?

sysctl -n net.core.wmem_max
cat /proc/sys/net/core/wmem_max
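To make the number concrete, the buffer limit can be expressed as a count of full-sized TCP segments. A sketch, assuming a hypothetical MSS of 1448 bytes (a 1500-byte MTU minus 40 bytes of IP+TCP headers and 12 bytes of timestamp options):

```shell
#!/bin/sh
# Express the maximum send buffer as a count of full-sized TCP segments
wmem=$(cat /proc/sys/net/core/wmem_max)
mss=1448   # assumed MSS: 1500 MTU - 40 (IP+TCP) - 12 (timestamps)
echo "wmem_max = $wmem bytes, ~$((wmem / mss)) segments in flight"
```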

How can the TCP window size be overridden using iperf?

man iperf

On the VM, use iperf to listen on TCP port 8080. Use a meager 1K receive window. Display the statistics once per second.

iperf --server --port 8080 --interval 1 --window 1K

On the host, run a TCP connection to the port opened on the VM. The flow should last 10 seconds.

iperf --client --port 8080 --time 10

Compare the bandwidth to the previous TCP results.
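A 1 KiB window caps throughput at roughly window/RTT, which explains the gap. A back-of-the-envelope sketch, assuming a hypothetical 100 µs round-trip time for the virtual link (measure yours with ping):

```shell
#!/bin/sh
window=1024    # receive window in bytes (--window 1K)
rtt_us=100     # assumed round-trip time in microseconds
# ceiling in Mbit/s: window_bits / rtt_seconds / 10^6 = window*8 / rtt_us
echo "throughput ceiling: ~$((window * 8 / rtt_us)) Mbit/s"
```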

Traffic control

On the VM, start
On the client, run
Inspect its contents


What are the packet loss rates? How does the loss impact VoIP and video calls?

Let us apply Quality of Service. Define the traffic classes on the client side.

tc qdisc del dev br0 root
tc qdisc add dev br0 root handle 1: htb
tc class add dev br0 parent 1: classid 1:1 htb rate 1mbit burst 128k
tc class add dev br0 parent 1: classid 1:2 htb rate 40mbit burst 128k
tc class add dev br0 parent 1: classid 1:3 htb rate 5mbit burst 128k
tc class add dev br0 parent 1: classid 1:4 htb rate 3mbit burst 128k

Then classify the traffic.

tc filter add dev br0 protocol ip parent 1: prio 1 u32 match ip dport 8000 0xffff flowid 1:1
tc filter add dev br0 protocol ip parent 1: prio 1 u32 match ip dport 6000 0xffff flowid 1:2
tc filter add dev br0 protocol ip parent 1: prio 1 u32 match ip dport 21 0xffff flowid 1:3
tc filter add dev br0 protocol ip parent 1: prio 1 u32 match ip dport 80 0xffff flowid 1:4

Run the iperf server and client scripts again. Is the packet loss reasonable?

sesiuni/virtualization-networking/session6.txt · Last modified: 2013/06/26 08:08 by freescale