= Topology =
{{ :sesiuni:virtualization-networking:netperf_setup.png |}}

= Slides =
{{:sesiuni:virtualization-networking:session6_network_benchmarking_.pdf|}}

= Setup =
Download the archive and unpack it
<code>
wget http://172.16.5.218/session6.zip
unzip session6.zip
</code>
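To check what was unpacked, you can list the archive contents; the **iperf_server.sh** and **iperf_client.sh** scripts used later in the session should be among the files.
<code>
unzip -l session6.zip
</code>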

<code>
man netperf
</code>
Run a TCP stream test. Now the virtual machine should be the sender and the host the receiver. Compare the CPU load of the sender and the receiver.
<code>
netperf -H 192.168.0.2 -t TCP_MAERTS -cC
</code>
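For comparison, the plain TCP_STREAM test sends in the opposite direction (host as sender, VM as receiver); running both back to back shows how the sender and receiver roles shift the CPU load.
<code>
netperf -H 192.168.0.2 -t TCP_STREAM -cC
</code>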
Find out more about **netperf** at [[http://www.netperf.org/netperf/training/Netperf.html]]

== Speedtest.net ==
Measure bandwidth and delay on links towards California and Spain
Compare the latencies and verify the number of hops along each way using **traceroute**.

== netperf ==
Run a 2-second UDP request-response test and display the CPU usage for both the host and the virtual machine
<code>
netperf -H 192.168.0.2 -t UDP_RR -l 2 -cC
</code>
How many request-responses have been issued?
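As a hint, netperf reports UDP_RR results as a transaction rate (request-responses per second), so the total count is roughly rate × test length. A sketch with a made-up rate:
<code>
# hypothetical: ~4500 trans/s reported over a 2 s test
echo $((4500 * 2))   # roughly 9000 request-responses
</code>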
== DNS latency ==

Revert your nameserver changes.

= UDP and TCP =
What is the MTU of the **br0** link?
<code>
ip link sh dev br0
</code>
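The MTU is also exposed through sysfs, so an equivalent quick check is:
<code>
cat /sys/class/net/br0/mtu
</code>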
Display the offloading capabilities of the NICs
<code>
ethtool -k br0
</code>
<code>
ethtool -k eth0
</code>
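If you want to experiment, individual offloads can be toggled at run time with the uppercase **-K** option; for example, disabling TCP segmentation offload (remember to turn it back on afterwards):
<code>
ethtool -K eth0 tso off
</code>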

== UDP ==
Measure the UDP bandwidth using the default send size (MTU). Display the UDP statistics before and after running the test for the host interface.
<code>
netperf -H 192.168.0.2 -t UDP_STREAM -l 1 -cC
</code>
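One way to display the UDP statistics is **netstat**; run it before and after the test and compare the datagram counters.
<code>
netstat -s -u
</code>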
Repeat the test sending 64-byte packets.
<code>
netperf -H 192.168.0.2 -t UDP_STREAM -l 1 -cC -- -m 64
</code>
The throughput is obviously worse and the CPU load is higher. Why?
What happens to the throughput and the CPU load if jumbograms (datagrams larger than the MTU) are sent on the link?
<code>
netperf -H 192.168.0.2 -t UDP_STREAM -l 1 -cC -- -m 1600
</code>
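Since these datagrams exceed the MTU, the IP layer must fragment them. The fragmentation counters can confirm this (the exact counter wording varies between kernel versions):
<code>
netstat -s | grep -i fragment
</code>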

Alter the MTU of the interfaces on both the host
<code>
ifconfig br0 mtu 256
</code>
and the virtual guest
<code>
ifconfig eth0 mtu 256
</code>
Measure the UDP bandwidth and CPU load again, still using the default send size.
<code>
netperf -H 192.168.0.2 -t UDP_STREAM -l 1 -cC
</code>
Revert the changes
<code>
ifconfig br0 mtu 1500
</code>
<code>
ifconfig eth0 mtu 1500
</code>

== TCP ==
=== Coalescing ===
Run a TCP_STREAM test with the default send size and with a 64-byte send size.
<code>
netperf -H 192.168.0.2 -t TCP_STREAM -l 1 -cC
</code>
<code>
netperf -H 192.168.0.2 -t TCP_STREAM -l 1 -cC -- -m 64
</code>
Why does the bandwidth not decrease?
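One way to see the coalescing at work is to repeat the 64-byte test with Nagle's algorithm disabled, using the test-specific **-D** option (sets TCP_NODELAY):
<code>
netperf -H 192.168.0.2 -t TCP_STREAM -l 1 -cC -- -m 64 -D
</code>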

=== Packets in flight ===
What is the maximum window size for TCP on the hosts?
<code>
sysctl -n net.core.wmem_max
cat /proc/sys/net/core/wmem_max
</code>
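The receive side is bounded by the corresponding **rmem** limit, which can be read the same way:
<code>
sysctl -n net.core.rmem_max
</code>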
How can the TCP window be overridden using iperf?
<code>
man iperf
</code>
On the VM, use iperf to listen on TCP port 8080. Use a meager 1K receive window. Display the statistics once per second.
<code>
iperf --server --port 8080 --interval 1 --window 1K
</code>
On the host, open a TCP connection to the port opened on the VM. The flow should last 10 seconds.
<code>
iperf --client 192.168.0.2 --port 8080 --time 10
</code>
Compare the bandwidth to the previous TCP results.
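The ceiling here is the bandwidth-delay product: with window W and round-trip time RTT, TCP cannot keep more than W bytes in flight, so throughput is at most W/RTT. A back-of-the-envelope sketch with a hypothetical 0.1 ms RTT for the virtual link:
<code>
# 1 KB window / 0.1 ms RTT = ~82 Mbit/s upper bound (RTT value assumed)
echo "scale=1; 1024 * 8 / 0.0001 / 1000000" | bc
</code>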

= Traffic control =
On the VM, start **iperf_server.sh**. On the client, inspect the contents of **iperf_client.sh**, then run it.
<code>
./iperf_server.sh
</code>
<code>
cat iperf_client.sh
./iperf_client.sh
</code>
What are the packet loss rates? How do they impact VoIP and video calls?

Let us apply Quality of Service. Define the traffic classes on the client side.
<code>
tc qdisc del dev br0 root
tc qdisc add dev br0 root handle 1: htb
tc class add dev br0 parent 1: classid 1:1 htb rate 1mbit burst 128k
tc class add dev br0 parent 1: classid 1:2 htb rate 40mbit burst 128k
tc class add dev br0 parent 1: classid 1:3 htb rate 5mbit burst 128k
tc class add dev br0 parent 1: classid 1:4 htb rate 3mbit burst 128k
</code>
Then classify the traffic.
<code>
tc filter add dev br0 protocol ip parent 1: prio 1 u32 match ip dport 8000 0xffff flowid 1:1
tc filter add dev br0 protocol ip parent 1: prio 1 u32 match ip dport 6000 0xffff flowid 1:2
tc filter add dev br0 protocol ip parent 1: prio 1 u32 match ip dport 21 0xffff flowid 1:3
tc filter add dev br0 protocol ip parent 1: prio 1 u32 match ip dport 80 0xffff flowid 1:4
</code>
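To verify the setup and watch the per-class counters while traffic flows:
<code>
tc -s class show dev br0
tc filter show dev br0
</code>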

Run the iperf server and client scripts again. Is the packet loss reasonable?