= Topology =
{{ :sesiuni:virtualization-networking:netperf_setup.png |}}

= Slides =
{{:sesiuni:virtualization-networking:session6_network_benchmarking_.pdf|}}

= Setup =
Download the archive and unpack it
<code>
wget http://172.16.5.218/session6.zip
unzip session6.zip
</code>
<code>
netperf -H 192.168.0.2 -t TCP_MAERTS -cC
</code>
Find out more about **netperf** at [[http://www.netperf.org/netperf/training/Netperf.html]]

== Speedtest.net ==
Measure bandwidth and delay on the links towards California and Spain.
<code>
ip link sh dev br0
</code>

Display the offloading capabilities of the NICs
<code>
ethtool -k br0
</code>
<code>
ethtool -k eth0
</code>
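If you later want to see how offloading influences the results, individual offloads can be toggled with ethtool's **-K** option. This is only a sketch: the interface and the tso/gso feature names are examples, and on a bridge the offloads may actually need to be changed on the underlying physical NIC.
<code>
# turn TCP segmentation offload and generic segmentation offload off, then back on
ethtool -K br0 tso off gso off
ethtool -K br0 tso on gso on
</code>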

== UDP ==
Measure the UDP bandwidth using the default send size (MTU). Display the UDP statistics for the host interface before and after running the test.
<code>
netperf -H 192.168.0.2 -t UDP_STREAM -l 1 -cC
</code>
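One way to display the UDP statistics is netstat's per-protocol summary; a sketch, to be run on the host before and after the test:
<code>
netstat -su
</code>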
Repeat the test sending 64-byte packets.
<code>
netperf -H 192.168.0.2 -t UDP_STREAM -l 1 -cC -- -m 64
</code>
The throughput is noticeably worse and the CPU load is higher. Why?
What are the throughput and the CPU load if jumbograms (frames larger than the MTU) are sent on the link?
<code>
netperf -H 192.168.0.2 -t UDP_STREAM -l 1 -cC -- -m 1600
</code>
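Messages of 1600 bytes exceed the 1500-byte MTU, so they are fragmented at the IP layer. As a sketch, the fragmentation counters can be inspected with netstat before and after the test:
<code>
netstat -s | grep -i frag
</code>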
  
Alter the MTU of the interfaces on both the host
<code>
ifconfig br0 mtu 256
</code>
and the virtual guest
<code>
ifconfig eth0 mtu 256
</code>
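You can verify that the new MTU is in effect by reusing the **ip link** command from above; a sketch, with one command per machine:
<code>
ip link sh dev br0    # on the host
ip link sh dev eth0   # in the guest
</code>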
Measure the UDP bandwidth and CPU load again, using the default send size.
<code>
netperf -H 192.168.0.2 -t UDP_STREAM -l 1 -cC
</code>
Revert the changes
<code>
ifconfig br0 mtu 1500
</code>
<code>
ifconfig eth0 mtu 1500
</code>
  
== TCP ==
=== Coalescing ===
Run a TCP_STREAM test with the default send size and then with a 64-byte send size.
<code>
netperf -H 192.168.0.2 -t TCP_STREAM -l 1 -cC
</code>
<code>
netperf -H 192.168.0.2 -t TCP_STREAM -l 1 -cC -- -m 64
</code>
Why does the bandwidth not decrease?
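One thing to consider is that the TCP stack coalesces the small 64-byte sends into larger segments before they reach the wire. As an optional sketch, repeat the 64-byte test with TCP_NODELAY requested through netperf's test-specific **-D** option and compare:
<code>
netperf -H 192.168.0.2 -t TCP_STREAM -l 1 -cC -- -m 64 -D
</code>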
  
=== Packets in flight ===
What is the maximum window size for TCP on the hosts?
<code>
sysctl -n net.core.wmem_max
cat /proc/sys/net/core/wmem_max
</code>
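The corresponding receive-side limit can be read the same way; a sketch, where net.core.rmem_max caps the receive socket buffer:
<code>
sysctl -n net.core.rmem_max
</code>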
How can the TCP window be overridden using iperf?
<code>
man iperf
</code>
On the VM, use iperf to listen on TCP port 8080. Use a meager 1K receive window. Display the statistics once per second.
<code>
iperf --server --port 8080 --interval 1 --window 1K
</code>
On the host, run a TCP connection to the port opened on the VM. The flow should last 10 seconds.
<code>
iperf --client 192.168.0.2 --port 8080 --time 10
</code>
Compare the bandwidth to the previous TCP results.
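Optionally, the sender-side socket buffer can be constrained as well, using the same **--window** option on the client; a sketch, where 256K is an arbitrary example value:
<code>
iperf --client 192.168.0.2 --port 8080 --time 10 --window 256K
</code>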
 + 
= Traffic control =
On the VM, start **iperf_server.sh**. On the client, inspect the contents of **iperf_client.sh** and then run it.
<code>
./iperf_server.sh
</code>
<code>
cat iperf_client.sh
./iperf_client.sh
</code>
What are the packet loss rates? How do they impact the VoIP and video calls?
 + 
Let us apply Quality of Service. Define the traffic classes on the client side.
<code>
tc qdisc del dev br0 root
tc qdisc add dev br0 root handle 1: htb
tc class add dev br0 parent 1: classid 1:1 htb rate 1mbit burst 128k
tc class add dev br0 parent 1: classid 1:2 htb rate 40mbit burst 128k
tc class add dev br0 parent 1: classid 1:3 htb rate 5mbit burst 128k
tc class add dev br0 parent 1: classid 1:4 htb rate 3mbit burst 128k
</code>
Then classify the traffic.
<code>
tc filter add dev br0 protocol ip parent 1: prio 1 u32 match ip dport 8000 0xffff flowid 1:1
tc filter add dev br0 protocol ip parent 1: prio 1 u32 match ip dport 6000 0xffff flowid 1:2
tc filter add dev br0 protocol ip parent 1: prio 1 u32 match ip dport 21 0xffff flowid 1:3
tc filter add dev br0 protocol ip parent 1: prio 1 u32 match ip dport 80 0xffff flowid 1:4
</code>
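You can check the queueing discipline, classes and filters that are now installed; a sketch using tc's show subcommands, where -s adds per-class statistics:
<code>
tc -s qdisc show dev br0
tc -s class show dev br0
tc filter show dev br0
</code>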
  
Run the iperf server and client scripts again. Is the packet loss reasonable?
  