= Topology =
{{ :sesiuni:virtualization-networking:netperf_setup.png |}}

= Slides =
{{:sesiuni:virtualization-networking:session6_network_benchmarking_.pdf|}}
  
= Setup =
  
Download the archive and unpack it
<code>
wget http://172.16.5.218/session6.zip
unzip session6.zip
</code>
  
The archive contains the following files
  * ubuntu.img - the virtual machine's disk
  * prepare.sh - the script for setting up the virtual machine
  * iperf_client.sh - launches multiple streams using **iperf**
  * iperf_server.sh - listens on multiple sockets using **iperf**
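As a quick sanity check (optional, not part of the original steps), you can confirm that the unpacked files are all present:
<code>
ls -lh ubuntu.img prepare.sh iperf_client.sh iperf_server.sh
</code>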
  
Install the necessary packages
<code>
sudo ./prepare.sh prepare
</code>
  
Boot up the virtual machine
<code>
sudo ./prepare.sh start
</code>

Open SSH consoles to the virtual machine
<code>
ssh root@192.168.0.2
</code>
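If the SSH session does not come up right away, a simple reachability check against the VM's address from the topology above can help:
<code>
ping -c 3 192.168.0.2
</code>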

= Bandwidth =
== netperf ==
Run a simple netperf test
<code>
netperf -H 192.168.0.2
</code>
  * What is the bandwidth of the virtual link?
  * How does **netperf** run? On the virtual machine, check the TCP listening ports
<code>
netstat -tlpn
</code>
Which options are available to **netperf** users?
<code>
netperf --help
</code>
The user can choose from a series of tests. What are they?
<code>
man netperf
</code>
Run a TCP stream test. Now the virtual machine should be the sender and the host the receiver. Compare the CPU load of the sender and the receiver.
<code>
netperf -H 192.168.0.2 -t TCP_MAERTS -cC
</code>
Find out more about **netperf** at [[http://www.netperf.org/netperf/training/Netperf.html]]
== Speedtest.net ==
Measure bandwidth and delay on links towards California and Spain
<code>
http://speedtest.net
</code>
  
= Latency =
  
== ping ==
Ping the following hosts
<code>
192.168.0.2
cs.curs.pub.ro
gnu.org
</code>
Compare the latencies and verify the number of hops along each path using **traceroute**.
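A minimal sketch for running both checks on every host in the list (assuming **ping** and **traceroute** are installed and the names resolve from your network):
<code>
for h in 192.168.0.2 cs.curs.pub.ro gnu.org; do
    ping -c 5 $h
    traceroute $h
done
</code>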
  
== netperf ==
Run a 2-second UDP request-response test and display the CPU usage for both the host and the virtual machine
<code>
netperf -H 192.168.0.2 -t UDP_RR -l 2 -cC
</code>
How many request-responses have been issued?
  
== DNS latency ==
Browse the Web for a few random pages. Do you find the response time acceptable?
What does a browser do when you type a web page's address?
Which DNS server are you using? Try resolving a random hostname. What is the query time?
<code>
dig ampathos.ro
</code>
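To see which resolver actually answered, check the configured name servers (dig also prints a SERVER line in its output):
<code>
cat /etc/resolv.conf
</code>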
Edit **/etc/resolv.conf** to use Google's DNS instead of your ISP's DNS. Add the following line at the top of the file.
<code>
nameserver 8.8.8.8
</code>
Try resolving //ampathos.ro// again. Is the query time better? Surf the web. Is the lag significant?
Revert your nameserver changes.
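One possible way to revert, assuming 8.8.8.8 appears only on the line you added earlier:
<code>
sed -i '/^nameserver 8.8.8.8/d' /etc/resolv.conf
</code>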
  
= UDP and TCP =
  
What is the MTU of the **br0** link?
<code>
ip link sh dev br0
</code>

Display the offloading capabilities of the NICs
<code>
ethtool -k br0
</code>
<code>
ethtool -k eth0
</code>

== UDP ==
Measure the UDP bandwidth using the default send size (the MTU). On the host, display the UDP statistics before and after running the test; one way to do this is shown after the command below.
<code>
netperf -H 192.168.0.2 -t UDP_STREAM -l 1 -cC
</code>
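One way to display the UDP statistics mentioned above is the per-protocol counters on the host (the exact counter names vary slightly between kernel versions):
<code>
netstat -su
</code>
Run it once before and once after the netperf test and compare the datagram counts.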
Repeat the test sending 64-byte packets.
<code>
netperf -H 192.168.0.2 -t UDP_STREAM -l 1 -cC -- -m 64
</code>
The throughput is obviously worse and the CPU load is higher. Why?
What are the throughput and the CPU load if jumbograms (frames larger than the MTU) are sent on the link?
<code>
netperf -H 192.168.0.2 -t UDP_STREAM -l 1 -cC -- -m 1600
</code>

Alter the MTU of the interfaces on both the host
<code>
ifconfig br0 mtu 256
</code>
and the virtual guest
<code>
ifconfig eth0 mtu 256
</code>
Measure the UDP bandwidth and CPU load again, using the default send size.
<code>
netperf -H 192.168.0.2 -t UDP_STREAM -l 1 -cC
</code>
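As a hint, with the reduced MTU the default-sized datagrams no longer fit in a single frame. The IP fragmentation counters can be inspected before and after the test (a suggested check; the output wording differs between distributions):
<code>
netstat -s | grep -i fragment
</code>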
Revert the changes
<code>
ifconfig br0 mtu 1500
</code>
<code>
ifconfig eth0 mtu 1500
</code>

== TCP ==
=== Coalescing ===
Run a TCP_STREAM test with the default send size and with a 64-byte send size.
<code>
netperf -H 192.168.0.2 -t TCP_STREAM -l 1 -cC
</code>
<code>
netperf -H 192.168.0.2 -t TCP_STREAM -l 1 -cC -- -m 64
</code>
Why does the bandwidth not decrease?
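As a hint, check whether segmentation offloads are enabled on the interface; they were already listed by the **ethtool -k** commands above, and the grep below is only a convenience filter:
<code>
ethtool -k br0 | grep -i segmentation
</code>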
  
=== Packets in flight ===
What is the maximum window size for TCP on the hosts?
<code>
sysctl -n net.core.wmem_max
cat /proc/sys/net/core/wmem_max
</code>
How can the TCP window be overridden using iperf?
<code>
man iperf
</code>
On the VM, use iperf to listen on TCP port 8080. Use a meager 1K receive window. Display the statistics once per second.
<code>
iperf --server --port 8080 --interval 1 --window 1K
</code>
On the host, run a TCP connection to the port opened on the VM. The flow should last 10 seconds.
<code>
iperf --client 192.168.0.2 --port 8080 --time 10
</code>
Compare the bandwidth to the previous TCP results.
  
= Traffic control =
On the VM, start **iperf_server.sh**. On the client, inspect the contents of **iperf_client.sh** and then run it.
<code>
./iperf_server.sh
</code>
<code>
cat iperf_client.sh
./iperf_client.sh
</code>
What are the packet loss rates? How do they impact VoIP and video calls?
  
Let us apply Quality of Service. Define the traffic classes on the client side.
<code>
tc qdisc del dev br0 root
tc qdisc add dev br0 root handle 1: htb
tc class add dev br0 parent 1: classid 1:1 htb rate 1mbit burst 128k
tc class add dev br0 parent 1: classid 1:2 htb rate 40mbit burst 128k
tc class add dev br0 parent 1: classid 1:3 htb rate 5mbit burst 128k
tc class add dev br0 parent 1: classid 1:4 htb rate 3mbit burst 128k
</code>
Then classify the traffic.
<code>
tc filter add dev br0 protocol ip parent 1: prio 1 u32 match ip dport 8000 0xffff flowid 1:1
tc filter add dev br0 protocol ip parent 1: prio 1 u32 match ip dport 6000 0xffff flowid 1:2
tc filter add dev br0 protocol ip parent 1: prio 1 u32 match ip dport 21 0xffff flowid 1:3
tc filter add dev br0 protocol ip parent 1: prio 1 u32 match ip dport 80 0xffff flowid 1:4
</code>
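Before re-running the tests, it is worth confirming that the classes and filters were installed as intended (a suggested check, not part of the original steps):
<code>
tc class show dev br0
tc filter show dev br0
</code>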
  
Run the iperf server and client scripts again. Is the packet loss reasonable?
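To see how much traffic each class carried and whether the rate limits caused drops, the per-class statistics can be displayed (again only a suggested check):
<code>
tc -s class show dev br0
</code>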
  