Download the archive and unpack it
wget http://172.16.5.218/session6.zip
unzip session6.zip
The archive contains the following files
Install the necessary packages
sudo ./prepare.sh prepare
Boot up the virtual machine
sudo ./prepare.sh start
Open SSH consoles to the virtual machine
ssh root@192.168.0.2
Run a simple netperf test
netperf -H 192.168.0.2
netstat -tlpn
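If netstat shows no netserver listening on the VM (it listens on TCP port 12865 by default), it can usually be started by hand, assuming the netperf package on the VM also provides the netserver binary:
netserver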
Which options are available to the netperf user?
netperf --help
The user can choose from a series of tests. Which are they?
man netperf
Run a TCP stream test. Now, the virtual machine should be the sender and the host the receiver. Compare the CPU load of the sender and the receiver.
netperf -H 192.168.0.2 -t TCP_MAERTS -cC
Find out more about netperf at http://www.netperf.org/netperf/training/Netperf.html
== Speedtest.net ==
Measure bandwidth and delay on links towards California and Spain
http://speedtest.net
Ping the following hosts
192.168.0.2 cs.curs.pub.ro gnu.org
Compare the latencies and verify the number of hops along each way using traceroute.
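A possible way to do this, assuming ping and traceroute are available on the host:
ping -c 4 192.168.0.2
ping -c 4 cs.curs.pub.ro
ping -c 4 gnu.org
traceroute gnu.org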
Run a 2-second UDP request-response test and display the CPU usage for both the host and the virtual machine
netperf -H 192.168.0.2 -t UDP_RR -l 2 -cC
How many request-responses have been issued?
Browse the Web for a few random pages. Do you find the response time acceptable? What does a browser do when you type a web page's address? Which DNS server are you using? Try resolving a random hostname. What is the query time?
dig ampathos.ro
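The currently configured resolver can also be checked directly; the SERVER line in the dig output should match it:
cat /etc/resolv.conf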
Edit /etc/resolv.conf to use Google's DNS instead of your ISP's DNS. Add the following line at the top of the file.
nameserver 8.8.8.8
Try resolving ampathos.ro again. Is the query time better? Surf the web. Is the lag significant? Revert your nameserver changes.
What is the MTU of the br0 link?
ip link sh dev br0
Display the offloading capabilities of the NICs
ethtool -k br0
ethtool -k eth0
Measure the UDP bandwidth using the default send size (MTU). Display the UDP statistics before and after running the test for the host interface.
netperf -H 192.168.0.2 -t UDP_STREAM -l 1 -cC
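The protocol-level UDP statistics can be dumped, for instance, with netstat; run it on the host before and after the netperf test and compare the counters:
netstat -su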
Repeat the test sending 64-byte packets.
netperf -H 192.168.0.2 -t UDP_STREAM -l 1 -cC -- -m 64
The throughput is obviously worse and the CPU load is higher. Why? What happens to the throughput and the CPU load if jumbograms (frames larger than the MTU) are sent on the link?
netperf -H 192.168.0.2 -t UDP_STREAM -l 1 -cC -- -m 1600
Alter the MTU of the interfaces on both the host
ifconfig br0 mtu 256
and the virtual guest
ifconfig eth0 mtu 256
Measure the UDP bandwidth and CPU load again, with the default send size, over the reduced MTU.
netperf -H 192.168.0.2 -t UDP_STREAM -l 1 -cC
Revert the changes
ifconfig br0 mtu 1500
ifconfig eth0 mtu 1500
Run a TCP_STREAM test with the default send size and with a 64-byte send size.
netperf -H 192.168.0.2 -t TCP_STREAM -l 1 -cC
netperf -H 192.168.0.2 -t TCP_STREAM -l 1 -cC -- -m 64
Why does the bandwidth not decrease?
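One way to see what is happening on the wire is to capture a few segments during the 64-byte test and look at their sizes; a sketch, assuming tcpdump is installed on the host:
tcpdump -ni br0 -c 20 'tcp and host 192.168.0.2 and not port 22'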
What is the maximum window size for TCP on the hosts?
sysctl -n net.core.wmem_max
cat /proc/sys/net/core/wmem_max
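The per-connection TCP buffer auto-tuning limits (min/default/max, in bytes) may also be worth a look:
sysctl net.ipv4.tcp_wmem net.ipv4.tcp_rmem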
How can the TCP window size be overridden using iperf?
man iperf
On the VM, use iperf to listen on TCP port 8080. Use a meager 1K receive window. Display the statistics once per second.
iperf --server --port 8080 --interval 1 --window 1K
On the host, open a TCP connection to the port opened on the VM. The flow should last 10 seconds.
iperf --client 192.168.0.2 --port 8080 --time 10
Compare the bandwidth to the previous TCP results.
On the VM, start iperf_server.sh. On the client, inspect the contents of iperf_client.sh and then run it.
./iperf_server.sh
cat iperf_client.sh
./iperf_client.sh
What are the packet loss rates? How do they impact VoIP and video calls?
Let us apply Quality of Service. Define the traffic classes on the client side.
tc qdisc del dev br0 root
tc qdisc add dev br0 root handle 1: htb
tc class add dev br0 parent 1: classid 1:1 htb rate 1mbit burst 128k
tc class add dev br0 parent 1: classid 1:2 htb rate 40mbit burst 128k
tc class add dev br0 parent 1: classid 1:3 htb rate 5mbit burst 128k
tc class add dev br0 parent 1: classid 1:4 htb rate 3mbit burst 128k
Then classify the traffic.
tc filter add dev br0 protocol ip parent 1: prio 1 u32 match ip dport 8000 0xffff flowid 1:1 tc filter add dev br0 protocol ip parent 1: prio 1 u32 match ip dport 6000 0xffff flowid 1:2 tc filter add dev br0 protocol ip parent 1: prio 1 u32 match ip dport 21 0xffff flowid 1:3 tc filter add dev br0 protocol ip parent 1: prio 1 u32 match ip dport 80 0xffff flowid 1:4
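The resulting hierarchy and the per-class packet/byte counters can be verified, for example, with:
tc -s class show dev br0
tc filter show dev br0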
Run the iperf server and client scripts again. Is the packet loss reasonable?
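When you are done, the HTB hierarchy can be removed so the interface returns to its default qdisc, mirroring the delete command used above:
tc qdisc del dev br0 root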