sesiuni:virtualization-networking:session6 [2013/06/26 07:14] freescale
= Topology =
{{ :sesiuni:virtualization-networking:netperf_setup.png |}}

= Slides =
{{:sesiuni:virtualization-networking:session6_network_benchmarking.pdf|}}

= Setup =
netperf -H 192.168.0.2 -t TCP_MAERTS -cC
</code>
Find out more about **netperf** at [[http://www.netperf.org/netperf/training/Netperf.html]]
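By default **netperf** reports TCP throughput in units of 10^6 bits per second. As an illustrative sanity check, the conversion from bytes transferred over an elapsed time works out as below (the byte count and duration are made-up numbers, not output from this lab setup):

```shell
# Convert bytes transferred over an elapsed time into Mbit/s,
# matching netperf's default throughput unit (10^6 bits/second).
# The figures are hypothetical, for illustration only.
awk 'BEGIN { bytes=131072000; secs=10; printf "%.2f\n", bytes*8/secs/1e6 }'
# prints 104.86
```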

== Speedtest.net ==
Measure bandwidth and delay on links towards California and Spain.
ip link sh dev br0
</code>

Display the offloading capabilities of the NICs:
<code>
ethtool -k br0
</code>
<code>
ethtool -k eth0
</code>
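Offloads such as TSO and GRO can mask the CPU cost of segmentation during benchmarks, so it helps to see at a glance which ones are active. The snippet below filters an `ethtool -k`-style dump for enabled features; the sample dump is illustrative — on the lab machine, pipe the real `ethtool -k eth0` output into `grep` instead:

```shell
# List only the offloads reported as enabled. The heredoc stands in
# for real `ethtool -k eth0` output (illustrative values).
cat <<'EOF' | grep ': on$'
tcp-segmentation-offload: on
udp-fragmentation-offload: off
generic-segmentation-offload: on
generic-receive-offload: on
rx-checksumming: on
tx-checksumming: off
EOF
```

On a real NIC, individual features can be toggled with `ethtool -K` (capital K), e.g. `ethtool -K eth0 tso off`, to measure their impact on throughput and CPU load.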

== UDP ==
Measure the UDP bandwidth using the default send size (MTU). Display the UDP statistics before and after running the test for the host interface.
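One way to capture the "before and after" statistics is from the kernel's protocol counters — a sketch, assuming a Linux host (`netstat -su` gives a friendlier view where net-tools is installed):

```shell
# Snapshot the kernel's UDP counters (header line + value line) before the test...
grep '^Udp:' /proc/net/snmp > /tmp/udp_before
# ...run the netperf UDP test here, then snapshot again and compare:
grep '^Udp:' /proc/net/snmp > /tmp/udp_after
diff /tmp/udp_before /tmp/udp_after || true
```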

= Traffic control =
On the VM, start **iperf_server.sh**. On the client, inspect the contents of **iperf_client.sh**, then run it.
<code>
./iperf_server.sh
</code>
<code>
cat iperf_client.sh
./iperf_client.sh
</code>
What are the packet loss rates? How do they impact VoIP and video calls?
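iperf's UDP report prints lost and total datagram counts, and the loss rate follows directly from them. The counts below are hypothetical, not taken from the lab run:

```shell
# Loss rate in percent from iperf-style "lost/total" datagram counts.
# 127 and 8505 are made-up example values.
awk 'BEGIN { lost=127; total=8505; printf "%.2f%%\n", lost*100/total }'
# prints 1.49%
```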

Let us apply Quality of Service. Define the traffic classes on the client side.
<code>
tc qdisc del dev br0 root
tc qdisc add dev br0 root handle 1: htb
tc class add dev br0 parent 1: classid 1:1 htb rate 1mbit burst 128k
tc class add dev br0 parent 1: classid 1:2 htb rate 40mbit burst 128k
tc class add dev br0 parent 1: classid 1:3 htb rate 5mbit burst 128k
tc class add dev br0 parent 1: classid 1:4 htb rate 3mbit burst 128k
</code>
Then classify the traffic.
<code>
tc filter add dev br0 protocol ip parent 1: prio 1 u32 match ip dport 8000 0xffff flowid 1:1
tc filter add dev br0 protocol ip parent 1: prio 1 u32 match ip dport 6000 0xffff flowid 1:2
tc filter add dev br0 protocol ip parent 1: prio 1 u32 match ip dport 21 0xffff flowid 1:3
tc filter add dev br0 protocol ip parent 1: prio 1 u32 match ip dport 80 0xffff flowid 1:4
</code>
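The four classes reserve 1, 40, 5 and 3 Mbit/s respectively; it is worth checking that the total stays within the capacity of the underlying link (plain arithmetic, shown only as a capacity check):

```shell
# Sum of the HTB class rates defined above, in Mbit/s.
awk 'BEGIN { printf "%d Mbit/s reserved in total\n", 1 + 40 + 5 + 3 }'
# prints: 49 Mbit/s reserved in total
```

On the lab machine, `tc -s class show dev br0` displays per-class byte and packet counters, which should confirm that each flow lands in its intended class while the tests run.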
Run the iperf server and client scripts again. Is the packet loss reasonable?