# TCP Benchmarks

This directory contains a standardized TCP benchmark. This helps to evaluate the performance of netstack and native networking stacks under various conditions.

## `tcp_benchmark`

This benchmark allows TCP throughput testing under various conditions. The setup consists of an iperf client, a client proxy, a server proxy and an iperf server. The client proxy and server proxy abstract the network mechanism used to communicate between the iperf client and server.

The setup looks like the following:

```
 +--------------+  (native)            +--------------+
 | iperf client |[lo @ 10.0.0.1]------>| client proxy |
 +--------------+                      +--------------+
                                    [client.0 @ 10.0.0.2]
                            (netstack)  |            |  (native)
                                        +------+-----+
                                               |
                                             [br0]
                                               |
          Network emulation applied ---> [wan.0:wan.1]
                                               |
                                             [br1]
                                               |
                                        +------+-----+
                            (netstack)  |            |  (native)
                                     [server.0 @ 10.0.0.3]
 +--------------+                      +--------------+
 | iperf server |<------[lo @ 10.0.0.4]| server proxy |
 +--------------+            (native)  +--------------+
```

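The network emulation applied on the `wan.0:wan.1` link is configured by the benchmark script itself. As a rough sketch of the kind of configuration involved (the interface name is taken from the diagram above; the `netem` parameters are arbitrary examples, not the values the script uses):

```bash
# Illustration only: tcp_benchmark.sh applies its own emulation. This shows the
# general tc/netem form for imposing latency and loss on a link; the interface
# name comes from the diagram above and the parameter values are arbitrary.
tc qdisc add dev wan.0 root netem delay 10ms loss 0.2%

# Remove the emulation again when done.
tc qdisc del dev wan.0 root
```
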
Different configurations can be run using different arguments. For example:

*   Native test under normal internet conditions: `tcp_benchmark`
*   Native test under ideal conditions: `tcp_benchmark --ideal`
*   Netstack client under ideal conditions: `tcp_benchmark --client --ideal`
*   Netstack client with 5% packet loss: `tcp_benchmark --client --ideal --loss 5`

Use `tcp_benchmark --help` for full arguments.
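
A single run prints throughput followed by client and server CPU usage as three whitespace-separated fields (the same layout the CSV loops below rely on), so one measurement can be captured directly. A minimal sketch:

```bash
# Capture one measurement; the output fields mirror the `read` statements used
# in the CSV-generation loops below.
read throughput client_cpu server_cpu <<< \
  $(./tcp_benchmark --duration 30 --client --ideal)
echo "throughput=${throughput} client_cpu=${client_cpu} server_cpu=${server_cpu}"
```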

This tool may be used to easily generate data for graphing. For example, to generate a CSV for various latencies, you might do:

```bash
rm -f /tmp/netstack_latency.csv /tmp/native_latency.csv
latencies=$(seq 0 5 50;
            seq 60 10 100;
            seq 125 25 250;
            seq 300 50 500)
for latency in $latencies; do
  read throughput client_cpu server_cpu <<< \
    $(./tcp_benchmark --duration 30 --client --ideal --latency $latency)
  echo $latency,$throughput,$client_cpu >> /tmp/netstack_latency.csv
done
for latency in $latencies; do
  read throughput client_cpu server_cpu <<< \
    $(./tcp_benchmark --duration 30 --ideal --latency $latency)
  echo $latency,$throughput,$client_cpu >> /tmp/native_latency.csv
done
```
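
The resulting CSV files can be fed to any plotting tool. One possible sketch, assuming `gnuplot` is installed (the column order matches the `echo` lines above; the loss CSVs generated in the next example can be plotted the same way):

```bash
# Sketch: compare netstack and native throughput across latencies with gnuplot.
# Columns are latency,throughput,client_cpu as written by the loops above.
gnuplot -persist <<'EOF'
set datafile separator ","
set xlabel "latency"
set ylabel "throughput"
plot "/tmp/netstack_latency.csv" using 1:2 with lines title "netstack", \
     "/tmp/native_latency.csv"   using 1:2 with lines title "native"
EOF
```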

Similarly, to generate a CSV for various levels of packet loss, the following would be appropriate:

```bash
rm -f /tmp/netstack_loss.csv /tmp/native_loss.csv
losses=$(seq 0 0.1 1.0;
         seq 1.2 0.2 2.0;
         seq 2.5 0.5 5.0;
         seq 6.0 1.0 10.0)
for loss in $losses; do
  read throughput client_cpu server_cpu <<< \
    $(./tcp_benchmark --duration 30 --client --ideal --latency 10 --loss $loss)
  echo $loss,$throughput,$client_cpu >> /tmp/netstack_loss.csv
done
for loss in $losses; do
  read throughput client_cpu server_cpu <<< \
    $(./tcp_benchmark --duration 30 --ideal --latency 10 --loss $loss)
  echo $loss,$throughput,$client_cpu >> /tmp/native_loss.csv
done
```