README.md

Benchmark tools

This package and its subpackages are for running macro benchmarks on runsc. They are meant to replace the previous //benchmarks benchmark-tools written in Python.

Benchmarks are meant to look like regular golang benchmarks using the testing.B library.

Setup

To run benchmarks you will need:

  • Docker installed (17.09.0 or greater).

The easiest way to set up runsc for running benchmarks is to use the Makefile. From the root directory:

  • Download images: make load-all-images
  • Install runsc suitable for benchmarking, which should probably not have strace or debug logs enabled. For example: make configure RUNTIME=myrunsc ARGS=--platform=kvm.
  • Restart docker: sudo service docker restart

You should now have a runtime with the following options configured in /etc/docker/daemon.json:

"myrunsc": {
            "path": "/tmp/myrunsc/runsc",
            "runtimeArgs": [
                "--debug-log",
                "/tmp/bench/logs/runsc.log.%TEST%.%TIMESTAMP%.%COMMAND%",
                "--platform=kvm"
            ]
        },

This runtime has been configured with debugging and strace logging off, and uses the kvm platform for demonstration.

Running benchmarks

Given the runtime myrunsc configured above, run benchmarks with the following:

make sudo TARGETS=//path/to:target ARGS="--runtime=myrunsc -test.v \
  -test.bench=." OPTIONS="-c opt"

For example, to run only the Iperf tests:

make sudo TARGETS=//test/benchmarks/network:network_test \
  ARGS="--runtime=myrunsc -test.v -test.bench=Iperf" OPTIONS="-c opt"

Benchmarks are run as root because some benchmarks require root privileges to do things like drop caches.

Writing benchmarks

Benchmarks consist of Docker images defined by Dockerfiles and Go testing.B benchmarks.

Dockerfiles:

  • Are stored at //images.
  • New Dockerfiles go in an appropriately named directory at //images/benchmarks/my-cool-dockerfile.
  • Dockerfiles for benchmarks should:
    • Use explicitly versioned packages.
    • Not use ENV and CMD statements; it is easy to add these via the API.
  • Note: A common pattern for getting access to a tmpfs mount is to copy files there after container start. See: //test/benchmarks/build/bazel_test.go. You can also make your own with RunOpts.Mounts, as in the sketch below.
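
A rough sketch of the RunOpts.Mounts route (assuming Mounts accepts entries from the Docker mount API, since dockerutil is based on the official Docker client; the image, paths, and command are illustrative):

// mount here refers to "github.com/docker/docker/api/types/mount" (an assumption).
out, err := container.Run(ctx, dockerutil.RunOpts{
  Image: "benchmarks/my-cool-image",
  Mounts: []mount.Mount{
    {
      Type:   mount.TypeTmpfs, // tmpfs available inside the container
      Target: "/scratch",
    },
  },
}, "sh", "-c", "cp -r /files /scratch && ls /scratch")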

testing.B packages

In general, benchmarks should look like this:


package mycool_test

import (
  "context"
  "os"
  "testing"

  "gvisor.dev/gvisor/pkg/test/dockerutil"
  "gvisor.dev/gvisor/test/benchmarks/harness"
)

var h harness.Harness

func BenchmarkMyCoolOne(b *testing.B) {
  machine, err := h.GetMachine()
  if err != nil {
    b.Fatalf("failed to get machine: %v", err)
  }
  defer machine.CleanUp()

  ctx := context.Background()
  container := machine.GetContainer(ctx, b)
  defer container.CleanUp(ctx)

  b.ResetTimer()

  // Respect b.N.
  for i := 0; i < b.N; i++ {
    out, err := container.Run(ctx, dockerutil.RunOpts{
      Image: "benchmarks/my-cool-image",
      Env:   []string{"MY_VAR=awesome"},
      // Other options are available; see dockerutil.
    }, "sh", "-c", "echo $MY_VAR")
    if err != nil {
      b.Fatalf("failed to run container: %v", err)
    }
    b.StopTimer()

    // Do parsing and reporting outside of the timer.
    number := parseMyMetric(out)
    b.ReportMetric(number, "my-cool-custom-metric")

    b.StartTimer()
  }
}

func TestMain(m *testing.M) {
  h.Init()
  os.Exit(m.Run())
}

Some notes on the above:

  • The harness is initialized in the TestMain method and made global to the test module. The harness will handle any pre-setup that needs to happen with flags, remote virtual machines (eventually), and other services.
  • Respect b.N in that users of the benchmark may want to "run for an hour" or something of the sort.
  • Use the b.ReportMetric() method to report custom metrics.
  • Manage the timer (b.ResetTimer, b.StopTimer, b.StartTimer) so that reported time covers only the work being measured. There isn't a way to turn off the default metrics in testing.B (B/op, allocs/op, ns/op).
  • Take a look at dockerutil at //pkg/test/dockerutil to see all methods available from containers. The API is based on the "official" docker API for golang.
  • harness.GetMachine() marks how many machines this test needs. If you have a client and a server and want to mark them as multiple machines, call harness.GetMachine() twice, as in the sketch below.
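
For example, a client/server benchmark that needs two machines might start like this (a minimal sketch using only the harness and dockerutil calls from the example above; the actual server startup and client load are omitted):

func BenchmarkMyClientServer(b *testing.B) {
  // One machine for the server...
  serverMachine, err := h.GetMachine()
  if err != nil {
    b.Fatalf("failed to get server machine: %v", err)
  }
  defer serverMachine.CleanUp()

  // ...and a second machine for the client.
  clientMachine, err := h.GetMachine()
  if err != nil {
    b.Fatalf("failed to get client machine: %v", err)
  }
  defer clientMachine.CleanUp()

  ctx := context.Background()
  server := serverMachine.GetContainer(ctx, b)
  defer server.CleanUp(ctx)
  client := clientMachine.GetContainer(ctx, b)
  defer client.CleanUp(ctx)

  // Start the server container, then drive load from the client,
  // respecting b.N as in the single-machine example above.
}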

Profiling

For profiling, the runtime is required to have the --profile flag enabled. This flag loosens seccomp filters so that the runtime can write profile data to disk. This configuration is not recommended for production.

  • Install runsc with the --profile flag: make configure RUNTIME=myrunsc ARGS="--profile --platform=kvm --vfs2". The kvm and vfs2 flags are not required, but are included for demonstration (an example daemon.json entry is shown below).
  • Restart docker: sudo service docker restart
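
Assuming the same daemon.json layout shown earlier, the resulting runtime entry should look roughly like this (make configure may also add other default arguments, such as the debug log path):

"myrunsc": {
    "path": "/tmp/myrunsc/runsc",
    "runtimeArgs": [
        "--profile",
        "--platform=kvm",
        "--vfs2"
    ]
},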

To run the fs_test suite and generate CPU profiles, run:

make sudo TARGETS=//test/benchmarks/fs:fs_test \
  ARGS="--runtime=myrunsc -test.v -test.bench=. --pprof-cpu" OPTIONS="-c opt"

Profiles will be written to: /tmp/profile/myrunsc/CONTAINERNAME/cpu.pprof
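
The resulting .pprof files can then be inspected with the standard Go pprof tool, for example:

go tool pprof /tmp/profile/myrunsc/CONTAINERNAME/cpu.pprof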