Commit 8d9276ed56 by Chris Kuiper: Support binding to multicast and broadcast addresses
This fixes the inability to bind to a multicast or broadcast address, and to
send and receive data on it. The solution is to treat these addresses like the
ANY address and register their transport endpoint IDs with the global stack's
demuxer rather than the NIC's. That way, no endpoint is required to own the
multicast or broadcast address. The stack's demuxer is in fact the only correct
one to use, because neither broadcast- nor multicast-bound sockets care which
NIC a packet was received on (for multicast, a join is still needed to receive
packets on a NIC).

I also took the liberty of refactoring udp_test.go to consolidate a lot of
duplicate code and make it easier to write repetitive tests that exercise the
same feature across a variety of packet and socket types. For this purpose I
created a "flowType" that represents two things: 1) the type of packet being
sent or received and 2) the type of socket used for the test. E.g., a
"multicastV4in6" flow represents a V4-mapped multicast packet run through a
V6-dual socket.

This allows writing significantly simpler tests. A nice example is testTTL().
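As a rough sketch of that pattern (the identifiers here are hypothetical, not the actual udp_test.go code), a flow value bundles the packet family with the socket type, so one table-driven loop can exercise every combination:

```go
package main

import "fmt"

// flowType pairs the kind of packet being exercised with the kind of
// socket it is run through, as in the udp_test.go refactor described
// above. The names are illustrative.
type flowType int

const (
	unicastV4 flowType = iota
	unicastV6
	multicastV4
	multicastV4in6 // V4-mapped multicast packet through a V6-dual socket
	broadcastV4
)

func (f flowType) String() string {
	return [...]string{"unicastV4", "unicastV6", "multicastV4", "multicastV4in6", "broadcastV4"}[f]
}

// isMulticast reports whether the flow's packets are multicast; a real
// test helper would derive bind addresses, socket family, etc. from the
// flow in the same way.
func (f flowType) isMulticast() bool {
	return f == multicastV4 || f == multicastV4in6
}

func main() {
	// One loop replaces a hand-written copy of the test body per
	// packet/socket combination.
	for _, f := range []flowType{unicastV4, multicastV4in6, broadcastV4} {
		fmt.Printf("testTTL(%s): multicast=%v\n", f, f.isMulticast())
	}
}
```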

PiperOrigin-RevId: 264766909
2019-08-21 22:54:25 -07:00

# gVisor


## What is gVisor?

gVisor is a user-space kernel, written in Go, that implements a substantial portion of the Linux system surface. It includes an Open Container Initiative (OCI) runtime called runsc that provides an isolation boundary between the application and the host kernel. The runsc runtime integrates with Docker and Kubernetes, making it simple to run sandboxed containers.

## Why does gVisor exist?

Containers are not a sandbox. While containers have revolutionized how we develop, package, and deploy applications, running untrusted or potentially malicious code without additional isolation is not a good idea. The efficiency and performance gains from using a single, shared kernel also mean that container escape is possible with a single vulnerability.

gVisor is a user-space kernel for containers. It limits the host kernel surface accessible to the application while still giving the application access to all the features it expects. Unlike most kernels, gVisor does not assume or require a fixed set of physical resources; instead, it leverages existing host kernel functionality and runs as a normal user-space process. In other words, gVisor implements Linux by way of Linux.

gVisor should not be confused with technologies and tools to harden containers against external threats, provide additional integrity checks, or limit the scope of access for a service. One should always be careful about what data is made available to a container.

## Documentation

User documentation and technical architecture, including quick start guides, can be found at gvisor.dev.

## Installing from source

gVisor currently requires x86_64 Linux to build, though support for other architectures may become available in the future.

### Requirements

Make sure the following dependencies are installed:

### Building

Build and install the runsc binary:

```shell
bazel build runsc
sudo cp ./bazel-bin/runsc/linux_amd64_pure_stripped/runsc /usr/local/bin
```

If you don't want to install Bazel on your system, you can build `runsc` in a Docker container:

```shell
make runsc
sudo cp ./bazel-bin/runsc/linux_amd64_pure_stripped/runsc /usr/local/bin
```

### Testing

The test suite can be run with Bazel:

```shell
bazel test //...
```

or in a Docker container:

```shell
make unit-tests
make tests
```

### Using remote execution

If you have a Remote Build Execution environment, you can use it to speed up build and test cycles.

You must authenticate with the project first:

```shell
gcloud auth application-default login --no-launch-browser
```

Then invoke Bazel with the following flags:

```
--config=remote
--project_id=$PROJECT
--remote_instance_name=projects/$PROJECT/instances/default_instance
```

You can also add those flags to your local `~/.bazelrc` to avoid specifying them on the command line each time.

### Using `go get`

This project uses Bazel to build and manage dependencies. For convenience, a synthetic `go` branch is maintained that is compatible with standard `go` tooling.

For example, to build runsc directly from this branch:

```shell
echo "module runsc" > go.mod
GO111MODULE=on go get gvisor.dev/gvisor/runsc@go
CGO_ENABLED=0 GO111MODULE=on go install gvisor.dev/gvisor/runsc
```

Note that this branch is supported on a best-effort basis, and direct development on it is not supported. Development should occur on the `master` branch, which is then reflected into the `go` branch.

## Community & Governance

The governance model is documented in our community repository.

The gvisor-users mailing list and gvisor-dev mailing list are good starting points for questions and discussion.

## Security

Sensitive security-related questions, comments and disclosures can be sent to the gvisor-security mailing list. The full security disclosure policy is defined in the community repository.

## Contributing

See CONTRIBUTING.md.