Containers and VMs are typically networked through a virtual software switch. This is similar to your home Wi-Fi router, except this router exists inside your host.
The Linux kernel and other operating systems let you create these virtual switches on demand. They are known as bridges (software switches). A bridge lets you create a private NAT network that containers and VMs can connect to and get their networking and IP addresses from, much as your home devices get their IPs from your Wi-Fi router.
You can create a bridge with the brctl command. You may need to install the bridge-utils package.
brctl addbr br0
You can also use the ip link command to create a bridge.
ip link add br0 type bridge
Though we prefer the bridge-utils package to manage bridges.
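Whichever tool you use, the bridge needs an IP address, since it acts as the gateway for the private network, and it has to be brought up before it can serve containers. A minimal sketch, assuming the 10.0.3.0/24 subnet used below (these commands require root):

```shell
# Give the bridge an address; it becomes the gateway for the NAT network
ip addr add 10.0.3.1/24 dev br0

# Bring the bridge up
ip link set br0 up
```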
A DHCP server like Dnsmasq runs on the bridge to provide DHCP services, handing out IPs from a preselected subnet range, e.g. 10.0.3.0/24, to any connecting devices. This is required so containers and VMs can get IPs on startup.
Without this you would need to set up networking manually for each container or VM. To set static IPs see the Dnsmasq section below.
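As a sketch, a minimal Dnsmasq configuration serving the br0 bridge might look like this (the subnet range and lease time here are illustrative):

```
# /etc/dnsmasq.conf (fragment)
# Listen only on the bridge
interface=br0
bind-interfaces
# Hand out leases from this range, with a 12 hour lease time
dhcp-range=10.0.3.2,10.0.3.254,12h
```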
The last step is an iptables rule to enable masquerading so containers and VMs can access the Internet. Note that IP forwarding must also be enabled on the host (net.ipv4.ip_forward=1) for this to work.
iptables -t nat -A POSTROUTING -s 10.0.3.0/24 ! -d 10.0.3.0/24 -j MASQUERADE
You rarely need to do this yourself, as most container or VM application installers set up a NAT bridge with Dnsmasq by default, and containers are configured to connect to the default bridge out of the box. In LXC, for instance, you can change the bridge in the container configuration file.
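For example, in LXC the bridge a container connects to is set with the lxc.net.* keys in the container's config file (older LXC 2.x releases use the lxc.network.* key names instead); the bridge name below is LXC's default:

```
# Container config fragment (LXC 3.x and later key names)
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
```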
Internet access from the containers happens via an iptables masquerading rule as shown above. Your Wi-Fi router has an identical IP masquerading setup to give all your home devices Internet access.
Just as machines in the outside world cannot access the local devices behind your Wi-Fi router, the outside world cannot access containers inside your host. They are on a private network inside your host. That's why you need port forwarding to reach any containers or VMs from outside your host.
Port forwarding simply forwards traffic from a host port to a specific container port. Below is a typical example of a port forwarding command. Again this is done with the ever useful iptables.
iptables -t nat -A PREROUTING -p TCP -i eth0 --dport 80 -j DNAT --to-destination 10.0.3.10:80
Presuming the host network interface is eth0, the command above forwards all TCP traffic arriving on host port 80 to port 80 of the container with IP 10.0.3.10.
The downside to port forwarding is that a host port can only be forwarded to one container at a time. For instance, if you have multiple container apps on port 80, host port 80 can be forwarded to only one of them. You can either expose your container apps on other host ports, or, as an easy workaround, put an Nginx reverse proxy in front and use it to serve all the internal container apps. We cover this in more detail in deploying container apps.
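As a sketch, the Nginx workaround looks like this: Nginx listens on host port 80 and routes requests by host name to the container apps behind it (the server names and container IPs below are made up):

```
# /etc/nginx/conf.d/apps.conf (fragment)
server {
    listen 80;
    server_name app1.example.com;
    location / {
        proxy_pass http://10.0.3.10:80;
    }
}

server {
    listen 80;
    server_name app2.example.com;
    location / {
        proxy_pass http://10.0.3.11:80;
    }
}
```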
This is usually the default but not the only way to network your containers and VMs.
You can directly bridge one of your host's network interfaces instead of using the private NAT bridge and connect containers and VMs to it. This way the containers and VMs will be on the same network as your host and get their IPs directly from the router your host is connected to. In this case no port forwarding is required.
Once again like before let's create a bridge.
brctl addbr br0
Now the important part. To create a host bridge we need to 'bridge' or connect the physical network interface on the host to the bridge.
brctl addif br0 eth0
The 'addif' command bridges or 'connects' the eth0 network device to the 'br0' bridge. This presumes eth0 is the name of your network interface.
You can also list all active bridges on your system with 'brctl show'.
Now any containers or VMs connecting to br0 will get their IP and network services directly from the router the host is connected to.
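The brctl commands above do not survive a reboot. On Debian-based systems you can make the host bridge permanent in /etc/network/interfaces using the bridge_ports option from bridge-utils; eth0 here is an assumption:

```
# /etc/network/interfaces (fragment)
auto br0
iface br0 inet dhcp
    bridge_ports eth0
```

Note that the host's own IP now lives on br0 rather than eth0; attaching a physical interface to a bridge without moving the IP over will drop the host's network connectivity.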
If you have multiple network interfaces on the host you can also choose to bridge a specific network interface.
But for this to work you need to control the router that provides network access to your host, as the containers will be getting their IPs from it.
For instance on a cloud network you will typically not have access to the router, or will not get enough IPs from the provider, so you cannot use this method.
With Dnsmasq serving the bridge you can set static IPs by manually configuring them in the VM or container or using the Dnsmasq 'dhcp-host' configuration.
You can easily set a static IP by specifying the IP, netmask and gateway in the container or VM's /etc/network/interfaces file. Most Debian-based Linux distributions use this file for network configuration, though Red Hat and CentOS use the ifcfg files under /etc/sysconfig/network-scripts instead.
Out of the box, with DHCP, the /etc/network/interfaces file of a container, VM or host looks like this:
auto eth0
iface eth0 inet dhcp
This presumes the network interface is eth0. To set a static IP, specify the IP address, netmask and gateway, and change dhcp to static as shown below.
auto eth0
iface eth0 inet static
    address 10.0.4.10
    gateway 10.0.4.1
    netmask 255.255.255.0
In this case we are assuming the subnet is 10.0.4.0/24. A /24 subnet denotes 256 addresses, which is what the netmask 255.255.255.0 implies. A /16 subnet translates to 65536 addresses, and the netmask would then be 255.255.0.0.
Here the gateway is 10.0.4.1 and the netmask is 255.255.255.0. The static IP has to be within the subnet: our chosen subnet 10.0.4.0/24 has 254 usable addresses, and with 10.0.4.1 taken by the gateway the static IP has to fall between 10.0.4.2 and 10.0.4.254. In this case we went with 10.0.4.10.
You can also use the Dnsmasq 'dhcp-host' configuration to set static IPs. Any DHCP software, including your router's, will have a similar option. You specify the container or VM host name, or its MAC address, and the preferred static IP; for instance for Dnsmasq:
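A dhcp-host entry tying a host name to an IP looks like this (mycontainer and the IP below are the examples used in this section):

```
# /etc/dnsmasq.conf (fragment)
dhcp-host=mycontainer,10.0.4.10
```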
In this case Dnsmasq will assign the 10.0.4.10 IP to the container or VM with host name mycontainer. You can also use the MAC address of the container or VM instead of the host name.
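The MAC address variant of the same dhcp-host entry, using the example MAC below:

```
# /etc/dnsmasq.conf (fragment)
dhcp-host=00:16:3e:3c:2a:5d,10.0.4.10
```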
In this case Dnsmasq will assign the 10.0.4.10 IP to the container with MAC address 00:16:3e:3c:2a:5d.
The problem with this method is that the entry has to be added to the Dnsmasq configuration file, which often requires restarting the Dnsmasq instance and the container network, so it is not a convenient way to set static IPs.
Flockport lets you set static IPs from the command line.
Macvlan is supposed to be a slightly more efficient alternative to the Linux bridge. It has 4 modes that provide isolation capabilities to isolate containers from the host or from each other.
Macvlan basically allows a single physical interface to be associated with multiple IPs and MAC addresses. You can use macvlan in bridge mode to connect containers or VMs to the host network so they will be on the same layer 2 network as your host. This is similar to the host bridge discussed earlier.
Also keep in mind that with macvlan in bridge mode the containers can reach the network and each other, but not the host.
You can create a macvlan bridge with the ip link command.
ip link add mvlan0 link eth0 type macvlan mode bridge
This presumes your network interface is eth0. mvlan0 here is the bridge name.
Now, provided your container manager supports macvlan (both LXC and Docker do), you can attach your containers to the mvlan0 bridge and they will get their IPs and networking directly from the router connected to your host.
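In LXC, for instance, attaching a container to macvlan in bridge mode is a matter of a few config keys (eth0 here is an assumption, and older LXC 2.x releases use the lxc.network.* key names):

```
# Container config fragment (LXC 3.x and later key names)
lxc.net.0.type = macvlan
lxc.net.0.macvlan.mode = bridge
lxc.net.0.link = eth0
```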
Ipvlan is interesting in that it can operate in both layer 2 and layer 3 mode. Like macvlan, Ipvlan devices are attached to a host network device, e.g. eth0. A quick note for those who don't know: layer 2 operates at the MAC address level and layer 3 at the IP address level. The main advantage Ipvlan brings in layer 2 mode is that multiple devices can share the same MAC address. This helps if there are MAC address limitations in place on the router.
In layer 3 mode you direct traffic to your containers or VMs by adding static routes on your router that point to the containers or VMs via the host's network devices. In this case there is a direct layer 3 network. Kernel 4.2 or later is recommended for Ipvlan.
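As with macvlan, ipvlan devices are created with the ip link command. A sketch, assuming eth0 as the parent interface (these commands require root):

```shell
# Create an ipvlan device in layer 2 mode, sharing eth0's MAC address
ip link add ipvl0 link eth0 type ipvlan mode l2

# Or in layer 3 mode, where the router needs a static route
# to the container subnet via the host
ip link add ipvl0 link eth0 type ipvlan mode l3
```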
Both macvlan and Ipvlan address specialized needs and offer flexibility and some additional possibilities in isolating and segmenting network traffic.
Dnsmasq by Simon Kelley is a hugely popular DHCP and DNS server used by container and VM networks. Most VM or container managers run an instance of Dnsmasq on the NAT bridge, so there could be multiple bridges served by multiple Dnsmasq instances on a single host. Dnsmasq is also used on many wifi routers so your home devices could well be getting their IPs from a Dnsmasq instance running in the router.
The standard way to configure Dnsmasq is to give it the bridge name, the subnet range, and a domain if required. Once this is done Dnsmasq listens on the bridge for DHCP requests. Any container or VM on startup sends a DHCP request on the configured bridge via its DHCP client, e.g. dhclient or udhcpc, and the Dnsmasq instance on the bridge responds with an IP and a lease. You can find these leases in the /var/lib/misc/dnsmasq.leases file.
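Each line in the leases file records one lease: the lease expiry as a Unix timestamp, the MAC address, the IP, the host name and the client ID. The entry below is illustrative:

```
# /var/lib/misc/dnsmasq.leases (example entry)
1672531200 00:16:3e:3c:2a:5d 10.0.3.10 mycontainer *
```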
Without the Dnsmasq instance serving the bridge, containers and VMs would not get IPs on startup, and you would have to configure networking manually on each container and VM, adding an IP, netmask and gateway by hand.
Most container and VM managers set up a default bridge on installation. For instance LXC uses the lxcbr0 bridge, as does Flockport. Flockport also lets you add new bridges on demand, connect containers to different bridges, add multiple network interfaces and set static IPs from the command line.
Docker uses the docker0 bridge but does not use Dnsmasq; the Docker daemon manages container networking itself. This is because a Docker container runs a single process, so there is no dhclient process, as there would be in other containers, to obtain an IP by DHCP.
The popular Linux VM manager Libvirt sets up the virbr0 bridge for VMs; similarly VirtualBox usually uses vbox0 and VMware vmnet0. These bridges all operate in much the same way.