LXC Networking Guide

Containers need an IP address to be reachable on the network. In Linux, 'bridges' are used to connect VMs to a network. Think of a bridge as a software switch created on the host; virtual machines and containers connect to this bridge for their networking.

Bridges are basic functionality of the Linux kernel and are usually created and managed with the bridge-utils package.

Depending on your environment you can configure two main types of bridges.

    • Host Bridge - your containers/VMs are directly connected to the host network and appear and behave like other physical machines on your network
    • NAT Bridge - A private network within your host that containers and VMs connect to. This is a standalone bridge with a private network (subnet) and all networking happens through the host's IP. To learn more about the NAT bridge please see the NAT and Autostart sections below.

There are two types of IPs: public IPs, usually provided by your ISP or server provider, which can be routed directly from the internet, and private IPs, which are private to your network and cannot be accessed from the internet. A NAT network is a subnet of private IPs. For instance, the network set up by your home wifi router, usually in the 192.168.1.0/24 subnet, for all your computers, mobile and tablet devices, is a NAT network.

A NAT network for your containers is similar to this, only your host is acting as a router with the containers and VMs connected to a software switch or bridge on a private subnet created within the host.

Note: The Flockport installer automatically sets up and enables container networking out of the box with LXC's default 'lxcbr0' bridge and DHCP, so nothing needs to be done.

This section is a reference for container networking and covers configuring bridged networking, NAT, static IPs, public IPs, domains, deploying in cloud VMs and enabling autostart for containers.

Host Bridge

This bridges containers or VMs to your host network so they appear as other physical machines on your network. This type of bridge is created by bridging the host's physical interface, usually 'eth0'.

In Linux, eth0 is typically the name of the first physical network interface. On systems with multiple network cards the interfaces will typically be named eth0, eth1, eth2 and so on.

With this type of bridge, containers/VMs are on the same layer 2 network as the host and have all the network capabilities of the host: they can connect directly to the internet, connect to and be reached from other machines on the network, and, if assigned public IPs, be reached directly from the internet.

To create a direct bridge in Debian or Ubuntu, edit the /etc/network/interfaces file and make sure it looks like the below:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto br0
iface br0 inet dhcp
bridge_ports eth0
bridge_stp off
bridge_fd 0
bridge_maxwait 0

Now you can configure containers to connect to the br0 bridge and they will get their IPs from the router your host is connected to and be on the same network as your host.
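After editing the file, you can bring the bridge up and verify that it picked up eth0 and an IP. These commands assume the ifupdown and bridge-utils tools mentioned above and typically need root:

```shell
# Bring up the new bridge (or simply reboot)
ifup br0

# br0 should be listed with eth0 as an attached interface
brctl show

# The bridge should have received an IP from your router's DHCP
ip addr show br0
```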

Containers are by default stored in /var/lib/lxc. Every container has an individual config file; for instance a container named debian will have a config file at /var/lib/lxc/debian/config. Container settings, including network settings, are stored in this file. To change the network bridge for the debian container, edit the 'lxc.network.link' value.
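For example, to connect a hypothetical container named debian to the br0 host bridge instead of the default, the network section of /var/lib/lxc/debian/config would look something like this (the MAC address is illustrative):

```
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.name = eth0
lxc.network.hwaddr = 00:16:3e:aa:bb:cc
```

Restart the container for the change to take effect.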

You can also set a system-wide default in the /etc/lxc/default.conf file, like below, so newly created containers get a default network setting.

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = lxcbr0
lxc.network.name = eth0
lxc.network.hwaddr = 00:16:3e:xx:xx:xx
lxc.network.mtu = 1500

This configures all new containers to connect to the default lxcbr0 bridge. You can change the bridge when required system wide or in the individual container setting.

You can set static IPs for your containers either from the router settings, by assigning specific IPs by MAC address, or in the container OS network settings.

If you are on a network with public IPs, or with a dedicated server or cloud provider that gives you multiple public IPs (and allows you to bridge), you can bridge your host's eth0 to br0, for instance, and assign static public IPs to the containers via the containers' network configuration.
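For example, with a host bridge in place, a container could claim a static public IP via its own /etc/network/interfaces. The address, gateway and netmask below are placeholders for whatever values your provider assigns:

```
auto eth0
iface eth0 inet static
address 203.0.113.10
gateway 203.0.113.1
netmask 255.255.255.0
```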

NAT Bridge

A NAT bridge is a standalone bridge with a private network that is not bridged to the host eth0 or physical network like above. It exists as a private subnet in the host.

In many cases a user may have little control over the network's DHCP, so getting IPs assigned automatically will be impossible, or the user may not want to bridge the host's physical interface. In these cases a NAT bridge is the best option. It has other uses too, but let's leave that for later.

A NAT bridge is a standalone bridge with no physical interfaces attached to it that basically creates a private network within the host computer with the containers/VMs getting private IPs.

These IPs are not directly accessible from the outside world; only the host and other containers on the same private subnet can access them. The containers need to use NAT (network address translation) to access the internet.

Most VM managers provide a NAT bridge by default: KVM has virbr0, and VMware has VMnet1 and VMnet8. LXC ships with a default NAT bridge called lxcbr0, which works out of the box in Ubuntu.

Unfortunately this is not shipped or configured by other distros. The Flockport LXC installer enables it by default on multiple distributions. You can also see our post here on how to enable it in Jessie, CentOS and Fedora.

The lxcbr0 bridge is set up by the lxc-net script that sets up the bridge and basic networking in the 10.0.3.0/24 subnet including DHCP and internet connectivity for containers.

For those coming from VMware, the lxcbr0 bridge is similar to VMnet8; if we disable the iptables masquerading rule it is like VMnet1.

If you have LXC installed you can check that the bridge is up:

brctl show

You should see an lxcbr0 entry. The containers are configured to connect to this bridge via their config files, and get IPs on this private subnet. Here is part of the container's config file that configures networking. Notice the lxc.network.link value is set to lxcbr0. If you were using the host bridge this would be br0.

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = lxcbr0
lxc.network.name = eth0
lxc.network.hwaddr = 00:16:3e:f9:d3:03
lxc.network.mtu = 1500

The lxcbr0 bridge is set up by the lxc-net script. Have a look at the script in /etc/init.d/lxc-net in Debian or /etc/init/lxc-net.conf in Ubuntu.

The script does three things: it creates the standalone lxcbr0 bridge, sets up dnsmasq to serve DHCP on this bridge on the 10.0.3.0/24 subnet, and adds a few iptables rules (masquerade) so the containers have internet access. It also has a number of flags to enable domains, change subnets and so on.
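These three steps can be sketched roughly as the commands below. This is a simplification; the actual script reads its settings from /etc/default/lxc-net and handles cleanup and error cases:

```shell
# 1. Create the standalone bridge and give it the gateway IP
brctl addbr lxcbr0
ip addr add 10.0.3.1/24 dev lxcbr0
ip link set lxcbr0 up

# 2. Serve DHCP on the bridge's subnet
dnsmasq --interface=lxcbr0 --dhcp-range=10.0.3.2,10.0.3.254

# 3. Masquerade outbound traffic so containers can reach the internet
iptables -t nat -A POSTROUTING -s 10.0.3.0/24 ! -d 10.0.3.0/24 -j MASQUERADE
```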

Deploy containers in cloud VMs

In a cloud VM the average user may not have access to the network's DHCP or enough public IPs, so a NAT bridge is often the only option.

To enable NAT networking for your containers please refer to the NAT section above.

With the default LXC bridge 'lxcbr0' set up, containers will have access to the internet, but if you need to make any services on a container available to the world, you need to forward ports from the host to the container.

For instance to forward port 80 from the host IP 1.1.1.1 to a container IP 10.0.3.165 you can use the iptables rule below.

iptables -t nat -I PREROUTING -i eth0 -p TCP -d 1.1.1.1/32 --dport 80 -j DNAT --to-destination 10.0.3.165:80

This will make, for instance, an Nginx web server on port 80 of the container available on port 80 of the host.
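You can check that the rule was added with the first command below. Note that iptables rules entered this way do not survive a reboot, so persist them with your distribution's mechanism; the save path below assumes the iptables-persistent package on Debian/Ubuntu:

```shell
# List NAT PREROUTING rules with line numbers
iptables -t nat -L PREROUTING -n --line-numbers

# Save the current rules (Debian/Ubuntu with iptables-persistent)
iptables-save > /etc/iptables/rules.v4
```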

For advanced users who do have access and would prefer to use a bridged network and public IPs for containers, follow the instructions in the bridged networking section above.

If you already have a bridge, you can connect LXC containers to your current bridge by specifying it in the LXC container config file.

Configure static IPs for containers

LXC containers have MAC addresses. If you are using the default lxcbr0 network, you can assign a static IP to an LXC container by creating an /etc/lxc/dnsmasq.conf file and adding the line below, which maps a container name to a specific IP.

dhcp-host=containername,10.0.3.21

Please note you need to restart the lxc-net service for this to take effect. If you are not using the default LXC network, you can assign specific IPs to containers by MAC address in your router or DHCP server configuration.
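On a Debian or Ubuntu host, for instance, the restart would look like the below; the exact service name and init system may differ on your distribution:

```shell
sudo service lxc-net restart
```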

You can also use the container OS's network configuration to set a static IP. For instance, to set a static IP of 10.0.3.150 in a Debian or Ubuntu container, you can use a configuration like the below in the container's /etc/network/interfaces file.

auto eth0
iface eth0 inet static
address 10.0.3.150
gateway 10.0.3.1
netmask 255.255.255.0

Set up container autostart

LXC can start single containers or groups of containers automatically on boot. This is important for servers hosting services.

The Flockport installer and the Ubuntu LXC packages already enable autostart capability for containers by default, so nothing needs to be done.

The lxc init script, usually in /etc/init.d/lxc or /etc/init/lxc.conf, is responsible for autostarting containers.

To configure a container to autostart on boot, add the following line to the container's config file at /var/lib/lxc/containername/config:

lxc.start.auto = 1

You can also group containers and autostart groups of containers.

lxc.group = groupname

You can also stagger container starts. Please see our LXC Advanced Guide for more details.
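Putting it together, a container's config might carry the lines below; the 'web' group name and the delay value are illustrative:

```
lxc.start.auto = 1
lxc.group = web
lxc.start.delay = 5
```

Groups can then be managed with the lxc-autostart tool; for example, running 'lxc-autostart -g web' starts all containers in the web group.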

Domain Names

You can access your containers by their names instead of their IPs. This can be useful for service discovery. By default the lxc-net script does not enable any domains.

You can enable a domain, say 'lxc', in the /etc/init.d/lxc-net script, or in /etc/default/lxc-net in Ubuntu. There is an entry LXC_DOMAIN="" which is empty by default. You can set it to 'lxc' and restart the lxc-net service.
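In Ubuntu's /etc/default/lxc-net, for instance, the entry would look like this after the change:

```
LXC_DOMAIN="lxc"
```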

If you run a utility like dig, you will see that, once configured, the lxcbr0 interface IP 10.0.3.1 acts as a name server for the lxc domain. Suppose you have a container named nginx; run the dig command below.

dig @10.0.3.1 nginx.lxc

This should return the IP of the nginx container. Nice! But to use this you need to configure a DNS server like dnsmasq to associate the lxc domain with the 10.0.3.1 server. Dnsmasq is already used in Ubuntu and, in some way or another, in most distributions.

It's a simple matter of adding an entry like below to your dnsmasq config.

server=/lxc/10.0.3.1

Once this is done, if you have an nginx or mysql container you can ping nginx.lxc or mysql.lxc and the names will resolve.

If you are using NetworkManager, create an lxc.conf file with the same line in the /etc/NetworkManager/dnsmasq.d/ folder and restart NetworkManager for the change to take effect. Now you should be able to ping containers by their name.lxc.
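As a sketch, the NetworkManager steps come down to two commands. This assumes NetworkManager is using its dnsmasq plugin, and the service name may differ on your distribution:

```shell
# Point the lxc domain at the lxcbr0 name server
echo 'server=/lxc/10.0.3.1' | sudo tee /etc/NetworkManager/dnsmasq.d/lxc.conf

# Restart NetworkManager so the new config is read
sudo service network-manager restart
```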

Note: the container name is the configured hostname of the container. Usually the two are the same unless configured differently inside the container.

Further Reading & Resources

Flockport App Store

Flockport Get Started

Container Basics

Container Networking

Container Storage
