Flockport lets you build high performance overlay networks that operate at near line speed, even when encrypted. For encrypted networks we use Wireguard, a new encrypted networking protocol that operates at near line speed with minimal performance penalty.

By default containers are on a private, internal network on your host. They are only accessible by the host and by each other. To connect containers across hosts you need to create an overlay network.

Flockport lets you build two kinds of overlay networks.

A layer 2 overlay puts containers across hosts in the same subnet. Containers will have IPs in the same range and be able to support layer 2 services.

A layer 3 overlay connects container subnets across hosts, so containers on different hosts can reach each other.

It's important to note here that in layer 3 overlays containers across hosts need to be in different subnets. So before building a layer 3 overlay you need to change the default container subnet. You can easily do this with the setsubnet command on each server.

flockport setsubnet

This changes the default container subnet on the server.

Flockport uses Vxlan to build layer 2 networks, and BGP and Wireguard for layer 3 networks. BGP is a routing protocol, and Wireguard adds encryption to a layer 3 overlay.

Please note we are building overlays to connect containers across servers, not building networks to connect servers themselves. The servers should be on the same subnet and have direct connectivity to each other.

To build networks across servers in different networks or across the cloud we need to first create a virtual private network between the servers. This is where Wireguard becomes useful. You can use Wireguard to build a virtual private network of servers and then connect containers across them. The other option is to use tunnels to connect network segments.

To learn about connecting containers across servers in different networks please refer to the Wireguard, Peervpn and Tunnels section below.

When building networks, storage pools, and distributed services we use a pattern. We create a default configuration to represent the service and then add, modify or remove from it.


Vxlan uses multicast and is best used in an internal network you control. Most cloud networks do not support multicast.

Servers in a Vxlan network need to be in the same subnet.

To show you how Vxlan works we are going to use an example network with 3 servers s1, s2, and s3 and build a Vxlan network to put containers across the 3 servers in the same subnet.

To build a Vxlan overlay we will first create the network with the createnetwork command

flockport createnetwork vx1 vxlan

This creates a default vxlan configuration named vx1. We are using vx1 here; you can use any name.

Once you create a network you can add servers to it with the addnetwork command

flockport addnetwork vx1 s1 s2

This will bring up the vxlan network connecting servers s1 and s2.

You can add your local system when adding servers with the keyword local

flockport addnetwork vx1 local s1 s2

This will create a vxlan network that connects your local system, s1 and s2

You will notice a new bridge vxl0 on your vxlan network servers. Now any containers connected to the vxl0 bridge will be on the same layer 2 network across servers.
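You can inspect the new bridge with standard Linux tools (assuming iproute2 and bridge-utils are installed); the details shown will vary with your configuration:

```shell
# Show vxlan interface details (id, group, dstport) and bridge membership
ip -d link show type vxlan
brctl show vxl0
```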

When adding vxlan servers you can specify the network interface the Vxlan network should use with the colon separator

flockport addnetwork vx1 s1:eth1 s2:eth1

If interfaces are not provided the default outgoing network interface is chosen. In some cases you may want to connect servers across interfaces other than their default network interface, and you can then specify the interface.

You can list networks with the listnetworks command

flockport listnetworks

This will list details of all managed networks in the cluster

You can add and remove multiple servers from the vx1 network.

flockport addnetwork vx1 s4 s5

This adds servers s4 and s5 to the vx1 network

Remove servers from the vx1 network with the delnetwork command

flockport delnetwork vx1 s5

This will remove server s5 from the vx1 network

The vxl0 bridge is created with a default subnet, with DHCP and masquerading enabled. Containers on this network will get an IP in that range. The DHCP server runs on one of the servers used to create the network.

You can override the default values with your own settings when creating the network

flockport createnetwork vx1 vxlan -s subnet -i link-interface -b bridge -v vxid -p port -d domain -m multicast-ip -n dhcphost

You can provide your own subnet range with the -s flag to override the default vxlan subnet. VXID is the network identifier in Vxlan and is used to segment traffic; the default vxid used is 42. The -b flag sets the name of the vxlan bridge; the default is vxl0. You can also set the vxlan port and multicast IP with the -p and -m flags respectively.
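For context, a multicast vxlan overlay of this kind can be sketched with plain iproute2 commands; the names and addresses below (vxlan42, 239.1.1.1, eth0, port 4789) are illustrative assumptions, not Flockport's actual internals:

```shell
# Illustrative manual equivalent of a multicast vxlan + bridge setup
ip link add vxlan42 type vxlan id 42 group 239.1.1.1 dev eth0 dstport 4789
ip link add vxl0 type bridge       # bridge that containers attach to
ip link set vxlan42 master vxl0    # enslave the vxlan interface to the bridge
ip link set vxlan42 up
ip link set vxl0 up
```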

You can delete a network with the removenetwork command

flockport removenetwork vx1

This will remove the vx1 network. Use this option carefully. It will completely remove the network on all servers in the vx1 network.

A note on network level discovery. On local systems all containers can access each other on the .app domain. This is configured into the dnsmasq instance that provides DHCP services to containers. It's also active on containers across hosts on any Vxlan network deployed by Flockport. In this case containers across hosts can access each other at the .vxapp domain.
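For example, from inside a container you can reach peers by name (c1 here is a hypothetical container name):

```shell
# Name-based discovery via the dnsmasq instance serving the containers
ping -c 1 c1.app     # another container on the same host
ping -c 1 c1.vxapp   # a container on another host in the vxlan network
```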
If you are using KVM with libvirt, note that libvirt's iptables rules drop vxlan traffic by default and need to be changed to accept it.
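For example, you can insert accept rules ahead of libvirt's reject rules; vxlan commonly uses UDP port 4789 (the IANA assignment) or 8472 (the older Linux default), so match the port your network actually uses:

```shell
# Accept vxlan (UDP) traffic before libvirt's reject rules take effect
iptables -I INPUT -p udp --dport 4789 -j ACCEPT
iptables -I FORWARD -p udp --dport 4789 -j ACCEPT
```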


BGP is the Internet routing protocol. It's highly scalable, proven and robust and our first choice for building flexible container networks.

For this section we are going to use an example network with 3 servers and connect the container subnets within them. The servers are s1, s2 and s3, each with its own container subnet.

BGP automates route management. You can add and remove container subnets and BGP takes care of the routing.

BGP works on the concept of peers, with each peer announcing routes. Our servers will be peers, and each server will announce the container subnets within it. BGP peers distribute routes to each other.

To create a new bgp network use the createbgp command. When creating a BGP network an initial server and its IP for use in the BGP network must be specified.

flockport createbgp bnet s1

This will initialize the bgp network on server s1. bnet is the name of the network; this can be any name. You can also initialize your localhost as the anchor server by using the keyword local in place of the server name.

Now you can add members to the newly created bnet BGP network with the addbgp command

flockport addbgp bnet peer s2

This adds server s2 as a peer to the bnet bgp network. We use the keyword 'peer' to add peers to the network. The addbgp command is used to add both peers and subnets to the network

Let's add one more member to the bnet network. Notice all peers are in the same subnet

flockport addbgp bnet peer s3

This adds server s3 to the bnet network

We can now announce each server's container subnets to the BGP network. To add subnets we use the addbgp command, but this time instead of 'peer' we use the keyword 'network'.

Let us announce server s1's container subnet to the bnet network

flockport addbgp bnet network s1

This will announce s1's container subnet to the bnet network. All members of the bnet network will now be able to reach any container in that subnet.

Once we announce the rest of the container subnets, containers across subnets and servers will be able to ping each other.

flockport addbgp bnet network s2

We just added s2's container subnet to the bnet network

flockport addbgp bnet network s3

Now all container subnets across s1, s2, s3 servers have been added to the bnet network. At this point all containers across servers will be able to ping each other
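One way to confirm this is from the kernel routing table on any server, since routes learned over BGP show up as ordinary routes; the subnets and next-hop addresses below are illustrative placeholders, not values from this example:

```shell
# Routes announced by BGP peers appear in the normal routing table
ip route show
# e.g. 10.0.2.0/24 via 192.168.1.12 ...   (s2's container subnet)
#      10.0.3.0/24 via 192.168.1.13 ...   (s3's container subnet)
```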

We can add more servers and networks to the bnet network. Every added server will have access to all the containers in the bnet network and can announce its own container subnets. The routes will automatically be distributed.

You can list peers and networks in a bgp network with the listnetworks command

flockport listnetworks

This will list all servers and shared networks in the bgp network

You can also get server specific BGP info with the showbgp command

flockport showbgp s1

This will show you all bgp peers and routes known to the s1 server

You can delete servers and networks from a bgp network with the delbgp command

To delete a shared network

flockport delbgp bnet network s1

This will delete the network shared by the s1 server

To delete a server from the network

flockport delbgp bnet server s2

This will remove the s2 server from the bnet network and also remove any container subnets shared by the s2 server

To remove a bgp network use the removebgp command

flockport removebgp bnet

This will remove the bgp network from all servers.


Wireguard is a new protocol that lets you build encrypted layer 3 networks. What makes Wireguard unique is that it does this at near line speed. Our internal tests show Wireguard networks operating at around 900Mbps on a gigabit network, which is extremely fast for an encrypted network.

For this section, like before, we are going to use an example network with 3 servers and connect the container subnets within them. The servers are s1, s2 and s3, each with its own container subnet.

There are ongoing efforts to merge Wireguard into the kernel, but until that happens it needs to be installed as an out-of-tree kernel module, which makes setup a bit more complex than we'd like.

However we have managed to streamline the process somewhat, and hopefully it's a bit easier for users to set up.

At a high level a Wireguard network requires a main server that co-ordinates the routing and network traffic.

To create a wireguard network use the createnetwork command.

flockport createnetwork wirex wireguard

This creates a wireguard network with the name wirex. You can choose any name.

Once the network is created you need to first add a main server with the addnetwork command

flockport addnetwork wirex server s1

This adds server s1 as the wirex main server and sets up the wireguard network on it.

Once you have added a main server you can add client servers using the keyword 'client'

flockport addnetwork wirex client s2 s3

This adds servers s2 and s3 to the wirex network as clients.

You can now share container subnets with the keyword 'subnet'.

flockport addnetwork wirex subnet s1:

This shares the container subnet on s1 with the rest of the wirex network. The subnet to be shared is specified after the colon.
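Mechanically, sharing a subnet over Wireguard maps to a peer's allowed-ips setting plus a route; a minimal sketch with the wg tool, where the interface name, key, endpoint and subnet are placeholders rather than Flockport's actual generated configuration:

```shell
# Illustrative only: how a shared subnet maps to Wireguard's allowed-ips
wg set wg0 peer <s1-public-key> endpoint <s1-ip>:51820 allowed-ips 10.0.1.0/24
ip route add 10.0.1.0/24 dev wg0   # route the shared subnet over the tunnel
```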

At this point both s2 and s3 will be able to ping containers in s1's shared subnet

Let's now share server s2 and s3's container subnets to the wirex network

flockport addnetwork wirex subnet s2:

This shares server s2's container subnet with the wirex network. Now all servers in wirex will be able to reach that subnet.

We can do the same for s3's subnet

flockport addnetwork wirex subnet s3:

This shares s3's container subnet with the wirex network

At this point all servers and containers in the wirex network will be able to ping each other, and the connection is encrypted.
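You can check the tunnel state from any member server; the address below is a placeholder:

```shell
# Peers, endpoints, allowed ips and last handshake times
wg show
ping -c 3 <container-ip-on-another-server>
```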

You can get network details with the listnetworks command

flockport listnetworks

This will list all Flockport managed networks including their members and shared networks.

To remove subnets from wirex use the delnetwork command.

flockport delnetwork wirex subnet s3:

This will remove the subnet shared by server s3 from the wirex network

To delete a client from the wirex network use the delnetwork command with the keyword client

flockport delnetwork wirex client s3

This will delete the s3 server from the wirex network.

The main server cannot be deleted from the network without removing the network.

To remove the entire wirex network use the removenetwork command.

flockport removenetwork wirex

Use this option with care. This will remove all servers from the wirex network and delete it

This gives you a lot of flexibility in organizing your networks. You can create exclusive networks with encrypted traffic.

You can also specify server IPs when adding them, for cases where you want the wireguard network to run over a specific interface on the server.

flockport addnetwork wirex server s1:

This adds s1 as the wirex main server using the interface bound to the specified IP.

You can also specify IPs for client servers.

flockport addnetwork wirex client s2: s3:

This adds s2 and s3 to the wirex network on the IPs specified.


Till now we have been creating overlay networks to connect containers across servers on the same subnet. To connect discrete networks, i.e. servers that may not be on the same subnet, you have two options. You can create a virtual private network, or use tunnels to connect each discrete point and then let routing take over.

VPNs connect servers that are not on the same subnet and in fact could be in different datacenters. This kind of network extension should be used carefully and carries a performance penalty.

For instance if you want to connect servers across cloud providers ie AWS to GCE or Vultr to Digital Ocean you would need to create a VPN network.

Flockport uses Peervpn to build a layer 2 VPN overlay. Servers are connected by their public IPs and containers across servers are put in the same layer 2 subnet.

For this example we are going to use Peervpn to connect servers across 2 cloud networks. The servers will be s1, s2, s3. s1 and s2 are in cloud network A and s3 is in cloud network B.

To create a new Peervpn layer 2 network we use the createnetwork command with the keyword l2

flockport createnetwork vpx l2

This will create a l2 network called vpx with a default configuration. You can choose any name.

You can add servers to the network with the addnetwork command.

flockport addnetwork vpx s1 s2

This will create a layer 2 overlay network with Peervpn that connects s1 and s2 servers

You will notice 2 new network interfaces on your vpx network servers: vpl0 and peer0. If you run brctl you will also notice peer0 is attached to the vpl0 bridge.
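You can confirm the new interfaces with iproute2 and bridge-utils:

```shell
ip link show peer0   # the Peervpn tunnel interface
brctl show vpl0      # peer0 should be listed under the vpl0 bridge
```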

Connecting containers to the vpl0 bridge will put them on the same subnet across servers

You can add more servers to the network at any time with the addnetwork command.

flockport addnetwork vpx s3

This connects s3 to the vpx network. If you need to connect specific network interfaces on the server you can specify the server IP when adding the server.

flockport addnetwork vpx s3:

Each server in the vpx network will have the vpl0 bridge with a default subnet.

You can connect containers to the vpl0 bridge across servers; they will get IPs in that range and be in the same subnet. A quick recap of how to manage container networking.

flockport setnet c1 vpl0

This will change the default bridge on container c1 to vpl0.

Or you can add another network interface to c1 and connect that to the vpl0 bridge

flockport addnet c1

This adds a new network interface to container c1. Now connect this interface to the vpl0 bridge.

You can list interfaces on containers with the listnet command.

flockport listnet c1

This will list all network interfaces on container c1 and you should see the new network interface. By default any new network interfaces are connected to the host's default container bridge.

flockport setnet c1 eth1:vpl0

This connects c1's eth1 network interface to the vpl0 bridge

You can add and remove servers from the vpx network

flockport addnetwork vpx s3 s4

This will add servers s3 and s4 to the vpx network

And remove any added servers with the delnetwork command

flockport delnetwork vpx s4 s3

You can list networks with the listnetwork command

flockport listnetworks

This will list all networks in the cluster

You can remove the vpx network with the removenetwork command.

flockport removenetwork vpx

This will remove the vpx network

Using tunnels

Tunnels create a point to point connection between 2 servers. They are usually used to connect 2 network segments.

One use case would be to connect two discrete BGP networks across segments. We could connect 2 ebgp endpoints with a tunnel and share the tunnel IPs as routes in the network. This way container subnets across 2 network segments can access each other. This is a fairly advanced use case and should be approached with a good understanding of routing.

Flockport lets you create both GRE and IPIP tunnels. We will use an example of 2 servers on 2 different networks, s1 and s2. The servers should be able to ping each other.

You can create a tunnel with the addtunnel command

flockport addtunnel gre s1 s2

This will create a gre tunnel between s1 and s2, with a default pair of tunnel endpoint IPs. You can use your own set of IPs for tunnel endpoints, but they should be in the same subnet and not clash with any other IP range in use on your network

Similarly to create an IPIP tunnel

flockport addtunnel ipip s1 s2

This will create an ipip tunnel between servers s1 and s2

Once tunnel endpoints are up you can add routes to subnets on each side
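For reference, the same kind of tunnel plus routing can be built manually with iproute2; all addresses below are documentation placeholders, not values from this example:

```shell
# Illustrative manual GRE tunnel on s1 (placeholder addresses)
ip tunnel add gre1 mode gre local 198.51.100.1 remote 203.0.113.1 ttl 255
ip addr add 10.200.0.1/30 dev gre1        # tunnel endpoint IP
ip link set gre1 up
ip route add 10.0.2.0/24 via 10.200.0.2   # reach s2's container subnet via the tunnel
```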

You can also specify server IPs when creating tunnels if you want the tunnel interface to be created over a specific server IP

flockport addtunnel ipip s1 s2 -l s1-IP -r s2-IP

This will create an ipip tunnel between s1 and s2 over the IPs specified by the -l and -r flags.

Other options

You don't always need overlays to connect containers across hosts. On internal networks that you control you can easily connect containers across hosts with simple bridging.

For instance on a multi server network you can bridge a network interface on each server, for instance eth1, to br0 and connect containers to this bridge. All containers connected to this bridge across servers will be on the same layer 2 network.
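A minimal sketch with iproute2, assuming each server has a spare interface eth1 on the shared network:

```shell
# Bridge eth1 into br0 and attach containers to br0
ip link add br0 type bridge
ip link set eth1 master br0
ip link set eth1 up
ip link set br0 up
```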