Overview

Flockport provides a number of managed services cluster wide. These are designed to be easy to use and integrate various subsystems.

You can add and remove services on demand and get full visibility into deployed services while they are active.

Service discovery

Service discovery lets your applications across servers be found via DNS or an HTTP REST API.

On startup containers register any services they host with endpoints, which make them available cluster wide.

When service discovery is enabled all containers check for service endpoints on startup and publish any defined services to the endpoints they discover. When containers are stopped their services are automatically deleted from the endpoint.

You can define services for containers. For instance you can define backend services for an Nginx or Haproxy container. Once services are published to endpoints they can be discovered via HTTP and DNS.

You can also add checks along with services. For instance if the backend service is available on port 80 you can add a check for the port.

Service discovery is provided by Consul and integrated into containers.

For this example we are going to use 5 containers across 3 servers. The containers will be named c1 onwards and the servers s1 onwards.

Container subnets on the 3 servers are 10.0.4.0/24, 10.0.5.0/24 and 10.0.7.0/24 and the subnets are connected to each other.

Visit the networks section to learn how to connect containers across servers if you haven't yet.

The endpoint will be added to server s1 and containers across all servers will use the s1 endpoint to publish services and for discovery.

You can add a service endpoint with the addep command. You also need to choose a static IP for the endpoint. This IP will be used cluster wide to access Consul.

flockport addep s1 10.0.4.10

This will add a consul endpoint to the s1 server and automatically set it to the specified IP 10.0.4.10.

The static IP selected has to be in the container subnet range for the server you are deploying the consul endpoint to. In this case that is server s1 and its container subnet is 10.0.4.0/24 so 10.0.4.10 is a valid static IP for this server.

Once you add the endpoint it will be accessible to servers across the cluster.

Containers check for an endpoint on startup and publish any services defined. The next step is to add a service to a container.

Let's assume our 5 example containers host wordpress instances on port 80 that we want to be discovered by an Nginx load balancer.

You can add services hosted by containers with the addservice command.

flockport addservice c1 wordpress 80

This will add the wordpress service at port 80 to container c1.

We will repeat this for all 5 example containers.

flockport addservice c2 wordpress2 80

This will add the wordpress2 service at port 80 to container c2.

Now on startup c1 and c2 will publish the wordpress:80 and wordpress2:80 services to the Consul endpoint.

Consul makes published services available via the .consul domain. The instances will be available over DNS as wordpress.consul:80 and wordpress2.consul:80.

Our Nginx load balancer will be able to pick up all wordpress instances via DNS.

We could also have defined the service as wordpress without numbering the instances. In that case Consul will automatically provide round robin DNS replies to queries for the service.
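
You can also query the endpoint directly using Consul's standard interfaces. The ports below are Consul's defaults, 8500 for the HTTP API and 8600 for DNS; Flockport's integration may expose different ones.

curl http://10.0.4.10:8500/v1/catalog/service/wordpress

dig @10.0.4.10 -p 8600 wordpress.consul

The first command returns the registered wordpress instances as JSON, the second resolves the service over DNS.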

You can list endpoints with the listep command.

flockport listep

This will list all endpoints across the cluster.

Similarly you can list services hosted by containers.

flockport listservice c1

This will list any services hosted by c1.

You can also add checks to containers with the addcheck command.

flockport addcheck c1 mycheck tcp 80

This adds a port 80 tcp check to c1 by the name mycheck.

You can also add an HTTP check.

flockport addcheck c1 mycheck http mywordpress.org

This adds an HTTP check with the URL mywordpress.org and the name mycheck.

Containers on startup will publish both services and checks to the Consul endpoint.

You can get an overview of services and checks published with the services command.

flockport services

This will list all services and checks known to endpoints.

You can also access the Consul GUI on the Consul instance to get an overview of all known services and checks.
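
Assuming the endpoint runs Consul's HTTP interface on its default port 8500, the GUI in this example would typically be reachable at http://10.0.4.10:8500/ui.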

You can delete any defined services with the delservice command.

flockport delservice c1

This will delete any defined service from c1.

Similarly you can delete any checks with the delcheck command. Note you need to specify the check name when deleting it.

flockport delcheck c1 mycheck

This will delete mycheck from c1.

You can also publish adhoc services and checks to endpoints.

flockport pubservice minio 10.0.7.10:9000

This will publish the minio service to any available endpoints. Any DNS query for minio.consul will resolve to 10.0.7.10:9000.

flockport pubcheck minio-check tcp 10.0.7.10:9000

This will add the minio-check to available endpoints.

Having just one consul instance creates a single point of failure. It's a good idea to have multiple consul instances. Consul recommends 3 instances for quorum. When you add a second endpoint it is automatically synced to any existing endpoints.

You can delete endpoints with the delep command. Endpoints once deployed get an automatically generated name. Let's assume our endpoint is called consul-10.

flockport delep consul-10

This will remove the endpoint.

Load balancers

A load balancer scales applications across multiple app instances, in this case container app instances. It does this by distributing traffic across the app instances. These application instances can be located anywhere in your cluster.

Flockport uses Nginx and Haproxy for load balancers.

A load balancer uses the concepts of frontends and backends. The frontend is the load balancer and the backends are the application instances.

For this example we are going to use 5 containers spread across 3 servers. The containers will be named c1 onwards and the servers s1 onwards. The container subnets will be 10.0.4.0/24, 10.0.5.0/24 and 10.0.7.0/24 and will be linked to each other.

You can deploy a load balancer with the addlb command.

flockport addlb haproxy s1

This deploys a haproxy load balancer to server s1.

The lb instance name is autogenerated. You can use the listlbs command to get details on deployed lbs.

flockport listlbs

This will list details on all deployed lbs including the autogenerated name. Let's assume the lb name is ha1.

You can add backends to the ha1 lb with the addbackends command. Backends are specified as the container application's IP and port separated by a colon.

flockport addbackends ha1 web1:10.0.4.20:9000 web2:10.0.5.20:9000 web3:10.0.7.20:9000

In the case of Haproxy lbs you also need to add a 'name' for the backends separated by a colon. In this case we used 'web1', 'web2', 'web3'. This does not apply to Nginx lbs.

This will add the container IP and port specified as backends to the ha1 load balancer.

Any request for the application hitting the ha1 load balancer will be distributed to the backends specified.
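
For reference, the backends added above correspond roughly to a standard Haproxy backend section like the sketch below. This is only an illustration of the idea; the exact configuration Flockport generates may differ.

backend web
    balance roundrobin
    server web1 10.0.4.20:9000 check
    server web2 10.0.5.20:9000 check
    server web3 10.0.7.20:9000 check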

You may need to forward the load balancer port to the host. The specific port depends on the configuration but it's usually port 80 or port 443.

flockport pub ha1 80

This forwards the ha1 load balancer's port 80 to the host, if we run the command on the host ha1 is located on.

Since we deployed the ha1 load balancer on server s1 we should run

flockport pub s1:ha1 80

This will forward ha1 port 80 to the host port 80 on s1.
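
You can verify the forward from another machine by pointing at s1's host IP. Assuming, purely for illustration, that s1's IP is 192.168.122.11 and at least one backend is serving, a plain curl should reach the load balancer.

curl http://192.168.122.11/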

You can also deploy Nginx load balancers.

flockport addlb nginx s1 -u mywordpress.org -b 10.0.4.10:80 -c cert -k key

When creating an Nginx load balancer you must specify the URL and at least one backend.

This creates an Nginx load balancer with the mywordpress.org URL on server s1 with a single backend 10.0.4.10:80 defined.

The URL for the lb is specified with the -u flag and the backend with the -b flag. You can also specify a certificate and key with the -c and -k flags.

The lb name is autogenerated. You can specify a name with the -n flag. For this example let us assume the name is nginxlb-10.

With the lb deployed you can now add backends with the addbackends command.

flockport addbackends nginxlb-10 10.0.4.10:80 10.0.5.10:80 10.0.7.10:80

This adds the container IPs specified as backends for the nginxlb-10 lb.
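
For reference, these backends correspond roughly to a standard Nginx upstream block like the sketch below. The upstream name is illustrative and the exact configuration Flockport writes may differ.

upstream mywordpress {
    server 10.0.4.10:80;
    server 10.0.5.10:80;
    server 10.0.7.10:80;
}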

If you have an endpoint defined you can add the backend as a service which the Nginx lb can discover via DNS.

You can remove backends with the delbackends command.

flockport delbackends nginxlb-10 10.0.4.10:80

This will remove the 10.0.4.10:80 backend from nginxlb-10.

Any deployed lb gets a system generated name. You can specify a name with the -n flag.

flockport addlb haproxy s1 -n ha1

This adds a haproxy load balancer to server s1 with the name ha1.

You can also use the setlb command to specify load balancer settings.

flockport setlb ha1 mode leastconn

This configures the ha1 lb to distribute load according to least connected servers.

For Haproxy you can choose from 3 modes: source, leastconn and roundrobin. The default is roundrobin.

For Nginx lbs the mode options are ip_hash, least_conn and default. The default is round robin.

You can also change the URL, configure SSL, configure worker numbers and set resolvers with the setlb command.

Webservers

Web servers are similar to load balancers but without the scaling. Load balancers scale applications and are tied to a particular application. You can add multiple instances of the app container to the lb to scale the application. Webservers can publish multiple app containers across the cluster.

Web servers allow you to expose multiple applications across your cluster in a single place.

We use Nginx as the webserver and it essentially acts as a reverse proxy for container apps.

In this example we will use 3 container applications across 3 servers. The container subnets are connected. We are going to use a single webserver to publish all apps.

You can add a webserver with the addws command.

flockport addws s1

This will add an nginx webserver to s1. The name of the web server will be autogenerated.

You can get the name with the listws command.

flockport listws

This will list details on all currently deployed webservers. Let's presume the webserver's autogenerated name is nginx11.

Let's start by publishing a minio container on server s2 to nginx11.

flockport pubapp nginx11 myminio.org 10.0.5.20:9000

This will publish the minio container with url myminio.org to the nginx11 webserver. 10.0.5.20 is the ip of the minio container and 9000 is the port where the minio app is served.

The myminio.org url is now available on the nginx11 webserver.
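
Under the hood this is a standard Nginx reverse proxy setup. A rough sketch of the kind of server block involved, purely for illustration; the configuration Flockport generates may differ.

server {
    listen 80;
    server_name myminio.org;

    location / {
        # proxy requests for myminio.org to the minio container
        proxy_pass http://10.0.5.20:9000;
        proxy_set_header Host $host;
    }
}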

Let's publish a mattermost container on server s3 to the nginx11 webserver.

flockport pubapp nginx11 mymattermost.app 10.0.7.40:9425

This will publish the mattermost container with the URL mymattermost.app to nginx11.

You will be able to access mymattermost.app at the nginx11 webserver IP.

Let's also publish a wordpress container on server s1 to nginx11.

flockport pubapp nginx11 mywordpress.org wordpress.app:80

This will publish the wordpress container with the URL mywordpress.org to nginx11. Notice we did not provide the wordpress container IP.

If the app container and webserver are on the same host you can directly use the container name, as containers can be discovered via DNS by the webserver.

All 3 of our apps are now published on the nginx11 webserver. But how do you access them in your browser?

If the webserver is on the same host as the browser you simply add an entry to your /etc/hosts file pointing to the webserver IP. You can get the webserver IP with the listws command but let us assume it's 10.0.4.100.

10.0.4.100 myminio.org

This associates myminio.org with the nginx11 webserver ip.

You can also use the flockport addhosts command to associate URLs with IPs.

flockport addhosts myminio.org 10.0.4.100

This will add a myminio.org entry to /etc/hosts

Now you can point your browser at myminio.org and you should get the minio application.

However if you are on another host you need to do a few more things.

Webservers are configured to make apps available on port 80, or port 443 for SSL.

In this case we deployed the nginx11 webserver to the s1 server. So we need to forward the nginx11 webserver port 80 to the s1 host.

flockport pub s1:nginx11 80

This forwards the nginx11 webserver's port 80 to the host.

Now we need to add an entry to our /etc/hosts but this time pointing to the s1 server's IP. Remember we have already forwarded our webserver's port to the host. Let's assume s1's IP is 192.168.122.11.

192.168.122.11 myminio.org

This associates the myminio.org URL with s1's IP. Now you should be able to access myminio.org in your browser.

You should also be able to access all the other published apps in your browser by adding URL entries to your hosts file.
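
If you just want to verify a published app from the command line without editing /etc/hosts, you can set the Host header with curl directly against s1's IP.

curl -H 'Host: myminio.org' http://192.168.122.11/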

If service discovery is enabled you do not need to provide the IP of the app container when publishing the app, as it can be discovered via DNS by the webserver.

flockport pubapp nginx11 myminio.org minio.consul:9000

When service discovery is enabled you need to append the .consul domain to the app. You should now be able to access myminio.org on nginx11.

You can also enable SSL when publishing an app to a webserver with the -c and -k flags.

flockport pubapp nginx11 myminio.org 10.0.4.20:9000 -c cert -k key

The -c and -k flags specify certificates and keys when publishing an app. When you publish an app with SSL the webserver will automatically publish the app on port 443.

You can also use the -s flag to generate self signed certificates for any published app. This is useful for testing.

flockport pubapp nginx11 myminio.org 10.0.4.20:9000 -s

This will generate self signed certs for myminio.org and make it available on port 443.
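
To test a self signed setup from the command line you can skip certificate verification and map the URL to the server IP with curl. This assumes port 443 has been forwarded to the host in the same way as port 80 earlier.

curl -k --resolve myminio.org:443:192.168.122.11 https://myminio.org/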

You can remove published apps with the delapp command.

flockport delapp nginx11 myminio.org

This will delete the myminio.org app from the nginx11 webserver.

When adding a webserver you can specify a name with the -n flag.

flockport addws s1 -n nginx100

This will add a webserver with the name nginx100.

Databases

You can roll out databases cluster wide. This provides significant flexibility as you can not only roll out database instances but also add and remove databases on demand.

Currently you can deploy both Mysql and Postgresql databases.

To add a database instance simply use the adddb command.

flockport adddb mysql s1 wordpress wordpress

This adds a mysql db instance to server s1 and creates a wordpress user and database.

You can get details of deployed databases with the listdbs command.

flockport listdbs

This will list all databases deployed cluster wide.

Similarly for Postgresql databases.

flockport adddb postgresql s2 discourse discourse

This will add a postgresql database to the s2 server and add a discourse user and database.

Launched db instances get autogenerated names. If you prefer to specify a name you can use the -n flag.

flockport adddb mysql s1 wordpress wordpress -n mysql-10

This will launch a mysql instance called mysql-10 and add a wordpress user and database.

You can specify an allowed IP range for access control with the -a flag and a password with the -p flag. By default access control is set to the local subnet, and the password is autogenerated.
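
Once an instance is up you can connect to it from any host in the allowed IP range with the standard clients. The container IPs below are hypothetical; use listdbs to find the actual instance IP and enter the autogenerated or specified password when prompted.

mysql -h 10.0.4.30 -u wordpress -p wordpress

psql -h 10.0.5.30 -U discourse discourse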

High availability

High availability ensures your services are up even if a server goes down. One way to do this is to have 2 matched servers and add a floating IP.

All services that depend on the server are configured to connect to the floating ip.

If one of the servers goes down the floating IP is available on the other, which ensures your services remain highly available.

VIP services are only available for use on servers.

Flockport deploys keepalived for HA.

Let's use 2 matched servers for this example, s1 and s2, that both provide the same services.

To enable HA use the addvip command to specify the 2 servers selected for HA and choose a floating ip.

flockport addvip s1 s2 192.168.122.100

This enables HA on s1 and s2 and adds the floating IP 192.168.122.100. The floating IP must be in the range of the servers' subnet. Since Flockport does not control the server subnet you must reserve this IP beforehand.

Now dependent services can be configured to access the floating ip on s1 and s2 and even if one server goes down the services will be available.
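
Since the floating IP is held as an additional address on whichever server is currently active, you can check where it is with a standard ip command on s1 or s2.

ip addr show | grep 192.168.122.100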

You can list all current VIPs with the listvips command.

flockport listvips

This will list all current VIPs in operation in the cluster.

You can remove a previously added VIP with the delvip command. The VIP name is autogenerated and available from the listvips command.

flockport delvip vipname

This will remove the floating ip.

Job scheduler

The job scheduler is designed to automatically deploy container apps cluster wide, track them and upgrade them when required.

This is a very early preview release that performs basic scheduling. This will become more sophisticated.

When you schedule a job flockport evaluates all servers in the cluster and allocates jobs for ideal placement.

You can add a job with the addjob command.

flockport addjob [c1] [count] -n network -g group -r replicas -s strategy

c1 is the base container and count is the number of jobs to deploy for this container. That is all that is required to launch a job.

The options let you fine-tune your deployment. Network specifies which network to attach the job to. Group defines job stickiness; jobs in the same group will be placed on the same server. Replicas determines how many jobs will be launched per server. And strategy determines job placement. Currently only the 'resource' strategy is supported.
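
For example, using only the options described above, the following would launch 6 instances of the c1 job in a group named web with 2 replicas per server. The group name and counts here are illustrative.

flockport addjob c1 6 -g web -r 2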

You can get details of all jobs deployed with the listjobs command.

flockport listjobs

This will list all jobs currently deployed in the cluster.

You can upgrade jobs with the upgradejob command.

flockport upgradejob c1

This will upgrade the job specified across the cluster.

You can perform rolling upgrades with the interval option.

flockport upgradejob c1 -i 20

This will wait 20 seconds before upgrading the job instances deployed.

You can scale jobs with the scalejob command.

flockport scalejob [c1] [count] -g group

This will scale the c1 job by the count specified.

You can delete jobs with the deljob command.

flockport deljob c1

This will remove the job from active deployment.