Deploying Container Apps

Containers typically live on a private network within the host, which means they are not accessible from outside the host. For instance, if you are running an Nginx web server in a container on port 80, you can access the Nginx container from within the host, but it's not accessible from outside.

To make it accessible from outside the host on port 80 you need to use port forwarding. On Linux this is easily done with the iptables utility. For instance, to forward host port 80 to port 80 of a container with IP 10.0.4.20, we would run the command below.

iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 80 -j DNAT --to-destination 10.0.4.20:80
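The DNAT rule alone is usually not sufficient: the host also needs to permit forwarded traffic and masquerade replies from the container network. A sketch of the companion rules, assuming the container network is 10.0.4.0/24 and the host's external interface is eth0 (adjust both to your setup):

```shell
# Allow forwarded traffic to reach the container (needed if the
# FORWARD chain policy is DROP).
iptables -A FORWARD -p tcp -d 10.0.4.20 --dport 80 -j ACCEPT

# Masquerade traffic from the container network so replies
# return through the host.
iptables -t nat -A POSTROUTING -s 10.0.4.0/24 -o eth0 -j MASQUERADE

# Make sure IP forwarding is enabled on the host.
sysctl -w net.ipv4.ip_forward=1
```

Most container tools set up some of this automatically, so check your existing rules with `iptables -L -n -t nat` before adding more.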

That's all well and good, but if you have multiple container apps on port 80 you can only port forward to one container at a time. This is where a reverse proxy like Nginx becomes useful. Some container platforms refer to this as an ingress controller, but that is needless verbiage: it's just an Nginx reverse proxy. And in case you want to load balance container application instances, you would typically use either Nginx or Haproxy.

Reverse Proxies

You can configure Nginx to serve various container apps on your server or internal network. This way all the containers can remain on the private network and you only need to expose the Nginx container. You can of course run apps on other ports, but ports 80/443 are required by most apps.

This works not only on single hosts but also across internal networks. You can have a single Nginx container serving any number of apps from the internal network, so all your PHP, Python or Ruby apps can be served to the outside world by Nginx. You can also terminate SSL connections at Nginx.

Let's use a real world example to illustrate this. Suppose you have WordPress, Minio and Redmine containers running on your host. You can configure a single Nginx container instance to serve all three apps. A typical Nginx configuration to serve a WordPress container instance, for example, would look like this.

upstream backend {
    server 10.0.4.100:80;
}

server {
    listen 80;
    server_name mywordpress.org;
    access_log /var/log/nginx/mywordpress.access.log;
    error_log /var/log/nginx/mywordpress.error.log;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://backend;
    }
}

This assumes the WordPress container IP is 10.0.4.100 and the URL at which you want to access the WordPress app is mywordpress.org. You can replicate the config for each container app you want to serve, simply changing the upstream server IP and port to your container's, and the server_name.
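Since each app's server block differs only in the upstream address and the server_name, the replication can be scripted. A minimal sketch; the app names, domains, IPs and output file names are hypothetical:

```shell
#!/bin/sh
# Generate one Nginx config file per app. Each line of $apps is:
# <name> <domain> <container-ip:port> (illustrative values).
apps="wordpress mywordpress.org 10.0.4.100:80
redmine myredmine.org 10.0.4.140:3000"

echo "$apps" | while read name domain upstream; do
  cat <<EOF > "$name.conf"
upstream ${name}_backend {
    server $upstream;
}

server {
    listen 80;
    server_name $domain;

    location / {
        proxy_set_header X-Real-IP \$remote_addr;
        proxy_set_header Host \$http_host;
        proxy_set_header X-Forwarded-Proto \$scheme;
        proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
        proxy_pass http://${name}_backend;
    }
}
EOF
done
```

Drop the generated files into Nginx's conf.d (or include them from nginx.conf) and reload.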

This is not limited to containers within a single host. You can use Nginx to serve apps from across your internal network.

You can also use Nginx for SSL termination like below.

upstream backend {
    server 10.0.4.120:9402;
}

server {
    listen 80;
    server_name myminio.org;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name myminio.org;
    ssl_certificate     myminio.org.cert;
    ssl_certificate_key myminio.org.key;
    ssl_protocols       TLSv1.2 TLSv1.3;
    ssl_ciphers         HIGH:!aNULL:!MD5;
    ssl_session_cache   builtin:1000 shared:SSL:10m;

    access_log /var/log/nginx/myminio.access.log;
    error_log /var/log/nginx/myminio.error.log;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://backend;
    }
}

This assumes the Minio container IP is 10.0.4.120 and the URL at which you would like to access the Minio app is myminio.org. You can of course configure the URL as required. In these examples we used Nginx, but you can also use Apache or any other web server to do the same.
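For testing SSL termination before you have a real certificate, you can generate a self-signed pair with openssl. The file names below match the ssl_certificate directives in the config above and are otherwise arbitrary:

```shell
# Generate a self-signed certificate and key for testing.
# Browsers will warn about it; use a CA-issued certificate in production.
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout myminio.org.key -out myminio.org.cert \
    -days 365 -subj "/CN=myminio.org"
```

Place both files where the Nginx container can read them and reload Nginx.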

Load Balancing

You can also configure an Nginx or Haproxy load balancer on the same principle to load balance multiple instances of apps across a cluster.

Below is a typical configuration for an Nginx load balancer, serving three backend Redmine container instances defined in 'upstream backend'.

upstream backend {
    server 10.0.4.140:3000;
    server 10.0.5.150:3000;
    server 10.0.7.170:3000;
}

server {
    listen 80;
    server_name myredmine.org;
    access_log /var/log/nginx/myredmine.access.log;
    error_log /var/log/nginx/myredmine.error.log;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://backend;
    }
}
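By default Nginx distributes requests round robin across the upstream servers. Other strategies can be selected in the upstream block; for example least_conn, or weights to send more traffic to a larger instance. The weights below are illustrative:

```
upstream backend {
    least_conn;                       # pick the server with fewest active connections
    server 10.0.4.140:3000 weight=2;  # receives roughly twice the share
    server 10.0.5.150:3000;
    server 10.0.7.170:3000;
}
```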

You can also use Haproxy to do the same. A typical Haproxy configuration would look like this.

global
    log 127.0.0.1 local2
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    nbproc 1
    maxconn 4000
    user haproxy
    group haproxy
    daemon
    stats socket /var/lib/haproxy/stats
    tune.ssl.default-dh-param 2048

defaults
    mode http
    log global
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor except 127.0.0.0/8
    option redispatch
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout http-keep-alive 10s
    timeout check 10s
    maxconn 3000

frontend www-http
    bind *:80
    option forwardfor
    stats enable
    stats refresh 10s
    stats uri /haproxy?stats
    stats realm "haproxy stats"
    stats auth admin:password
    default_backend app

backend app
    balance roundrobin
    server web01 10.0.4.140:3000 check
    server web02 10.0.5.150:3000 check
    server web03 10.0.7.170:3000 check
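Both proxies can check a configuration for syntax errors before loading it, which is worth doing after every change. A sketch assuming the default config paths on the proxy host or container:

```shell
# Validate the Nginx configuration, then reload without dropping connections.
nginx -t && nginx -s reload

# Validate the Haproxy configuration file before restarting the service.
haproxy -c -f /etc/haproxy/haproxy.cfg
```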

Flockport has this functionality built in and lets you deploy both Nginx and Haproxy instances to serve your container apps on a single host or across the network.

