CoreOS Fleet, link redundant Docker container

I have a small service that is split into three Docker containers: a backend, a frontend, and a small logging part. I now want to start them using CoreOS and Fleet.

I want to try to start three redundant backend containers, so the frontend can switch between them if one of them fails.

How do I link them? With only one it's easy: I just give it a name, e.g. 'back', and link it like this:

    docker run --name front --link back:back --link graphite:graphite -p 8080:8080 blurio/hystrixfront

Is it possible to link multiple ones?
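Something like the sketch below is what I have in mind (blurio/hystrixback is just a placeholder name for my backend image), but then the frontend would need to know all three names and handle the failover itself:

    # three identical backends, each with its own name/link alias (image name is a placeholder)
    docker run -d --name back1 blurio/hystrixback
    docker run -d --name back2 blurio/hystrixback
    docker run -d --name back3 blurio/hystrixback
    docker run --name front --link back1:back1 --link back2:back2 --link back3:back3 --link graphite:graphite -p 8080:8080 blurio/hystrixfront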

There are 2 answers

Greg (best answer)

The method you use will depend somewhat on the type of backend service you are running. If the backend service is HTTP, there are a few good proxies / load balancers to choose from.

The general idea behind these is that your frontend only needs to be introduced to a single entry point, which nginx or haproxy presents. The tricky part with this, or any cloud service, is that you need to be able to introduce or remove backend services and have the proxy pick up the change. There are some good write-ups on doing this with nginx and haproxy. Here is one:

haproxy tutorial
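As a rough illustration of the single-entry-point idea, a minimal haproxy.cfg could look like the sketch below; the addresses and ports are placeholders for wherever fleet happens to schedule your backend containers:

    frontend http-in
        bind *:80
        mode http
        default_backend back_pool

    backend back_pool
        mode http
        balance roundrobin
        # placeholder addresses -- replace with the hosts/ports of your 'back' containers
        server back1 10.0.0.101:8080 check
        server back2 10.0.0.102:8080 check
        server back3 10.0.0.103:8080 check

The frontend then only ever talks to haproxy on port 80, regardless of which backend instances are alive.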

The real problem here is that it isn't automatic. There may be some techniques that automatically introduce/remove backends for these proxy servers.

Kubernetes (which can be run on top of CoreOS) has a concept called 'services'. With this deployment method you create a 'service' plus a 'replication controller', which provides the backend Docker processes for the service you describe. The replication controller can then be instructed to increase or decrease the number of backend processes, while your frontend only ever accesses the 'service'. I have been using this recently and it works quite well.
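A minimal sketch of those two objects, assuming (my assumption, not from the question) that the backend image is blurio/hystrixback and listens on 8080:

    # replication controller: keeps 3 copies of the backend running
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: back-rc
    spec:
      replicas: 3
      selector:
        app: back
      template:
        metadata:
          labels:
            app: back
        spec:
          containers:
          - name: back
            image: blurio/hystrixback   # placeholder image name
            ports:
            - containerPort: 8080
    ---
    # service: the single stable entry point the frontend talks to
    apiVersion: v1
    kind: Service
    metadata:
      name: back
    spec:
      selector:
        app: back
      ports:
      - port: 8080
        targetPort: 8080

Scaling is then a one-liner, e.g. kubectl scale rc back-rc --replicas=5.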

I realize this isn't really a cut-and-paste answer. I think the question you ask goes to the heart of cloud deployment.

yp28

As Michael stated, you can get this done automatically by adding a discovery service and binding it to the backend container. The discovery service writes the IP address (usually you'll want this to be the address on your private network, to avoid unnecessary bandwidth usage) and port into the etcd key-value store; the load balancer container can then read those entries and automatically reconfigure itself to include the available nodes.
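With fleet this usually takes the form of a small 'sidekick' unit scheduled next to each backend instance that keeps announcing it in etcd; the unit names, key path, and port below are assumptions for illustration:

    # back-discovery@.service (sketch) -- runs on the same machine as back@%i.service
    [Unit]
    Description=Announce back@%i in etcd
    BindsTo=back@%i.service
    After=back@%i.service

    [Service]
    EnvironmentFile=/etc/environment
    # re-register every 45s with a 60s TTL so the key disappears if the backend dies
    ExecStart=/bin/sh -c "while true; do etcdctl set /services/back/%i ${COREOS_PRIVATE_IPV4}:8080 --ttl 60; sleep 45; done"
    ExecStop=/usr/bin/etcdctl rm /services/back/%i

    [X-Fleet]
    MachineOf=back@%i.service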

There is a good tutorial by Digital Ocean on this: https://www.digitalocean.com/community/tutorials/how-to-use-confd-and-etcd-to-dynamically-reconfigure-services-in-coreos
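In that tutorial the load balancer side is handled by confd, which watches the etcd keys and rewrites the proxy config whenever they change. Roughly, a template resource and template look like the sketch below (the paths, key names, and choice of haproxy are my assumptions to match the earlier sketch; the tutorial itself may use a different proxy):

    # /etc/confd/conf.d/haproxy.toml (sketch)
    [template]
    src = "haproxy.cfg.tmpl"
    dest = "/etc/haproxy/haproxy.cfg"
    keys = ["/services/back"]
    reload_cmd = "systemctl reload haproxy"

    # /etc/confd/templates/haproxy.cfg.tmpl (sketch)
    backend back_pool
        balance roundrobin
        {{range gets "/services/back/*"}}
        server {{base .Key}} {{.Value}} check{{end}}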