Using Azure Container Service with volume mapping


I deployed a dockerized app to Azure using the Azure Container Service. It is a NodeJS/Express app using MongoDB. Everything is working fine, but now I want to set up a volume mapping between one of my internal project folders and a folder on the VM.

This works fine in regular Docker; I simply run the following command when starting the container:

docker run -d --net=784849494 -p 5555:80 -v /www/uploads:/var/www/myapp/uploads myapp

Basically, I create an uploads folder under /www on the VM, and it is mapped to my project's uploads folder inside the container.

This is the part I'm confused about: when I create the folder on the VM that Azure spun up for me, which I access with

ssh user@myazureapp -p 2200 -L 22375:127.0.0.1:2375 -i mykey

the mapping does not work. I'm guessing the folder needs to be created on another VM that is integrated with the container service, but I'm not sure where that is and cannot find it.


There is 1 answer:

Answered by rgardler:

Short version:

You don't want to be using on-VM folders; you need some kind of network storage for your data. Perhaps use a managed MongoDB service, or store your MongoDB files in a storage account where you have guarantees of data access. To do the latter you might, for example, use the Azure Files volume driver for Docker (see https://azure.microsoft.com/en-us/blog/persistent-docker-volumes-with-azure-file-storage/).
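
For illustration, a minimal sketch of the Azure Files approach, assuming the volume driver from the linked post is installed and running on each agent (the share name "uploads" and volume name "uploadsvol" are placeholders):

# create a Docker volume backed by an Azure Files share (names are placeholders)
docker volume create -d azurefile -o share=uploads --name uploadsvol

# mount the named volume instead of a host folder
docker run -d -p 5555:80 -v uploadsvol:/var/www/myapp/uploads myapp

Because the data lives in the storage account rather than on any one VM, the container can be scheduled onto any agent and still see the same files.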

Longer version:

The machine you are creating your folder on is the master. It is not where your containers are deployed; it is where the Docker Swarm master runs, which is why your containers never see the folder you created there. You need to create the folder on the agents.

However, you don't know where Swarm will place your container in the cluster, so there is no guarantee it will land on a VM where your folder exists. Even if you get lucky and the first deployment lands on the correct VM, there is no guarantee it will be restarted on the same VM should the container ever need to restart.

You could create the folder on every agent, but a restarted container is still not guaranteed to land on the same VM, and thus would not have access to the same data. Even if it did land on the same VM, you may still be in trouble: if the VM is restarted, perhaps to be service-healed by Azure, there is no guarantee that its disks will still be there. VM disks are ephemeral, and I'm assuming you don't want this data to go away.

In many cases the best option is to use a service for your data rather than run it in your cluster, e.g. https://learn.microsoft.com/en-us/azure/documentdb/documentdb-protocol-mongodb. This means someone else is responsible for backup, availability, scalability, performance, etc. It does mean you are paying for the service, but when you take into account the cost of managing your own database it is often cheaper - and you'll need a smaller ACS cluster.
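
Since that service speaks the MongoDB wire protocol, moving to it is mostly a configuration change for your NodeJS app. A minimal sketch, assuming your app reads its connection string from an environment variable (the MONGO_URL name is an assumption, and the connection string is a placeholder; the real one comes from the Azure portal):

# point the app at the hosted MongoDB-compatible endpoint instead of a local container
# (user, password, account, and port are placeholders taken from the portal)
docker run -d -p 5555:80 -e MONGO_URL="mongodb://<user>:<password>@<account>.documents.azure.com:10255/?ssl=true" myapp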

If you really want to host your own MongoDB instance, then consider using a volume driver that ensures your MongoDB container always has access to a networked storage location, regardless of where it is deployed. For example, you could use the Azure Files volume driver for Docker (see https://azure.microsoft.com/en-us/blog/persistent-docker-volumes-with-azure-file-storage/).
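
Following the same pattern as the uploads sketch above, and again assuming the driver is installed on every agent (share and volume names are placeholders), that might look like:

# create a volume backed by an Azure Files share for the database files
docker volume create -d azurefile -o share=mongodata --name mongodata

# run MongoDB with its data directory on the shared volume, so the data
# survives the container being rescheduled onto a different agent
docker run -d -p 27017:27017 -v mongodata:/data/db mongo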