Locally mount AWS EFS using docker-compose


I am wondering if there is an easy way to mount Amazon EFS (Elastic File System) as a volume in a local docker-compose setup.

The reason is that for local development, any volumes created are persisted on my laptop; if I were to change machines, I couldn't access any of that underlying data. A cloud NFS would solve this problem, as it would be readily available from anywhere.

The AWS documentation (https://docs.aws.amazon.com/efs/latest/ug/efs-onpremises.html) seems to suggest using AWS Direct Connect or a VPN - is there any way to avoid this by opening port 2049 (NFS traffic) in a security group that allows all IP addresses, and applying that security group to a newly created EFS?
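For reference, this is roughly the security group change I had in mind (all IDs below are placeholders):

# open NFS (TCP 2049) to all IPs on the mount target's security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 2049 --cidr 0.0.0.0/0

# apply that security group to the EFS mount target
aws efs modify-mount-target-security-groups \
  --mount-target-id fsmt-0123456789abcdef0 \
  --security-groups sg-0123456789abcdef0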

Here is my docker-compose.yml:

version: "3.2"
 
services:
  postgres_db:
    container_name: "postgres"
    image: "postgres:13"
    ports:
      - 5432:5432
    volumes:
      - type: volume
        source: postgres_data
        target: /var/lib/postgresql/data
        volume:
          nocopy: true
    environment: 
      POSTGRES_USER: 'admin'
      POSTGRES_PASSWORD: "password"
 
volumes: 
  postgres_data:
    driver_opts:
      type: "nfs"
      o: "addr=xxx.xx.xx.xx,nolock,soft,rw"
      device: ":/docker/example"

I am getting the below error:

ERROR: for postgres_db  Cannot start service postgres_db: error while mounting volume '/var/lib/docker/volumes/flowstate_example/_data': failed to mount local volume: mount :/docker/example:/var/lib/docker/volumes/flowstate_example/_data, data: addr=xxx.xx.xx.xx,nolock,soft: connection refused

I interpret this to mean that my laptop is not part of the EFS's VPC, and hence it cannot mount the EFS.
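As a sanity check, NFS reachability can be probed directly from the laptop (address is the same placeholder as above); given the error above, this is expected to be refused as well:

nc -vz xxx.xx.xx.xx 2049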

For added context, I am looking to dockerize a web scraping setup and have the data volume persisted in the cloud so I can connect to it from anywhere.


There are 3 answers

nnsense

In short, the answer is no: it is not possible to mount an EFS volume on your local machine, or on anything running outside the AWS VPC, without a VPN or some port redirection. EFS uses a private IP from the subnet(s) CIDR range; it has no public IP, and its DNS name is internal and cannot be resolved from the internet.
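You can see this from outside the VPC: the file system's DNS name only resolves inside the VPC (file system ID is a placeholder), so a lookup from your laptop will not give you a reachable address:

nslookup fs-0123456789abcdef0.efs.us-east-1.amazonaws.com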

What you could do instead is use S3, if that is an option for you (the first caveat I can think of: no unix permissions).

On Linux, s3fs is a reliable option.

Once installed and mounted, you can use it as a volume in your deployment just like any other directory. You can also automate it as shown here, by running s3fs as a container and mounting it into a secondary container (and locally, which is nice if you want to access the files directly):

In the .env file, set your S3 credentials:

AWS_S3_BUCKET=
AWS_S3_ACCESS_KEY_ID=
AWS_S3_SECRET_ACCESS_KEY=

Then, in docker-compose.yaml:

version: '3.8'

services:
  s3fs:
    privileged: true
    image: efrecon/s3fs:1.90
    restart: unless-stopped
    env_file: .env
    volumes:
      # This also mounts the S3 bucket to `/mnt/s3data` on the host machine
      - /mnt/s3data:/opt/s3fs/bucket:shared

  test:
    image: bash:latest
    restart: unless-stopped
    depends_on:
      - s3fs
    # Just so this container won't die and you can test the bucket from within
    command: sleep infinity
    volumes:
      - /mnt/s3data:/data:shared
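
A quick way to check the mount end to end (assuming the Compose v2 CLI):

docker compose up -d
docker compose exec test ls /data
docker compose exec test sh -c 'echo hello > /data/hello.txt'
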
sgohl

EFS expects NFSv4, so:

version: '3.8'

services:

  postgres:
    image: "postgres:13"
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes: 

  postgres_data:
    driver_opts:
      type: "nfs4"
      o: "addr=xxx.xx.xx.xx,nolock,soft,rw"
      device: ":/docker/postgres_data"

Of course, the referenced nfs-export/path must exist. Swarm will not automatically create non-existing folders.
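If the path does not exist yet, you can mount the EFS root from a host inside the VPC and create it there first (address and mount point are placeholders):

sudo mount -t nfs4 -o nfsvers=4.1 xxx.xx.xx.xx:/ /mnt/efs
sudo mkdir -p /mnt/efs/docker/postgres_data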

Make sure to delete any old docker volumes of this faulty kind/name manually (on all swarm nodes!) before recreating the stack:

docker volume rm $(docker volume ls -f name=postgres_data -q)

This is important to understand: Docker NFS volumes are really only a declaration of where to find the data. They are not updated when you change your docker-compose.yml, so you must remove the volume for any new configuration to take effect.
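You can see the declaration a volume currently holds with (using the same name filter as above):

docker volume ls -f name=postgres_data
docker volume inspect <volume_name>   # the Options field shows the stored type/addr/device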

See the output of

docker service ps stack_postgres --no-trunc

for more information on why the volume couldn't be mounted.

Also make sure you can mount the nfs-export via mount -t nfs4 ...

see showmount -e your.efs.ip.address

Miacis Wang

volumes:
  nginx_test_vol:
    driver_opts:
      type: "nfs"
      o: "addr=fs-xxxxxxxxxxxxxxxxxx.efs.us-east-1.amazonaws.com,rw,nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport"
      device: ":/nginx-test"

This works very well for me.
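For completeness, a service references such a volume like any other named volume; a minimal sketch (the service and container path are just illustrative):

services:
  nginx:
    image: nginx:alpine
    volumes:
      - nginx_test_vol:/usr/share/nginx/html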