I am wondering if there is an easy way to mount Amazon EFS (Elastic File System) as a volume in a local docker-compose setup.
The reason is that, for local development, the volumes I create are persisted on my laptop; if I change machines, I can't access any of that underlying data. A cloud NFS would solve this problem, as it would be readily accessible from anywhere.
The AWS documentation (https://docs.aws.amazon.com/efs/latest/ug/efs-onpremises.html) seems to suggest using AWS Direct Connect or a VPN. Is there any way to avoid this by opening port 2049 (NFS traffic) to all IP addresses in a security group and attaching that security group to a newly created EFS?
Here is my docker-compose.yml:
version: "3.2"
services:
postgres_db:
container_name: "postgres"
image: "postgres:13"
ports:
- 5432:5432
volumes:
- type: volume
source: postgres_data
target: /var/lib/postgresql/data
volume:
nocopy: true
environment:
POSTGRES_USER: 'admin'
POSTGRES_PASSWORD: "password"
volumes:
postgres_data:
driver_opts:
type: "nfs"
o: "addr=xxx.xx.xx.xx,nolock,soft,rw"
device: ":/docker/example"
I am getting the below error:
ERROR: for postgres_db Cannot start service postgres_db: error while mounting volume '/var/lib/docker/volumes/flowstate_example/_data': failed to mount local volume: mount :/docker/example:/var/lib/docker/volumes/flowstate_example/_data, data: addr=xxx.xx.xx.xx,nolock,soft: connection refused
I interpret this to mean that my laptop is not inside the EFS's VPC, so it cannot mount the EFS.
For added context, I am looking to dockerize a web scraping setup and have the data volume persisted in the cloud so I can connect to it from anywhere.
In short, the answer is no: it is not possible to mount an EFS volume on your local machine, or on anything else running outside the AWS VPC, without some kind of VPN or port redirection. EFS mount targets use private IPs from the subnet's CIDR range; they have no public IP, and their DNS names are internal and cannot be resolved from the internet.
What you could do instead is use S3, if that is an option for you (the first caveat that comes to mind: no Unix permissions).
On Linux, s3fs (the s3fs-fuse project) is a reliable option for mounting a bucket as a regular directory.
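For example, a minimal install-and-mount sketch on Debian/Ubuntu (the bucket name and mount directory are placeholders):

sudo apt-get install -y s3fs

# s3fs reads credentials from a file in ACCESS_KEY_ID:SECRET_ACCESS_KEY format
echo "ACCESS_KEY_ID:SECRET_ACCESS_KEY" > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs

# mount the bucket; afterwards ~/s3-data behaves like any other directory
mkdir -p ~/s3-data
s3fs my-scraper-bucket ~/s3-data -o passwd_file=${HOME}/.passwd-s3fs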
Once installed and the bucket is mounted, you can use it as a volume in your deployment just like any other directory. You can also automate this, as shown in the sketches below, by running s3fs as a container and mounting it into a secondary container (and locally, which is nice if you want to access the files directly).
In the .env file you set your S3 credentials.
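A minimal sketch, assuming the efrecon/s3fs image used in the compose sketch further down (the variable names follow that image's documentation; adjust them to whatever s3fs image you pick):

AWS_S3_BUCKET=my-scraper-bucket
AWS_S3_ACCESS_KEY_ID=ACCESS_KEY_ID
AWS_S3_SECRET_ACCESS_KEY=SECRET_ACCESS_KEY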
Then, in docker-compose.yaml, you run s3fs as its own service and share its mount with the containers that need it.
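A sketch of how that could look. The image name, its environment variable names, and its default mount path (/opt/s3fs/bucket) are assumptions based on the efrecon/s3fs image's documentation; swap in any s3fs-capable image and check its docs for the exact settings:

version: "3.2"
services:
  s3fs:
    image: efrecon/s3fs            # assumption: any image with s3fs-fuse works
    env_file: .env
    # FUSE inside a container needs the fuse device and extra privileges;
    # if your host's AppArmor still blocks the mount, `privileged: true` is the blunt fallback
    devices:
      - /dev/fuse
    cap_add:
      - SYS_ADMIN
    volumes:
      # bind mount with shared propagation so the FUSE mount created inside
      # this container is visible to the host and to other containers
      - type: bind
        source: /mnt/s3-data         # any empty directory on the host
        target: /opt/s3fs/bucket     # assumption: default mount point of the image
        bind:
          propagation: rshared

  scraper:
    image: my-scraper:latest         # placeholder for your web-scraping image
    depends_on:
      - s3fs                         # orders startup only; the mount may take a moment
    volumes:
      - /mnt/s3-data:/data           # the bucket contents show up here

Bear in mind that s3fs exposes object storage through a filesystem interface, so it is fine for persisting scraped output files, but it is not a good fit for the Postgres data directory itself.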