I'm new to Docker volumes. I'm trying to process a file inside a Docker container and save the output to a local directory so I can sync it to AWS S3.
Here is my docker run command with --volume, which attaches a local directory path to the container. When I run the command on my local system (macOS), it creates a file (created_file.txt) in the local directory:
docker run -v /*/test_image:/src -t test python3 test_script.py --pick_path=/src/image_lenna.jpeg
Here is the same docker run command on my EC2 instance:
docker run -v /home/ubuntu/test_file/test_image:/src -t test python3 test_script.py --pick_path=/src/image_lenna.jpeg
On my local system, created_file.txt appears in the mounted directory after the run; on the remote system (EC2), it does not.
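For reference, here is a simplified sketch of what test_script.py roughly does (the OpenCV usage and the exact write are simplified/assumed; --pick_path is the argument from the commands above):

import argparse
import cv2  # OpenCV; the libsm6/libxext6/libxrender packages in the Dockerfile are its system dependencies

parser = argparse.ArgumentParser()
parser.add_argument("--pick_path", required=True)  # container-side path of the input image
args = parser.parse_args()

img = cv2.imread(args.pick_path)  # read the image from the mounted volume

# NOTE: this write uses a relative path, which resolves against the container's
# working directory rather than the mounted /src (this turns out to be relevant;
# see the answer below).
with open("created_file.txt", "w") as f:
    f.write(f"processed image with shape {img.shape}\n")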
The Dockerfile contents:
FROM nginx
RUN apt-get update && apt-get install --no-install-recommends --no-install-suggests -y curl
RUN apt-get install -y unzip
RUN apt-get -y install python3
RUN apt-get -y install python3-pip
USER root
RUN ["apt-get", "install", "-y", "libsm6", "libxext6", "libxrender-dev"]
WORKDIR .
COPY requirements.txt .
RUN pip3 install -r requirements.txt
COPY test_script.py .
RUN echo "hello world"
The command that I used to build the Docker image is:
docker build -t test .
The trailing . refers to the folder where my Dockerfile resides.
I don't understand why the file is not being created on the EC2 instance. Am I missing something?
This was a somewhat silly mistake. The write path in the Python script should also be the container-side path specified for the Docker volume. Here the local path is attached to the container at the mount point /src, so the Python script should write into the /src folder.
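In code terms (a minimal sketch; the filename and content are placeholders), the write call needs an absolute path under the mount point:

# Write under /src, the container-side mount point, so the file
# also shows up in the host directory bound with -v.
with open("/src/created_file.txt", "w") as f:
    f.write("done\n")

After rerunning the container, created_file.txt should then appear in /home/ubuntu/test_file/test_image on the EC2 host.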