Access files on host server from the Meteor App deployed with Meteor Up


I have a Meteor App deployed with Meteor Up to Ubuntu. From this App I need to read a file that is located outside the App's container, on the host server.

How can I do that?

I've tried to set up volumes in mup.js, but no luck. It seems I'm missing how to correctly provide /host/path and /container/path:

volumes: {
  // passed as '-v /host/path:/container/path' to the docker run command
  '/host/path': '/container/path',
  '/second/host/path': '/second/container/path'
},
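For the concrete file at /home/dirname/filename.csv, a filled-in mup.js might look like the sketch below. The container path /data is an arbitrary choice for this example, not a mup default; any absolute path inside the container works:

```javascript
// mup.js -- a sketch, not a verified config; only the volumes key
// is relevant here, and /data is an example container path
module.exports = {
  // ... servers, proxy, etc. ...
  app: {
    // ... name, path, env, docker, etc. ...
    volumes: {
      // host path : container path
      '/home/dirname': '/data'
    }
  }
};
```

With this mount, the host file /home/dirname/filename.csv appears inside the container as /data/filename.csv.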

I've read the Docker docs on mounting volumes, but I still can't make sense of them.

Let's say the file is at /home/dirname/filename.csv.

How do I correctly mount it into the App so that I can access it from the application code?

Or maybe there are other possibilities to access it?

There are 2 answers

Answer by Mikkel

Welcome to Stack Overflow. Let me suggest another way of thinking about this...

In a scalable cluster, Docker containers can be spun up and down as the load on the app changes. They may or may not land on the same host machine, so building a dependency on the host's file system isn't a great idea.

You might be better off using a file storage service such as S3, which scales on its own and isn't subject to local disk limits.

Another option is to determine if the files could be stored in the database.

I hope that helps

Answer by SimonSimCity

Let's try to narrow the problem down.

Meteor Up passes the `volumes` configuration parameter directly on to Docker, as the comment you included also mentions. It might therefore be easier to test against Docker directly, narrowing the components involved down as much as possible:

sudo docker run \
  -it \
  --rm \
  -v "/host/path:/container/path" \
  -v "/second/host/path:/second/container/path" \
  busybox \
  /bin/sh

Let me explain this: `-it` gives you an interactive terminal, `--rm` removes the container when you exit, each `-v` bind-mounts a host path into the container, `busybox` is a minimal test image, and `/bin/sh` drops you into a shell inside it.

I'd expect that you cannot access the files here either. In that case, dig deeper into why the folder can't be made accessible to Docker at all (for example, a typo in the host path or permissions on it).

If you can access them, which would surprise me, start your actual mup-deployed container and get a shell inside it by running the following command (replace my-mup-container with your container's name, which you can find with `docker ps`):

docker exec -it my-mup-container /bin/sh

You can think of this command as SSH'ing into a running container. Now you can check whether the file really isn't there, whether the user inside the container has permission to read it, and so on.
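A few checks worth running inside that container shell; /container/path stands in for whatever container path you configured in `volumes`:

```shell
# Run these inside the container shell opened with `docker exec`.
# /container/path is the example mount point from the question --
# substitute the container path from your own `volumes` config.
id                                       # which user is the app running as?
mount | grep /container/path || true     # is the bind mount actually present?
ls -ld /container/path || echo "mount point missing"
```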

Lastly, I have to agree with @Mikkel that mounting a local directory isn't a great option, but from here you can start looking into Docker volume drivers for mounting remote storage. He mentioned S3 on AWS; I've worked with Azure Files on Azure; there are plenty of possibilities.