I want to make an HTTP request from a server currently running on localhost to an API endpoint inside a running container. I can't use port mapping because I am creating multiple containers, one container per request.

This is how my code looks. I am using dockerode to create, run, stop, and delete a container for each request. Container creation and startup succeed, but the axios request times out. Is there any solution for this?
import Docker from "dockerode";
import axios from "axios";
import { v4 } from "uuid";

const data = await request.json();
const { language, program, inputs } = data;
const docker = new Docker();
const containerId = v4();
// 1. I am trying to spin up a new container using dockerode on each request
const container = await docker.createContainer({
  Image: "project-name",
  name: containerId,
});
await container.start();
const containerInfo = await container.inspect();
const ipAddress = containerInfo.NetworkSettings.IPAddress;
const containerUrl = `http://${ipAddress}/api/run`;
// 2. Send code as input to that container's api/run endpoint (where the compiler logic is)
const response = await axios.post(containerUrl, data);
// 3. Then stop and remove after getting the output
await container.stop();
await container.remove();
Except in one very specific host setup, you can't access the container-private IP addresses. You need to publish a port and access the container through its published port. However, you don't have to pick a host port yourself; if you don't assign one, Docker will choose an unused port on its own. (In `docker run` syntax, this is equivalent to `docker run -p 12345` with only a container port number and nothing else.) Now when you inspect the container, the `PortBindings` value will be filled in.

The host value is harder to determine programmatically. If your application is running on plain Docker on a native-Linux host, or on Docker Desktop, then `localhost` will be right. If this application is itself running inside a container with access to the host's Docker socket, you will need `host.docker.internal` or a similar value. I generally use a Minikube VM in my day job, and its `minikube ip` address is what I need. You can also set up to use a remote Docker daemon, in which case the host name will be something else entirely.

As a comment suggests, I might rethink this approach. Launching a container per request is actually kind of expensive, there are a lot of lifecycle issues to think about (what happens to the container if the main process crashes?), and launching containers implies unrestricted root-level access to the host. I also hinted at some environment-specific differences in the previous paragraph, and this code won't run at all in non-Docker container environments like Kubernetes. If you can set up a single long-running server and make HTTP requests to that, it will be much more maintainable (and much easier to actually run).
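The steps above can be sketched with dockerode. This is a minimal sketch, not the asker's actual setup: it assumes the `project-name` image's server listens on container port 80, and that the code runs on plain Docker or Docker Desktop so `localhost` is the right host; the `hostPortFor` and `runInContainer` names are my own, and the lazy `require` calls are just to keep the module loadable without Docker present.

```javascript
// Extract the host port Docker assigned to a published container port
// from the result of container.inspect().
function hostPortFor(inspectInfo, containerPort) {
  const bindings = inspectInfo.NetworkSettings.Ports[containerPort];
  if (!bindings || !bindings[0] || !bindings[0].HostPort) {
    throw new Error(`no published binding for ${containerPort}`);
  }
  return bindings[0].HostPort;
}

async function runInContainer(data) {
  const Docker = require("dockerode");
  const axios = require("axios");
  const docker = new Docker();

  // Publish container port 80 on a host port chosen by Docker; leaving
  // HostPort empty is the API equivalent of `docker run -p 80`.
  const container = await docker.createContainer({
    Image: "project-name",
    ExposedPorts: { "80/tcp": {} },
    HostConfig: {
      PortBindings: { "80/tcp": [{ HostPort: "" }] },
    },
  });
  await container.start();

  try {
    // After start, inspect() reports the randomly assigned host port.
    const info = await container.inspect();
    const hostPort = hostPortFor(info, "80/tcp");

    // "localhost" is right for plain Docker / Docker Desktop; substitute
    // host.docker.internal, the `minikube ip` address, etc. as needed.
    const response = await axios.post(`http://localhost:${hostPort}/api/run`, data);
    return response.data;
  } finally {
    // Stop and remove the container even if the request fails.
    await container.stop();
    await container.remove();
  }
}

module.exports = { hostPortFor, runInContainer };
```

The try/finally also addresses one of the lifecycle issues mentioned above: the container is cleaned up even when the HTTP request throws.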