I'm exploring the Docker capabilities for ShinyProxy on Azure and I want to set it up in a simple, expandable, and affordable way. As far as I understand there are five ways to set up Docker-based services on Azure.
Questions
My question is two-fold:
- What is the general experience with deploying ShinyProxy-based containers that spawn and destroy other containers based on connected user sessions?
- How can I correct my approach?
Approaches
(Listed from most to least desirable; tested all except for the virtual machine approach.)
A) App Service Docker or Docker Compose setup
This is the service I have the most experience with, and most of the complexity is abstracted away.
With this approach, I found out that the current implementation of Docker and Docker Compose for Azure App Services does not support custom networks (it ignores the networks key), which (as far as I understand) are required to let ShinyProxy communicate with the containers it spawns on an internal network. In my Docker Compose file I've specified the following (and verified that it works locally):
networks:
  app_default:
    driver: bridge
    external: false
    name: app_default
If I understand the documentation correctly, you are simply unable to create any custom networks for your containers. It's also unclear whether a custom Azure VNet could be used for this instead (I'm not experienced with creating those).
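For comparison, this is the kind of setup that works locally: create the bridge network ahead of time and reference it as external in the Compose file. This is a sketch of my local workaround, not something App Services currently honors:

```yaml
# Created beforehand with: docker network create app_default
networks:
  app_default:
    external: true
    name: app_default
```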
The second important part of this ShinyProxy setup is mapping the host's docker.sock file into the container. Again, this can be done through the Docker Compose file, or through parameters for a single Docker container. This is how I've specified it in my Docker Compose file (and verified that it works locally):
volumes:
  # The `//` path prefix is only needed on Windows host machines: it stops
  # Windows shells (e.g. Git Bash / MSYS) from rewriting the Unix-style
  # path before it reaches Docker. On a Linux host machine, the path prefix
  # needs only a single forward slash, `/`.
  # Windows host volume
  # docker_sock: //var/run/docker.sock
  # Linux host volume
  docker_sock: /var/run/docker.sock
And then use the docker_sock named volume to map to the container's /var/run/docker.sock file: docker_sock:/var/run/docker.sock.
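For reference, the same socket mapping can be done without Compose using plain docker run. This is a sketch with the image and network names from my setup; adjust them to yours:

```shell
# Run the ShinyProxy image with the Docker socket bind-mounted and attached
# to the bridge network, so spawned app containers are reachable.
docker run -d \
  --name app_shinyproxy \
  --network app_default \
  -p 8080:8080 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  shinyproxy:latest
```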
Because of these two problems, trying to visit any of the specs defined in the ShinyProxy application.yml file just results in Connection refused or File could not be found Java errors, which correspond to the network communication and the docker.sock mapping respectively.
B) Container Instances
New type of service, seems nice and easy
Pretty much the same problems as the App Service approach.
C) Container Apps
New type of service, seems nice and easy
Pretty much the same problems as the App Service approach.
D) Kubernetes Service
Requires a lot of additional configuration.
Tried, but abandoned this approach because I don't want to deal with an additional configuration layer and I doubt that I need this much control for my desired goal.
E) Virtual Machine
Requires a lot of setup and self-management for a production environment.
Haven't tried yet. There seem to be a couple of articles that go over how to approach this.
To Reproduce Locally
Here are some modified examples of my configuration files. I've left a couple of comments in there, and some properties are intentionally commented out.
ShinyProxy application.yml:
# ShinyProxy Configuration
proxy:
  title: ShinyProxy Apps
  landing-page: /
  heartbeat-enabled: true
  heartbeat-rate: 10000 # 10 seconds
  heartbeat-timeout: 60000 # 60 seconds
  # Timeout for the container to be available to ShinyProxy
  container-wait-time: 20000 # 20 seconds
  port: 8080
  authentication: none
  docker:
    # url: http://localhost:2375
    privileged: true
    internal-networking: true
    container-network: "app_default"
  specs:
    - id: hello_demo
      container-image: openanalytics/shinyproxy-demo
      display-name: Hello Application
      description: Application which demonstrates the basics of a Shiny app.
      container-network: "${proxy.docker.container-network}"
      # container-cmd: ["R", "-e", "shinyproxy::run_01_hello()"]

logging:
  file:
    name: shinyproxy.log

server:
  servlet:
    context-path: /

spring:
  application:
    name: "ShinyProxy Apps"
ShinyProxy Dockerfile:
FROM openjdk:8-jre
USER root
RUN mkdir -p "/opt/shinyproxy"
# Download shinyproxy version from the official source
RUN wget https://www.shinyproxy.io/downloads/shinyproxy-2.6.0.jar -O "/opt/shinyproxy/shinyproxy.jar"
# Or, Copy local shinyproxy jar file
# COPY shinyproxy.jar "/opt/shinyproxy/shinyproxy.jar"
COPY application.yml "/opt/shinyproxy/application.yml"
WORKDIR /opt/shinyproxy/
CMD ["java", "-jar", "/opt/shinyproxy/shinyproxy.jar"]
docker-compose.yml:
networks:
  app_default:
    driver: bridge
    external: false
    name: app_default

# volumes:
#   # The `//` path prefix is only needed on Windows host machines: it stops
#   # Windows shells (e.g. Git Bash / MSYS) from rewriting the Unix-style
#   # path before it reaches Docker. On a Linux host machine, the path prefix
#   # needs only a single forward slash, `/`.
#   # Windows-only volume
#   docker_sock: //var/run/docker.sock
#   # Linux-only volume
#   docker_sock: /var/run/docker.sock

services:
  # Can be used to test out images other than the ShinyProxy one
  # hello_demo:
  #   image: openanalytics/shinyproxy-demo
  #   container_name: hello_demo
  #   ports:
  #     - 3838:3838
  #   networks:
  #     - app_default
  #   volumes:
  #     - //var/run/docker.sock:/var/run/docker.sock
  shinyproxy:
    build: ./shinyproxy
    container_name: app_shinyproxy
    # Change the image to whatever you've named your own image
    image: shinyproxy:latest
    # privileged: true
    restart: on-failure
    networks:
      - app_default
    ports:
      - 8080:8080
    volumes:
      - //var/run/docker.sock:/var/run/docker.sock
With all the files in place, just run docker compose build && docker compose up.
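To verify the stack came up, a quick smoke test against the mapped port (assumes the Compose stack is running locally; the spec id matches my application.yml):

```shell
# ShinyProxy should respond on the mapped port once the stack is up.
curl -I http://localhost:8080/
# The demo spec is then reachable through the proxy at its app path:
curl -I http://localhost:8080/app/hello_demo
```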