I'm exploring the Docker capabilities for ShinyProxy on Azure, and I want to set it up in a simple, expandable, and affordable way. As far as I understand, there are five ways to set up Docker-based services on Azure.
Questions
My question is two-fold:
what is the general experience with deploying ShinyProxy-based containers that spawn and destroy other containers based on connected user sessions?
how can I correct my approach?
Approaches
(Listed from most to least desirable; I've tested all of them except the virtual machine approach.)
A) App Service Docker or Docker Compose setup
This is the approach I have the most experience with; the complexity is abstracted away.
With this approach, I found out that the current implementation of Docker and Docker Compose for Azure App Services does not support (it simply ignores) the networks which are required, as far as I understand, to let ShinyProxy communicate with the other containers on the internal network. In my Docker Compose file I've specified the following (and verified that it works locally):
networks:
  app_default:
    driver: bridge
    external: false
    name: app_default
If I understand the documentation properly, you are simply unable to create any custom networks for your containers. It's not clear whether a custom Azure VNet could be used for this instead (I'm not experienced with creating those).
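For local testing (not on App Service), a variant that mirrors the official ShinyProxy compose examples is to create the network up front with the Docker CLI and mark it as external in the compose file, so ShinyProxy and the containers it spawns share a known, pre-existing network. A minimal sketch, using the app_default name from above:

docker network create app_default
# and in docker-compose.yml, declare it as pre-existing instead:
#   networks:
#     app_default:
#       external: true
#       name: app_default

This doesn't solve the App Service limitation, but it removes one variable when verifying the setup locally.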
The second important part of this ShinyProxy setup is to map the host's docker.sock file into the container. Again, this can be done through the Docker Compose file or through parameters on a plain docker run. This is how I've specified it in my Docker Compose file (and verified that it works):
volumes:
  # The `//` path prefix only works on Windows host machines, it's because
  # Windows uses the Windows Pipeline system as a workaround for these kinds
  # of Unix filesystem paths. On a Linux host machine, the path prefix
  # needs to only contain a single forward slash, `/`.
  # Windows host volume
  # docker_sock: //var/run/docker.sock
  # Linux host volume
  docker_sock: /var/run/docker.sock
Then the docker_sock named volume is mapped to the container's /var/run/docker.sock file with docker_sock:/var/run/docker.sock.
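In the compose file further below I ended up expressing the same thing as a plain bind mount on the service instead of a named volume; a minimal sketch for a Linux host looks like this (on a Windows host the source path gets the // prefix, as noted above):

services:
  shinyproxy:
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock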
Because of these two problems, trying to visit any of the specs defined in the ShinyProxy application.yml just results in Connection refused or File could not be found Java errors, which correspond to the network communication and the docker.sock mapping respectively.
B) Container Instances
New type of service, seems nice and easy
Pretty much the same problems as the App Service approach.
C) Container Apps
New type of service, seems nice and easy
Pretty much the same problems as the App Service approach.
D) Kubernetes Service
Requires a lot of additional configuration.
Tried, but abandoned this approach because I don't want to deal with an additional configuration layer and I doubt that I need this much control for my desired goal.
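For reference, if I ever revisit this route: as far as I understand from the ShinyProxy docs, switching the container back-end is mostly a matter of the application.yml, roughly like the sketch below (property names quoted from memory, so treat them as an assumption; the real work is in the surrounding Kubernetes configuration):

proxy:
  container-backend: kubernetes
  kubernetes:
    # namespace the spawned app pods should run in (example value)
    namespace: shinyproxy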
E) Virtual Machine
Requires a lot of setup and self-management for a production environment.
Haven't tried yet. There seem to be a couple of articles that go over how to approach this.
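As a rough sketch of what this route would involve (assuming an Ubuntu VM; these are the standard Docker convenience-script install commands, nothing Azure-specific):

# install Docker on the VM
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
# copy the project (docker-compose.yml, shinyproxy/ folder) to the VM, then:
docker compose up -d

After that, the setup below should behave the same as it does locally, at the cost of managing the VM yourself.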
To Reproduce Locally
Here are some modified examples of my configuration files. I've left a couple of comments and also commented out properties in there.
ShinyProxy application.yml:
# ShinyProxy Configuration
proxy:
  title: ShinyProxy Apps
  landing-page: /
  heartbeat-enabled: true
  heartbeat-rate: 10000    # 10 seconds
  heartbeat-timeout: 60000 # 60 seconds
  # Timeout for the container to be available to ShinyProxy
  container-wait-time: 20000 # 20 seconds
  port: 8080
  authentication: none
  docker:
    # url: http://localhost:2375
    privileged: true
    internal-networking: true
    container-network: "app_default"
  specs:
    - id: hello_demo
      container-image: openanalytics/shinyproxy-demo
      display-name: Hello Application
      description: Application which demonstrates the basics of a Shiny app.
      container-network: "${proxy.docker.container-network}"
      # container-cmd: ["R", "-e", "shinyproxy::run_01_hello()"]

logging:
  file:
    name: shinyproxy.log

server:
  servlet:
    context-path: /

spring:
  application:
    name: "ShinyProxy Apps"
ShinyProxy Dockerfile:
FROM openjdk:8-jre
USER root
RUN mkdir -p "/opt/shinyproxy"
# Download shinyproxy version from the official source
RUN wget https://www.shinyproxy.io/downloads/shinyproxy-2.6.0.jar -O "/opt/shinyproxy/shinyproxy.jar"
# Or, Copy local shinyproxy jar file
# COPY shinyproxy.jar "/opt/shinyproxy/shinyproxy.jar"
COPY application.yml "/opt/shinyproxy/application.yml"
WORKDIR /opt/shinyproxy/
CMD ["java", "-jar", "/opt/shinyproxy/shinyproxy.jar"]
docker-compose.yml:
networks:
  app_default:
    driver: bridge
    external: false
    name: app_default

# volumes:
  # The `//` path prefix only works on Windows host machines, it's because
  # Windows uses the Windows Pipeline system as a workaround for these kinds
  # of Unix filesystem paths. On a Linux host machine, the path prefix
  # needs to only contain a single forward slash, `/`.
  # Windows-only volume
  # docker_sock: //var/run/docker.sock
  # Linux-only volume
  # docker_sock: /var/run/docker.sock

services:
  # Can be used to test out other images than the shinyproxy one
  # hello_demo:
  #   image: openanalytics/shinyproxy-demo
  #   container_name: hello_demo
  #   ports:
  #     - 3838:3838
  #   networks:
  #     - app_default
  #   volumes:
  #     - //var/run/docker.sock:/var/run/docker.sock
  shinyproxy:
    build: ./shinyproxy
    container_name: app_shinyproxy
    # Change the image to whatever you've named your own image
    image: shinyproxy:latest
    # privileged: true
    restart: on-failure
    networks:
      - app_default
    ports:
      - 8080:8080
    volumes:
      - //var/run/docker.sock:/var/run/docker.sock
With all the files in place, just run docker compose build && docker compose up.
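For completeness, the full local loop I use looks roughly like this (standard Docker CLI, nothing Azure-specific):

docker compose build
docker compose up -d
docker compose logs -f shinyproxy
# then open http://localhost:8080 and launch the hello_demo spec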
Related
ISSUE: I have a docker image running for Neo4j and one for Express.js. I can't get the docker images to communicate with each other.
I can run neo4j desktop, start a nodemon server and they will communicate.
SETUP:
NEO4J official docker image
NEO4J_AUTH none
PORTS localhost:7474 localhost:7687
Version neo4j-community-4.3.3-unix.tar.gz
NODEJS Image
PORTS 0.0.0.0:3000 :::3000
Version 14.17.5
Express conf
DEV_DB_USER_NAME="neo4j"
DEV_DB_PASSWORD="test"
DEV_DB_URI="neo4j://localhost" // for the image; locally it's bolt://localhost:7687
DEV_DB_SECRET_KEY=""
let driver = neo4j.driver(
envConf.dbUri,
neo4j.auth.basic(envConf.dbUserName, envConf.dbPassword)
);
package.json
"#babel/node": "^7.13.10",
"neo4j-driver": "^4.2.3",
I can remote into the Neo4j image through http://localhost:7474/browser/, so it's running.
I cannot use the server image to call a local Neo4j instance.
When I call the APIs in the server image I get these errors:
If I use the neo4j protocol:
Neo4jError: Could not perform discovery. No routing servers available. Known routing table: RoutingTable[database=default database, expirationTime=0, currentTime=1629484043610, routers=[], readers=[], writers=[]]
If I use the bolt protocol:
Neo4jError: Failed to connect to server. Please ensure that your database is listening on the correct host and port and that you have compatible encryption settings both on Neo4j server and driver. Note that the default encryption setting has changed in Neo4j 4.0. Caused by: connect ECONNREFUSED 127.0.0.1:7687
I've been scouring the documentation for a while; any ideas would be most welcome!
I was able to achieve the communication by using docker-compose. The problem was that both containers were on separate networks and I could not find a way to allow the server to communicate with the database. Running docker-compose and building both containers within a single compose network allows communication using the service names.
Take note: this is indentation-sensitive!
docker-compose.yml
version: '3.7'
networks:
  lan:

# The different services that make up our "network" of containers
services:
  # Express is our first service
  express:
    container_name: exp_server
    networks:
      - lan
    # The location of the Dockerfile to build this service
    build: <location of dockerfile>
    # Command to run once the Dockerfile completes building
    command: npm run startdev
    # Volumes, mounting our files to parts of the container
    volumes:
      - .:/src
    # Ports to map, mapping our port 3000 to port 3000 on the container
    ports:
      - 3000:3000
    # designating a file with environment variables
    env_file:
      - ./.env.express
  ## Defining the Neo4j Database Service
  neo:
    container_name: neo4j_server
    networks:
      - lan
    # The image to use
    image: neo4j:latest
    # map the ports so we can check the db server is up
    ports:
      - "7687:7687"
      - "7474:7474"
    # mounting volumes to the container to track db data
    volumes:
      - $HOME/neo4j/conf:/conf
      - $HOME/neo4j/data:/data
      - $HOME/neo4j/logs:/logs
      - $HOME/neo4j/plugins:/plugins
    env_file:
      - .env.neo4j
With this you can use Docker to run both the server and the database (and anything else) while still using change-detection rebuilding to develop, and you can even build images for multiple environments at the same time. Neat!
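To make the connection change explicit (my addition, not part of the original answer): with both services on the compose network lan, the Express container reaches Neo4j by the service name instead of localhost, so the env file would look roughly like this (values are the placeholders from the question):

# .env.express
DEV_DB_USER_NAME="neo4j"
DEV_DB_PASSWORD="test"
# "neo" is the compose service name; bolt://localhost:7687 only works outside the containers
DEV_DB_URI="bolt://neo:7687"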
My simple docker-compose.yaml file:
version: '3'
services:
  website:
    image: php:7.4-cli
    container_name: php72
    volumes:
      - ./hi:/var/www/html
    ports:
      - 8000:80
In the folder hi/ I have just an index.php with a hello-world print in it. (Do I need to have a Dockerfile here also?)
Now I just want to run this container with docker compose up:
$ docker compose up
host path ("/Users/xy/project/TEST/hi") not allowed as volume source, you need to reference an Azure File Share defined in the 'volumes' section
What does docker compose up have to do with Azure? I don't want to use an Azure File Share at this moment, and I never mentioned or configured anything with Azure. I logged out of Azure with az logout but still got this strange error on my MacBook.
I've encountered the same issue as you, but in my case I was trying to use an init-mongo.js script for a MongoDB in an ACI. I assume you were working in an Azure context at some point; I can't speak to that logout issue, but I can speak to volumes on Azure.
If you are trying to use volumes on Azure, at least in my experience (regardless of whether you want to use a file share or not), you'll need to reference an Azure file share and not your host path.
Learn more about Azure file share: Mount an Azure file share in Azure Container Instances
Also according to the Compose file docs:
The top-level volumes key defines a named volume and references it from each service’s volumes list. This replaces volumes_from in earlier versions of the Compose file format.
So the docker-compose file should look something like this
docker-compose.yml
version: '3'
services:
  website:
    image: php:7.4-cli
    container_name: php72
    volumes:
      - hi:/var/www/html
    ports:
      - 8000:80

volumes:
  hi:
    driver: azure_file
    driver_opts:
      share_name: <name of share>
      storage_account_name: <name of storage account>
Then just place the file/folder you wanted to use in the file share that is driving the volume beforehand. Again, I'm not sure why you are encountering that error if you've never used Azure but if you do end up using volumes with Azure this is the way to go.
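For what it's worth, the share itself can be created up front with the Azure CLI; a rough sketch (the names are placeholders, and this assumes the storage account already exists):

# create the file share that backs the volume
az storage share create --name hi --account-name mystorageacct
# upload the content that should appear in the mounted folder
az storage file upload --share-name hi --account-name mystorageacct --source ./hi/index.php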
Let me know if this helps!
I was testing deploying Docker Compose on Azure and I faced the same problem as you.
Then I tried to run docker images and that gave me the clue:
it says: image command not available in current context, try to use default
So I found the command "docker context use default"
and it worked!
So Azure somehow changed the Docker context, and you need to change it back:
https://docs.docker.com/engine/context/working-with-contexts/
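In other words, once an Azure (ACI) context is active, plain docker commands target Azure instead of your local engine. Roughly:

# see which contexts exist and which one is active
docker context ls
# switch back to the local Docker engine
docker context use default
docker compose up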
This is what my docker-compose file looks like:
version: "3"
services:
controller:
build: ./cd_controller
image: cd_controller:latest
env_file: $PWD/robot-configurations/.env
cap_add:
- ALL
links:
- test_fixture
volumes:
- dependencypackages:/root/dependency-packages
- robotconfigurations:/root/robot-configurations
container_name: controller_g5
test_fixture:
image: docker-standard-all.ur-update.dk/ur/pytest-ur:0.7.1
volumes:
- pytestfixture:/workspace
- controllertests:/workspace/controller-tests
entrypoint: /workspace/entrypoint.sh
container_name: test_fixture
stdin_open: true # docker run -i
tty: true # docker run -t
volumes:
robotconfigurations:
driver_opts:
type: none
device: $PWD/robot-configurations
o: bind
...
Basically it has two services/containers, controller and test_fixture. controller runs the source code and test_fixture contains all the test cases. test_fixture needs to talk to controller through a socket. Since docker-compose creates a network among its containers, in my pytest cases I am simply using:
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("controller_g5", 30001)) # controller_g5 is name of the controller container
So far everything looks fine. But I realized that I have multiple versions/features of the source code, so I would like to create multiple instances of the same setup, each with its own network. Does naming the container block the creation of a container with the same name in a different network? Also, I am not sure how to spin up one more network with similar containers on the same host machine. I came across Multiple isolated environments on a single host but couldn't manage to find a sample example.
Any help is greatly appreciated.
You just need to use a different project name. By default, Compose uses the name of the directory your project is in as the project name. docker-compose takes care of setting up the isolated network appropriately for each project.
To create two instances of your project namespaced as "dev" and "test", you can run it as follows:
docker-compose -p dev up
docker-compose -p test up
From https://docs.docker.com/compose/reference/overview/
-p, --project-name NAME Specify an alternate project name
(default: directory name)
You need to remove the container_name fields for multiple project instances to work. Compose prefixes container names with the project name automatically, but it won't do that if container_name is specified; you would get a container name conflict when starting another project from the same compose file. After removing container_name, your services are still reachable by their service names as hostnames.
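A sketch of what that looks like in practice (trimmed to the relevant service; the generated container names shown are what classic docker-compose produces, so treat them as an assumption for newer versions):

services:
  controller:
    build: ./cd_controller
    image: cd_controller:latest
    # no container_name here, so compose can prefix it per project

# docker-compose -p dev up    ->  dev_controller_1
# docker-compose -p test up   ->  test_controller_1

Inside each project's own network, the service is still reachable simply as controller.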
Running Consul with Docker Desktop using Windows containers and experimental mode turned on works well. However, if I try mounting Bitnami Consul's data directory to a local volume mount, I get the following error:
chown: cannot access '/bitnami/consul'
My compose file looks like this:
version: "3.7"
services:
consul:
image: bitnami/consul:latest
volumes:
- ${USERPROFILE}\DockerVolumes\consul:/bitnami
ports:
- '8300:8300'
- '8301:8301'
- '8301:8301/udp'
- '8500:8500'
- '8600:8600'
- '8600:8600/udp'
networks:
nat:
aliases:
- consul
If I remove the volumes part, everything works just fine, but I cannot persist my data. I followed the instructions in the readme file; they speak of having the proper permissions, but I do not know how to get that to work using Docker Desktop.
Side note
If I do not mount /bitnami but /bitnami/consul, I get the following error:
2020-03-30T14:59:00.327Z [ERROR] agent: Error starting agent: error="Failed to start Consul server: Failed to start Raft: invalid argument"
Another option is to edit the docker-compose.yaml to deploy the consul container as root by adding the user: root directive:
version: "3.7"
services:
consul:
image: bitnami/consul:latest
user: root
volumes:
- ${USERPROFILE}\DockerVolumes\consul:/bitnami
ports:
- '8300:8300'
- '8301:8301'
- '8301:8301/udp'
- '8500:8500'
- '8600:8600'
- '8600:8600/udp'
networks:
nat:
aliases:
- consul
Without user: root the container is executed as non-root (user 1001):
▶ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0c590d7df611 bitnami/consul:1 "/opt/bitnami/script…" 4 seconds ago Up 3 seconds 0.0.0.0:8300-8301->8300-8301/tcp, 0.0.0.0:8500->8500/tcp, 0.0.0.0:8301->8301/udp, 0.0.0.0:8600->8600/tcp, 0.0.0.0:8600->8600/udp bitnami-docker-consul_consul_1
▶ dcexec 0c590d7df611
I have no name!@0c590d7df611:/$ whoami
whoami: cannot find name for user ID 1001
But after adding this line, the container is executed as root:
▶ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ac206b56f57b bitnami/consul:1 "/opt/bitnami/script…" 5 seconds ago Up 4 seconds 0.0.0.0:8300-8301->8300-8301/tcp, 0.0.0.0:8500->8500/tcp, 0.0.0.0:8301->8301/udp, 0.0.0.0:8600->8600/tcp, 0.0.0.0:8600->8600/udp bitnami-docker-consul_consul_1
▶ dcexec ac206b56f57b
root@ac206b56f57b:/# whoami
root
If the container is executed as root there shouldn't be any issue with the permissions in the host volume.
The Consul container is a non-root container; in those cases, the non-root user needs to be able to write to the volume.
When using host directories as a volume, you need to ensure that the directory you are mounting into the container has the proper permissions, in this case writable permission for others. You can modify the permissions by running sudo chmod o+w ${USERPROFILE}\DockerVolumes\consul (or the correct path to the host directory).
This local folder is created the first time you run docker-compose up, or you can create it yourself with mkdir. Once created (manually or automatically), you should give it the proper permissions with chmod.
I am not familiar with Docker desktop nor Windows environments, but you should be able to do the equivalent actions using a CLI.
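As a sketch of what those equivalent actions would look like on a Linux/macOS host (path adjusted accordingly; on Windows you would apply the same idea to %USERPROFILE%\DockerVolumes\consul with its own tooling):

# create the host directory backing the volume and make it writable for others
mkdir -p "$HOME/DockerVolumes/consul"
sudo chmod o+w "$HOME/DockerVolumes/consul"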
I want to create a complete Node.js environment for developing any kind of application (scripts, API services, websites, etc.), also using different services (e.g. MySQL, Redis, MongoDB). I want to use Docker for this in order to have a portable, multi-OS environment.
I've created a Dockerfile for the container in which Node.js is installed:
FROM node:8-slim
WORKDIR /app
COPY . /app
RUN yarn install
EXPOSE 80
CMD [ "yarn", "start" ]
And a docker-compose.yml file where I add the services that I need to use:
version: "3"
services:
app:
build: ./
volumes:
- "./app:/app"
- "/app/node_modules"
ports:
- "8080:80"
networks:
- webnet
mysql:
...
redis:
...
networks:
webnet:
I would like to ask you what the best patterns are to achieve these goals:
Having the whole working directory shared between the host and the Docker container, in order to edit the files and see the changes from both sides.
Having the node_modules directory visible on both the host and the Docker container, so it is also debuggable from an IDE on the host.
Since I want a development environment suitable for every project, I would like a container which, once started, I can log into using a command like docker-compose exec app bash. So I'm trying to find another way to keep the container alive instead of running a Node.js server or using the trick of CMD ["tail", "-f", "/dev/null"].
Thank you in advance!
Having the whole working directory shared between the host and the Docker container, in order to edit the files and see the changes from both sides.
Use the -v volume option to share the host directory inside the Docker container.
Having the node_modules directory visible on both the host and the Docker container, so it is also debuggable from an IDE on the host.
same as above
Since I want a development environment suitable for every project, I would like a container which, once started, I can log into using a command like docker-compose exec app bash. So I'm trying to find another way to keep the container alive instead of running a Node.js server or using the trick of CMD ["tail", "-f", "/dev/null"].
In docker-compose.yml, define these for interactive mode:
stdin_open: true
tty: true
Then attach to the running container with docker exec -it (or docker-compose exec app bash, as mentioned above).
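Putting the pieces together, a sketch of the compose service with both the interactive flags and the volume layout from the question might look like this (names taken from the question's own files):

services:
  app:
    build: ./
    stdin_open: true   # docker run -i
    tty: true          # docker run -t
    volumes:
      - "./app:/app"          # host sources visible inside the container
      - "/app/node_modules"   # anonymous volume so the bind mount doesn't hide the image's node_modules
    ports:
      - "8080:80"

Then docker-compose up -d followed by docker-compose exec app bash drops you into an interactive shell without needing a long-running server process or the tail trick.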