This is what my docker-compose file looks like:
version: "3"
services:
controller:
build: ./cd_controller
image: cd_controller:latest
env_file: $PWD/robot-configurations/.env
cap_add:
- ALL
links:
- test_fixture
volumes:
- dependencypackages:/root/dependency-packages
- robotconfigurations:/root/robot-configurations
container_name: controller_g5
test_fixture:
image: docker-standard-all.ur-update.dk/ur/pytest-ur:0.7.1
volumes:
- pytestfixture:/workspace
- controllertests:/workspace/controller-tests
entrypoint: /workspace/entrypoint.sh
container_name: test_fixture
stdin_open: true # docker run -i
tty: true # docker run -t
volumes:
robotconfigurations:
driver_opts:
type: none
device: $PWD/robot-configurations
o: bind
...
Basically it has two services/containers, controller & test_fixture. controller runs the source code and test_fixture contains all the test cases. test_fixture needs to talk to controller through a socket. Since docker-compose creates a network among its containers, in my pytest cases I am simply using
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("controller_g5", 30001)) # controller_g5 is name of the controller container
So far everything looks fine. But I realized that I have multiple versions/features of the source code, so I would like to create multiple instances of the same setup, each with its own network. Does naming the container block the creation of a container with the same name in a different network? Also, I am not sure how to spin up one more network with a similar container on the same host machine. I came across Multiple isolated environments on a single host but couldn't manage to find a sample example.
Any help is greatly appreciated.
You just need to use a different project name. By default, Compose uses the name of the directory your project is in as the project name. docker-compose takes care of setting up an isolated network for each project.
To create two instances of your project namespaced as "dev" and "test", you can run it as follows:
docker-compose -p dev up
docker-compose -p test up
From https://docs.docker.com/compose/reference/overview/
-p, --project-name NAME Specify an alternate project name
(default: directory name)
You need to remove the container_name field for multiple project instances to work. Compose prefixes container names with the project name automatically, but it won't do so if container_name is specified. You would get a container name conflict when starting another project from the compose file if container_name is used. After removing container_name, your services will use their service names as hostnames.
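As a rough sketch of what that looks like for the compose file above (assuming everything else stays the same), drop the container_name lines and let Compose generate names per project:
services:
  controller:
    build: ./cd_controller
    image: cd_controller:latest
    # no container_name; Compose names this e.g. dev_controller_1 / test_controller_1
    # (exact naming depends on your Compose version)
    ...
  test_fixture:
    image: docker-standard-all.ur-update.dk/ur/pytest-ur:0.7.1
    # no container_name here either
    ...
In the pytest code you would then connect to the service name, i.e. s.connect(("controller", 30001)) instead of ("controller_g5", 30001); each project (dev, test) gets its own network, so the name resolves to the right instance inside each stack.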
Related
I am trying to create a composition where two or more Docker services can connect to each other in some way.
Here is my composition.
# docker-compose.yaml
version: "3.9"
services:
  database:
    image: "strapi-postgres:test"
    restart: "always"
    ports:
      - "5435:5432"
  project:
    image: "strapi-project:test"
    command: sh -c "yarn start"
    restart: always
    ports:
      - "1337:1337"
    env_file: ".env.project"
    depends_on:
      - "database"
    links:
      - "database"
Services
database
This uses an image built from the official Postgres image.
Here is the Dockerfile:
FROM postgres:alpine
ENV POSTGRES_USER="root"
ENV POSTGRES_PASSWORD="password"
ENV POSTGRES_DB="strapi-postgres"
It uses the default exposed port 5432, forwarded to 5435 as defined in the composition.
So the database service starts at some IP address that can be found using docker inspect.
project
This is an image running a Node application (a Strapi project configured to use the Postgres database).
Here is the Dockerfile:
FROM node:lts-alpine
WORKDIR /project
ADD package*.json .
ADD yarn.lock .
RUN npm install
COPY . .
RUN npm run build
EXPOSE 1337
and I am building the image using docker build. That gives me an image with no foreground process.
Problems
When I ran the composition, the strapi-project container exited with error code 0.
Solution: So I added the command yarn start to run a foreground process.
As the project starts, it cannot connect to the database, since it tries to connect to 127.0.0.1:5432 (5432 because it should connect to the container port of the database service, not 5435). This cannot work, because it tries to connect to port 5432 inside the strapi-project container itself, where no process is listening.
Solution: So I took the IP address found via docker inspect, put it in .env.project, and passed this file to the project service of the composition.
For every docker compose up, the IP address of the composition follows an incremental pattern (n'th time 172.17.0.2, n+1'th time 172.18.0.2, and so on), so every time I run the composition I need to edit .env.project.
All of these are hacky ways to patch things together. I want the Postgres database service to start first, and then the project to configure itself, connect to the database, and start automatically.
Please suggest any edits, or other ways to configure this.
You've forgotten to put the CMD in your Dockerfile, which is why you get the "exited (0)" status when you try to run the container.
FROM node:lts-alpine
...
CMD yarn start
Compose automatically creates a Docker network, and each service is accessible using its Compose service name as a host name. You never need to know the container-internal IP addresses, and you pretty much never need to run docker inspect. (Other answers might suggest manually creating networks: or overriding container_name:, but these are also unnecessary.)
You don't show where you set the database host name for your application, but an environment: variable is a common choice. If your database library doesn't already honor the standard PostgreSQL environment variables, then you can reference them in code like process.env.PGHOST. Note that the host name will be different when running inside a container vs. in your normal plain-Node development environment.
A complete Compose file might look like
version: "3.8"
services:
database:
image: "strapi-postgres:test"
restart: "always"
ports:
- "5435:5432"
project:
image: "strapi-project:test"
restart: always
ports:
- "1337:1337"
environment:
- PGHOST=database
env_file: ".env.project"
depends_on:
- "database"
So I'm trying to deploy a composed set of images (one is local and being built, the other is pulled from a container registry I control) to a Docker container instance on Azure.
I log in to Azure with Docker, set the container group as my context, then run
docker compose --env-file ./config/compose/.env.local up
My docker compose file looks like this
# version: "3.9" # optional since v1.27.0
services:
consumer:
build:
context: .
args:
PORTS: 2222 8080 9229
ENVNAME: $ENVNAME
BASEIMAGE: $BASEIMAGE
ports:
- "8080:8080"
image: th3docker.azurecr.io/<imagename>
producer:
image: th3docker.azurecr.io/<imagename>:latest
ports:
- "5001:5001"
container_name: jobmanager
environment:
- ASPNETCORE_ENVIRONMENT=$ASPNET_ENV
depends_on:
- consumer
Looking at the Docker documentation, labels seem to be a field of their own under each service, but I don't have any in this file. I've tried removing container names, and as much else as I can from this file, but I just don't understand why I'm getting this error.
I took a look at the docker compose source code, and this seems to be the offending if statement, at line 91 of the source.
for _, s := range project.Services {
service := serviceConfigAciHelper(s)
containerDefinition, err := service.getAciContainer()
...
if service.Labels != nil && len(service.Labels) > 0 {
return containerinstance.ContainerGroup{}, errors.New("ACI integration does not support labels in compose applications")
}
...
}
It still seems like I am not defining any labels, unless some other field is implicitly being consumed as a label. Any idea what's going on here, or an alternate path to get around this issue, would be appreciated.
It seems to be related to using --env-file. https://github.com/docker/compose-cli/issues/2167.
The only workaround I know is to either export all variables through the command line one by one, or to use this:
export $(cat docker/.env | xargs) && docker compose up, which will export all the variables first.
Be sure to remove any # comments from the .env file, as export will fail otherwise.
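For illustration, the env file only needs to define the variables the compose file interpolates (ENVNAME, BASEIMAGE, ASPNET_ENV). A minimal sketch with placeholder values (strip the comment line if you use the export workaround above):
# ./config/compose/.env.local (sketch; placeholder values)
ENVNAME=local
BASEIMAGE=<your-base-image>
ASPNET_ENV=Development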
I'm exploring the Docker capabilities for ShinyProxy on Azure and I want to set it up in a simple, expandable, and affordable way. As far as I understand there are five ways to set up Docker-based services on Azure.
Questions
My question is two-fold:
what is the general experience with deploying ShinyProxy-based containers that spawn and destroy other containers based on connected user sessions;
and how do I correct my approach?
Approaches
(Listed from most to least desirable; tested all except for the virtual machine approach.)
A) App Service Docker or Docker Compose setup
This is what I have the most experience with; the complexity is abstracted away.
With this approach, I found out that the current implementation of Docker and Docker Compose for Azure App Services does not support (it ignores) the networks section, which is required (as far as I understand) to let ShinyProxy communicate with other containers on the internal network. In a Docker Compose file I've specified the following (and verified that it works locally):
networks:
  app_default:
    driver: bridge
    external: false
    name: app_default
If I understand the documentation properly, you are simply unable to create any custom networks for your containers. It's not clear whether you can create a custom Azure vnet that could be used for this (I'm not experienced with creating those).
The second important part of this ShinyProxy setup is to map the host's docker.sock file into the container. Again, this can be done through the Docker Compose file or via parameters when running a single Docker container. This is how I've specified it in my Docker Compose file (and verified that it works):
volumes:
  # The `//` path prefix only works on Windows host machines; it's because
  # Windows uses the Windows Pipeline system as a workaround for these kinds
  # of Unix filesystem paths. On a Linux host machine, the path prefix
  # needs to contain only a single forward slash, `/`.
  # Windows host volume
  # docker_sock: //var/run/docker.sock
  # Linux host volume
  docker_sock: /var/run/docker.sock
And then the docker_sock named volume is used to map to the container's /var/run/docker.sock file, as docker_sock:/var/run/docker.sock.
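(For reference, the simpler equivalent, and what the full docker-compose.yml further down actually uses, is a plain bind mount on the service itself; a minimal sketch:)
services:
  shinyproxy:
    volumes:
      # Linux host; on a Windows host, prefix the host path with `//`
      - /var/run/docker.sock:/var/run/docker.sock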
Because of those two problems, trying to visit any of the specs defined in the ShinyProxy application.yml file just results in Connection refused or File could not be found Java errors, corresponding to the network communication and the docker.sock mapping respectively.
B) Container Instances
New type of service, seems nice and easy
Pretty much the same problems as the App Service approach.
C) Container Apps
New type of service, seems nice and easy
Pretty much the same problems as the App Service approach.
D) Kubernetes Service
Requires a lot of additional configuration.
Tried, but abandoned this approach because I don't want to deal with an additional configuration layer and I doubt that I need this much control for my desired goal.
E) Virtual Machine
Requires a lot of setup and self-management for a production environment.
Haven't tried yet. There seem to be a couple of articles that go over how to approach this.
To Reproduce Locally
Here are some modified examples of my configuration files. I've left a couple of comments and also commented out properties in there.
ShinyProxy application.yml:
# ShinyProxy Configuration
proxy:
  title: ShinyProxy Apps
  landing-page: /
  heartbeat-enabled: true
  heartbeat-rate: 10000     # 10 seconds
  heartbeat-timeout: 60000  # 60 seconds
  # Timeout for the container to be available to ShinyProxy
  container-wait-time: 20000 # 20 seconds
  port: 8080
  authentication: none
  docker:
    # url: http://localhost:2375
    privileged: true
    internal-networking: true
    container-network: "app_default"
  specs:
    - id: hello_demo
      container-image: openanalytics/shinyproxy-demo
      display-name: Hello Application
      description: Application which demonstrates the basics of a Shiny app.
      container-network: "${proxy.docker.container-network}"
      # container-cmd: ["R", "-e", "shinyproxy::run_01_hello()"]
logging:
  file:
    name: shinyproxy.log
server:
  servlet:
    context-path: /
spring:
  application:
    name: "ShinyProxy Apps"
ShinyProxy Dockerfile:
FROM openjdk:8-jre
USER root
RUN mkdir -p "/opt/shinyproxy"
# Download shinyproxy version from the official source
RUN wget https://www.shinyproxy.io/downloads/shinyproxy-2.6.0.jar -O "/opt/shinyproxy/shinyproxy.jar"
# Or, Copy local shinyproxy jar file
# COPY shinyproxy.jar "/opt/shinyproxy/shinyproxy.jar"
COPY application.yml "/opt/shinyproxy/application.yml"
WORKDIR /opt/shinyproxy/
CMD ["java", "-jar", "/opt/shinyproxy/shinyproxy.jar"]
docker-compose.yml:
networks:
  app_default:
    driver: bridge
    external: false
    name: app_default

# volumes:
#   The `//` path prefix only works on Windows host machines; it's because
#   Windows uses the Windows Pipeline system as a workaround for these kinds
#   of Unix filesystem paths. On a Linux host machine, the path prefix
#   needs to contain only a single forward slash, `/`.
#   Windows-only volume
#   docker_sock: //var/run/docker.sock
#   Linux-only volume
#   docker_sock: /var/run/docker.sock

services:
  # Can be used to test out other images than the shinyproxy one
  # hello_demo:
  #   image: openanalytics/shinyproxy-demo
  #   container_name: hello_demo
  #   ports:
  #     - 3838:3838
  #   networks:
  #     - app_default
  #   volumes:
  #     - //var/run/docker.sock:/var/run/docker.sock
  shinyproxy:
    build: ./shinyproxy
    container_name: app_shinyproxy
    # Change the image to whatever you've named your own image
    image: shinyproxy:latest
    # privileged: true
    restart: on-failure
    networks:
      - app_default
    ports:
      - 8080:8080
    volumes:
      - //var/run/docker.sock:/var/run/docker.sock
With all the files in place, just run docker compose build && docker compose up.
I'm new to MEAN stack development and was wondering what's the ideal way to spin up a mongo + express environment.
Running synchronous bash script commands makes the mongo server block further execution and listen for connections. What would be a local- and Docker-compatible script to initiate the environment?
Many people use docker-compose for a situation like this. You can set up a docker-compose configuration file where you describe services that you would like to run. Each service defines a docker image. In your case, you could have mongodb, your express app and your angular app defined as services. Then, you can launch the whole stack with docker-compose up.
A sample docker-compose config file would look something like:
version: '2' # specify docker-compose version

# Define the services/containers to be run
services:
  angular:                 # name of the first service
    build: angular-client  # specify the directory of the Dockerfile
    ports:
      - "4200:4200"        # specify port forwarding
  express:                 # name of the second service
    build: express-server  # specify the directory of the Dockerfile
    ports:
      - "3000:3000"        # specify port forwarding
  database:                # name of the third service
    image: mongo           # specify image to build container from
    ports:
      - "27017:27017"      # specify port forwarding
which comes from an article here: https://scotch.io/tutorials/create-a-mean-app-with-angular-2-and-docker-compose
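Inside that stack, the Express service would reach Mongo through the database service name rather than localhost. A minimal sketch of how that could be wired up (MONGO_URL and the mydb database name are made-up placeholders that your Express app would have to read):
services:
  express:
    build: express-server
    ports:
      - "3000:3000"
    environment:
      # hypothetical variable; the Express app builds its connection string from it
      - MONGO_URL=mongodb://database:27017/mydb
    depends_on:
      - database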
The below example is from the docker-compose docs.
From my understanding, they want redis port 6379 to be available in the web container.
Why don't they have
expose:
  - "6379"
in the redis container?
version: '2'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
    networks:
      - front-tier
      - back-tier
  redis:
    image: redis
    volumes:
      - redis-data:/var/lib/redis
    networks:
      - back-tier
From the official Redis image:
This image includes EXPOSE 6379 (the redis port), so standard container linking will make it automatically available to the linked containers (as the following examples illustrate).
which is pretty much the typical way of doing things.
Redis Dockerfile.
You don't need links anymore now that we assign containers to docker networks. And without linking, unless you publish all ports with a docker run -P, there's no value to exposing a port on the container. Containers can talk to any port opened on any other container if they are on the same network (assuming default settings for ICC), so exposing a port becomes a noop.
Typically, you only expose a port via the Dockerfile as an indicator to those running your image, or to use the -P flag. There are also some projects that look at exposed ports of other containers to know how to talk to them, specifically I'm thinking of nginx-proxy, but that's a unique case.
However, publishing a port makes that port available from the docker host, which always needs to be done from the docker-compose.yml or run command (you don't want image authors able to affect the docker host without some form of local admin acknowledgement). When you publish a specific port, it doesn't need to be exposed first.
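To make the distinction concrete, here is a minimal sketch (the exact numbers are arbitrary): expose: is purely informational between containers on the same network, while ports: actually publishes to the Docker host:
services:
  web:
    build: .
    ports:
      - "5000:5000"   # published: reachable from the docker host at localhost:5000
  redis:
    image: redis
    expose:
      - "6379"        # documentation only; web can already reach redis:6379 on the shared network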