How to expose a port in a network? - linux

The example below is from the docker-compose docs.
From my understanding they want to have the redis port 6379 available in the web container.
Why don't they have
expose:
  - "6379"
in the redis container?
version: '2'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
    networks:
      - front-tier
      - back-tier
  redis:
    image: redis
    volumes:
      - redis-data:/var/lib/redis
    networks:
      - back-tier

From the official Redis image:
This image includes EXPOSE 6379 (the redis port), so standard
container linking will make it automatically available to the linked
containers (as the following examples illustrate).
which is pretty much the typical way of doing things.
Redis Dockerfile.

You don't need links anymore now that containers are attached to Docker networks. And without linking, unless you publish all ports with docker run -P, there's no value in exposing a port on the container. Containers can talk to any port opened on any other container as long as they are on the same network (assuming default settings for ICC), so exposing a port becomes a no-op.
Typically, you only expose a port via the Dockerfile as an indicator to those running your image, or to use with the -P flag. There are also some projects that look at the exposed ports of other containers to know how to talk to them (specifically I'm thinking of nginx-proxy), but that's a unique case.
Publishing a port, however, makes that port available from the docker host, and this always needs to be done from the docker-compose.yml or the run command: you don't want image authors able to affect the docker host without some form of local admin acknowledgement. When you publish a specific port, it doesn't need to be exposed first.
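In Compose terms, the distinction can be sketched like this (a minimal illustration, not the docs' example):

```yaml
services:
  redis:
    image: redis
    # expose: is metadata only; other containers on the same network
    # can already reach redis:6379 without it.
    expose:
      - "6379"
  web:
    build: .
    # ports: is what actually publishes onto the Docker host, which is
    # why it belongs here and not in the image.
    ports:
      - "5000:5000"
```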

Related

Docker Multi-container connection with docker compose

I am trying to create a composition where two or more docker services can connect to each other in some way.
Here is my composition.
# docker-compose.yaml
version: "3.9"
services:
  database:
    image: "strapi-postgres:test"
    restart: "always"
    ports:
      - "5435:5432"
  project:
    image: "strapi-project:test"
    command: sh -c "yarn start"
    restart: always
    ports:
      - "1337:1337"
    env_file: ".env.project"
    depends_on:
      - "database"
    links:
      - "database"
Services
database
This uses an image built from the official Postgres image.
Here is the Dockerfile:
FROM postgres:alpine
ENV POSTGRES_USER="root"
ENV POSTGRES_PASSWORD="password"
ENV POSTGRES_DB="strapi-postgres"
It uses the default exposed port 5432, forwarded to 5435 as defined in the composition.
So the database service starts at some IP address that can be found using docker inspect.
project
This is an image running a node application (a strapi project configured to use the postgres database).
Here is the Dockerfile:
FROM node:lts-alpine
WORKDIR /project
ADD package*.json .
ADD yarn.lock .
RUN npm install
COPY . .
RUN npm run build
EXPOSE 1337
I am building the image using docker build. That gives me an image with no foreground process.
Problems
When I ran the composition, the strapi-project container exited with error code (0).
Solution: I added the command yarn start to run a foreground process.
As the project starts, it cannot connect to the database, since it tries to connect to 127.0.0.1:5432 (5432 because it should use the container port of the database service, not 5435). This fails because it is connecting to port 5432 inside the strapi-project container itself, which is not open for any process.
Solution: I took the IP address found from docker inspect, put it in .env.project, and passed this file to the project service of the composition.
For every docker compose up there is an incremental pattern (n'th time 172.17.0.2, n+1'th time 172.18.0.2, and so on) for the IP address of the composition, so every time I run the composition I need to edit .env.project.
All of these are hacky ways to patch things together. I want some way to start the Postgres database service first, and then have the project configure itself, connect to the database, and start automatically.
Suggest me any edits, or other ways to configure them.
You've forgotten to put the CMD in your Dockerfile, which is why you get the "exited (0)" status when you try to run the container.
FROM node:lts-alpine
...
CMD yarn start
Compose automatically creates a Docker network and each service is accessible using its Compose container name as a host name. You never need to know the container-internal IP addresses and you pretty much never need to run docker inspect. (Other answers might suggest manually creating networks: or overriding container_name: and these are also unnecessary.)
You don't show where you set the database host name for your application, but an environment: variable is a common choice. If your database library doesn't already honor the standard PostgreSQL environment variables then you can reference them in code like process.env.PGHOST. Note that the host name will be different running inside a container vs. in your normal plain-Node development environment.
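For illustration, a minimal sketch of that pattern (the helper name is hypothetical; PGHOST and PGPORT are the standard PostgreSQL variables):

```javascript
// Sketch: resolve the database address from the environment, assuming
// the Compose file sets PGHOST=database. Outside Docker, with no
// variables set, the localhost default applies.
function databaseUrl(env = process.env) {
  const host = env.PGHOST || 'localhost';
  const port = env.PGPORT || '5432';
  return `postgres://${host}:${port}`;
}

console.log(databaseUrl({}));                      // plain local development
console.log(databaseUrl({ PGHOST: 'database' }));  // inside the Compose network
```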
A complete Compose file might look like
version: "3.8"
services:
  database:
    image: "strapi-postgres:test"
    restart: "always"
    ports:
      - "5435:5432"
  project:
    image: "strapi-project:test"
    restart: always
    ports:
      - "1337:1337"
    environment:
      - PGHOST=database
    env_file: ".env.project"
    depends_on:
      - "database"

How to make docker-compose services accessible to each other?

I'm trying to make a frontend app accessible to the outside. It depends on several other modules serving as services/backend. These other services also rely on things like Kafka and OpenLink Virtuoso (database).
How can I make them all accessible to each other, and how should I expose my frontend to the outside internet? Should I remove any "localhost/port" in my code and replace it with the service name? Should I also replace every port in the code with the equivalent Docker port?
Here is an extraction of my docker-compose.yml file.
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - 22181:2181
  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - 29092:29092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  frontend:
    build:
      context: ./Frontend
      dockerfile: ./Dockerfile
    image: "jcpbr/node-frontend-app"
    ports:
      - "3000:3000"
    # Should I use links to connect to every module the frontend accesses, and for the other modules as well?
    links:
      - "auth:auth"
  auth:
    build:
      context: ./Auth
      dockerfile: ./Dockerfile
    image: "jcpbr/node-auth-app"
    ports:
      - "3003:3003"
(...)
How can I make all of [my services] accessible to each other?
Do absolutely nothing. Delete the obsolete links: block you have. Compose automatically creates a network named default that you can use to communicate between the containers, and they can use the other Compose service names as host names; for example, your auth container could connect to kafka:9092. Also see Networking in Compose in the Docker documentation.
(Some other setups will advocate manually creating Compose networks: and overriding the container_name:, but this isn't necessary. I'd delete these lines in the name of simplicity.)
How should I expose my frontend to outside internet?
That's what the ports: ['3000:3000'] line does. Anyone who can reach your host system on port 3000 (the first port number) will be able to access the frontend container. As far as an outside caller is concerned, they have no idea whether things are running in Docker or not, just that your host is running an HTTP server on port 3000.
Setting up a reverse proxy, maybe based on Nginx, is a little more complicated, but addresses some problems around communication from the browser application to the back-end container(s).
Should I also remove any "localhost/port" in my code?
Yes, absolutely.
...and replace it with the service name? every port?
No, because those settings will be incorrect in your non-container development environment, and will probably be incorrect again if you have a production deployment to a cloud environment.
The easiest right answer here is to use environment variables. In Node code, you might try
const kafkaHost = process.env.KAFKA_HOST || 'localhost';
const kafkaPort = process.env.KAFKA_PORT || '9092';
If you're running this locally without those environment variables set, you'll get the usually-correct developer defaults. But in your Docker-based setup, you can set those environment variables
services:
  kafka:
    image: confluentinc/cp-kafka:latest
    environment:
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092 # must match the Docker service name
  app:
    build: .
    environment:
      KAFKA_HOST: kafka
      # default KAFKA_PORT is still correct

communicating between docker instances of neo4j and express on local machine

ISSUE: I have a docker image running for neo4j and one for express.js. I can't get the docker containers to communicate with each other.
I can run neo4j Desktop, start a nodemon server, and they will communicate.
SETUP:
NEO4J official docker image
NEO4J_AUTH none
PORTS localhost:7474 localhost:7687
Version neo4j-community-4.3.3-unix.tar.gz
NODEJS Image
PORTS 0.0.0.0:3000 :::3000
Version 14.17.5
Express conf
DEV_DB_USER_NAME="neo4j"
DEV_DB_PASSWORD="test"
DEV_DB_URI="neo4j://localhost" //for the image; for local it's bolt://localhost:7687
DEV_DB_SECRET_KEY=""
let driver = neo4j.driver(
  envConf.dbUri,
  neo4j.auth.basic(envConf.dbUserName, envConf.dbPassword)
);
package.json
"#babel/node": "^7.13.10",
"neo4j-driver": "^4.2.3",
I can get into the neo4j image's browser through http://localhost:7474/browser/, so it's running.
I cannot use the server image to call the local neo4j instance.
When I call the APIs in the server image I get these errors.
If I use the neo4j protocol:
Neo4jError: Could not perform discovery. No routing servers available. Known routing table: RoutingTable[database=default database, expirationTime=0, currentTime=1629484043610, routers=[], readers=[], writers=[]]
If I use the bolt protocol:
Neo4jError: Failed to connect to server. Please ensure that your database is listening on the correct host and port and that you have compatible encryption settings both on Neo4j server and driver. Note that the default encryption setting has changed in Neo4j 4.0. Caused by: connect ECONNREFUSED 127.0.0.1:7687
I've been scouring the documentation for a while; any ideas would be most welcome!
I was able to achieve the communication by using docker-compose. The problem was that both containers were on separate networks, and I could not find a way to allow the server to communicate with the database. Running docker-compose and building both containers within a single compose network allows communication using the service names.
Take note: YAML is indentation-sensitive!
docker-compose.yml
version: '3.7'
networks:
  lan:
# The different services that make up our "network" of containers
services:
  # Express is our first service
  express:
    container_name: exp_server
    networks:
      - lan
    # The location of the dockerfile to build this service
    build: <location of dockerfile>
    # Command to run once the Dockerfile completes building
    command: npm run startdev
    # Volumes, mounting our files to parts of the container
    volumes:
      - .:/src
    # Ports to map, mapping our port 3000 to port 3000 on the container
    ports:
      - 3000:3000
    # designating a file with environment variables
    env_file:
      - ./.env.express
  ## Defining the Neo4j Database Service
  neo:
    container_name: neo4j_server
    networks:
      - lan
    # The image to use
    image: neo4j:latest
    # map the ports so we can check the db server is up
    ports:
      - "7687:7687"
      - "7474:7474"
    # mounting a named volume to the container to track db data
    volumes:
      - $HOME/neo4j/conf:/conf
      - $HOME/neo4j/data:/data
      - $HOME/neo4j/logs:/logs
      - $HOME/neo4j/plugins:/plugins
    env_file:
      - .env.neo4j
With this you can use docker to run both the server and the database (and anything else) while still using change-detection rebuilding to develop, and even build multiple environment images at the same time. NEAT
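With both services on the lan network, the driver URI in the Express env file can then point at the Compose service name instead of localhost. A sketch, reusing the variable name from the question and the bolt protocol that worked locally:

```
# .env.express (sketch): "neo" is the Compose service name above
DEV_DB_URI="bolt://neo:7687"
```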

How to run a node js docker container using docker-compose to manage php app assets

Let's say we have three services:
- php + apache
- mysql
- nodejs
I know how to use docker-compose to set up the application to link mysql with the php/apache service. I was wondering how we can add a node.js service just to manage
js/css assets. The purpose of the node.js service is only to manage javascript/css resources. Since docker provides this flexibility, I was wondering about using a docker service instead of setting up node.js on my host computer.
version: '3.2'
services:
  web:
    build: .
    image: lap
    volumes:
      - ./webroot:/var/www/app
      - ./configs/php.ini:/usr/local/etc/php/php.ini
      - ./configs/vhost.conf:/etc/apache2/sites-available/000-default.conf
    links:
      - dbs:mysql
  dbs:
    image: mysql
    ports:
      - "3307:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_PASSWORD=rest
      - MYSQL_DATABASE=symfony_rest
      - MYSQL_USER=restman
    volumes:
      - /var/mysql:/var/lib/mysql
      - ./configs/mysql.cnf:/etc/mysql/conf.d/mysql.cnf
  node:
    image: node
    volumes:
      - ./webroot:/var/app
    working_dir: /var/app
I am not sure this is the correct strategy; I am sharing ./webroot with both the web and node services. docker-compose up -d only starts mysql and web, and fails to start the node container; probably there is no valid entrypoint set.
If you want to use the node js service separately from the PHP service, you must set two more options to make node stay up: one is stdin_open and the other is tty, like below
stdin_open: true
tty: true
This is equivalent to the CLI flag -it, as in
docker container run --name nodeapp -it node:latest
If you have a separate port to run your node app (e.g. your frontend is completely separate from your backend and you must run it independently, such as running npm run start to serve the frontend app), you must publish your port like below
ports:
  - 3000:3000
The ports structure is hostPort:containerPort.
This means: publish port 3000 from inside the node container to port 3000 on the host. In other words, you make port 3000 inside your container accessible on your system, and you can access it at localhost:3000.
In the end, your node service would look like below
node:
  image: node
  stdin_open: true
  tty: true
  volumes:
    - ./webroot:/var/app
  working_dir: /var/app
You can also add an nginx service to docker-compose, and nginx can take care of forwarding requests to the php container or the node.js container. You need some server that binds to port 80 and redirects requests to the designated container.
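A minimal sketch of such an nginx service added to the same compose file (the config path is an assumption; the mounted default.conf would contain proxy_pass rules routing requests to web or node by service name):

```yaml
nginx:
  image: nginx:alpine
  ports:
    - "80:80"
  volumes:
    # assumed config with proxy_pass http://web and http://node rules
    - ./configs/nginx.conf:/etc/nginx/conf.d/default.conf:ro
  depends_on:
    - web
    - node
```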

How can I specify an Alternate Exposed Port for Redis / RethinkDB (using Docker Compose)?

I'm working on Docker-izing a Node.js app, and I am trying to set it up so that it will respond on a non-standard port, thereby avoiding potential conflicts for team members who are already running a local Redis container or service.
Redis usually runs on 6379 (regardless of docker or not). I want it to listen on 6380. Even though I don't have it in the docker-compose file, I want to do the same thing with RethinkDB.
I do not want to have to create a new Dockerfile for EITHER Redis or RethinkDB.
Here is my Docker-Compose file.
nodejsapp:
  image: some-node-container
  container_name: nodejsapp
  ports:
    - "5200:5200" #first number must match CHEAPOTLE_PORT env variable for the cheapotle service
  depends_on:
    - redis
    - rethinkdb
  volumes:
    - ./:/app
  environment:
    - NODEJSAPP_PORT=5200 #must match first port number for cheapotle service above.
    - REDIS_PORT=6380 #must match first number in redis service ->ports below.
    - RETHINKDB_PORT=28016 #must match first number in rethinkdb service ->ports below.
redis:
  image: redis:3.2-alpine
  container_name: redis_cheapotle
  ports:
    - "6380:6379"
  expose:
    - "6380" # must match alternate "first" port above to avoid collisions
rethinkdb:
  image: rethinkdb
  container_name: rethinkdb_cheapotle
  ports:
    - "28016:28015" #The first number needs to match the RETHINKDB_PORT in environment variables for cheapotle above. You must change both or none; they must always be the same.
    - "8090:8080" #this is where you will access the RethinkDB admin. If you have something on 8090, change the port to 8091:8080
  expose:
    - "28016" # must match alternate "first" port above to avoid collisions
After doing a few dockerizations I thought this would be easy. I would set my environment variables, use process.env.whatever in my JS files, and be out the door and on to the next task.
Wrong.
While I can get to the RethinkDB admin area at 0.0.0.0:8090 (notice the 8090 alternate port), none of my containers can talk to each other over their specified ports.
At first I tried the above WITHOUT the 'expose' portion of the YAML, but I had the same result I get WITH the 'expose' YAML added.
It seems like docker / the containers are refusing to forward the traffic coming into the Host->Alternate Port to the Container->Standard Port. I did some googling around and did not find anything in the first 20 minutes, so I figured I would post this while I continue my search.
I will post an answer if I find it myself in the process.
OK, so I was able to solve this. It seems like there may be a bug with how the official RethinkDB and Redis containers handle port forwarding, since the normal ports: "XXXXX:YYYYY" YAML specification is disregarded and the traffic is not sent from the modified host port to the standard docker port.
The solution was to modify the command used to start the Redis / RethinkDB containers, using the command line port flag (which differs for each system) to change to my alternate port.
At first I tried using an environment variables Env file (but apparently those are not available immediately at run-time). I also wanted users to be able to see ALL ports / settings for their stack directly in the docker-compose file, so the above port flag solution seems to make sense.
I still don't know why docker won't forward alternate host port traffic to the standard container port for these two services, while it WILL forward an alternate host port for the RethinkDB admin page (8090:8080).
Here is the docker-compose file I ended up with:
version: "2"
services:
  nodejsapp:
    image: some-node-container
    container_name: cheapotle
    ports:
      - "5200:5200" #both numbers must match CHEAPOTLE_PORT env variable for the cheapotle service
    depends_on:
      - redis
      - rethinkdb
    volumes:
      - ./:/app
    environment:
      - CHEAPOTLE_PORT=5200 #must match cheapotle service ports above.
      - RETHINKDB_PORT=28016 #must match rethinkdb service->ports below.
      - REDIS_PORT=6380 #must match redis service ->ports below.
      - RESQUE_PORT=9292
    entrypoint: foreman start -f /app/src/Procfile
  redis:
    image: redis:3.2-alpine
    container_name: redis_cheapotle
    ports:
      - "6380:6380" #both numbers must match the port in the command below AND the REDIS_PORT cheapotle service variable
    command: redis-server --port 6380 #must match above ports AND REDIS_PORT cheapotle service variable
  rethinkdb:
    image: rethinkdb
    container_name: rethinkdb_cheapotle
    ports:
      - "28016:28016" #both numbers must match the RETHINKDB_PORT in environment variables for cheapotle above + the command below. You must change all or none; they must always be the same.
      - "8090:8080" #this is where you will access the RethinkDB admin. If you have something on 8090, change the port to 8091:8080
    command: rethinkdb --driver-port 28016 --bind all #must match above ports AND RETHINKDB_PORT cheapotle service variable
The docs for the command line utilities for RethinkDB can be found here: https://www.rethinkdb.com/docs/cli-options/, while the docs for the Redis command line can be found here: https://redis.io/topics/config
With the above, I can start everything up on alternate ports that will not collide in the likelihood that other devs on the team are already running RethinkDB and / or Redis.
This is not a production grade setup, so use at your own risk for the moment. Obviously RethinkDB would require additional configuration to allow other nodes to join the cluster at some port other than 29015.
Also, a warning for anyone working with RethinkDB command line flags: for RethinkDB to accept a change in port, the "--driver-port 28016" MUST come before "--bind all"; otherwise it's as if it were not even there and is ignored.
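In the app code, the connection details can then be assembled from the environment variables the compose file sets. A sketch (the helper and the *_HOST variables are illustrative; the defaults fall back to the Compose service names and the alternate ports chosen above):

```javascript
// Sketch: build connection strings from docker-compose env vars.
// Host names default to the Compose service names ("redis", "rethinkdb"),
// which resolve on the shared Compose network; ports default to the
// alternate values from the compose file.
function connectionStrings(env = process.env) {
  return {
    redis: `redis://${env.REDIS_HOST || 'redis'}:${env.REDIS_PORT || '6380'}`,
    rethinkdb: `${env.RETHINKDB_HOST || 'rethinkdb'}:${env.RETHINKDB_PORT || '28016'}`,
  };
}

console.log(connectionStrings({}));  // defaults for the compose setup above
```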
You should link them together :)
nodejsapp:
  .
  .
  ports:
    - "5200:5200"
  .
  .
  links:
    - redis
    - rethinkdb
redis:
  .
  .
  ports:
    - "6380:6379"
  expose:
    - "6380"
rethinkdb:
  .
  .
  ports:
    - "28016:28015"
    - "8090:8080"
  expose:
    - "28016"
