I've built two NestJS apps that share a common database. I've been trying to figure out a way to make these two apps get data from the MongoDB container, but I'm stuck with this process.
docker-compose.yml for app 1
version: '3'
services:
backend:
image: 'example-image_1:1.0.0'
working_dir: /app/example-app-1/backend/example-app-1-api
environment:
- DB_URL=mongodb://0.0.0.0:27017/example_app_1
- BACKEND_PORT=3333
- BACKEND_IP=0.0.0.0
restart: always
network_mode: "host"
ports:
- '3333:3333'
command: ['node', 'main.js']
depends_on:
- mongodb
expose:
- 3333
mongodb:
image: 'mongo:latest'
environment:
- 'MONGODB_DATABASE="example_app_1"'
ports:
- '27017:27017'
expose:
- 27017
docker-compose.yml for app 2
version: '3'
services:
backend:
image: 'example_app_2:1.0.0'
working_dir: /app/example_app_2/backend/example-app-2-api
environment:
- DB_URL=mongodb://0.0.0.0:27017/example_app_2
- BACKEND_PORT=8888
- BACKEND_IP=0.0.0.0
restart: always
ports:
- '8888:8888'
command: ['node', 'main.js']
expose:
- 8888
mongodb:
image: 'mongo:latest'
environment:
- 'MONGODB_DATABASE="example_app_2"'
ports:
- '27017:27017'
expose:
- 27017
I need help making these apps communicate with a common container: mongodb.
Since you're using two different docker-compose.yml files, you're actually running two backends and two mongodb instances on two separate Docker networks.
One of the two mongo containers won't start, because host port 27017 is already occupied by the other one.
Option 1 (nicer):
services:
backend_1:
...
ports:
- '8888:8888'
backend_2:
...
ports:
- '8899:8899'
mongodb:
ports:
- '27017:27017'
This setup gives you three containers on the same network.
Now you can reach mongo from both backends at <mongo_ip>:27017.
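To make Option 1 concrete, here is a minimal sketch of a single docker-compose.yml reusing the image names and variables from the question; since all three services share one network, the backends can reach Mongo through the service name mongodb instead of an IP:

version: '3'
services:
  backend_1:
    image: 'example-image_1:1.0.0'
    environment:
      # the hostname "mongodb" is resolved by Docker's embedded DNS
      - DB_URL=mongodb://mongodb:27017/example_app_1
      - BACKEND_PORT=3333
    ports:
      - '3333:3333'
    depends_on:
      - mongodb
  backend_2:
    image: 'example_app_2:1.0.0'
    environment:
      - DB_URL=mongodb://mongodb:27017/example_app_2
      - BACKEND_PORT=8888
    ports:
      - '8888:8888'
    depends_on:
      - mongodb
  mongodb:
    image: 'mongo:latest'
    ports:
      - '27017:27017'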
Option 2 (ugly):
services:
backend:
...
ports:
- '8888:8888'
mongodb:
ports:
- '27017:27017'
And in another docker-compose.yml:
services:
backend:
...
ports:
- '8888:8888'
This setup gives you three containers on two different networks.
Here each docker-compose.yml has its own Docker network, so the second backend service has to join the first project's network in order to reach the mongodb container (see the sketch below).
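If you really need to keep two docker-compose.yml files, a hedged way to bridge them is to declare the first project's network as external in the second file. The network name app1_default below is an assumption (Compose names the default network after the project directory), so check docker network ls for the real name:

version: '3'
services:
  backend:
    image: 'example_app_2:1.0.0'
    environment:
      # once both containers share a network, the service name resolves again
      - DB_URL=mongodb://mongodb:27017/example_app_2
    ports:
      - '8888:8888'
    networks:
      - app1_net
networks:
  app1_net:
    external:
      name: app1_default   # assumed name of the first project's default network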
Related
I'm running my containers via a Docker Compose file. They are on the same network, and I can ping from my backend container to my database container. I use the database name as the hostname in the connection string, and it doesn't throw any error about not being able to find the host. Instead, it just hangs and times out.
I have a test endpoint which is just supposed to test the connection. When you use that endpoint, the database container logs "invalid packet length", and on the frontend nothing happens until it times out. I have no idea what's wrong. Any help?
version: '3.2'
services:
server:
restart: always
build:
dockerfile: Dockerfile
context: ./nginx
depends_on:
- backend
- frontend
- database
ports:
- '5000:80'
networks:
- app_network
database:
image: postgres:latest
container_name: database
ports:
- "5432:5432"
restart: always
hostname: database
environment:
POSTGRES_PASSWORD: 1234
POSTGRES_USER: postgres
backend:
build:
context: ./backend
dockerfile: ./Dockerfile
image: kalendae:backend
hostname: backend
container_name: backend
environment:
- WAIT_HOSTS=database:5432
- DATABASE_HOST=database
- DATABASE_PORT=5432
- PORT=5051
frontend:
build:
context: ./frontend
dockerfile: ./Dockerfile
image: kalendae:frontend
hostname: frontend
container_name: frontend
environment:
- WAIT_HOSTS=backend:5051
- REACT_APP_BACKEND_HOST=localhost
- REACT_APP_BACKEND_PORT=5051
I'm trying to make requests to my server container from my client app in another container. The Docker Compose docs state that the network is set up automatically, so shouldn't all ports be accessible from all containers? When I make a curl request to port 4000 from outside the containers (in a fresh terminal), it works. However, when I enter the client container (selektor-client) and try the same request, it fails.
curl --request POST http://localhost:4000/api/music
What am I doing wrong?
docker-compose.yaml:
version: "3"
services:
client:
container_name: selektor-client
restart: always
build: ./client
ports:
- "3000:3000"
volumes:
- ./client/:/client/
- /client/node_modules/
command: ["yarn", "start"]
server:
container_name: selektor-server
restart: always
build: ./server
ports:
- "4000:4000"
volumes:
- ./server/:/server/
depends_on:
- mongo
command: ["yarn", "start"]
mongo:
container_name: selektor-mongo
command: mongod --noauth
build: .
restart: always
volumes:
# - ./mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
- data-volume:/data/db
ports:
- "27017:27017"
volumes:
data-volume:
When containers of the same stack call each other, the hostname is, by default, the service name, and the port is the container's internal port: <servicename>:<internal_port>. So, based on this part of your example:
version: "3"
server:
container_name: selektor-server
restart: always
build: ./server
ports:
- "4000:4000"
volumes:
- ./server/:/server/
depends_on:
- mongo
command: ["yarn", "start"]
The URL your client has to use to reach the server is http://server:4000.
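One way to wire that in, as a sketch only, is to hand the URL to the client through an environment variable instead of hard-coding localhost; the variable name API_URL is made up, and the client code would have to read it:

services:
  client:
    container_name: selektor-client
    build: ./client
    environment:
      # hypothetical variable consumed by the client code in place of localhost
      - API_URL=http://server:4000
    depends_on:
      - server

Keep in mind this only applies to requests made from inside the client container; if the client is a browser app, the browser runs on the host and still reaches the server through the published port on localhost:4000.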
I had the same problem. My solution was to define a network with a subnet mask and assign a static IP to each container:
version: "3"
networks:
my_network:
driver: bridge
ipam:
config:
- subnet: 172.20.0.0/24
services:
mongodb:
image: mongo:4.2.1
container_name: mongo
command: mongod --auth
hostname: mongo
networks:
my_network:
ipv4_address: 172.20.0.5
volumes:
- /data/db/mongo:/data/db
ports:
- "27017:27017"
rabbitmq:
hostname: rabbitmq
container_name: rabbitmq
image: rabbitmq:latest
networks:
my_network:
ipv4_address: 172.20.0.3
volumes:
- /var/lib/rabbitmq:/data/db
ports:
- "5672:5672"
- "15672:15672"
restart: always
Now you can access the containers through those static IP addresses.
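For example, another service on the same network could then point at Mongo through its fixed address; the service and variable names below are placeholders (the service name would also resolve as a hostname):

  my_app:
    image: my-app:latest            # hypothetical application image
    networks:
      my_network:
        ipv4_address: 172.20.0.10
    environment:
      - MONGO_URL=mongodb://172.20.0.5:27017/mydb   # static IP assigned to mongodb above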
I'm kinda new to Docker, so sorry if my terminology is a little wrong. I'm in the process of getting my app to run in Docker. Everything is starting up and running correctly, but I'm unable to set the IP address that the services are running on. I need to do so since I'm making API calls that previously referenced a static variable in my JS code. The spark service especially is important for me to have a knowable IP; as of now it's randomly assigned.
docker-compose.yml
version: '3.0' # specify docker-compose version
services:
vue:
build: client
ports:
- "80:80" # specify port mapping
spark:
build: accubrew-spark
ports:
- "8080:8080"
express:
build: server
ports:
- "3000:3000"
links:
- database
database:
image: mongo
ports:
- "27017:27017"```
When you run containers using docker-compose, it creates a user-defined network for you, and Docker provides an embedded DNS server: each container gets a DNS record that is resolvable only from the other containers on that network.
This makes it easy to reach each service by simply calling it by the name you specified in your docker-compose.yml.
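So, with the compose file from the question left as it is, the express service could already reach the other services by name; as a sketch (the variable names are illustrative, nothing the images require):

  express:
    build: server
    ports:
      - "3000:3000"
    environment:
      # hypothetical variables the server code could read instead of hard-coded IPs
      - MONGO_URL=mongodb://database:27017/mydb
      - SPARK_URL=http://spark:8080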
You can try this:
version: '3.0' # specify docker-compose version
services:
vue:
build: client
ports:
- "80:80" # specify port mapping
spark:
build: accubrew-spark
ports:
- "8080:8080"
networks:
my_net:
ipv4_address: 172.26.0.3
express:
build: server
ports:
- "3000:3000"
links:
- database
database:
image: mongo
ports:
- "27017:27017"
networks:
my_net:
ipam:
driver: default
config:
- subnet: 172.26.0.0/16
But your spark port is published on localhost:8080. If you need to expose another port on the container IP (172.26.0.3), you can add - "7077", or, to map it to localhost as well, - "7077:7077". This is an example with port 7077 exposed:
version: '3.0' # specify docker-compose version
services:
vue:
build: client
ports:
- "80:80" # specify port mapping
spark:
build: accubrew-spark
ports:
- "8080:8080"
- "7077"
networks:
my_net:
ipv4_address: 172.26.0.3
express:
build: server
ports:
- "3000:3000"
links:
- database
database:
image: mongo
ports:
- "27017:27017"
networks:
my_net:
ipam:
driver: default
config:
- subnet: 172.26.0.0/16
I have two docker-compose.yml files. In the first one, I run a mongodb instance:
version: '3'
services:
mongodb:
image: mongo:latest
container_name: "mongodb"
volumes:
- ./data/db:/data/db
ports:
- 27017:27017
In the second one, I run my web app, and I want to link it with the mongodb container:
version: '3'
services:
webapp:
build:
context: .
external_links:
- mongodb
ports:
- 8080:8080
But, when I run my webapp, I get a connection error. I'm connecting to this URI:
const DB_URI = 'mongodb://mongodb:27017/mydb';
but I get MongoNetworkError. What am I missing?
I would recommend running those two services from the same docker-compose file and replacing external_links with links:
version: '3'
services:
mongodb:
image: mongo:latest
container_name: "mongodb"
volumes:
- ./data/db:/data/db
ports:
- 27017:27017
webapp:
build:
context: .
links:
- mongodb
depends_on:
- mongodb
ports:
- 8080:8080
How could I ensure mongodb is up and running before webapp is started?
AFAIK, links will take care of the situation,
or
You could use depends_on
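Note that links and a plain depends_on only wait for the mongodb container to be started, not for MongoDB to actually accept connections. If you need to wait for readiness, one option supported by newer Compose versions is a healthcheck combined with a depends_on condition; a sketch, assuming a mongo image recent enough to ship mongosh:

version: '3.8'
services:
  mongodb:
    image: mongo:latest
    healthcheck:
      # ping the server; older images may need "mongo" instead of "mongosh"
      test: ["CMD", "mongosh", "--quiet", "--eval", "db.adminCommand('ping')"]
      interval: 10s
      timeout: 5s
      retries: 5
  webapp:
    build:
      context: .
    depends_on:
      mongodb:
        condition: service_healthy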
Solved. Hosts were not accessible because they were in different networks. Creating a network (https://docs.docker.com/compose/networking/) and running both services in that network solved the problem.
I would like to find out how to access the environment variables of a linked Docker container. Specifically, I would like to access the host/port of the linked rethinkdb container in my Node app. I'm using Docker Compose (bluemixservice and rethinkdb):
version: '2'
services:
twitterservice:
build: ./workerTwitter
links:
- mongodb:mongolink
- rabbitmq:rabbitlink
ports:
- "8082:8082"
depends_on:
- mongodb
- rabbitmq
bluemixservice:
build: ./workerBluemix
links:
- rabbitmq:rabbitlink
- rethinkdb:rethinkdb
ports:
- "8083:8083"
depends_on:
- rabbitmq
- rethinkdb
mongodb:
image: mongo:latest
ports:
- "27017:27017"
volumes:
- mongo-data:/var/lib/mongo
command: mongod
rabbitmq:
image: rabbitmq:management
ports:
- '15672:15672'
- '5672:5672'
rethinkdb:
image: rethinkdb:latest
ports:
- "8080:8080"
- "28015:28015"
volumes:
mongo-data:
driver: local
rethink-data:
driver: local
I would like to access them in my pm2 processes.json:
{
"apps": [
{
"name": "sentiment-service",
"script": "./src",
"merge_logs": true,
"max_restarts": 40,
"restart_delay": 10000,
"instances": 1,
"max_memory_restart": "200M",
"env": {
"PORT": 8080,
"NODE_ENV": "production",
"RABBIT_MQ": "amqp://rabbitlink:5672/",
"ALCHEMY_KEY": "xxxxxxx",
"RETHINK_DB_HOST": "Rethink DB Container Hostname?",
"RETHINK_DB_PORT": "Rethink DB Container Port?",
"RETHINK_DB_AUTHKEY": ""
}
}
]
}
This used to be possible (see here), but now the suggestion is to just use the linked service name as the hostname, as you are already doing in your example with rabbitmq. Regarding port numbers, I don't think it adds much to use variables for those; I'd just go with the plain number. You can, however, parameterize the whole docker-compose.yml using variables in case you want to be able to quickly change a value from outside.
Note that there is no need to alias links; I find it much clearer to just use the service name.
Also, links already implies depends_on.
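As a sketch of that parameterization (the variable name RETHINK_DB_PORT is made up, not something the images require), Compose substitutes ${VAR} from the shell environment or from an .env file next to the docker-compose.yml:

version: '2'
services:
  bluemixservice:
    build: ./workerBluemix
    environment:
      - RETHINK_DB_HOST=rethinkdb
      - RETHINK_DB_PORT=${RETHINK_DB_PORT}   # e.g. RETHINK_DB_PORT=28015 in .env
  rethinkdb:
    image: rethinkdb:latest
    ports:
      - "${RETHINK_DB_PORT}:28015"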
I solved it by using Consul and Registrator to detect all my containers.
version: '2'
services:
consul:
command: -server -bootstrap -advertise 192.168.99.101
image: progrium/consul:latest
ports:
- 8300:8300
- 8400:8400 # rpc/rest
- 8500:8500 # ui
- 8600:53/udp # dns
registrator:
command: -ip=192.168.99.101 consul://consul:8500
image: gliderlabs/registrator:latest
volumes:
- "/var/run/docker.sock:/tmp/docker.sock"
links:
- consul
twitterservice:
build: ./workerTwitter
container_name: twitterservice
links:
- mongodb:mongolink
- rabbitmq:rabbitlink
- consul
ports:
- "8082:8082"
depends_on:
- mongodb
- rabbitmq
- consul
bluemixservice:
build: ./workerBluemix
container_name: bluemixservice
links:
- rabbitmq:rabbitlink
- rethinkdb:rethinkdb
- consul
ports:
- "8083:8083"
depends_on:
- rabbitmq
- rethinkdb
- consul
mongodb:
image: mongo:latest
container_name: mongo
ports:
- "27017:27017"
links:
- consul
volumes:
- mongo-data:/var/lib/mongo
command: mongod
rabbitmq:
image: rabbitmq:management
container_name: rabbitmq
ports:
- '15672:15672'
- '5672:5672'
links:
- consul
depends_on:
- consul
rethinkdb:
image: rethinkdb:latest
container_name: rethinkdb
ports:
- "8080:8080"
- "28015:28015"
links:
- consul
depends_on:
- consul
volumes:
mongo-data:
driver: local
rethink-data:
driver: local