I would like to find out how to access the environment variables of a linked Docker container. Specifically, I want my node app to read the host/port of the linked rethinkdb container. I am using docker-compose (bluemixservice and rethinkdb):
version: '2'
services:
twitterservice:
build: ./workerTwitter
links:
- mongodb:mongolink
- rabbitmq:rabbitlink
ports:
- "8082:8082"
depends_on:
- mongodb
- rabbitmq
bluemixservice:
build: ./workerBluemix
links:
- rabbitmq:rabbitlink
- rethinkdb:rethinkdb
ports:
- "8083:8083"
depends_on:
- rabbitmq
- rethinkdb
mongodb:
image: mongo:latest
ports:
- "27017:27017"
volumes:
- mongo-data:/var/lib/mongo
command: mongod
rabbitmq:
image: rabbitmq:management
ports:
- '15672:15672'
- '5672:5672'
rethinkdb:
image: rethinkdb:latest
ports:
- "8080:8080"
- "28015:28015"
volumes:
mongo-data:
driver: local
rethink-data:
driver: local
I would like to access them in my pm2 processes.json:
{
"apps": [
{
"name": "sentiment-service",
"script": "./src",
"merge_logs": true,
"max_restarts": 40,
"restart_delay": 10000,
"instances": 1,
"max_memory_restart": "200M",
"env": {
"PORT": 8080,
"NODE_ENV": "production",
"RABBIT_MQ": "amqp://rabbitlink:5672/",
"ALCHEMY_KEY": "xxxxxxx",
"RETHINK_DB_HOST": "Rethink DB Container Hostname?",
"RETHINK_DB_PORT": "Rethink DB Container Port?",
"RETHINK_DB_AUTHKEY": ""
}
}
]
}
This used to be possible (see here), but now the suggestion is to just use the linked service name as the hostname, as you are already doing in your example with rabbitmq. Regarding port numbers, I don't think it adds much to use variables for them; I'd just go with the plain number. You can, however, parameterize the whole docker-compose.yml using variables in case you want to be able to quickly change a value from outside.
Note that there is no need to alias links; I find it much clearer to just use the service name.
Also, links already implies depends_on.
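For example, here is a minimal sketch of what that could look like (the ${ALCHEMY_KEY} substitution and the RETHINK_* variable names are assumptions taken from the pm2 config above, not requirements of Compose):
version: '2'
services:
  bluemixservice:
    build: ./workerBluemix
    depends_on:
      - rabbitmq
      - rethinkdb
    environment:
      # The plain service names resolve on the Compose network; no link alias is needed.
      - RABBIT_MQ=amqp://rabbitmq:5672/
      - RETHINK_DB_HOST=rethinkdb
      - RETHINK_DB_PORT=28015
      # ${ALCHEMY_KEY} is substituted by Compose from the shell environment or an .env file.
      - ALCHEMY_KEY=${ALCHEMY_KEY}
The node process (or pm2) can then read these via process.env instead of hard-coding them in processes.json.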
I solved it by using Consul and Registrator to detect all my containers.
version: '2'
services:
consul:
command: -server -bootstrap -advertise 192.168.99.101
image: progrium/consul:latest
ports:
- 8300:8300
- 8400:8400 # rpc/rest
- 8500:8500 # ui
- 8600:53/udp # dns
registrator:
command: -ip=192.168.99.101 consul://consul:8500
image: gliderlabs/registrator:latest
volumes:
- "/var/run/docker.sock:/tmp/docker.sock"
links:
- consul
twitterservice:
build: ./workerTwitter
container_name: twitterservice
links:
- mongodb:mongolink
- rabbitmq:rabbitlink
- consul
ports:
- "8082:8082"
depends_on:
- mongodb
- rabbitmq
- consul
bluemixservice:
build: ./workerBluemix
container_name: bluemixservice
links:
- rabbitmq:rabbitlink
- rethinkdb:rethinkdb
- consul
ports:
- "8083:8083"
depends_on:
- rabbitmq
- rethinkdb
- consul
mongodb:
image: mongo:latest
container_name: mongo
ports:
- "27017:27017"
links:
- consul
volumes:
- mongo-data:/var/lib/mongo
command: mongod
rabbitmq:
image: rabbitmq:management
container_name: rabbitmq
ports:
- '15672:15672'
- '5672:5672'
links:
- consul
depends_on:
- consul
rethinkdb:
image: rethinkdb:latest
container_name: rethinkdb
ports:
- "8080:8080"
- "28015:28015"
links:
- consul
depends_on:
- consul
volumes:
mongo-data:
driver: local
rethink-data:
driver: local
Related
I'm trying to connect mongo-express and my node app to MongoDB with docker-compose, but I'm unable to authenticate. I cannot connect those services with docker-compose and continuously get this error:
Could not connect to database using connectionString: mongodb://admin:password#mongo:27017
The following is my docker-compose.yml:
version: '3'
services:
app:
build: .
env_file: .env
restart: always
environment:
- NODE_ENV=production
ports:
- "8080:8080"
volumes:
- .:/usr/src/app
command: yarn dev
depends_on:
- mongodb
networks:
- mongo-compose-network
links:
- mongodb
mongodb:
container_name: mongodb
image: mongo
environment:
- MONGO_INITDB_ROOT_USERNAME=admin
- MONGO_INITDB_ROOT_PASSWORD=password
ports:
- "27017:27017"
volumes:
- mongodbdata:/data/db
networks:
- mongo-compose-network
mongo-express:
image: mongo-express
depends_on:
- mongodb
ports:
- "8888:8081"
links:
- mongodb
environment:
- ME_CONFIG_MONGODB_ADMINUSERNAME=admin
- ME_CONFIG_MONGODB_ADMINPASSWORD=password
- ME_CONFIG_MONGODB_URL=mongodb://admin:password#mongo:27017
networks:
- mongo-compose-network
redis:
image: redis:latest
ports:
- "6379:6379"
networks:
- mongo-compose-network
volumes:
- redisdata:/data
celery:
build: .
command: node celery.js worker --loglevel=info
depends_on:
- app
- mongodb
- redis
networks:
- mongo-compose-network
environment:
- CELERY_BROKER_URL=redis://redis:6379/0
volumes:
mongodbdata:
redisdata:
networks:
mongo-compose-network:
driver: bridge
Resolved by creating an admin user from inside the mongodb container.
How can I add multiple ports?
I have a Node.js and a Django application:
webapp:
image: "python-node-buster"
ports:
- "8001:8000"
command:
- python manage.py runserver --noreload 0.0.0.0:8000
networks:
- django_network
node:
image: "python-node-buster"
ports:
- "3000:3000"
stdin_open: true
networks:
- node_network
ngrok:
image: wernight/ngrok:latest
ports:
- 4040:4040
environment:
NGROK_PROTOCOL: http
NGROK_PORT: node:3000
NGROK_AUTH: ""
depends_on:
- node
- webapp
networks:
- node_network
- django_network
networks:
django_network:
driver: bridge
node_network:
driver: bridge
Assuming I get the URL for node at http://localhost:4040/inspect/http as
http://xxxx.ngrok.io
and in my Node.js application I want to access
http://xxxx.ngrok.io:8001
for APIs, how do I configure this?
I'm trying to make requests to my server container from my client app in another container. The Docker Compose docs state that the network is set up automatically, so shouldn't all ports be accessible from all containers? When I make a curl request to port 4000 from outside the containers (in a fresh terminal), it works. However, when I enter the client container (selektor-client) and try the same request, it fails.
curl --request POST http://localhost:4000/api/music
What am I doing wrong?
docker-compose.yaml:
version: "3"
services:
client:
container_name: selektor-client
restart: always
build: ./client
ports:
- "3000:3000"
volumes:
- ./client/:/client/
- /client/node_modules/
command: ["yarn", "start"]
server:
container_name: selektor-server
restart: always
build: ./server
ports:
- "4000:4000"
volumes:
- ./server/:/server/
depends_on:
- mongo
command: ["yarn", "start"]
mongo:
container_name: selektor-mongo
command: mongod --noauth
build: .
restart: always
volumes:
# - ./mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
- data-volume:/data/db
ports:
- "27017:27017"
volumes:
data-volume:
When containers of the same stack call each other, the hostname is, by default, the service name, and the port is the container's internal port: <servicename>:<internal_port>. So, based on this part of your example:
version: "3"
server:
container_name: selektor-server
restart: always
build: ./server
ports:
- "4000:4000"
volumes:
- ./server/:/server/
depends_on:
- mongo
command: ["yarn", "start"]
The URL your client has to use to reach the server is http://server:4000.
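If the client needs that URL at runtime, one option is to pass it in through the client's environment (a sketch; the API_URL variable name is an assumption, not something from the question):
version: "3"
services:
  client:
    container_name: selektor-client
    build: ./client
    ports:
      - "3000:3000"
    environment:
      # "server" is the Compose service name and resolves inside the Compose network.
      - API_URL=http://server:4000
Keep in mind this only applies to code running inside the container; requests issued by a browser on the host machine still go through the published port, i.e. http://localhost:4000.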
I had the same problem. My solution was to define a network, give it a subnet mask, and register a static IP for each container:
version: "3"
networks:
my_network:
driver: bridge
ipam:
config:
- subnet: 172.20.0.0/24
services:
mongodb:
image: mongo:4.2.1
container_name: mongo
command: mongod --auth
hostname: mongo
networks:
my_network:
ipv4_address: 172.20.0.5
volumes:
- /data/db/mongo:/data/db
ports:
- "27017:27017"
rabitmq:
hostname: rabbitmq
container_name: rabbitmq
image: rabbitmq:latest
networks:
my_network:
ipv4_address: 172.20.0.3
volumes:
- /var/lib/rabbitmq:/data/db
ports:
- "5672:5672"
- "15672:15672"
restart: always
You can then access the containers through those IP addresses.
I've built two NestJS apps that share a common database. I've been trying to figure out a way to make these two apps get data from the same MongoDB container, but I'm stuck with this process.
docker-compose.yml with app 1
version: '3'
services:
backend:
image: 'example-image_1:1.0.0'
working_dir: /app/example-app-1/backend/example-app-1-api
environment:
- DB_URL=mongodb://0.0.0.0:27017/example_app_1
- BACKEND_PORT=3333
- BACKEND_IP=0.0.0.0
restart: always
network_mode: "host"
ports:
- '3333:3333'
command: ['node', 'main.js']
depends_on:
- mongodb
expose:
- 3333
mongodb:
image: 'mongo:latest'
environment:
- 'MONGODB_DATABASE="example_app_1"'
ports:
- '27017:27017'
expose:
- 27017
docker-compose.yml with app-2
version: '3'
services:
backend:
image: 'example_app_2:1.0.0'
working_dir: /app/example_app_2/backend/example-app-2-api
environment:
- DB_URL=mongodb://0.0.0.0:27017/example_app_2
- BACKEND_PORT=8888
- BACKEND_IP=0.0.0.0
restart: always
ports:
- '8888:8888'
command: ['node', 'main.js']
expose:
- 8888
mongodb:
image: 'mongo:latest'
environment:
- 'MONGODB_DATABASE="example_app_2"'
ports:
- '27017:27017'
expose:
- 27017
I need help making these apps communicate with the common mongodb container.
Since you're using 2 different docker-compose.yml files, you're actually running 2 backends and 2 mongodb instances on 2 Docker networks.
One of the 2 mongo containers won't start because its host port is already occupied.
Option 1 (nicer):
services:
backend_1:
...
ports:
- '8888:8888'
backend_2:
...
ports:
- '8899:8899'
mongodb:
ports:
- '27017:27017'
This setup provides 3 containers on the same network.
Now you can reach mongo from both backends at <mongo_ip>:27017.
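A slightly fuller sketch of Option 1, keeping the original images and ports (the DB_URL values are assumptions based on the two original files; here the backends reach Mongo by its service name rather than an IP):
version: '3'
services:
  backend_1:
    image: 'example-image_1:1.0.0'
    environment:
      # "mongodb" is the service name below, resolved by Compose's embedded DNS.
      - DB_URL=mongodb://mongodb:27017/example_app_1
    ports:
      - '3333:3333'
    depends_on:
      - mongodb
  backend_2:
    image: 'example_app_2:1.0.0'
    environment:
      - DB_URL=mongodb://mongodb:27017/example_app_2
    ports:
      - '8888:8888'
    depends_on:
      - mongodb
  mongodb:
    image: 'mongo:latest'
    ports:
      - '27017:27017'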
Option 2 (ugly):
services:
backend:
...
ports:
- '8888:8888'
mongodb:
ports:
- '27017:27017'
And in another docker-compose
services:
backend:
...
ports:
- '8888:8888'
This setup provides 3 containers on 2 different networks.
In this setup each docker-compose.yml has its own Docker network, so from the second backend service you have to connect to the other project's network to access the mongodb container, as sketched below.
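One way to do that is to declare the first project's network as external in the second compose file (a sketch; the network name depends on the first project's directory name, so app1_default below is only an assumption):
# docker-compose.yml of the second app
version: '3'
services:
  backend:
    image: 'example_app_2:1.0.0'
    environment:
      - DB_URL=mongodb://mongodb:27017/example_app_2
    ports:
      - '8888:8888'
    networks:
      - app1_default
networks:
  app1_default:
    external: true
Once the second backend is attached to that network, the mongodb service from the first file is reachable by its service name, and the second compose file no longer needs its own mongodb service.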
I'm kinda new to Docker, so sorry if my terminology is a little wrong. I'm in the process of getting my app to run in Docker. Everything starts up and runs correctly, but I'm unable to set the IP address that the services run on. I need to do so since I'm making API calls that previously referenced a static variable in my JS code. It's especially important for the spark service to have a knowable IP; as of now it's randomly assigned.
docker-compose.yml
version: '3.0' # specify docker-compose version
services:
vue:
build: client
ports:
- "80:80" # specify port mapping
spark:
build: accubrew-spark
ports:
- "8080:8080"
express:
build: server
ports:
- "3000:3000"
links:
- database
database:
image: mongo
ports:
- "27017:27017"```
When you run containers using docker-compose, it creates a user-defined network for you, and Docker provides an embedded DNS server; each container gets a DNS record that is resolvable only from the other containers on that network.
This makes it easy to contact each service by just calling it by the name you specified in your docker-compose.yml.
You can try this:
version: '3.0' # specify docker-compose version
services:
vue:
build: client
ports:
- "80:80" # specify port mapping
spark:
build: accubrew-spark
ports:
- "8080:8080"
networks:
my_net:
ipv4_address: 172.26.0.3
express:
build: server
ports:
- "3000:3000"
links:
- database
database:
image: mongo
ports:
- "27017:27017"
networks:
my_net:
ipam:
driver: default
config:
- subnet: 172.26.0.0/16
But your spark port is published on localhost:8080. If you need to expose another port on the 172.26.0.0 network, you can add - "7077", or map it to localhost as well with - "7077:7077". This is an example with port 7077 exposed:
version: '3.0' # specify docker-compose version
services:
vue:
build: client
ports:
- "80:80" # specify port mapping
spark:
build: accubrew-spark
ports:
- "8080:8080"
- "7077"
networks:
my_net:
ipv4_address: 172.26.0.3
express:
build: server
ports:
- "3000:3000"
links:
- database
database:
image: mongo
ports:
- "27017:27017"
networks:
my_net:
ipam:
driver: default
config:
- subnet: 172.26.0.0/16
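If a fixed address isn't strictly required, the embedded DNS described above is usually enough: the JS code can read its base URLs from the environment instead of a hard-coded IP. A sketch under that assumption (SPARK_URL and MONGO_URL are made-up variable names):
version: '3.0'
services:
  spark:
    build: accubrew-spark
    ports:
      - "8080:8080"
  express:
    build: server
    ports:
      - "3000:3000"
    environment:
      # "spark" and "database" are the Compose service names, resolved by the embedded DNS.
      - SPARK_URL=http://spark:8080
      - MONGO_URL=mongodb://database:27017
  database:
    image: mongo
    ports:
      - "27017:27017"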