How to access containers in host network using docker compose? - node.js

I have the following docker-compose file, which I am trying to run on the host network. The build succeeds, but I can't reach anything after the containers start.
version: "3.3"
services:
  mycontainer_3000:
    container_name: mycontainer_3000
    build:
      context: src/mycontainer_3000
      dockerfile: ./../../Dockerfile.dev
      args:
        - T_ENV
    image: myproject/mycontainer_3000:latest
    restart: unless-stopped
    network_mode: host
    volumes:
      - "./src/mycontainer_3000:/usr/src/app"
    depends_on:
      - mycontainer_3001
  mycontainer_3001:
    container_name: mycontainer_3001
    build:
      context: src/mycontainer_3001
      dockerfile: ./../../Dockerfile.dev
      args:
        - T_ENV
    image: myproject/mycontainer_3001:latest
    volumes:
      - "./src/mycontainer_3001:/usr/src/app"
    depends_on:
      - mongo
  mycontainer_3003:
    container_name: mycontainer_3003
    build:
      context: src/mycontainer_3003
      dockerfile: ./../../Dockerfile.dev
      args:
        - T_ENV
    image: myproject/mycontainer_3003:latest
    network_mode: host
    volumes:
      - "./src/mycontainer_3003:/usr/src/app"
    depends_on:
      - mongo
  mycontainer_3002:
    container_name: mycontainer_3002
    build:
      context: src/mycontainer_3002
      dockerfile: ./../../Dockerfile.dev
      args:
        - T_ENV
    image: myproject/mycontainer_3002:latest
    network_mode: host
    volumes:
      - "./src/mycontainer_3002:/usr/src/app"
  mongo:
    image: mongo
    restart: always
    network_mode: host
    volumes:
      - "./db/:/data/db"
  mongo-express:
    image: mongo-express
    restart: always
    network_mode: host
    environment:
      ME_CONFIG_MONGODB_URL: mongodb://mongo:27017/
When I run docker-compose up, all the containers are created successfully.
But when I try to access any of them at http://localhost:3000 (or any other port), nothing is listening on that port!

Try adding ports to expose them explicitly:
version: "3.3"
services:
  mycontainer_3000:
    container_name: mycontainer_3000
    build:
      context: src/mycontainer_3000
      dockerfile: ./../../Dockerfile.dev
      args:
        - T_ENV
    image: myproject/mycontainer_3000:latest
    restart: unless-stopped
    network_mode: host
    # HERE
    ports:
      - "3000:3000"
    volumes:
      - "./src/mycontainer_3000:/usr/src/app"
    depends_on:
      - mycontainer_3001
  mycontainer_3001:
    container_name: mycontainer_3001
    build:
      context: src/mycontainer_3001
      dockerfile: ./../../Dockerfile.dev
      args:
        - T_ENV
    image: myproject/mycontainer_3001:latest
    # HERE
    ports:
      - "3001:3001"
    volumes:
      - "./src/mycontainer_3001:/usr/src/app"
    depends_on:
      - mongo
  mycontainer_3003:
    container_name: mycontainer_3003
    build:
      context: src/mycontainer_3003
      dockerfile: ./../../Dockerfile.dev
      args:
        - T_ENV
    image: myproject/mycontainer_3003:latest
    network_mode: host
    # HERE
    ports:
      - "3003:3003"
    volumes:
      - "./src/mycontainer_3003:/usr/src/app"
    depends_on:
      - mongo
  mycontainer_3002:
    container_name: mycontainer_3002
    build:
      context: src/mycontainer_3002
      dockerfile: ./../../Dockerfile.dev
      args:
        - T_ENV
    image: myproject/mycontainer_3002:latest
    network_mode: host
    # HERE
    ports:
      - "3002:3002"
    volumes:
      - "./src/mycontainer_3002:/usr/src/app"
  mongo:
    image: mongo
    restart: always
    network_mode: host
    # HERE
    ports:
      - "27017:27017"
    volumes:
      - "./db/:/data/db"
  mongo-express:
    image: mongo-express
    restart: always
    network_mode: host
    # HERE
    ports:
      - "8080:8080"
    environment:
      ME_CONFIG_MONGODB_URL: mongodb://mongo:27017/
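Note that when network_mode: host is set, Docker ignores the ports: mapping entirely; the container shares the host's network stack, so the app is only reachable if it actually binds a port on the host. If the services are still not reachable, an alternative worth trying is dropping network_mode: host and relying on published ports plus the default bridge network. A minimal sketch, assuming each app listens on its numbered port inside the container:

```yaml
# Sketch: bridge networking instead of host networking (service and
# port names taken from the compose file above; this is an assumption,
# not the accepted fix).
services:
  mycontainer_3000:
    build:
      context: src/mycontainer_3000
      dockerfile: ./../../Dockerfile.dev
    ports:
      - "3000:3000"   # host:container -> http://localhost:3000
  mongo:
    image: mongo
    ports:
      - "27017:27017"
```

On the default bridge network, the other services would then reach Mongo via the service name, e.g. mongodb://mongo:27017/, rather than localhost.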

Related

How to docker compose to run azure container instance

I have a docker-compose file with multiple services (Prometheus, Grafana, a Spring Boot microservice, and ELK). I am able to start the containers on my local machine, but after modifying the file to deploy to Azure Container Instances, it's failing with errors like:
service "prometheus" refers to undefined volume fsefileshare: invalid compose project
error looking up volume plugin azure_file: plugin "azure_file" not found
Sample docker-compose file that runs locally:
version: '3.9'
services:
  setup:
    build:
      context: ./config/setup/
      dockerfile: Dockerfile
      args:
        ELASTIC_VERSION: ${ELASTIC_VERSION}
    init: true
    volumes:
      - setup:/state:Z
    environment:
      ELASTIC_PASSWORD: ${ELASTIC_PASSWORD:-}
      LOGSTASH_INTERNAL_PASSWORD: ${LOGSTASH_INTERNAL_PASSWORD:-}
      KIBANA_SYSTEM_PASSWORD: ${KIBANA_SYSTEM_PASSWORD:-}
    networks:
      - fse_net
  database_mysql:
    image: mysql:8.0
    restart: always
    volumes:
      - mysql_data:/var/lib/mysql
      - ./fse_auth.sql:/docker-entrypoint-initdb.d/fse_auth.sql:ro
    environment:
      MYSQL_ROOT_PASSWORD: root
    networks:
      - fse_net
  #************Mongo DB - 1***************
  database_mongo:
    restart: always
    container_name: database_mongo
    image: mongo:latest
    volumes:
      - mongo_data:/data/db
      - ./mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
    ports:
      - 27017:27017
    networks:
      - fse_net
  #************prometheus***************
  prometheus:
    image: prom/prometheus
    container_name: prometheus
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - 9090:9090
    depends_on:
      - database_mongo
      - registery
      - company
      - stock
      - gateway
    networks:
      - fse_net
  #************company***************
  company:
    container_name: company
    restart: always
    environment:
      - EUREKA_REGISTERY=registery
      - DATABASE_HOST=database_mongo
    build:
      context: ./company
      dockerfile: Dockerfile
    ports:
      - 8086:8086
    depends_on:
      - database_mongo
      - database_mysql
    networks:
      - fse_net
  #************stock***************
  stock:
    container_name: stock
    environment:
      - EUREKA_REGISTERY=registery
      - DATABASE_HOST=database_mongo
    build:
      context: ./stock
      dockerfile: Dockerfile
    ports:
      - 8081:8081
    depends_on:
      - database_mongo
    networks:
      - fse_net
volumes:
  setup:
  mysql_data:
  mongo_data:
  grafana-storage:
  zookeeper_data:
  zookeeper_log:
  kafka_data:
  elasticsearch:
networks:
  fse_net:
    driver: bridge
The docker-compose file after modification for Azure:
version: '3.9'
services:
  setup:
    build:
      context: ./config/setup/
      dockerfile: Dockerfile
      args:
        ELASTIC_VERSION: ${ELASTIC_VERSION}
    image: myazureacr.azurecr.io/setup
    init: true
    volumes:
      - setup:/state:Z
    environment:
      ELASTIC_PASSWORD: ${ELASTIC_PASSWORD:-}
      LOGSTASH_INTERNAL_PASSWORD: ${LOGSTASH_INTERNAL_PASSWORD:-}
      KIBANA_SYSTEM_PASSWORD: ${KIBANA_SYSTEM_PASSWORD:-}
    networks:
      - fse_net
  database_mysql:
    image: mysql:8.0
    restart: always
    volumes:
      - fse_data:/var/lib/mysql
      - fsefileshare/fse_auth.sql:/docker-entrypoint-initdb.d/fse_auth.sql:ro
    environment:
      MYSQL_ROOT_PASSWORD: root
    networks:
      - fse_net
  #************Mongo DB - 1***************
  database_mongo:
    restart: always
    container_name: database_mongo
    image: mongo:latest
    volumes:
      - fse_data:/data/db
      - fsefileshare/mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
    ports:
      - 27017:27017
    networks:
      - fse_net
  #************prometheus***************
  prometheus:
    image: prom/prometheus
    container_name: prometheus
    # environment:
    #   - APP_GATEWAY=gateway
    #   - REGISTERY_APP=registery
    volumes:
      - fsefileshare/prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - 9090:9090
    depends_on:
      - database_mongo
      - company
      - stock
    networks:
      - fse_net
  #************company***************
  company:
    container_name: company
    restart: always
    environment:
      - EUREKA_REGISTERY=registery
      - DATABASE_HOST=database_mongo
    build:
      context: ./company
      dockerfile: Dockerfile
    image: myazureacr.azurecr.io/company
    ports:
      - 8086:8086
    depends_on:
      - database_mongo
      - database_mysql
    networks:
      - fse_net
  #************stock***************
  stock:
    container_name: stock
    environment:
      - EUREKA_REGISTERY=registery
      - DATABASE_HOST=database_mongo
    build:
      context: ./stock
      dockerfile: Dockerfile
    image: myazureacr.azurecr.io/stock
    ports:
      - 8081:8081
    depends_on:
      - database_mongo
    networks:
      - fse_net
volumes:
  fse_data:
    driver: azure_file
    driver_opts:
      share_name: fsefileshare
      storage_account_name: fsestorageaccount
networks:
  fse_net:
    driver: bridge
Your YAML looks wrong. According to the documentation, this is how you should define the volume:
volumes: # Array of volumes available to the instances
  - name: string
    azureFile:
      shareName: string
      readOnly: boolean
      storageAccountName: string
      storageAccountKey: string
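The first error ("refers to undefined volume fsefileshare") also points at the service mounts: sources like fsefileshare/prometheus.yml are parsed by Compose as a named volume called fsefileshare, which is never declared at the top level. A hedged sketch of one way to declare it, reusing the share and storage-account names from the file above (whether the azure_file driver resolves depends on running against a Docker ACI context, which is an assumption here):

```yaml
# Sketch: declare the file share as a named volume so mounts like
# "fsefileshare/prometheus.yml:..." refer to something that exists.
volumes:
  fsefileshare:
    driver: azure_file
    driver_opts:
      share_name: fsefileshare
      storage_account_name: fsestorageaccount
```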

docker-compose.yml spark/hadoop/hive for three data nodes

This docker-compose.yml with one datanode seems to work ok:
version: "3"
services:
  namenode:
    image: bde2020/hadoop-namenode:2.0.0-hadoop3.2.1-java8
    container_name: namenode
    restart: always
    ports:
      - 9870:9870
      - 9010:9000
    volumes:
      - hadoop_namenode:/hadoop/dfs/name
    environment:
      - CLUSTER_NAME=test
      - CORE_CONF_fs_defaultFS=hdfs://namenode:9000
    env_file:
      - ./hadoop.env
  datanode:
    image: bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8
    container_name: datanode
    restart: always
    volumes:
      - hadoop_datanode:/hadoop/dfs/data
    environment:
      SERVICE_PRECONDITION: "namenode:9870"
      CORE_CONF_fs_defaultFS: hdfs://namenode:9000
    ports:
      - "9864:9864"
    env_file:
      - ./hadoop.env
  resourcemanager:
    image: bde2020/hadoop-resourcemanager:2.0.0-hadoop3.2.1-java8
    container_name: resourcemanager
    restart: always
    environment:
      SERVICE_PRECONDITION: "namenode:9000 namenode:9870 datanode:9864"
    env_file:
      - ./hadoop.env
  nodemanager1:
    image: bde2020/hadoop-nodemanager:2.0.0-hadoop3.2.1-java8
    container_name: nodemanager
    restart: always
    environment:
      SERVICE_PRECONDITION: "namenode:9000 namenode:9870 datanode:9864 resourcemanager:8088"
    env_file:
      - ./hadoop.env
  historyserver:
    image: bde2020/hadoop-historyserver:2.0.0-hadoop3.2.1-java8
    container_name: historyserver
    restart: always
    environment:
      SERVICE_PRECONDITION: "namenode:9000 namenode:9870 datanode:9864 resourcemanager:8088"
    volumes:
      - hadoop_historyserver:/hadoop/yarn/timeline
    env_file:
      - ./hadoop.env
  spark-master:
    image: bde2020/spark-master:3.0.0-hadoop3.2
    container_name: spark-master
    depends_on:
      - namenode
      - datanode
    ports:
      - "8080:8080"
      - "7077:7077"
    environment:
      - INIT_DAEMON_STEP=setup_spark
      - CORE_CONF_fs_defaultFS=hdfs://namenode:9000
  spark-worker-1:
    image: bde2020/spark-worker:3.0.0-hadoop3.2
    container_name: spark-worker-1
    depends_on:
      - spark-master
    ports:
      - "8081:8081"
    environment:
      - "SPARK_MASTER=spark://spark-master:7077"
      - CORE_CONF_fs_defaultFS=hdfs://namenode:9000
  hive-server:
    image: bde2020/hive:2.3.2-postgresql-metastore
    container_name: hive-server
    depends_on:
      - namenode
      - datanode
    env_file:
      - ./hadoop-hive.env
    environment:
      HIVE_CORE_CONF_javax_jdo_option_ConnectionURL: "jdbc:postgresql://hive-metastore/metastore"
      SERVICE_PRECONDITION: "hive-metastore:9083"
    ports:
      - "10000:10000"
  hive-metastore:
    image: bde2020/hive:2.3.2-postgresql-metastore
    container_name: hive-metastore
    env_file:
      - ./hadoop-hive.env
    command: /opt/hive/bin/hive --service metastore
    environment:
      SERVICE_PRECONDITION: "namenode:9870 datanode:9864 hive-metastore-postgresql:5432"
    ports:
      - "9083:9083"
  hive-metastore-postgresql:
    image: bde2020/hive-metastore-postgresql:2.3.0
    container_name: hive-metastore-postgresql
  presto-coordinator:
    image: shawnzhu/prestodb:0.181
    container_name: presto-coordinator
    ports:
      - "8089:8089"
volumes:
  hadoop_namenode:
  hadoop_datanode:
  hadoop_historyserver:
I want to modify it to use three datanodes. I tried adding the following right below the original datanode section, but it doesn't seem to work. It basically adds new names and new ports:
  datanode1:
    image: bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8
    container_name: datanode1
    restart: always
    volumes:
      - hadoop_datanode:/hadoop/dfs/data
    environment:
      SERVICE_PRECONDITION: "namenode:9870"
      CORE_CONF_fs_defaultFS: hdfs://namenode:9000
    ports:
      - "9865:9865"
    env_file:
      - ./hadoop.env
  datanode2:
    image: bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8
    container_name: datanode2
    restart: always
    volumes:
      - hadoop_datanode:/hadoop/dfs/data
    environment:
      SERVICE_PRECONDITION: "namenode:9870"
      CORE_CONF_fs_defaultFS: hdfs://namenode:9000
    ports:
      - "9866:9866"
    env_file:
      - ./hadoop.env
Should this work, and if not, what do I need to change to get three datanodes?
Check your ports setting — the port mapping is faulty. You have "9865:9865" (datanode1) and "9866:9866" (datanode2).
Try setting them to "9865:9864" and "9866:9864" respectively: 9864 is the default port the datanode listens on inside the container, and the first number defines how the datanode is reachable from outside the Docker network.
With the suggested configuration, your datanodes will be reachable on port 9864 from within the network (datanode1:9864, datanode2:9864), and on host ports 9864, 9865, and 9866 from outside the Docker network.
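A related pitfall: all three datanodes mount the same hadoop_datanode named volume, so they would write into one shared data directory. A sketch combining the port fix with one volume per datanode (the volume names hadoop_datanode1/hadoop_datanode2 are assumptions, not from the original file):

```yaml
  datanode1:
    image: bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8
    container_name: datanode1
    restart: always
    volumes:
      - hadoop_datanode1:/hadoop/dfs/data   # separate volume per datanode
    environment:
      SERVICE_PRECONDITION: "namenode:9870"
      CORE_CONF_fs_defaultFS: hdfs://namenode:9000
    ports:
      - "9865:9864"   # host 9865 -> the datanode's default port 9864
    env_file:
      - ./hadoop.env
```

The new volume would also need to be declared alongside the existing ones in the top-level volumes: section (and likewise hadoop_datanode2 for datanode2 with "9866:9864").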

Changing the source code does not live update using docker-compose and volumes on mern stack

I would like to implement hot reloading for the development environment, so that when I change anything in the source code, the change is reflected inside the Docker container through the mounted volume and I can see it live on localhost.
Below is my docker-compose file
version: '3.9'
services:
  server:
    restart: always
    build:
      context: ./server
      dockerfile: Dockerfile
    volumes:
      # don't overwrite this folder in container with the local one
      - ./app/node_modules
      # map current local directory to /app inside the container
      # This is a must for development in order to update our container whenever
      # a change to the source code is made. Without this, you would have to
      # rebuild the image each time you make a change to the source code.
      - ./server:/app
    # ports:
    #   - 3001:3001
    depends_on:
      - mongodb
    environment:
      NODE_ENV: ${NODE_ENV}
      MONGO_URI: mongodb://${MONGO_ROOT_USERNAME}:${MONGO_ROOT_PASSWORD}@mongodb
    networks:
      - anfel-network
  client:
    stdin_open: true
    build:
      context: ./client
      dockerfile: Dockerfile
    volumes:
      - ./app/node_modules
      - ./client:/app
    # ports:
    #   - 3000:3000
    depends_on:
      - server
    networks:
      - anfel-network
  mongodb:
    image: mongo
    restart: always
    ports:
      - 27017:27017
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_ROOT_USERNAME}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_ROOT_PASSWORD}
    volumes:
      # for persistent storage
      - mongodb-data:/data/db
    networks:
      - anfel-network
  # mongo express used during development
  mongo-express:
    image: mongo-express
    depends_on:
      - mongodb
    ports:
      - 8081:8081
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: ${MONGO_ROOT_USERNAME}
      ME_CONFIG_MONGODB_ADMINPASSWORD: ${MONGO_ROOT_PASSWORD}
      ME_CONFIG_MONGODB_PORT: 27017
      ME_CONFIG_MONGODB_SERVER: mongodb
      ME_CONFIG_BASICAUTH_USERNAME: root
      ME_CONFIG_BASICAUTH_PASSWORD: root
    volumes:
      - mongodb-data
    networks:
      - anfel-network
  nginx:
    restart: always
    depends_on:
      - server
      - client
    build:
      context: ./nginx
      dockerfile: Dockerfile
    ports:
      - '8080:80'
    networks:
      - anfel-network
    # volumes:
    #   - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
networks:
  anfel-network:
    driver: bridge
volumes:
  mongodb-data:
    driver: local
Any suggestions would be appreciated.
You have to create a bind mount; this should help you.
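One detail worth checking in the compose file above: `- ./app/node_modules` (with a leading `./`) is a host bind mount of a local app/node_modules folder, not the usual anonymous volume that protects the container's node_modules from being shadowed. A minimal sketch of the common hot-reload pattern, assuming the app lives at /app in the image and the dev server (e.g. nodemon or the CRA dev server) watches for file changes:

```yaml
services:
  server:
    build:
      context: ./server
      dockerfile: Dockerfile
    volumes:
      - ./server:/app        # bind mount: local edits appear in the container
      - /app/node_modules    # anonymous volume: keep the container's node_modules
```

Depending on the setup (Docker Desktop file sharing, CRA), the watcher may also need polling enabled, e.g. setting CHOKIDAR_USEPOLLING=1 in the service's environment — that requirement is host-dependent.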

Docker compose error to connect postgres database with Node API

I am getting a connection error from docker-compose when my Node API tries to access a Postgres database.
I'm using Sequelize as the ORM to access the database, but I don't know what's happening.
docker-compose.yml:
version: '3.5'
services:
  api-service:
    build:
      context: .
      dockerfile: ./api-docker.dockerfile
    image: api-service
    container_name: api-service
    restart: always
    env_file: .env
    environment:
      - NODE_ENV=$NODE_ENV
    ports:
      - ${PORT}:3000
    volumes:
      - .:/home/node/api
      - node_modules:/home/node/api/node_modules
    depends_on:
      - postgres-db
    networks:
      - api-network
    command: npm run start:dev
  postgres-db:
    expose:
      - ${PORT_SERVICE}
    ports:
      - ${PORT_SERVICE}:5432
    restart: always
    env_file: .env
    volumes:
      - pgReportData:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: ${USER_SERVICE}
      POSTGRES_PASSWORD: ${PASSWORD_SERVICE}
      POSTGRES_DB: ${DATABASE_SERVICE}
    networks:
      - api-network
    container_name: postgres-db
    image: postgres:10
networks:
  api-network:
    driver: bridge
volumes:
  pgReportData:
    driver: local
  node_modules:
.env:
NODE_ENV=development
PORT=30780
HOST_SERVICE=postgres-db
DATABASE_SERVICE=base
USER_SERVICE=user
PASSWORD_SERVICE=password
DIALECT=postgres
PORT_SERVICE=5444
api-docker.dockerfile:
FROM node:12
WORKDIR /src
COPY . .
COPY --chown=node:node . .
USER node
RUN npm install
EXPOSE $PORT
ENTRYPOINT ["npm", "run", "start:dev"]
And when I run docker-compose up, I'm getting this error:
Any idea? Can someone help me?
If Node is the only app that connects to postgres-db, you can remove the networks and just expose Postgres's running port (5432). To connect to the db you can simply use the container name as the host.
Connection string: "postgres://YourUserName:YourPassword@postgres-db:5432/YourDatabase"
version: '3.5'
services:
  api-service:
    build:
      context: .
      dockerfile: ./api-docker.dockerfile
    image: api-service
    container_name: api-service
    restart: always
    env_file: .env
    environment:
      - NODE_ENV=$NODE_ENV
    ports:
      - ${PORT}:3000
    volumes:
      - .:/home/node/api
      - node_modules:/home/node/api/node_modules
    depends_on:
      - postgres-db
    command: npm run start:dev
  postgres-db:
    expose:
      - 5432
    restart: always
    env_file: .env
    volumes:
      - pgReportData:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: ${USER_SERVICE}
      POSTGRES_PASSWORD: ${PASSWORD_SERVICE}
      POSTGRES_DB: ${DATABASE_SERVICE}
    container_name: postgres-db
    image: postgres:10
volumes:
  pgReportData:
    driver: local
  node_modules:
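One more thing to verify: inside the compose network the API must connect to the container port (5432), not the published host port from PORT_SERVICE (5444 in the .env). A hedged sketch of passing the right host and port to the API (DB_HOST/DB_PORT are illustrative variable names, not ones the project necessarily uses):

```yaml
# Sketch: point the API at the service name and the *container* port.
services:
  api-service:
    environment:
      DB_HOST: postgres-db   # service/container name resolves on the network
      DB_PORT: "5432"        # container port, not the published host port
```

With the .env values above, the resulting connection string would look like postgres://user:password@postgres-db:5432/base.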

Docker-Compose: Service xxx depends on service xxx which is undefined

I'm getting this error:
ERROR: Service 'db' depends on service 'apache' which is undefined.
Why does it say that apache is undefined? I checked the indentation; it seems right.
version: '3.5'
services:
  apache:
    build: ./Docker
    image: apache:latest
    ports:
      - "80:80"
    restart: always
networks:
  default:
    name: frontend-network
services:
  db:
    image: mariadb:latest
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
    depends_on:
      - "apache"
  adminer:
    image: adminer
    restart: always
    ports:
      - "8080:8080"
    depends_on:
      - "db"
networks:
  default:
    name: frontend-network
No, it's not defined: the file declares services: twice, and the second block overwrites the first, so apache is lost.
You should fix the configuration — a single services: map containing all three services, with networks: declared once at the top level:
version: '3.5'
services:
  apache:
    build: ./Docker
    image: apache:latest
    ports:
      - "80:80"
    restart: always
  db:
    image: mariadb:latest
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
    depends_on:
      - "apache"
  adminer:
    image: adminer
    restart: always
    ports:
      - "8080:8080"
    depends_on:
      - "db"
networks:
  default:
    name: frontend-network
