I'm trying to run Neo4j in causal cluster mode. Everything is meant to run in Docker, with the configuration inside docker-compose.yml. All instances of the cluster are running; however, when I try to connect to Neo4j through Node.js (which of course is run by the same docker-compose.yml), I get: Neo4j :: executeQuery :: Error Neo4jError: getaddrinfo ENOTFOUND neo4j. How can I make it work, i.e. connect from Node to Neo4j in causal cluster mode inside a Docker container? Here's my docker-compose.yml:
version: '3'
networks:
  lan:
services:
  app:
    build:
      dockerfile: Dockerfile.dev
      context: ./
    links:
      - core1
      - core2
      - core3
      - read1
    volumes:
      - /app/node_modules
      - ./:/app
    ports:
      - '3000:3000'
    environment:
      - REACT_APP_NEO4J_HOST=bolt://neo4j
  core1:
    image: neo4j:3.5.11-enterprise
    networks:
      - lan
    ports:
      - 7474:7474
      - 6477:6477
      - 7687:7687
    volumes:
      - $HOME/neo4j/neo4j-core1/conf:/conf
      - $HOME/neo4j/neo4j-core1/data:/data
      - $HOME/neo4j/neo4j-core1/logs:/logs
      - $HOME/neo4j/neo4j-core1/plugins:/plugins
    environment:
      - REACT_APP_NEO4J_HOST=bolt://neo4j
      - NEO4J_AUTH=neo4j/changeme
      - NEO4J_dbms_mode=CORE
      - NEO4J_ACCEPT_LICENSE_AGREEMENT=yes
      - NEO4J_causal__clustering_minimum__core__cluster__size__at__formation=3
      - NEO4J_causal__clustering_minimum__core__cluster__size__at__runtime=3
      - NEO4J_causal__clustering_initial__discovery__members=core1:5000,core2:5000,core3:5000
      - NEO4J_dbms_connector_http_listen__address=:7474
      - NEO4J_dbms_connector_https_listen__address=:6477
      - NEO4J_dbms_connector_bolt_listen__address=:7687
  core2:
    image: neo4j:3.5.11-enterprise
    networks:
      - lan
    ports:
      - 7475:7475
      - 6478:6478
      - 7688:7688
    volumes:
      - $HOME/neo4j/neo4j-core2/conf:/conf
      - $HOME/neo4j/neo4j-core2/data:/data
      - $HOME/neo4j/neo4j-core2/logs:/logs
      - $HOME/neo4j/neo4j-core1/plugins:/plugins
    environment:
      - REACT_APP_NEO4J_HOST=bolt://neo4j
      - NEO4J_AUTH=neo4j/changeme
      - NEO4J_dbms_mode=CORE
      - NEO4J_ACCEPT_LICENSE_AGREEMENT=yes
      - NEO4J_causal__clustering_minimum__core__cluster__size__at__formation=3
      - NEO4J_causal__clustering_minimum__core__cluster__size__at__runtime=3
      - NEO4J_causal__clustering_initial__discovery__members=core1:5000,core2:5000,core3:5000
      - NEO4J_dbms_connector_http_listen__address=:7475
      - NEO4J_dbms_connector_https_listen__address=:6478
      - NEO4J_dbms_connector_bolt_listen__address=:7688
  core3:
    image: neo4j:3.5.11-enterprise
    networks:
      - lan
    ports:
      - 7476:7476
      - 6479:6479
      - 7689:7689
    volumes:
      - $HOME/neo4j/neo4j-core3/conf:/conf
      - $HOME/neo4j/neo4j-core3/data:/data
      - $HOME/neo4j/neo4j-core3/logs:/logs
      - $HOME/neo4j/neo4j-core1/plugins:/plugins
    environment:
      - REACT_APP_NEO4J_HOST=bolt://neo4j
      - NEO4J_AUTH=neo4j/changeme
      - NEO4J_dbms_mode=CORE
      - NEO4J_ACCEPT_LICENSE_AGREEMENT=yes
      - NEO4J_causal__clustering_minimum__core__cluster__size__at__formation=3
      - NEO4J_causal__clustering_minimum__core__cluster__size__at__runtime=3
      - NEO4J_causal__clustering_initial__discovery__members=core1:5000,core2:5000,core3:5000
      - NEO4J_dbms_connector_http_listen__address=:7476
      - NEO4J_dbms_connector_https_listen__address=:6479
      - NEO4J_dbms_connector_bolt_listen__address=:7689
  read1:
    image: neo4j:3.5.11-enterprise
    networks:
      - lan
    ports:
      - 7477:7477
      - 6480:6480
      - 7690:7690
    volumes:
      - $HOME/neo4j/neo4j-read1/conf:/conf
      - $HOME/neo4j/neo4j-read1/data:/data
      - $HOME/neo4j/neo4j-read1/logs:/logs
      - $HOME/neo4j/neo4j-core1/plugins:/plugins
    environment:
      - REACT_APP_NEO4J_HOST=bolt://neo4j
      - NEO4J_AUTH=neo4j/changeme
      - NEO4J_dbms_mode=READ_REPLICA
      - NEO4J_ACCEPT_LICENSE_AGREEMENT=yes
      - NEO4J_causalClustering_initialDiscoveryMembers=core1:5000,core2:5000,core3:5000
      - NEO4J_dbms_connector_http_listen__address=:7477
      - NEO4J_dbms_connector_https_listen__address=:6480
      - NEO4J_dbms_connector_bolt_listen__address=:7690
And my Dockerfile.dev:
FROM node:alpine
WORKDIR '/app'
RUN apk update && apk add yarn python g++ make && rm -rf /var/cache/apk/*
COPY ./package.json ./
RUN npm install
COPY . .
CMD ["npm", "run", "start"]
1) The application should be attached to the same network:

app:
  networks:
    - lan

2) Declare the network. Docker Compose creates it automatically; if a network named "lan" already exists, mark it as external instead:

networks:
  lan:
    driver: bridge

3) "links" is deprecated in Docker Compose:

app:
  build:
    dockerfile: Dockerfile.dev
    context: ./
  links:
    - core1
    - core2
    - core3
    - read1

Instead of this, use depends_on if you need to maintain the start-up sequence.
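Note also what the error says: getaddrinfo ENOTFOUND neo4j. There is no service called neo4j in the compose file, so that hostname cannot resolve on the Docker network; the driver has to target one of the actual service names. A minimal sketch of a fixed app service, assuming the first core member (core1) and its in-network bolt port 7687 are the intended entry point:

```yaml
app:
  build:
    dockerfile: Dockerfile.dev
    context: ./
  networks:
    - lan              # same network as the cluster members
  depends_on:          # replaces the deprecated links
    - core1
    - core2
    - core3
    - read1
  volumes:
    - /app/node_modules
    - ./:/app
  ports:
    - '3000:3000'
  environment:
    # service names resolve via Docker's embedded DNS on the shared network
    - REACT_APP_NEO4J_HOST=bolt://core1:7687
```

With all members attached to the lan network, the names core1, core2, core3, and read1 become resolvable from the app container.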
Edit: please read the comments for the follow-up questions.
Here is my docker-compose.yml. When I comment out the volumes section of "khaothi-manager", my services work correctly; when I uncomment it, my Node service throws an error that it cannot connect to Mongo.
version: "3.8"
services:
  mongo:
    image: mongo
    restart: always
    env_file: ./.env
    ports:
      - $MONGO_LOCAL_PORT:$DB_PORT
    volumes:
      - ./data:/data/db
    networks:
      - hm_khaothi
  khaothi-manager:
    container_name: khaothi-manager
    image: khaothi-manager
    restart: always
    volumes:
      - ./admin:/app
    build: ./admin
    env_file: ./.env
    links:
      - mongo
      - khaothi-resource
    ports:
      - $MANAGER_PORT:$MANAGER_PORT
    environment:
      - MANAGER_HOST=$MANAGER_HOST
      - MANAGER_PORT=$MANAGER_PORT
      - RESOURCE_HOST=khaothi-resource
      - RESOURCE_PORT:$RESOURCE_PORT
      - DB_HOST=mongo
      - DB_NAME=$DB_NAME
      - DB_PORT=$DB_PORT
    networks:
      - hm_khaothi
My Dockerfile
# syntax=docker/dockerfile:1
FROM node:14-alpine
WORKDIR /app
COPY . .
RUN npm install
CMD ["npm", "start"]
This is the error:
(node:37) UnhandledPromiseRejectionWarning: MongooseServerSelectionError: connection timed out
at NativeConnection.Connection.openUri (/app/node_modules/mongoose/lib/connection.js:807:32)
at /app/node_modules/mongoose/lib/index.js:342:10
...
It worked correctly when I added another volume, /app/node_modules: the bind mount ./admin:/app hides the node_modules directory installed in the image, and the extra anonymous volume preserves the container's copy.
khaothi-manager:
  container_name: khaothi-manager
  image: khaothi-manager
  restart: always
  volumes:
    - ./admin:/app
    - /app/node_modules
I would like to implement hot reloading for my development environment, so that when I change anything in the source code, the change is reflected inside the Docker container (by mounting a volume) and I can see it live on localhost.
Below is my docker-compose file:
version: '3.9'
services:
  server:
    restart: always
    build:
      context: ./server
      dockerfile: Dockerfile
    volumes:
      # don't overwrite this folder in the container with the local one
      - ./app/node_modules
      # map the current local directory to /app inside the container.
      # This is a must for development: without it you would have to
      # rebuild the image each time you change the source code.
      - ./server:/app
    # ports:
    #   - 3001:3001
    depends_on:
      - mongodb
    environment:
      NODE_ENV: ${NODE_ENV}
      MONGO_URI: mongodb://${MONGO_ROOT_USERNAME}:${MONGO_ROOT_PASSWORD}@mongodb
    networks:
      - anfel-network
  client:
    stdin_open: true
    build:
      context: ./client
      dockerfile: Dockerfile
    volumes:
      - ./app/node_modules
      - ./client:/app
    # ports:
    #   - 3000:3000
    depends_on:
      - server
    networks:
      - anfel-network
  mongodb:
    image: mongo
    restart: always
    ports:
      - 27017:27017
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_ROOT_USERNAME}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_ROOT_PASSWORD}
    volumes:
      # for persistent storage
      - mongodb-data:/data/db
    networks:
      - anfel-network
  # mongo-express is used during development
  mongo-express:
    image: mongo-express
    depends_on:
      - mongodb
    ports:
      - 8081:8081
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: ${MONGO_ROOT_USERNAME}
      ME_CONFIG_MONGODB_ADMINPASSWORD: ${MONGO_ROOT_PASSWORD}
      ME_CONFIG_MONGODB_PORT: 27017
      ME_CONFIG_MONGODB_SERVER: mongodb
      ME_CONFIG_BASICAUTH_USERNAME: root
      ME_CONFIG_BASICAUTH_PASSWORD: root
    volumes:
      - mongodb-data
    networks:
      - anfel-network
  nginx:
    restart: always
    depends_on:
      - server
      - client
    build:
      context: ./nginx
      dockerfile: Dockerfile
    ports:
      - '8080:80'
    networks:
      - anfel-network
    # volumes:
    #   - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
networks:
  anfel-network:
    driver: bridge
volumes:
  mongodb-data:
    driver: local
Any suggestions would be appreciated.
You have to create a bind mount of your source directory into the container; that is what makes live edits visible for hot reloading.
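A minimal sketch of such a setup, assuming the server service from the question (code in ./server) and a dev npm script that runs the app under a file watcher such as nodemon (the script name is an assumption, not something from the question):

```yaml
server:
  build:
    context: ./server
    dockerfile: Dockerfile
  command: npm run dev      # assumed script, e.g. "nodemon index.js"
  volumes:
    - ./server:/app         # bind mount: host edits show up live in the container
    - /app/node_modules     # anonymous volume: keeps the image's node_modules
```

Mind the leading slash in /app/node_modules: the question's ./app/node_modules is a relative host path, which Docker treats as a bind mount to a host folder rather than as an anonymous volume protecting the dependencies installed at build time.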
Maybe someone can help me.
I have Keycloak, my Node.js server, and Traefik, all installed with docker-compose. Everything seemed fine until I called a route from my frontend to the Node.js API: no matter what I try, I get a 403 every time. When the Node.js server runs outside of Docker, it works, which I find strange.
Here is my Docker Compose file, in case it helps:
version: '3.8'
services:
  mariadb:
    image: mariadb:latest
    container_name: mariadb
    labels:
      - "traefik.enable=false"
    networks:
      - keycloak-network
    environment:
      - MYSQL_ROOT_PASSWORD=
      - MYSQL_DATABASE=
      - MYSQL_USER=
      - MYSQL_PASSWORD=
    command: mysqld --lower_case_table_names=1
    volumes:
      - ./:/docker-entrypoint-initdb.d
  keycloak:
    image: jboss/keycloak
    container_name: keycloak
    labels:
      - "traefik.http.routers.keycloak.rule=Host(`keycloak.localhost`)"
      - "traefik.http.routers.keycloak.tls=true"
    networks:
      - keycloak-network
    environment:
      - DB_DATABASE=
      - DB_USER=
      - DB_PASSWORD=
      - KEYCLOAK_USER=
      - KEYCLOAK_PASSWORD=
      - KEYCLOAK_IMPORT=/tmp/example-realm.json
      - PROXY_ADDRESS_FORWARDING=true
    ports:
      - 8443:8443
    volumes:
      - ./realm-export.json:/tmp/example-realm.json
    depends_on:
      - mariadb
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: phpmyadmin
    labels:
      - "traefik.http.routers.phpmyadmin.rule=Host(`phpmyadmin.localhost`)"
    networks:
      - keycloak-network
    links:
      - mariadb:db
    ports:
      - 8081:80
    depends_on:
      - mariadb
  spectory-backend:
    image: spectory-backend
    container_name: spectory-backend
    labels:
      - "traefik.http.routers.spectory-backend.rule=Host(`api.localhost`)"
      - "traefik.port=4000"
    ports:
      - 4000:4000
    networks:
      - keycloak-network
    depends_on:
      - mariadb
      - keycloak
  spectory-frontend:
    image: spectory-frontend
    container_name: spectory-frontend
    labels:
      - "traefik.http.routers.spectory-frontend.rule=Host(`spectory.localhost`)"
    ports:
      - 4200:80
    depends_on:
      - mariadb
      - keycloak
      - spectory-backend
  traefik-reverse-proxy:
    image: traefik:v2.2
    command:
      - --api.insecure=true
      - --providers.docker
      - --entrypoints.web-secure.address=:443
      - --entrypoints.web.address=:80
      - --providers.file.directory=/configuration/
      - --providers.file.watch=true
    labels:
      - "traefik.http.routers.traefik-reverse-proxy.rule=Host(`traefik.localhost`)"
    ports:
      - "80:80"
      - "443:443"
      - "8082:8080"
    networks:
      - keycloak-network
    volumes:
      - ./traefik.toml:/configuration/traefik.toml
      - /var/run/docker.sock:/var/run/docker.sock
      - ./ssl/tls.key:/etc/https/tls.key
      - ./ssl/tls.crt:/etc/https/tls.crt
networks:
  keycloak-network:
    name: keycloak-network
I also tried static IP addresses for Node.js and Keycloak; that didn't work.
Someone here on Stack Overflow mentioned that using HTTPS would help; that didn't work either.
This is pretty much my situation: Link . My goal is for the API to be reachable through Traefik as well.
By the way, my Angular frontend can communicate with Keycloak, which also runs in Docker, and I can ping the Keycloak container from the Node.js container. The Node.js configuration parameters come directly from Keycloak.
I really don't know what to do next.
Has anyone tried something similar?
I want to create containers for Tomcat, Documentum Content Server, and Documentum xPlore using a single compose file. I am facing issues due to the volumes mentioned in the docker-compose.yml file. I am able to bring up the services by executing the compose files separately; the problem starts when I try to merge them into one. I would like to know how to run multiple containers with volumes using Docker Compose.
Below is the single compose file:
version: '2'
networks:
  default:
    external:
      name: dctmcs_default
services:
  dsearch:
    image: xplore_ubuntu:1.6.0070.0058
    container_name: dsearch
    hostname: dsearch
    ports:
      - "9300:9300"
    volumes:
      - xplore:/root/xPlore/rtdata
  indexagent:
    image: indexagent_ubuntu:1.6.0070.0058
    container_name: indexagent_1
    hostname: indexagent_1
    ports:
      - "9200:9200"
    environment:
      - primary_addr=dsearch
      - docbase_name=centdb
      - docbase_user=dmadmin
      - docbase_password=password
      - broker_host=contentserver
      - broker_port=1689
    depends_on:
      - dsearch
    volumes_from:
      - dsearch

volumes:
  xplore: {}

  tomcat_8:
    image: tomcat_8.0:ccms
    container_name: appserver
    hostname: appserver
    ports:
      - "9090:8080"
  contentserver:
    image: contentserver_ubuntu:7.3.0000.0214
    environment:
      - HIGH_VOLUME_SERVER_LICENSE=
      - TRUSTED_LICNESE=
      - STORAGEAWARE_LICENSE=
      - XMLSTORE_LICENSE=
      - SNAPLOCKSTORE_LICENSE=LDNAPJEWPXQ
      - RPS_LICENSE=
      - FED_RECD_SERVICE_LICENSE=
      - RECORD_MANAGER_LICENSE=
      - PRM_LICENSE=
      - ROOT_USER_PASSWORD=password
      - INSTALL_OWNER_PASSWORD=password
      - INSTALL_OWNER_USER=dmadmin
      - REPOSITORY_PASSWORD=password
      - EXTERNAL_IP=10.114.41.198
      - EXTERNALDB_IP=172.17.0.1
      - EXTERNALDB_ADMIN_USER=postgres
      - EXTERNALDB_ADMIN_PASSWORD=password
      - DB_SERVER_PORT=5432
      - DOCBASE_ID=45321
      - DOCBASE_NAME=centdb
      - USE_EXISTING_DATABASE_ACCOUNT=false
      - INDEXSPACE_NAME=dm_repo_docbase
      - BOF_REGISTRY_USER_PASSWORD=password
      - AEK_ALGORITHM=AES_256_CBC
      - AEK_PASSPHRASE=${AEK_PASSPHRASE}
      - AEK_NAME=aek.key
      - ENABLE_LOCKBOX=false
      - LOCKBOX_FILE_NAME=lockbox.lb
      - LOCKBOX_PASSPHRASE=${LOCKBOX_PASSPHRASE}
      - USE_EXISTING_AEK_LOCKBOX=false
      - CONFIGURE_THUMBNAIL_SERVER=NO
      - EXTDOCBROKERPORT=1689
      - CONTENTSERVER_PORT=50000
      - APP_SERVER_ADMIN_PASSWORD=jboss
      - INSTALL_OWNER_UID=
    hostname: "contentserver"
    container_name: "contentserver"
    ports:
      - "1689:1689"
      - "1690:1690"
      - "50000:50000"
      - "50001:50001"
      - "9080:9080"
      - "9082:9082"
      - "9081:9081"
      - "8081:8081"
      - "8443:8443"
      - "9084:9084"
    volumes:
      - centdb_odbc:/opt/dctm/odbc
      - centdb_data:/opt/dctm/data
      - centdb_dba:/opt/dctm/dba
      - centdb_share:/opt/dctm/share
      - centdb_dfc:/opt/dctm/config
      - centdb_xhive_storage:/opt/dctm/xhive_storage
      - centdb_XhiveConnector:/opt/dctm/wildfly9.0.1/server/DctmServer_MethodServer/deployments/XhiveConnector.ear
      - centdb_mdserver_conf:/opt/dctm/mdserver_conf
      - centdb_mdserver_log:/opt/dctm/wildfly9.0.1/server/DctmServer_MethodServer/log
      - centdb_mdserver_logs:/opt/dctm/wildfly9.0.1/server/DctmServer_MethodServer/logs
      - centdb_Thumbnail_Server_conf:/opt/dctm/product/7.3/thumbsrv/conf
      - centdb_Thumbnail_Server_webinf:/opt/dctm/product/7.3/thumbsrv/container/webapps/thumbsrv/WEB-INF
    privileged: true
volumes:
  centdb_data:
    driver: local
  centdb_dba:
  centdb_share:
    driver: local
  centdb_dfc:
  centdb_odbc:
  centdb_XhiveConnector:
  centdb_mdserver_conf:
  centdb_mdserver_log:
  centdb_mdserver_logs:
  centdb_Thumbnail_Server_conf:
  centdb_Thumbnail_Server_webinf:
  centdb_xhive_storage:
This error often appears when you are trying to create a volume as a subfolder of your current host folder. In that case, the syntax would have to be:
volumes:
  - ./centdb_odbc:/opt/dctm/odbc
In other words: The relative path "./" is missing!
When you map a directory, the source part must be either an absolute path, or a relative part that begins with ./ or ../. Otherwise, Docker interprets it as a Named Volume.
So instead of

volumes:
  - xplore:/root/xPlore/rtdata

you should write:

volumes:
  - ./xplore:/root/xPlore/rtdata
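To make the two forms concrete, here is a short sketch (service and volume names borrowed from the question; the layout is illustrative):

```yaml
services:
  dsearch:
    volumes:
      # bind mount: relative host path, must start with ./ or ../
      - ./xplore:/root/xPlore/rtdata
  indexagent:
    volumes:
      # named volume: a bare name, must be declared under the top-level volumes key
      - xplore:/root/xPlore/rtdata

volumes:
  xplore: {}
```

Both containers using the named volume share the same data, whereas each bind mount points at a host folder.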
The top-level volumes key should appear only once, at the end of the compose file, and declare the volume names of all services together. Merge them into a single block as below and run docker-compose again; it will create the containers.
volumes:
  xplore: {}
  centdb_data:
    driver: local
  centdb_dba:
  centdb_share:
    driver: local
  centdb_dfc:
  centdb_odbc:
  centdb_XhiveConnector:
  centdb_mdserver_conf:
  centdb_mdserver_log:
  centdb_mdserver_logs:
  centdb_Thumbnail_Server_conf:
  centdb_Thumbnail_Server_webinf:
  centdb_xhive_storage:
If I run the command docker-compose build, I get an error that looks like this:
ERROR: Validation failed in file './docker-compose.yml', reason(s):
Service 'php' configuration key 'expose' '0' is invalid: should be of
the format 'PORT[/PROTOCOL]'
I'm using the latest versions of Docker and docker-compose.
My docker-compose.yml contains the following:
application:
  build: code
  volumes:
    - ./symfony:/var/www/symfony
    - ./logs/symfony:/var/www/symfony/app/logs
  tty: true
db:
  image: mysql
  ports:
    - 3306:3306
  environment:
    MYSQL_ROOT_PASSWORD: root
    MYSQL_DATABASE: symfony
    MYSQL_USER: root
    MYSQL_PASSWORD: root
php:
  build: php-fpm
  expose:
    - 9000:9000
  volumes_from:
    - application
  links:
    - db
nginx:
  build: nginx
  ports:
    - 80:80
  links:
    - php
  volumes_from:
    - application
  volumes:
    - ./logs/nginx/:/var/log/nginx
elk:
  image: willdurand/elk
  ports:
    - 81:80
  volumes:
    - ./elk/logstash:/etc/logstash
    - ./elk/logstash/patterns:/opt/logstash/patterns
  volumes_from:
    - application
    - php
    - nginx
I'm on Ubuntu 14.04.
Could you tell me how to fix it?
You need to put the port definitions in quotes. YAML can interpret an unquoted number:number value as something other than a string; in particular, YAML 1.1 reads values whose parts after the colon are below 60 as base-60 (sexagesimal) integers, which then trips up docker-compose's validation. This is a consequence of the nature of YAML and the parser docker-compose uses.
application:
  build: code
  volumes:
    - ./symfony:/var/www/symfony
    - ./logs/symfony:/var/www/symfony/app/logs
  tty: true
db:
  image: mysql
  ports:
    - "3306:3306"
  environment:
    MYSQL_ROOT_PASSWORD: root
    MYSQL_DATABASE: symfony
    MYSQL_USER: root
    MYSQL_PASSWORD: root
php:
  build: php-fpm
  expose:
    - "9000"
  volumes_from:
    - application
  links:
    - db
nginx:
  build: nginx
  ports:
    - "80:80"
  links:
    - php
  volumes_from:
    - application
  volumes:
    - ./logs/nginx/:/var/log/nginx
elk:
  image: willdurand/elk
  ports:
    - "81:80"
  volumes:
    - ./elk/logstash:/etc/logstash
    - ./elk/logstash/patterns:/opt/logstash/patterns
  volumes_from:
    - application
    - php
    - nginx
Also, the expose statement should contain a single port number only (no host:container mapping), and it should be quoted as well.
All the needed changes are included above.
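The base-60 behaviour can be illustrated with a small sketch. This mirrors YAML 1.1's integer resolution rule for values whose parts after a colon fall between 0 and 59; it is not a real YAML parser:

```javascript
// YAML 1.1 resolves an unquoted scalar such as 22:22 as a base-60
// (sexagesimal) integer rather than a string, when every component
// after a colon is in the range 0-59.
function yamlSexagesimal(scalar) {
  // fold the colon-separated parts left to right, base 60
  return scalar.split(":").reduce((acc, part) => acc * 60 + Number(part), 0);
}

console.log(yamlSexagesimal("1:30"));  // 90, not the string "1:30"
console.log(yamlSexagesimal("22:22")); // 1342
```

Quoting the value ("22:22") forces the parser to keep it as a string, which is what docker-compose expects for a port mapping.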