Unable to run Kibana with docker-compose - Linux

I am trying to run Kibana with Open Distro for Elasticsearch using the following docker-compose file:
version: '3'
services:
  odfe-node1:
    image: amazon/opendistro-for-elasticsearch:1.11.0
    container_name: odfe-node1
    environment:
      - cluster.name=odfe-cluster
      - node.name=odfe-node1
      - discovery.seed_hosts=odfe-node1
      - cluster.initial_master_nodes=odfe-node1
      - bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536 # maximum number of open files for the Elasticsearch user, set to at least 65536 on modern systems
        hard: 65536
    volumes:
      - odfe-data1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9600:9600 # required for Performance Analyzer
    networks:
      - odfe-net
  kibana:
    image: amazon/opendistro-for-elasticsearch-kibana:1.11.0
    container_name: odfe-kibana
    ports:
      - 5601:5601
    expose:
      - "5601"
    environment:
      ELASTICSEARCH_URL: https://odfe-node1:9200
      ELASTICSEARCH_HOSTS: https://odfe-node1:9200
    networks:
      - odfe-net
volumes:
  odfe-data1:
networks:
  odfe-net:
After running the above file with
docker-compose up
I get the following output:
Starting odfe-kibana ... done
Starting odfe-node1 ... done
Attaching to odfe-kibana, odfe-node1
odfe-node1 | OpenDistro for Elasticsearch Security Demo Installer
odfe-node1 | ** Warning: Do not use on production or public reachable systems **
odfe-node1 | Basedir: /usr/share/elasticsearch
odfe-node1 | Elasticsearch install type: rpm/deb on CentOS Linux release 7.8.2003 (Core)
odfe-node1 | Elasticsearch config dir: /usr/share/elasticsearch/config
odfe-node1 | Elasticsearch config file: /usr/share/elasticsearch/config/elasticsearch.yml
odfe-node1 | Elasticsearch bin dir: /usr/share/elasticsearch/bin
odfe-node1 | Elasticsearch plugins dir: /usr/share/elasticsearch/plugins
odfe-node1 | Elasticsearch lib dir: /usr/share/elasticsearch/lib
odfe-node1 | Detected Elasticsearch Version: x-content-7.9.1
odfe-node1 | Detected Open Distro Security Version: 1.11.0.0
odfe-node1 | /usr/share/elasticsearch/config/elasticsearch.yml seems to be already configured for Security. Quit.
odfe-node1 | Unlinking stale socket /usr/share/supervisor/performance_analyzer/supervisord.sock
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:12Z","tags":["warning","plugins-discovery"],"pid":1,"message":"Expect plugin \"id\" in camelCase, but found: opendistro-notebooks-kibana"}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:22Z","tags":["info","plugins-service"],"pid":1,"message":"Plugin \"telemetryManagementSection\" has been disabled since the following direct or transitive dependencies are missing or disabled: [telemetry]"}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:22Z","tags":["info","plugins-service"],"pid":1,"message":"Plugin \"newsfeed\" is disabled."}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:22Z","tags":["info","plugins-service"],"pid":1,"message":"Plugin \"telemetry\" is disabled."}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:22Z","tags":["info","plugins-service"],"pid":1,"message":"Plugin \"visTypeXy\" is disabled."}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:26Z","tags":["warning","legacy-service"],"pid":1,"message":"Some installed third party plugin(s) [opendistro-alerting, opendistro-anomaly-detection-kibana, opendistro_index_management_kibana, opendistro-query-workbench] are using the legacy plugin format and will no longer work in a future Kibana release. Please refer to https://ela.st/kibana-breaking-changes-8-0 for a list of breaking changes and https://ela.st/kibana-platform-migration for documentation on how to migrate legacy plugins."}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:27Z","tags":["info","plugins-system"],"pid":1,"message":"Setting up [38] plugins: [usageCollection,telemetryCollectionManager,kibanaUsageCollection,kibanaLegacy,mapsLegacy,timelion,share,legacyExport,esUiShared,bfetch,expressions,data,home,console,apmOss,management,indexPatternManagement,advancedSettings,savedObjects,opendistroSecurity,visualizations,visualize,visTypeVega,visTypeTimelion,visTypeTable,visTypeMarkdown,tileMap,inputControlVis,regionMap,dashboard,opendistro-notebooks-kibana,charts,visTypeVislib,visTypeTimeseries,visTypeTagcloud,visTypeMetric,discover,savedObjectsManagement]"}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:28Z","tags":["info","savedobjects-service"],"pid":1,"message":"Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations..."}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:29Z","tags":["error","elasticsearch","data"],"pid":1,"message":"Request error, retrying\nGET https://odfe-node1:9200/_nodes?filter_path=nodes.*.version%2Cnodes.*.http.publish_address%2Cnodes.*.ip => connect ECONNREFUSED 172.19.0.3:9200"}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:31Z","tags":["warning","elasticsearch","data"],"pid":1,"message":"Unable to revive connection: https://odfe-node1:9200/"}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:32Z","tags":["warning","elasticsearch","data"],"pid":1,"message":"No living connections"}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:32Z","tags":["error","savedobjects-service"],"pid":1,"message":"Unable to retrieve version information from Elasticsearch nodes."}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:33Z","tags":["warning","elasticsearch","data"],"pid":1,"message":"Unable to revive connection: https://odfe-node1:9200/"}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:33Z","tags":["warning","elasticsearch","data"],"pid":1,"message":"No living connections"}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:35Z","tags":["warning","elasticsearch","data"],"pid":1,"message":"Unable to revive connection: https://odfe-node1:9200/"}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:35Z","tags":["warning","elasticsearch","data"],"pid":1,"message":"No living connections"}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:37Z","tags":["warning","elasticsearch","data"],"pid":1,"message":"Unable to revive connection: https://odfe-node1:9200/"}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:37Z","tags":["warning","elasticsearch","data"],"pid":1,"message":"No living connections"}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:40Z","tags":["warning","elasticsearch","data"],"pid":1,"message":"Unable to revive connection: https://odfe-node1:9200/"}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:41Z","tags":["warning","elasticsearch","data"],"pid":1,"message":"No living connections"}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:42Z","tags":["warning","elasticsearch","data"],"pid":1,"message":"Unable to revive connection: https://odfe-node1:9200/"}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:42Z","tags":["warning","elasticsearch","data"],"pid":1,"message":"No living connections"}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:45Z","tags":["warning","elasticsearch","data"],"pid":1,"message":"Unable to revive connection: https://odfe-node1:9200/"}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:45Z","tags":["warning","elasticsearch","data"],"pid":1,"message":"No living connections"}
odfe-kibana exited with code 137
odfe-node1 exited with code 137
I don't know why I always get this exit status when only these two services are running with docker-compose.
So if anyone has had the same issue or can help, please feel free to share.

Exit code 137 means the container was killed with SIGKILL (137 = 128 + 9), which in Docker usually indicates it was terminated because it ran out of memory.
Try adding
  ulimits:
    memlock:
      soft: -1
      hard: -1
to the Kibana service as well.
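For reference, a sketch of the kibana service with those ulimits added (the image, ports, and environment are copied from the compose file in the question; the memlock values mirror the odfe-node1 service):

  kibana:
    image: amazon/opendistro-for-elasticsearch-kibana:1.11.0
    container_name: odfe-kibana
    ulimits:
      memlock:
        soft: -1  # unlimited locked memory, same as odfe-node1
        hard: -1
    ports:
      - 5601:5601
    expose:
      - "5601"
    environment:
      ELASTICSEARCH_URL: https://odfe-node1:9200
      ELASTICSEARCH_HOSTS: https://odfe-node1:9200
    networks:
      - odfe-net

If both containers still exit with 137 after that, it is also worth checking how much memory is available to Docker itself (on Docker Desktop the VM's memory limit is a common culprit), since the 1g Elasticsearch heap plus Kibana can exceed a small limit.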

Related

P1001: Can't reach database server at `localhost`:`5432` error [duplicate]

This question already has answers here:
Docker - Can't reach database server at `localhost`:`3306` with Prisma service
(2 answers)
ECONNREFUSED for Postgres on nodeJS with dockers
(7 answers)
I am trying to build a Docker image from my project and run it in a container.
The project is a Keystone 6 project connecting to a Postgres database. Everything works well when I run the project normally, and it connects to the database successfully.
Here is my Dockerfile:
FROM node:18.13.0-alpine3.16
ENV NODE_VERSION 18.13.0
ENV NODE_ENV=development
LABEL Name="di-wrapp" Version="1"
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . .
RUN npm install
COPY .env .
EXPOSE 9999
CMD ["npm", "run", "dev"]
I build the image using the command docker build -t di-wrapp:1.0 .
After that I run a docker-compose file which contains the following:
version: "3.8"
services:
postgres:
image: postgres:15-alpine
container_name: localhost
restart: always
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- POSTGRES_DB=di_wrapp
ports:
- "5432:5432"
volumes:
- postgres-data:/var/lib/postgresql/data
dashboard:
image: di-wrapp:1.0
container_name: di-wrapp-container
restart: always
environment:
- DB_CONNECTION=postgres
- DB_PORT=5432
- DB_HOST=localhost
- DB_USER=postgres
- DB_PASSWORD=postgres
- DB_NAME=di_wrapp
tty: true
depends_on:
- postgres
ports:
- 8273:9999
links:
- postgres
command: "npm run dev"
volumes:
- /usr/src/app
volumes:
postgres-data:
And this is the connection URI used to connect my project to postgres:
DATABASE_URL=postgresql://postgres:postgres@localhost:5432/di_wrapp
which I use to configure the db setting in my Keystone config file like this:
export const db: DatabaseConfig<BaseKeystoneTypeInfo> = {
  provider: "postgresql",
  url: String(DATABASE_URL!),
};
When I run the command docker-compose -f docker-compose.yaml up, this is what I receive:
localhost | 2023-02-03 13:43:35.034 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
localhost | 2023-02-03 13:43:35.034 UTC [1] LOG: listening on IPv6 address "::", port 5432
localhost | 2023-02-03 13:43:35.067 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
localhost | 2023-02-03 13:43:35.121 UTC [24] LOG: database system was shut down at 2023-02-03 13:43:08 UTC
localhost | 2023-02-03 13:43:35.155 UTC [1] LOG: database system is ready to accept connections
di-wrapp-container | > keystone-app#1.0.2 dev
di-wrapp-container | > keystone dev
di-wrapp-container |
di-wrapp-container | ✨ Starting Keystone
di-wrapp-container | ⭐️ Server listening on :8273 (http://localhost:8273/)
di-wrapp-container | ⭐️ GraphQL API available at /api/graphql
di-wrapp-container | ✨ Generating GraphQL and Prisma schemas
di-wrapp-container | Error: P1001: Can't reach database server at `localhost`:`5432`
di-wrapp-container |
di-wrapp-container | Please make sure your database server is running at `localhost`:`5432`.
di-wrapp-container | at Object.createDatabase (/usr/src/app/node_modules/#prisma/internals/dist/migrateEngineCommands.js:115:15)
di-wrapp-container | at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
di-wrapp-container | at async ensureDatabaseExists (/usr/src/app/node_modules/#keystone-6/core/dist/migrations-e3b5740b.cjs.dev.js:262:19)
di-wrapp-container | at async Object.pushPrismaSchemaToDatabase (/usr/src/app/node_modules/#keystone-6/core/dist/migrations-e3b5740b.cjs.dev.js:68:3)
di-wrapp-container | at async Promise.all (index 1)
di-wrapp-container | at async setupInitialKeystone (/usr/src/app/node_modules/#keystone-6/core/scripts/cli/dist/keystone-6-core-scripts-cli.cjs.dev.js:984:3)
di-wrapp-container | at async initKeystone (/usr/src/app/node_modules/#keystone-6/core/scripts/cli/dist/keystone-6-core-scripts-cli.cjs.dev.js:762:35)
di-wrapp-container exited with code 1
Even though the logs show that the database server is up and listening on port 5432, my app container can't connect to it.
Any help is appreciated.
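For what it's worth, a common cause in setups like this is that DB_HOST=localhost (and localhost in DATABASE_URL) refers to the app container itself, not to the Postgres container; containers on the same Compose network reach each other by service name. A sketch of the relevant part of the dashboard service under that assumption, reusing the credentials and database name from the question and the service name postgres as the host:

  dashboard:
    image: di-wrapp:1.0
    environment:
      - DB_CONNECTION=postgres
      - DB_HOST=postgres   # Compose service name instead of localhost
      - DB_PORT=5432
      - DB_USER=postgres
      - DB_PASSWORD=postgres
      - DB_NAME=di_wrapp
      # DATABASE_URL would change accordingly, whether set here or in .env:
      - DATABASE_URL=postgresql://postgres:postgres@postgres:5432/di_wrapp
    depends_on:
      - postgres
    ports:
      - 8273:9999

(The container_name: localhost on the postgres service does not change this; localhost inside a container always resolves to that container itself.)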

How do I set up docker-compose.yml, Express and TypeORM to connect my Express server to a Postgres db inside the container?

I am new to writing my own docker-compose.yml files because I previously had coworkers around to write them for me. Not the case now.
I am trying to connect an Express server with TypeORM to a Postgres database inside of Docker containers.
I see there are two separate containers running: One for the express server and another for postgres.
I expect the port 5432 to be sufficient to direct container A to connect to container B. I am wrong. It's not.
I read somewhere that 0.0.0.0 refers to the host machine's actual network, not the internal network of the containers. Hence I am trying to change 0.0.0.0 in the Postgres output to something else.
Here's what I have so far:
docker-compose.yml
version: "3.1"
services:
app:
container_name: express_be
build:
context: .
dockerfile: Dockerfile.dev
volumes:
- ./src:/app/src
depends_on:
- postgres_db
ports:
- "8000:8000"
networks:
- efinternal
postgres_db:
container_name: postgres_be
image: postgres:15.1
restart: always
environment:
- POSTGRES_USERNAME=postgres
- POSTGRES_PASSWORD=postgres
ports:
- "5432:5432"
volumes:
- postgresDB:/var/lib/postgresql/data
networks:
- efinternal
volumes:
postgresDB:
driver: local
networks:
efinternal:
driver: bridge
Here's my console log output
postgres_be | PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres_be |
postgres_be | 2022-12-13 04:09:51.628 UTC [1] LOG: starting PostgreSQL 15.1 (Debian 15.1-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
postgres_be | 2022-12-13 04:09:51.628 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 // look here
postgres_be | 2022-12-13 04:09:51.628 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres_be | 2022-12-13 04:09:51.636 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_be | 2022-12-13 04:09:51.644 UTC [28] LOG: database system was shut down at 2022-12-13 04:09:48 UTC
postgres_be | 2022-12-13 04:09:51.650 UTC [1] LOG: database system is ready to accept connections
express_be |
express_be | > efinternal#1.0.1 dev
express_be | > nodemon ./src/server.ts
express_be |
express_be | [nodemon] 2.0.20
express_be | [nodemon] starting `ts-node ./src/server.ts`
express_be | /task ... is running
express_be | /committee ... is running
express_be | App has started on port 8000
express_be | Database connection failed Failed to create connection with database
express_be | Error: getaddrinfo ENOTFOUND efinternal // look here
express_be | at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:107:26) {
express_be | errno: -3008,
express_be | code: 'ENOTFOUND',
express_be | syscall: 'getaddrinfo',
express_be | hostname: 'efinternal'
express_be | }
Not sure what to say about it. What I'm trying to do is to change the line " listening on IPv4 address "0.0.0.0", port 5432" to say "listening on ... efinternal, port 5432" so that "Error: getaddrinfo ENOTFOUND efinternal" will have somewhere to point to.
This link seemed to have the answer but I couldn't decipher it.
Same story here, I can't decipher what I'm doing wrong.
A helpful person told me about "external_links" which sounds like the wrong tool for the job: "Link to containers started outside this docker-compose.yml or even outside of Compose"
I suspect this is The XY Problem where I'm trying to Y (create an "efinternal" network in my containers) so I can X (connect express to postgres) when in fact Y is the wrong solution to X.
In case it matters, here's my TypeORM Postgres connection settings:
export const AppDataSource = new DataSource({
  type: "postgres",
  host: "efinternal",
  port: 5432,
  username: "postgres",
  password: "postgres",
  database: "postgres",
  synchronize: true,
  logging: true,
  entities: [User, Committee, Task, OnboardingStep, RefreshToken, PasswordToken],
  subscribers: [],
  migrations: [],
});
The special IPv4 address 0.0.0.0 means "all interfaces", in whichever context it's invoked in. In a Docker container you almost always want to listen to 0.0.0.0, which will listen to all "interfaces" in the isolated Docker container network environment (not the host network environment). So you don't need to change the PostgreSQL configuration here.
If you read through Networking in Compose in the Docker documentation, you'll find that each container is addressable by its Compose service name. The network name itself is not usable as a host name.
I'd strongly suggest making your database location configurable. Even in this setup, it will have a different hostname in your non-Docker development environment (localhost) vs. running in a container (postgres_db).
export const AppDataSource = new DataSource({
  host: process.env.PGHOST || 'localhost',
  ...
});
Then in your Compose setup, you can specify that environment-variable setting.
version: '3.8'
services:
  app:
    build: .  # and name the Dockerfile just "Dockerfile"
    depends_on: [postgres_db]
    ports: ['8000:8000']
    environment:  # add
      PGHOST: postgres_db
  postgres_db: {...}
Compose also provides you with a network named default, and for simplicity you can delete all of the networks: blocks in the entire Compose file. You also do not need to manually specify container_name:, and I'd avoid overwriting the image's code with volumes:.
This setup also supports using Docker for your database but plain Node for ordinary development. (Also see near-daily SO questions about live reloading not working in Docker.) You can start only the database, and since the code defaults the database location to a developer-friendly localhost, you can just develop as normal.
docker-compose up -d postgres_db
yarn dev
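Putting those pieces together, a trimmed-down Compose file might look something like this sketch (image, ports, and credentials are taken from the question; PGHOST matches the environment variable used in the snippet above; note that the official postgres image expects POSTGRES_USER rather than POSTGRES_USERNAME):

version: '3.8'
services:
  app:
    build: .
    depends_on: [postgres_db]
    ports: ['8000:8000']
    environment:
      PGHOST: postgres_db   # Compose service name of the database
  postgres_db:
    image: postgres:15.1
    restart: always
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    ports: ['5432:5432']
    volumes:
      - postgresDB:/var/lib/postgresql/data
volumes:
  postgresDB:
    driver: local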

Docker-compose up exited with code 1 but successful docker-compose build

When trying to docker-compose up, I get errors from the frontend and backend where they exit with code 1. docker ps shows that the postgres container is still running, but the frontend and backend still exit. Using npm start, there are no errors. I don't know if this helps, but my files do not copy from my src folder to /usr/src/app/, so maybe there is an error in my docker-compose or Dockerfiles.
Docker ps shows:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
509208b2243b postgres:latest "docker-entrypoint.s…" 14 hours ago Up 11 minutes 0.0.0.0:5432->5432/tcp example_db_1
docker-compose.yml
version: '3'
services:
  frontend:
    build: ./frontend
    volumes:
      - ./data/nginx:/etc/nginx/conf.d
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
      - ./frontend/build:/usr/share/nginx/html
    ports:
      - 80:80
      - 443:443
    depends_on:
      - backend
  backend:
    build: ./backend
    volumes:
      - ./backend/src:/usr/src/app/src
      - ./data/certbot/conf:/etc/letsencrypt
    ports:
      - 3000:3000
    depends_on:
      - db
  db:
    image: postgres:latest
    restart: always
    environment:
      POSTGRES_USER: example
      POSTGRES_PASSWORD: example1234
      POSTGRES_DB: example
    ports:
      - 5432:5432
  certbot:
    image: certbot/certbot
    volumes:
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    # Automatic certificate renewal
    # entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
This is what the backend Dockerfile looks like.
FROM node:current-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app/
COPY package*.json /usr/src/app/
RUN npm install
COPY . /usr/src/app/
EXPOSE 3000
ENV NODE_ENV=production
CMD ["npm", "start"]
And the output error:
example_db_1 is up-to-date
Starting example_certbot_1 ... done
Starting example_backend_1 ... done
Starting example_frontend_1 ... done
Attaching to example_db_1, example_certbot_1, example_backend_1, example_frontend_1
backend_1 |
backend_1 | > example-backend#1.0.0 start /usr/src/app
backend_1 | > npx tsc; node ./out/
backend_1 |
certbot_1 | Saving debug log to /var/log/letsencrypt/letsencrypt.log
certbot_1 | Certbot doesn't know how to automatically configure the web server on this system. However, it can still get a certificate for you. Please run "certbot certonly" to do so. You'll need to manually configure your web server to use the resulting certificate.
frontend_1 | 2020/02/13 11:35:59 [emerg] 1#1: open() "/etc/letsencrypt/options-ssl-nginx.conf" failed (2: No such file or directory) in /etc/nginx/conf.d/app.conf:21
frontend_1 | nginx: [emerg] open() "/etc/letsencrypt/options-ssl-nginx.conf" failed (2: No such file or directory) in /etc/nginx/conf.d/app.conf:21
db_1 | The files belonging to this database system will be owned by user "postgres".
db_1 | This user must also own the server process.
db_1 |
db_1 | The database cluster will be initialized with locale "en_US.utf8".
db_1 | The default database encoding has accordingly been set to "UTF8".
db_1 | The default text search configuration will be set to "english".
db_1 |
db_1 | Data page checksums are disabled.
db_1 |
db_1 | fixing permissions on existing directory /var/lib/postgresql/data ... ok
db_1 | creating subdirectories ... ok
db_1 | selecting dynamic shared memory implementation ... posix
db_1 | selecting default max_connections ... 100
db_1 | selecting default shared_buffers ... 128MB
db_1 | selecting default time zone ... Etc/UTC
db_1 | creating configuration files ... ok
db_1 | running bootstrap script ... ok
db_1 | performing post-bootstrap initialization ... ok
db_1 | syncing data to disk ... ok
db_1 |
db_1 | initdb: warning: enabling "trust" authentication for local connections
db_1 | You can change this by editing pg_hba.conf or using the option -A, or
db_1 | --auth-local and --auth-host, the next time you run initdb.
db_1 |
db_1 | Success. You can now start the database server using:
db_1 |
db_1 | pg_ctl -D /var/lib/postgresql/data -l logfile start
db_1 |
db_1 | waiting for server to start....2020-02-12 21:51:40.137 UTC [43] LOG: starting PostgreSQL 12.1 (Debian 12.1-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db_1 | 2020-02-12 21:51:40.147 UTC [43] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2020-02-12 21:51:40.229 UTC [44] LOG: database system was shut down at 2020-02-12 21:51:39 UTC
db_1 | 2020-02-12 21:51:40.240 UTC [43] LOG: database system is ready to accept connections
db_1 | done
db_1 | server started
db_1 | CREATE DATABASE
db_1 |
db_1 |
db_1 | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
db_1 |
db_1 | 2020-02-12 21:51:40.606 UTC [43] LOG: received fast shutdown request
db_1 | waiting for server to shut down....2020-02-12 21:51:40.608 UTC [43] LOG: aborting any
active transactions
db_1 | 2020-02-12 21:51:40.614 UTC [43] LOG: background worker "logical replication launcher" (PID 50) exited with exit code 1
db_1 | 2020-02-12 21:51:40.614 UTC [45] LOG: shutting down
db_1 | 2020-02-12 21:51:40.652 UTC [43] LOG: database system is shut down
db_1 | done
db_1 | server stopped
db_1 |
db_1 | PostgreSQL init process complete; ready for start up.
db_1 |
db_1 | 2020-02-12 21:51:40.728 UTC [1] LOG: starting PostgreSQL 12.1 (Debian 12.1-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db_1 | 2020-02-12 21:51:40.729 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2020-02-12 21:51:40.729 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2020-02-12 21:51:40.748 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2020-02-12 21:51:40.788 UTC [61] LOG: database system was shut down at 2020-02-12 21:51:40 UTC
db_1 | 2020-02-12 21:51:40.799 UTC [1] LOG: database system is ready to accept connections
db_1 | 2020-02-13 09:51:41.562 UTC [787] LOG: invalid length of startup packet
db_1 | 2020-02-13 11:09:27.384 UTC [865] FATAL: password authentication failed for user "postgres"
db_1 | 2020-02-13 11:09:27.384 UTC [865] DETAIL: Role "postgres" does not exist.
db_1 | Connection matched pg_hba.conf line 95: "host all all all md5"
db_1 | 2020-02-13 11:32:18.771 UTC [1] LOG: received smart shutdown request
db_1 | 2020-02-13 11:32:18.806 UTC [1] LOG: background worker "logical replication launcher"
(PID 67) exited with exit code 1
db_1 | 2020-02-13 11:32:18.806 UTC [62] LOG: shutting down
db_1 | 2020-02-13 11:32:18.876 UTC [1] LOG: database system is shut down
db_1 |
db_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
db_1 |
db_1 | 2020-02-13 11:33:01.343 UTC [1] LOG: starting PostgreSQL 12.1 (Debian 12.1-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db_1 | 2020-02-13 11:33:01.343 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2020-02-13 11:33:01.343 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2020-02-13 11:33:01.355 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2020-02-13 11:33:01.427 UTC [23] LOG: database system was shut down at 2020-02-13 11:32:18 UTC
db_1 | 2020-02-13 11:33:01.466 UTC [1] LOG: database system is ready to accept connections
example_certbot_1 exited with code 1
example_frontend_1 exited with code 1
backend_1 | Authenticating with database...
backend_1 | internal/fs/utils.js:220
backend_1 | throw err;
backend_1 | ^
backend_1 |
backend_1 | Error: ENOENT: no such file or directory, open '/etc/letsencrypt/live/example.org/privkey.pem'
backend_1 | at Object.openSync (fs.js:440:3)
backend_1 | at Object.readFileSync (fs.js:342:35)
backend_1 | at Object.<anonymous> (/usr/src/app/out/index.js:68:23)
backend_1 | at Module._compile (internal/modules/cjs/loader.js:955:30)
backend_1 | at Object.Module._extensions..js (internal/modules/cjs/loader.js:991:10)
backend_1 | syscall: 'open', 811:32)
backend_1 | code: 'ENOENT', loader.js:723:14)
backend_1 | path: '/etc/letsencrypt/live/example.org/s/loader.js:1043:10)privkey.pem'
backend_1 | }
backend_1 | npm ERR! code ELIFECYCLE
backend_1 | npm ERR! errno 1
backend_1 | npm ERR! example-backend#1.0.0 start: `npx tsc; noprivkey.pem'de ./out/`
backend_1 | npm ERR! Exit status 1
backend_1 | npm ERR!
backend_1 | npm ERR! Failed at the example-backend#1.0.0 startde ./out/` script.
backend_1 | npm ERR! This is probably not a problem with npm. There is likely additional logging output above. script.
backend_1 | here is likely additional logging ou
backend_1 | npm ERR! A complete log of this run can be found in:
backend_1 | npm ERR! /root/.npm/_logs/2020-02-13T11_36_10_3:30Z-debug.log 30Z-debug.log
example_backend_1 exited with code 1
There are no errors with certbot when run outside this project.
Directory structure:
src/
- docker-compose.yml
- init.letsencrypt.sh
- .gitignore
backend
src
- Dockerfile
- package.json
- .gitignore
data
nginx
- app.conf
frontend
src
- Dockerfile
- package.json
- .gitignore
Any help would be appreciated, thanks.
Updated nginx.conf:
server {
  listen 80;
  server_name example.org;
  location / {
    root /var/www/html/;
    index index.html;
    autoindex on;
  }
  location /frontend {
    proxy_pass http://example.org:8080;
    try_files $uri /public/index.html;
  }
  location /backend {
    proxy_pass http://example.org:3000;
  }
  location /db {
    proxy_pass http://example.org:5432;
  }
}
New error after changing .gitignore:
frontend_1 | 2020/02/13 16:34:58 [emerg] 1#1: cannot load certificate "/etc/letsencrypt/live/example.org/fullchain.pem":
BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/letsencrypt/live/example.org/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)
frontend_1 | nginx: [emerg] cannot load certificate "/etc/letsencrypt/live/example.org/fullchain.pem": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/letsencrypt/live/example.org/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)
example_frontend_1 exited with code 1
The setup seems very complicated. My advice: try to reduce the complexity by not running certbot as its own Docker container.
# docker-compose.yml
version: '3'
services:
  frontend:
    build: ./frontend
    volumes:
      - ./data/nginx:/etc/nginx/conf.d
      # no source code of the frontend via volumes -
      # only an nginx image with your source.
      # The nginx conf as a volume is valid.
    ports:
      - 8080:80
    depends_on:
      - backend
  backend:
    build: ./backend
    # don't put your source in as a volume;
    # your Docker image should contain the whole code,
    # and no certbot magic here
    ports:
      - 3000:3000
    depends_on:
      - db
  db:
    image: postgres:latest
    restart: always
    environment:
      POSTGRES_USER: example
      POSTGRES_PASSWORD: example1234
      POSTGRES_DB: example
    ports:
      - 5432:5432
This is much cleaner and easier to read. Now you should set up a reverse proxy on your host machine (easy with nginx) and point it at your published ports (for example, proxy_pass http://localhost:8080; for your frontend).
After that you can install certbot and obtain your Let's Encrypt certificates. Certbot should discover your nginx endpoints automatically and can renew your certificates automatically.

Docker-compose NodeJS + MongoDB doesn't connect successfully

I am testing NodeJS + MongoDB on local macOS with docker-compose, but NodeJS and MongoDB can't connect successfully.
If I don't set up --auth for MongoDB, as in the compose file below, everything works well.
Here's the code:
mongoose connection
mongodb://mongodb:27017/myprojectdatabase
docker-compose.yml
version: "3"
services:
web:
build: .
restart: always
ports:
- "8080:8080"
depends_on:
- mongodb
volumes:
- .:/mycode
mongodb:
image: mongo:latest
ports:
- "27017:27017"
Then, when I enable --auth for MongoDB as below, I get errors.
docker-compose.yml
version: "3"
services:
web:
build: .
restart: always
ports:
- "8080:8080"
depends_on:
- mongodb
volumes:
- .:/mycode
# environment:
# - NODE_ENV=production
mongodb:
image: mongo:latest
command: [--auth]
environment:
MONGO_INITDB_ROOT_USERNAME: my_admin
MONGO_INITDB_ROOT_PASSWORD: my2019
MONGO_INITDB_DATABASE: myprojectdatabase
ports:
- "27017:27017"
volumes:
- ./mydata:/data/db
Then I run
docker-compose down -v && docker-compose up --build
I got the output:
mongodb_1 | 2019-03-01T10:54:09.847+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
mongodb_1 | 2019-03-01T10:54:09.869+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=9554854909b1
mongodb_1 | 2019-03-01T10:54:09.869+0000 I CONTROL [initandlisten] db version v4.0.4
mongodb_1 | 2019-03-01T10:54:09.869+0000 I CONTROL [initandlisten] git version: f288a3bdf201007f3693c58e140056adf8b04839
mongodb_1 | 2019-03-01T10:54:09.869+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.2g 1 Mar 2016
mongodb_1 | 2019-03-01T10:54:09.869+0000 I CONTROL [initandlisten] allocator: tcmalloc
mongodb_1 | 2019-03-01T10:54:09.869+0000 I CONTROL [initandlisten] modules: none
mongodb_1 | 2019-03-01T10:54:09.869+0000 I CONTROL [initandlisten] build environment:
mongodb_1 | 2019-03-01T10:54:09.869+0000 I CONTROL [initandlisten] distmod: ubuntu1604
mongodb_1 | 2019-03-01T10:54:09.869+0000 I CONTROL [initandlisten] distarch: x86_64
mongodb_1 | 2019-03-01T10:54:09.869+0000 I CONTROL [initandlisten] target_arch: x86_64
mongodb_1 | 2019-03-01T10:54:09.869+0000 I CONTROL [initandlisten] options: { net: { bindIpAll: true }, security: { authorization: "enabled" } }
mongodb_1 | 2019-03-01T10:54:09.873+0000 W STORAGE [initandlisten] Detected unclean shutdown - /data/db/mongod.lock is not empty.
mongodb_1 | 2019-03-01T10:54:09.876+0000 I STORAGE [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
mongodb_1 | 2019-03-01T10:54:09.878+0000 W STORAGE [initandlisten] Recovering data from the last clean checkpoint.
mongodb_1 | 2019-03-01T10:54:09.879+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=487M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
web_1 | connection error: { MongoNetworkError: failed to connect to server [mongodb:27017] on first connect [MongoNetworkError: connect ECONNREFUSED 192.168.160.2:27017]
web_1 | at Pool.<anonymous> (/usr/src/app/node_modules/mongodb-core/lib/topologies/server.js:505:11)
web_1 | at emitOne (events.js:116:13)
Sometimes I can see that the log contains the user-creation information, and sometimes it does not:
2019-03-01T10:38:50.323+0000 I STORAGE [conn2] createCollection: admin.system.users with generated UUID: 6b3b88f9-e77c-4094-a1c7-153816202a9e
mongodb_1 | Successfully added user: {
mongodb_1 | "user" : "my_admin",
mongodb_1 | "roles" : [
mongodb_1 | {
mongodb_1 | "role" : "root",
mongodb_1 | "db" : "admin"
mongodb_1 | }
mongodb_1 | ]
mongodb_1 | }
mongodb_1 | 2019-03-01T10:38:50.340+0000 E - [main] Error saving history file: FileOpenFailed: Unable to open() file /home/mongodb/.dbshell: Unknown error
I am new to Docker. I guess the main problem is that web can't establish a connection with mongodb. I've spent too long on this problem.
Any help? Thanks!
Make sure you're not going to localhost of the web container. Treat containers as separate machines: localhost in one container is not shared with another one. That's why in the connection string you have mongodb:27017 and not localhost:27017, because mongodb in your default docker network is a DNS name of the container with mongo. You are using this connection string in the first (successful) case, make sure you have a valid DNS name in the second one.
And also make sure to include your DB credentials (username:password) in the connection string too.
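For example, with the root user defined by MONGO_INITDB_ROOT_USERNAME in the compose file above, the web service's connection string could look something like this sketch (root users are created in the admin database, so authSource=admin is needed; the environment variable name here is just an illustration, use whatever your app actually reads):

  web:
    build: .
    environment:
      # hypothetical variable name for the mongoose connection string
      MONGO_URI: mongodb://my_admin:my2019@mongodb:27017/myprojectdatabase?authSource=admin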

Mongodb connection error inside docker container

I've been trying to get a basic nodeJS api to connect to a mongo container. Both services are defined in a docker-compose.yml file. I've read countless similar questions here and on docker's forum all stating that the issue is your mongo connection URI. This is not my issue as you'll see below.
docker-compose.yml
version: '3.7'
services:
  api:
    build: ./
    command: npm run start:dev
    working_dir: /usr/src/api-boiler/
    restart: always
    environment:
      PORT: 3001
      MONGODB_URI: mongodb://mongodb:27017/TodoApp
      JWT_SECRET: asdkasd9a9sdn2r3513032
    links:
      - mongodb
    ports:
      - "3001:3001"
    volumes:
      - ./:/usr/src/api-boiler/
    depends_on:
      - mongodb
  mongodb:
    image: mongo
    restart: always
    volumes:
      - /usr/local/var/mongodb:/data/db
    ports:
      - 27017:27017
Dockerfile
FROM node:10.8.0
WORKDIR /usr/src/api-boiler
COPY ./ ./
RUN npm install
CMD ["/bin/bash"]
db/mongoose.js
Setting up mongodb connection
const mongoose = require('mongoose');
mongoose.Promise = global.Promise;
mongoose.connect(
  process.env.MONGODB_URI,
  { useMongoClient: true }
);
module.exports.mongoose = mongoose;
But no matter what, the api container cannot connect. I tried setting the mongo URI to 0.0.0.0:3001 but no joy. I checked the config settings used to launch mongo in the container using db.serverCmdLineOpts(), and the bind_ip_all option has been passed, so mongo should accept connections from any IP. The typical issue is people forgetting to replace localhost with their mongo container name, e.g.:
mongodb://localhost:27017/TodoApp >> mongodb://mongodb:27017/TodoApp
But this has been done, so I'm pretty stumped.
Logs - for good measure
Successfully built 388868008521
Successfully tagged api-boiler_api:latest
Starting api-boiler_mongodb_1 ... done
Recreating api-boiler_api_1 ... done
Attaching to api-boiler_mongodb_1, api-boiler_api_1
mongodb_1 | 2018-08-20T20:09:27.072+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify -- sslDisabledProtocols 'none'
mongodb_1 | 2018-08-20T20:09:27.085+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=72af162616c8
mongodb_1 | 2018-08-20T20:09:27.085+0000 I CONTROL [initandlisten] db version v4.0.1
mongodb_1 | 2018-08-20T20:09:27.085+0000 I CONTROL [initandlisten] git version: 54f1582fc6eb01de4d4c42f26fc133e623f065fb
mongodb_1 | 2018-08-20T20:09:27.085+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.2g 1 Mar 2016
mongodb_1 | 2018-08-20T20:09:27.085+0000 I CONTROL [initandlisten] allocator: tcmalloc
mongodb_1 | 2018-08-20T20:09:27.085+0000 I CONTROL [initandlisten] modules: none
mongodb_1 | 2018-08-20T20:09:27.085+0000 I CONTROL [initandlisten] build environment:
mongodb_1 | 2018-08-20T20:09:27.085+0000 I CONTROL [initandlisten] distmod: ubuntu1604
mongodb_1 | 2018-08-20T20:09:27.085+0000 I CONTROL [initandlisten] distarch: x86_64
mongodb_1 | 2018-08-20T20:09:27.085+0000 I CONTROL [initandlisten] target_arch: x86_64
mongodb_1 | 2018-08-20T20:09:27.085+0000 I CONTROL [initandlisten] options: { net: { bindIpAll: true } }
mongodb_1 | 2018-08-20T20:09:27.088+0000 W STORAGE [initandlisten] Detected unclean shutdown - /data/db/mongod.lock is not empty.
mongodb_1 | 2018-08-20T20:09:27.093+0000 I STORAGE [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
mongodb_1 | 2018-08-20T20:09:27.096+0000 W STORAGE [initandlisten] Recovering data from the last clean checkpoint.
mongodb_1 | 2018-08-20T20:09:27.097+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=487M,session_max=20000,eviction= (threads_min=4,threads_max=4),config_base=false,statistics=(fast),log= (enabled=true,archive=true,path=journal,compressor=snappy),file_manager= (close_idle_time=100000),statistics_log=(wait=0),verbose= (recovery_progress),
api_1 |
api_1 | > api-boiler#0.1.0 start:dev /usr/src/api-boiler
api_1 | > cross-env NODE_ENV=development node server/server.js
api_1 |
api_1 | Started on port 3001
api_1 | (node:24) UnhandledPromiseRejectionWarning: MongoError: failed to connect to server [mongodb:27017] on first connect [MongoError: connect ECONNREFUSED 172.18.0.2:27017]
OK, I've solved it, with the help of this blog: https://dev.to/hugodias/wait-for-mongodb-to-start-on-docker-3h8b
You need to wait for mongod to fully start inside the container. The depends_on key in docker-compose.yml is not sufficient.
You'll also need to update your Dockerfile to take advantage of docker-compose-wait.
For reference - here is my updated docker-compose and Dockerfile files.
version: '3.7'
services:
  api:
    build: ./
    working_dir: /usr/src/api-boiler/
    restart: always
    environment:
      PORT: 3001
      MONGODB_URI: mongodb://mongodb:27017/TodoApp
      JWT_SECRET: asdkasd9a9sdn2r3513032
      WAIT_HOSTS: mongodb:27017  # merged into the single environment block; YAML does not allow duplicate keys
    ports:
      - "3001:3001"
    volumes:
      - ./:/usr/src/api-boiler/
    depends_on:
      - mongodb
  mongodb:
    image: mongo
    container_name: mongodb
    restart: always
    ports:
      - 27017:27017
FROM node:10.8.0
WORKDIR /usr/src/api-boiler
COPY ./ ./
RUN npm install
EXPOSE 3001
## THE LIFE SAVER
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.2.1/wait /wait
RUN chmod +x /wait
# CMD ["/bin/bash"]
CMD /wait && npm run start:dev
I had a similar issue, and the cause was that even though I explicitly declared that backend depends_on the database, it was not enough: backend was starting before my database was fully ready.
This is what helped:
backend:
  depends_on:
    db:
      condition: service_healthy
db:
  ...
  healthcheck:
    test: echo 'db.runCommand("ping").ok' | mongosh db:27017 --quiet
    interval: 10s
    timeout: 10s
    retries: 5
    start_period: 40s
This way the backend service isn't started until the healthcheck passes.
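In context, with the mongo image from the question, the db service might look something like this sketch (the test command uses mongosh, which ships with recent official mongo images; older 4.x images ship the mongo shell binary instead, and the healthcheck runs inside the container, so localhost also works as the host):

  db:
    image: mongo
    restart: always
    healthcheck:
      test: echo 'db.runCommand("ping").ok' | mongosh localhost:27017 --quiet
      interval: 10s
      timeout: 10s
      retries: 5
      start_period: 40s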
