PostgreSQL not working on docker when volume is initialized - linux

I am using Docker on Windows.
My docker-compose file is shown below:
version: '3.5'
services:
  postgres:
    container_name: postgres_container
    image: postgres
    environment:
      POSTGRES_USER: ${POSTGRES_USER:-postgres}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-root}
      PGDATA: /data/postgres
    volumes:
      - ./postgres-data:/var/
    ports:
      - "5432:5432"
    restart: unless-stopped
When I build it, I get the following error log and the container exits:
Attaching to postgres_container
postgres_container | The files belonging to this database system will be owned by user "postgres".
postgres_container | This user must also own the server process.
postgres_container |
postgres_container | The database cluster will be initialized with locale "en_US.utf8".
postgres_container | The default database encoding has accordingly been set to "UTF8".
postgres_container | The default text search configuration will be set to "english".
postgres_container |
postgres_container | Data page checksums are disabled.
postgres_container |
postgres_container | fixing permissions on existing directory /data/postgres ... ok
postgres_container | creating subdirectories ... ok
postgres_container | selecting dynamic shared memory implementation ... posix
postgres_container | selecting default max_connections ... 100
postgres_container | selecting default shared_buffers ... 128MB
postgres_container | selecting default time zone ... Etc/UTC
postgres_container | creating configuration files ... ok
postgres_container | running bootstrap script ... ok
postgres_container | performing post-bootstrap initialization ... ok
postgres_container | syncing data to disk ... ok
postgres_container |
postgres_container |
postgres_container | Success. You can now start the database server using:
postgres_container |
postgres_container | pg_ctl -D /data/postgres -l logfile start
postgres_container |
postgres_container | initdb: warning: enabling "trust" authentication for local connections
postgres_container | You can change this by editing pg_hba.conf or using the option -A, or
postgres_container | --auth-local and --auth-host, the next time you run initdb.
postgres_container | waiting for server to start....2020-04-17 13:18:31.599 UTC [47] LOG: starting PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
postgres_container | 2020-04-17 13:18:31.607 UTC [47] LOG: could not bind Unix address "/var/run/postgresql/.s.PGSQL.5432": Input/output error
postgres_container | 2020-04-17 13:18:31.607 UTC [47] HINT: Is another postmaster already running on port 5432? If not, remove socket file "/var/run/postgresql/.s.PGSQL.5432" and retry.
postgres_container | 2020-04-17 13:18:31.607 UTC [47] WARNING: could not create Unix-domain socket in directory "/var/run/postgresql"
postgres_container | 2020-04-17 13:18:31.607 UTC [47] FATAL: could not create any Unix-domain sockets
postgres_container | 2020-04-17 13:18:31.610 UTC [47] LOG: database system is shut down
postgres_container | stopped waiting
postgres_container | pg_ctl: could not start server
postgres_container | Examine the log output.
postgres_container |
postgres_container | PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres_container |
postgres_container | 2020-04-17 13:18:32.246 UTC [1] LOG: starting PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
postgres_container | 2020-04-17 13:18:32.246 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres_container | 2020-04-17 13:18:32.246 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres_container | 2020-04-17 13:18:32.255 UTC [1] LOG: could not bind Unix address "/var/run/postgresql/.s.PGSQL.5432": Input/output error
postgres_container | 2020-04-17 13:18:32.255 UTC [1] HINT: Is another postmaster already running on port 5432? If not, remove socket file "/var/run/postgresql/.s.PGSQL.5432" and retry.
postgres_container | 2020-04-17 13:18:32.255 UTC [1] WARNING: could not create Unix-domain socket in directory "/var/run/postgresql"
postgres_container | 2020-04-17 13:18:32.255 UTC [1] FATAL: could not create any Unix-domain sockets
postgres_container | 2020-04-17 13:18:32.259 UTC [1] LOG: database system is shut down
postgres_container exited with code 1
I checked port 5432: it is open and no process is using it.
When I remove the volume from my docker-compose.yml file, it works perfectly.
The volume I am using, ./postgres-data, is a local directory on my system that I want to map into the PostgreSQL container to restore a database.

You are using Docker on Windows and mounting the directory where the socket will be created (/var) as a volume, but the Windows filesystem doesn't support Unix sockets.
Change the configuration so that you:
leave the Unix socket (/var/run/postgresql/...) inside the container, without mounting it as a volume
mount only the data directory as a volume (see the sketch below)
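For reference, a minimal sketch of the corrected compose file; the paths come from the question's own PGDATA setting, and only the volume mapping changes:
version: '3.5'
services:
  postgres:
    container_name: postgres_container
    image: postgres
    environment:
      POSTGRES_USER: ${POSTGRES_USER:-postgres}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-root}
      PGDATA: /data/postgres
    volumes:
      # mount only the data directory; the Unix socket in /var/run/postgresql
      # stays on the container's own filesystem
      - ./postgres-data:/data/postgres
    ports:
      - "5432:5432"
    restart: unless-stopped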

Related

Rabbitmq instances are crashing

I am using a docker-compose.yml file to spin up 3 instances of RabbitMQ on a single host, running Docker on a Mac. When I run docker-compose up, I see that the Erlang cookies do not match for the instances in the cluster. Let me know if you need any other information.
version: '3'
services:
  rabbitmq1:
    image: rabbitmq:3.8.34-management
    hostname: rabbitmq1
    environment:
      - RABBITMQ_ERLANG_COOKIE=12345
      - RABBITMQ_DEFAULT_USER=guest
      - RABBITMQ_DEFAULT_PASS=guest
      - RABBITMQ_DEFAULT_VHOST=/
  rabbitmq2:
    image: rabbitmq:3.8.34-management
    hostname: rabbitmq2
    depends_on:
      - rabbitmq1
    environment:
      - RABBITMQ_ERLANG_COOKIE=12345
      - RABBITMQ_DEFAULT_USER=guest
      - RABBITMQ_DEFAULT_PASS=guest
      - RABBITMQ_DEFAULT_VHOST=/
    volumes:
      - ./cluster-entrypoint.sh:/usr/local/bin/cluster-entrypoint.sh
    entrypoint: /usr/local/bin/cluster-entrypoint.sh
  rabbitmq3:
    image: rabbitmq:3.8.34-management
    hostname: rabbitmq3
    depends_on:
      - rabbitmq1
    environment:
      - RABBITMQ_ERLANG_COOKIE=12345
      - RABBITMQ_DEFAULT_USER=guest
      - RABBITMQ_DEFAULT_PASS=guest
      - RABBITMQ_DEFAULT_VHOST=/
    volumes:
      - ./cluster-entrypoint.sh:/usr/local/bin/cluster-entrypoint.sh
    entrypoint: /usr/local/bin/cluster-entrypoint.sh
  haproxy:
    image: haproxy:1.7
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
    depends_on:
      - rabbitmq1
      - rabbitmq2
      - rabbitmq3
    ports:
      - 15672:15672
      - 5672:5672
Below is my cluster-entrypoint.sh file
#!/bin/bash
set -e
# Start RMQ from entry point.
# This will ensure that environment variables passed
# will be honored
/usr/local/bin/docker-entrypoint.sh rabbitmq-server -detached
# Do the cluster dance
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@rabbitmq1
# Stop the entire RMQ server. This is done so that we
# can attach to it again, but without the -detached flag
# making it run in the foreground
rabbitmqctl stop
# Wait a while for the app to really stop
sleep 2s
# Start it
rabbitmq-server
Sorry for the amount of logs. I have used the rabbitmq:3.8.34 image; the Erlang cookies for the instances in the cluster are different, and rabbitmq1 starts but the other instances do not.
Below is the log:
haproxy_1 | <7>haproxy-systemd-wrapper: executing /usr/local/sbin/haproxy -p /run/haproxy.pid -db -f /usr/local/etc/haproxy/haproxy.cfg -Ds
rabbitmq1_1 |
rabbitmq1_1 | warning: /var/lib/rabbitmq/.erlang.cookie contents do not match RABBITMQ_ERLANG_COOKIE
rabbitmq1_1 |
rabbitmq1_1 | WARNING: '/var/lib/rabbitmq/.erlang.cookie' was populated from '$RABBITMQ_ERLANG_COOKIE', which will no longer happen in 3.9 and later! (https://github.com/docker-library/rabbitmq/pull/424)
rabbitmq2_1 |
rabbitmq2_1 | warning: /var/lib/rabbitmq/.erlang.cookie contents do not match RABBITMQ_ERLANG_COOKIE
rabbitmq2_1 |
rabbitmq2_1 | WARNING: '/var/lib/rabbitmq/.erlang.cookie' was populated from '$RABBITMQ_ERLANG_COOKIE', which will no longer happen in 3.9 and later! (https://github.com/docker-library/rabbitmq/pull/424)
rabbitmq3_1 |
rabbitmq3_1 | warning: /var/lib/rabbitmq/.erlang.cookie contents do not match RABBITMQ_ERLANG_COOKIE
rabbitmq3_1 |
rabbitmq3_1 | WARNING: '/var/lib/rabbitmq/.erlang.cookie' was populated from '$RABBITMQ_ERLANG_COOKIE', which will no longer happen in 3.9 and later! (https://github.com/docker-library/rabbitmq/pull/424)
rabbitmq1_1 | WARNING: 'docker-entrypoint.sh' generated/modified the RabbitMQ configuration file, which will no longer happen in 3.9 and later! (https://github.com/docker-library/rabbitmq/pull/424)
rabbitmq1_1 |
rabbitmq1_1 | Generated end result, for reference:
rabbitmq1_1 | ------------------------------------
rabbitmq1_1 | loopback_users.guest = false
rabbitmq1_1 | listeners.tcp.default = 5672
rabbitmq1_1 | default_pass = guest
rabbitmq1_1 | default_user = guest
rabbitmq1_1 | default_vhost = /
rabbitmq1_1 | management.tcp.port = 15672
rabbitmq1_1 | ------------------------------------
rabbitmq3_1 | WARNING: 'docker-entrypoint.sh' generated/modified the RabbitMQ configuration file, which will no longer happen in 3.9 and later! (https://github.com/docker-library/rabbitmq/pull/424)
rabbitmq3_1 |
rabbitmq3_1 | Generated end result, for reference:
rabbitmq3_1 | ------------------------------------
rabbitmq3_1 | loopback_users.guest = false
rabbitmq3_1 | listeners.tcp.default = 5672
rabbitmq3_1 | default_pass = guest
rabbitmq3_1 | default_user = guest
rabbitmq3_1 | default_vhost = /
rabbitmq3_1 | management.tcp.port = 15672
rabbitmq3_1 | ------------------------------------
rabbitmq2_1 | WARNING: 'docker-entrypoint.sh' generated/modified the RabbitMQ configuration file, which will no longer happen in 3.9 and later! (https://github.com/docker-library/rabbitmq/pull/424)
rabbitmq2_1 |
rabbitmq2_1 | Generated end result, for reference:
rabbitmq2_1 | ------------------------------------
rabbitmq2_1 | loopback_users.guest = false
rabbitmq2_1 | listeners.tcp.default = 5672
rabbitmq2_1 | default_pass = guest
rabbitmq2_1 | default_user = guest
rabbitmq2_1 | default_vhost = /
rabbitmq2_1 | management.tcp.port = 15672
rabbitmq2_1 | ------------------------------------
rabbitmq3_1 | RABBITMQ_ERLANG_COOKIE env variable support is deprecated and will be REMOVED in a future version. Use the $HOME/.erlang.cookie file or the --erlang-cookie switch instead.
rabbitmq3_1 | Stopping rabbit application on node rabbit@rabbitmq3 ...
rabbitmq3_1 | Error: unable to perform an operation on node 'rabbit@rabbitmq3'. Please see diagnostics information and suggestions below.
rabbitmq3_1 |
rabbitmq3_1 | Most common reasons for this are:
rabbitmq3_1 |
rabbitmq3_1 | * Target node is unreachable (e.g. due to hostname resolution, TCP connection or firewall issues)
rabbitmq3_1 | * CLI tool fails to authenticate with the server (e.g. due to CLI tool's Erlang cookie not matching that of the server)
rabbitmq3_1 | * Target node is not running
rabbitmq3_1 |
rabbitmq3_1 | In addition to the diagnostics info below:
rabbitmq3_1 |
rabbitmq3_1 | * See the CLI, clustering and networking guides on https://rabbitmq.com/documentation.html to learn more
rabbitmq3_1 | * Consult server logs on node rabbit@rabbitmq3
rabbitmq3_1 | * If target node is configured to use long node names, don't forget to use --longnames with CLI tools
rabbitmq3_1 |
rabbitmq3_1 | DIAGNOSTICS
rabbitmq3_1 | ===========
rabbitmq3_1 |
rabbitmq3_1 | attempted to contact: [rabbit@rabbitmq3]
rabbitmq3_1 |
rabbitmq3_1 | rabbit@rabbitmq3:
rabbitmq3_1 | * connected to epmd (port 4369) on rabbitmq3
rabbitmq3_1 | * epmd reports: node 'rabbit' not running at all
rabbitmq3_1 | no other nodes on rabbitmq3
rabbitmq3_1 | * suggestion: start the node
rabbitmq3_1 |
rabbitmq3_1 | Current node details:
rabbitmq3_1 | * node name: 'rabbitmqcli-797-rabbit@rabbitmq3'
rabbitmq3_1 | * effective user's home directory: /var/lib/rabbitmq
rabbitmq3_1 | * Erlang cookie hash: gnzLDuqKcGxMNKFokfhOew==
rabbitmq3_1 |
docker-rabbitmq-cluster_rabbitmq3_1 exited with code 69
rabbitmq2_1 | RABBITMQ_ERLANG_COOKIE env variable support is deprecated and will be REMOVED in a future version. Use the $HOME/.erlang.cookie file or the --erlang-cookie switch instead.
rabbitmq2_1 | Stopping rabbit application on node rabbit@rabbitmq2 ...
rabbitmq2_1 | Error: unable to perform an operation on node 'rabbit@rabbitmq2'. Please see diagnostics information and suggestions below.
rabbitmq2_1 |
rabbitmq2_1 | Most common reasons for this are:
rabbitmq2_1 |
rabbitmq2_1 | * Target node is unreachable (e.g. due to hostname resolution, TCP connection or firewall issues)
rabbitmq2_1 | * CLI tool fails to authenticate with the server (e.g. due to CLI tool's Erlang cookie not matching that of the server)
rabbitmq2_1 | * Target node is not running
rabbitmq2_1 |
rabbitmq2_1 | In addition to the diagnostics info below:
rabbitmq2_1 |
rabbitmq2_1 | * See the CLI, clustering and networking guides on https://rabbitmq.com/documentation.html to learn more
rabbitmq2_1 | * Consult server logs on node rabbit@rabbitmq2
rabbitmq2_1 | * If target node is configured to use long node names, don't forget to use --longnames with CLI tools
rabbitmq2_1 |
rabbitmq2_1 | DIAGNOSTICS
rabbitmq2_1 | ===========
rabbitmq2_1 |
rabbitmq2_1 | attempted to contact: [rabbit@rabbitmq2]
rabbitmq2_1 |
rabbitmq2_1 | rabbit@rabbitmq2:
rabbitmq2_1 | * connected to epmd (port 4369) on rabbitmq2
rabbitmq2_1 | * epmd reports: node 'rabbit' not running at all
rabbitmq2_1 | no other nodes on rabbitmq2
rabbitmq2_1 | * suggestion: start the node
rabbitmq2_1 |
rabbitmq2_1 | Current node details:
rabbitmq2_1 | * node name: 'rabbitmqcli-568-rabbit@rabbitmq2'
rabbitmq2_1 | * effective user's home directory: /var/lib/rabbitmq
rabbitmq2_1 | * Erlang cookie hash: gnzLDuqKcGxMNKFokfhOew==
rabbitmq2_1 |
docker-rabbitmq-cluster_rabbitmq2_1 exited with code 69
rabbitmq1_1 | Configuring logger redirection
rabbitmq1_1 | 2022-07-05 02:43:40.659 [debug] <0.288.0> Lager installed handler error_logger_lager_h into error_logger
rabbitmq1_1 | 2022-07-05 02:43:40.670 [debug] <0.291.0> Lager installed handler lager_forwarder_backend into error_logger_lager_event
rabbitmq1_1 | 2022-07-05 02:43:40.670 [debug] <0.312.0> Lager installed handler lager_forwarder_backend into rabbit_log_mirroring_lager_event
rabbitmq1_1 | 2022-07-05 02:43:40.670 [debug] <0.303.0> Lager installed handler lager_forwarder_backend into rabbit_log_feature_flags_lager_event
rabbitmq1_1 | 2022-07-05 02:43:40.670 [debug] <0.309.0> Lager installed handler lager_forwarder_backend into rabbit_log_ldap_lager_event
rabbitmq1_1 | 2022-07-05 02:43:40.670 [debug] <0.294.0> Lager installed handler lager_forwarder_backend into rabbit_log_lager_event
rabbitmq1_1 | 2022-07-05 02:43:40.670 [debug] <0.297.0> Lager installed handler lager_forwarder_backend into rabbit_log_channel_lager_event
rabbitmq1_1 | 2022-07-05 02:43:40.670 [debug] <0.306.0> Lager installed handler lager_forwarder_backend into rabbit_log_federation_lager_event
rabbitmq1_1 | 2022-07-05 02:43:40.670 [debug] <0.300.0> Lager installed handler lager_forwarder_backend into rabbit_log_connection_lager_event
rabbitmq1_1 | 2022-07-05 02:43:40.671 [debug] <0.315.0> Lager installed handler lager_forwarder_backend into rabbit_log_prelaunch_lager_event
rabbitmq1_1 | 2022-07-05 02:43:40.672 [debug] <0.318.0> Lager installed handler lager_forwarder_backend into rabbit_log_queue_lager_event
rabbitmq1_1 | 2022-07-05 02:43:40.673 [debug] <0.321.0> Lager installed handler lager_forwarder_backend into rabbit_log_ra_lager_event
rabbitmq1_1 | 2022-07-05 02:43:40.675 [debug] <0.324.0> Lager installed handler lager_forwarder_backend into rabbit_log_shovel_lager_event
rabbitmq1_1 | 2022-07-05 02:43:40.676 [debug] <0.327.0> Lager installed handler lager_forwarder_backend into rabbit_log_upgrade_lager_event
rabbitmq1_1 | 2022-07-05 02:43:40.691 [info] <0.44.0> Application lager started on node rabbit@rabbitmq1
rabbitmq1_1 | 2022-07-05 02:43:41.159 [debug] <0.284.0> Lager installed handler lager_backend_throttle into lager_event
haproxy_1 | [ALERT] 185/024336 (8) : parsing [/usr/local/etc/haproxy/haproxy.cfg:32] : 'server rabbitmq2' : could not resolve address 'rabbitmq2'.
haproxy_1 | [ALERT] 185/024336 (8) : parsing [/usr/local/etc/haproxy/haproxy.cfg:33] : 'server rabbitmq3' : could not resolve address 'rabbitmq3'.
haproxy_1 | [ALERT] 185/024336 (8) : parsing [/usr/local/etc/haproxy/haproxy.cfg:43] : 'server rabbitmq2' : could not resolve address 'rabbitmq2'.
haproxy_1 | [ALERT] 185/024336 (8) : parsing [/usr/local/etc/haproxy/haproxy.cfg:44] : 'server rabbitmq3' : could not resolve address 'rabbitmq3'.
haproxy_1 | [ALERT] 185/024336 (8) : Failed to initialize server(s) addr.
haproxy_1 | <5>haproxy-systemd-wrapper: exit, haproxy RC=1
docker-rabbitmq-cluster_haproxy_1 exited with code 1
rabbitmq1_1 | 2022-07-05 02:43:43.065 [info] <0.44.0> Application mnesia started on node rabbit@rabbitmq1
rabbitmq1_1 | 2022-07-05 02:43:43.066 [info] <0.273.0>
rabbitmq1_1 | Starting RabbitMQ 3.8.34 on Erlang 24.3.4.1 [emu]
rabbitmq1_1 | Copyright (c) 2007-2022 VMware, Inc. or its affiliates.
rabbitmq1_1 | Licensed under the MPL 2.0. Website: https://rabbitmq.com
rabbitmq1_1 |
rabbitmq1_1 | ## ## RabbitMQ 3.8.34
rabbitmq1_1 | ## ##
rabbitmq1_1 | ########## Copyright (c) 2007-2022 VMware, Inc. or its affiliates.
rabbitmq1_1 | ###### ##
rabbitmq1_1 | ########## Licensed under the MPL 2.0. Website: https://rabbitmq.com
rabbitmq1_1 |
rabbitmq1_1 | Erlang: 24.3.4.1 [emu]
rabbitmq1_1 | TLS Library: OpenSSL - OpenSSL 1.1.1o 3 May 2022
rabbitmq1_1 |
rabbitmq1_1 | Doc guides: https://rabbitmq.com/documentation.html
rabbitmq1_1 | Support: https://rabbitmq.com/contact.html
rabbitmq1_1 | Tutorials: https://rabbitmq.com/getstarted.html
rabbitmq1_1 | Monitoring: https://rabbitmq.com/monitoring.html
rabbitmq1_1 |
rabbitmq1_1 | Logs: <stdout>
rabbitmq1_1 |
rabbitmq1_1 | Config file(s): /etc/rabbitmq/rabbitmq.conf
rabbitmq1_1 |
rabbitmq1_1 | Starting broker...2022-07-05 02:43:43.068 [info] <0.273.0>
rabbitmq1_1 | node : rabbit@rabbitmq1
rabbitmq1_1 | home dir : /var/lib/rabbitmq
rabbitmq1_1 | config file(s) : /etc/rabbitmq/rabbitmq.conf
rabbitmq1_1 | cookie hash : VlfoFK5J8f9Ln3G9sXDoPQ==
rabbitmq1_1 | log(s) : <stdout>
rabbitmq1_1 | database dir : /var/lib/rabbitmq/mnesia/rabbit@rabbitmq1
rabbitmq1_1 | 2022-07-05 02:43:44.265 [info] <0.44.0> Application amqp_client started on node rabbit@rabbitmq1
rabbitmq1_1 | 2022-07-05 02:43:44.279 [info] <0.584.0> Management plugin: HTTP (non-TLS) listener started on port 15672
rabbitmq1_1 | 2022-07-05 02:43:44.279 [info] <0.612.0> Statistics database started.
rabbitmq1_1 | 2022-07-05 02:43:44.279 [info] <0.611.0> Starting worker pool 'management_worker_pool' with 3 processes in it
rabbitmq1_1 | 2022-07-05 02:43:44.279 [info] <0.44.0> Application rabbitmq_management started on node rabbit@rabbitmq1
rabbitmq1_1 | 2022-07-05 02:43:44.292 [info] <0.44.0> Application prometheus started on node rabbit@rabbitmq1
rabbitmq1_1 | 2022-07-05 02:43:44.294 [info] <0.625.0> Prometheus metrics: HTTP (non-TLS) listener started on port 15692
rabbitmq1_1 | 2022-07-05 02:43:44.294 [info] <0.525.0> Ready to start client connection listeners
rabbitmq1_1 | 2022-07-05 02:43:44.294 [info] <0.44.0> Application rabbitmq_prometheus started on node rabbit@rabbitmq1
rabbitmq1_1 | 2022-07-05 02:43:44.297 [info] <0.669.0> started TCP listener on [::]:5672
rabbitmq1_1 | 2022-07-05 02:43:45.321 [info] <0.525.0> Server startup complete; 4 plugins started.
rabbitmq1_1 | * rabbitmq_prometheus
rabbitmq1_1 | * rabbitmq_management
rabbitmq1_1 | * rabbitmq_web_dispatch
rabbitmq1_1 | * rabbitmq_management_agent
rabbitmq1_1 | completed with 4 plugins.
rabbitmq1_1 | 2022-07-05 02:43:45.322 [info] <0.525.0> Resetting node maintenance status
I am not sure what I am missing. Sorry for so many logs.
After analysing your files and logs I can see a few issues.
Your setup will only start successfully once, while there are no files or configuration and you are starting from scratch. The reason is that RabbitMQ stores its configuration in an internal DB called Mnesia, and once nodes have been added that data must be present on subsequent starts. If it is not, you will observe errors saying the node is waiting for Mnesia to find its nodes.
Another issue with repeated starts is that a node that has been added to the cluster (2 or 3) marks itself as a cluster member; you will see an error because the main node (1) expects to reconnect to the previously joined nodes, but your entrypoint has already reset nodes 2 and 3.
You cannot mix RabbitMQ versions, because they may contain different data structures that will not allow the nodes to sync, and you will get an error like "schema_integrity_check_failed..."; the versions should all be identical.
When I was using RabbitMQ, my configuration ensured there was a persistent location (disk) for all the data, so that repeated starts would reuse the already initialised data. It is also good practice to use cluster-formation backends such as etcd or Consul, which RabbitMQ supports, so you don't need to handle this yourself.
Hope that helps.
Generally I was able to successfully start your setup on my machine (macOS) with Docker; the procedure is the following:
ensure you have never started the compose before, so there is no data from previous runs
prepare everything in the folder
run docker-compose up and enjoy, everything works
use docker-compose down to clean up the stack, not the '... stop' command, because otherwise the data will stay (see the sketch below)
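For instance, a minimal cleanup sketch; the -v flag is my addition, as it also removes the named volumes, so use it only when you really want to discard the cluster data:
# tear down containers, network and named volumes for a truly fresh start
docker-compose down -v
docker-compose up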
Below you can find a log of the start; please ignore the haproxy error, as there is no config file attached, so it must fail.
% docker-compose up
Creating network "rabbitmq_default" with the default driver
Creating rabbitmq_rabbitmq1_1 ... done
Creating rabbitmq_rabbitmq3_1 ... done
Creating rabbitmq_rabbitmq2_1 ... done
Creating rabbitmq_haproxy_1 ... done
Attaching to rabbitmq_rabbitmq1_1, rabbitmq_rabbitmq3_1, rabbitmq_rabbitmq2_1, rabbitmq_haproxy_1
(due to limitation I've put it here)
My setup looks like this; the only change is that the RabbitMQ data is persisted and the cluster init is done manually. You can also mount the data folder from a filesystem path.
version: '3'
services:
  rabbitmq1:
    image: rabbitmq:3.8.34-management
    hostname: rabbitmq1
    environment:
      - RABBITMQ_ERLANG_COOKIE=12345
      - RABBITMQ_DEFAULT_USER=guest
      - RABBITMQ_DEFAULT_PASS=guest
      - RABBITMQ_DEFAULT_VHOST=/
    volumes:
      - rabbitmq-01-data:/var/lib/rabbitmq
  rabbitmq2:
    image: rabbitmq:3.8.34-management
    hostname: rabbitmq2
    depends_on:
      - rabbitmq1
    environment:
      - RABBITMQ_ERLANG_COOKIE=12345
      - RABBITMQ_DEFAULT_USER=guest
      - RABBITMQ_DEFAULT_PASS=guest
      - RABBITMQ_DEFAULT_VHOST=/
    volumes:
      - rabbitmq-02-data:/var/lib/rabbitmq
  rabbitmq3:
    image: rabbitmq:3.8.34-management
    hostname: rabbitmq3
    depends_on:
      - rabbitmq1
    environment:
      - RABBITMQ_ERLANG_COOKIE=12345
      - RABBITMQ_DEFAULT_USER=guest
      - RABBITMQ_DEFAULT_PASS=guest
      - RABBITMQ_DEFAULT_VHOST=/
    volumes:
      - rabbitmq-03-data:/var/lib/rabbitmq
volumes:
  rabbitmq-01-data:
  rabbitmq-02-data:
  rabbitmq-03-data:
Run the following manually on every "follower" node:
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl join_cluster rabbit@rabbitmq1
rabbitmqctl start_app
Don't forget to enable queue mirroring if applicable to your case:
rabbitmqctl set_policy ha ".*" '{"ha-mode":"all"}'
I have a complete RabbitMQ setup that forms a cluster using docker-compose here:
https://github.com/lukebakken/docker-rabbitmq-cluster
Please note that the following mirroring policy is NOT recommended. There is no need to mirror queues to all nodes -
rabbitmqctl set_policy ha ".*" '{"ha-mode":"all"}'
You should mirror to 2 nodes in your cluster.
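For reference, a hedged sketch of such a policy; the ha-two policy name and the ^ha\. queue-name pattern are assumptions, so adjust them to your naming:
rabbitmqctl set_policy ha-two "^ha\." '{"ha-mode":"exactly","ha-params":2,"ha-sync-mode":"automatic"}'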
BETTER YET, use the latest version of RabbitMQ and use Quorum Queues! Classic HA mirroring will be removed from RabbitMQ in version 4.0
https://www.rabbitmq.com/quorum-queues.html
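For illustration: quorum queues are selected at declaration time via the x-queue-type argument rather than via a policy. A minimal sketch using rabbitmqadmin (the queue name orders is hypothetical, and the management plugin must be enabled):
rabbitmqadmin declare queue name=orders durable=true arguments='{"x-queue-type":"quorum"}'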
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.

Deploying a multi-container app on Azure App Services

I have been struggling for some days now to deploy an Azure App Service with two minimalist Docker containers.
My first image is a basic PostgreSQL image and my second image is built with the following Dockerfile:
FROM python:3.7
RUN pip install streamlit
COPY app.py /streamlit-docker/
WORKDIR /streamlit-docker/
CMD streamlit run app.py
and where app.py is:
import streamlit as st
st.write('Hello world')
The docker-compose.yml file I use for creating my Azure App service is the following:
version: '3.3'
services:
  db:
    image: atestcr.azurecr.io/postgres:latest
    environment:
      POSTGRES_PASSWORD: postgres
  app:
    image: atestcr.azurecr.io/my_app:latest
    ports:
      - '8501:8501'
I get an 'Application Error' whenever I try to access the webapp URL. However, when I deploy this app with a single docker container, using this docker-compose.yml:
version: '3.3'
services:
  app:
    image: atestcr.azurecr.io/my_app:latest
    ports:
      - '8501:8501'
everything works fine.
Here is a GitHub repo to recreate the bug.
These are the Azure logs:
2021-08-09T17:29:34.382Z INFO - Starting multi-container app..
2021-08-09T17:29:35.089Z INFO - Pulling image: atestcr.azurecr.io/postgres:latest
2021-08-09T17:29:35.313Z INFO - latest Pulling from postgres
2021-08-09T17:29:35.315Z INFO - Digest: sha256:b6df1345afa5990ea32866e5c331eefbf2e30a05f2a715c3a9691a6cb18fa253
2021-08-09T17:29:35.317Z INFO - Status: Image is up to date for atestcr.azurecr.io/postgres:latest
2021-08-09T17:29:35.319Z INFO - Pull Image successful, Time taken: 0 Minutes and 0 Seconds
2021-08-09T17:29:35.335Z INFO - Starting container for site
2021-08-09T17:29:35.335Z INFO - docker run -d -p 8477:5432 --name testapp32_db_0_6662435d -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=false -e WEBSITE_SITE_NAME=testapp32 -e WEBSITE_AUTH_ENABLED=False -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=testapp32.azurewebsites.net -e WEBSITE_INSTANCE_ID=42c09ff46e6c54cb467d28e88f2ab5b1e8971ee4daf2e883f44401bde67fe89f -e HTTP_LOGGING_ENABLED=1 atestcr.azurecr.io/postgres:latest
2021-08-09T17:29:35.781Z INFO - Pulling image: atestcr.azurecr.io/my_app:latest
2021-08-09T17:29:35.984Z INFO - latest Pulling from my_app
2021-08-09T17:29:35.987Z INFO - Digest: sha256:659937a52a6223b938b3d429901ab8648497870bf8068b5dcc05816050db5eaf
2021-08-09T17:29:35.988Z INFO - Status: Image is up to date for atestcr.azurecr.io/my_app:latest
2021-08-09T17:29:35.993Z INFO - Pull Image successful, Time taken: 0 Minutes and 0 Seconds
2021-08-09T17:29:36.014Z INFO - Starting container for site
2021-08-09T17:29:36.014Z INFO - docker run -d -p 0:8501 --name testapp32_app_0_6662435d -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=false -e WEBSITE_SITE_NAME=testapp32 -e WEBSITE_AUTH_ENABLED=False -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=testapp32.azurewebsites.net -e WEBSITE_INSTANCE_ID=42c09ff46e6c54cb467d28e88f2ab5b1e8971ee4daf2e883f44401bde67fe89f -e HTTP_LOGGING_ENABLED=1 atestcr.azurecr.io/my_app:latest
2021-08-09T17:33:26.741Z ERROR - multi-container unit was not started successfully
2021-08-09T17:33:26.846Z INFO - Container logs from testapp32_db_0_6662435d = 2021-08-09T17:29:40.459721366Z The files belonging to this database system will be owned by user "postgres".
2021-08-09T17:29:40.491740899Z This user must also own the server process.
2021-08-09T17:29:40.493019808Z
2021-08-09T17:29:40.497263739Z The database cluster will be initialized with locale "en_US.utf8".
2021-08-09T17:29:40.502456876Z The default database encoding has accordingly been set to "UTF8".
2021-08-09T17:29:40.503203182Z The default text search configuration will be set to "english".
2021-08-09T17:29:40.503218482Z
2021-08-09T17:29:40.503223282Z Data page checksums are disabled.
2021-08-09T17:29:40.506809008Z
2021-08-09T17:29:40.521275113Z fixing permissions on existing directory /var/lib/postgresql/data ... ok
2021-08-09T17:29:40.523912632Z creating subdirectories ... ok
2021-08-09T17:29:40.525237242Z selecting dynamic shared memory implementation ... posix
2021-08-09T17:29:40.642905697Z selecting default max_connections ... 100
2021-08-09T17:29:40.733900958Z selecting default shared_buffers ... 128MB
2021-08-09T17:29:41.039050775Z selecting default time zone ... Etc/UTC
2021-08-09T17:29:41.047757638Z creating configuration files ... ok
2021-08-09T17:29:44.416903507Z running bootstrap script ... ok
2021-08-09T17:29:50.023628737Z performing post-bootstrap initialization ... ok
2021-08-09T17:30:04.217544961Z syncing data to disk ... ok
2021-08-09T17:30:04.218321267Z
2021-08-09T17:30:04.219189873Z initdb: warning: enabling "trust" authentication for local connections
2021-08-09T17:30:04.219206473Z You can change this by editing pg_hba.conf or using the option -A, or
2021-08-09T17:30:04.219223973Z --auth-local and --auth-host, the next time you run initdb.
2021-08-09T17:30:04.219491175Z
2021-08-09T17:30:04.219513275Z Success. You can now start the database server using:
2021-08-09T17:30:04.219519575Z
2021-08-09T17:30:04.219523675Z pg_ctl -D /var/lib/postgresql/data -l logfile start
2021-08-09T17:30:04.219527775Z
2021-08-09T17:30:08.340584036Z waiting for server to start.......2021-08-09 17:30:08.340 UTC [44] LOG: starting PostgreSQL 13.3 (Debian 13.3-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
2021-08-09T17:30:08.478247580Z 2021-08-09 17:30:08.410 UTC [44] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2021-08-09T17:30:08.680599067Z 2021-08-09 17:30:08.680 UTC [45] LOG: database system was shut down at 2021-08-09 17:29:49 UTC
2021-08-09T17:30:08.753589768Z 2021-08-09 17:30:08.753 UTC [44] LOG: database system is ready to accept connections
2021-08-09T17:30:08.755965684Z done
2021-08-09T17:30:08.760382514Z server started
2021-08-09T17:30:09.790821978Z
2021-08-09T17:30:09.799723839Z /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
2021-08-09T17:30:09.802079455Z
2021-08-09T17:30:09.840304817Z 2021-08-09 17:30:09.834 UTC [44] LOG: received fast shutdown request
2021-08-09T17:30:09.862102366Z waiting for server to shut down....2021-08-09 17:30:09.861 UTC [44] LOG: aborting any active transactions
2021-08-09T17:30:09.950488072Z 2021-08-09 17:30:09.950 UTC [44] LOG: background worker "logical replication launcher" (PID 51) exited with exit code 1
2021-08-09T17:30:09.954360498Z 2021-08-09 17:30:09.950 UTC [46] LOG: shutting down
2021-08-09T17:30:13.063848848Z ...2021-08-09 17:30:13.063 UTC [44] LOG: database system is shut down
2021-08-09T17:30:13.138558463Z done
2021-08-09T17:30:13.140082774Z server stopped
2021-08-09T17:30:13.144102302Z
2021-08-09T17:30:13.162430928Z PostgreSQL init process complete; ready for start up.
2021-08-09T17:30:13.165654351Z
2021-08-09T17:30:14.083504086Z 2021-08-09 17:30:13.992 UTC [1] LOG: starting PostgreSQL 13.3 (Debian 13.3-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
2021-08-09T17:30:14.084760295Z 2021-08-09 17:30:14.011 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2021-08-09T17:30:14.084774095Z 2021-08-09 17:30:14.015 UTC [1] LOG: listening on IPv6 address "::", port 5432
2021-08-09T17:30:14.084779495Z 2021-08-09 17:30:14.082 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2021-08-09T17:30:14.156076187Z 2021-08-09 17:30:14.132 UTC [63] LOG: database system was shut down at 2021-08-09 17:30:12 UTC
2021-08-09T17:30:14.198749982Z 2021-08-09 17:30:14.176 UTC [1] LOG: database system is ready to accept connections
2021-08-09T17:30:15.006882460Z 2021-08-09 17:30:14.998 UTC [70] LOG: invalid length of startup packet
2021-08-09T17:30:16.112919592Z 2021-08-09 17:30:16.112 UTC [71] LOG: invalid length of startup packet
2021-08-09T17:30:17.178350344Z 2021-08-09 17:30:17.174 UTC [72] LOG: invalid length of startup packet
...
2021-08-09T17:33:25.735293291Z 2021-08-09 17:33:25.735 UTC [260] LOG: invalid length of startup packet
2021-08-09T17:33:29.142Z INFO - Container logs from testapp32_app_0_6662435d = 2021-08-09T17:30:24.010212694Z
2021-08-09T17:30:24.011085300Z You can now view your Streamlit app in your browser.
2021-08-09T17:30:24.011101800Z
2021-08-09T17:30:24.012092107Z Network URL: http://172.16.232.3:8501
2021-08-09T17:30:24.012855612Z External URL: http://20.40.148.207:8501
2021-08-09T17:30:24.013407416Z
2021-08-09T17:33:36.002Z INFO - Stopping site testapp32 because it failed during startup.
It's worth noting your shared example doesn't work as-is, because your docker-compose.yml uses your own private container registry.
However, tinkering around the edges I have managed to get your dummy example working with changes to the following files:
Your app Dockerfile:
FROM python:3.7
RUN pip install streamlit
COPY app.py /streamlit-docker/
WORKDIR /streamlit-docker/
EXPOSE 80
CMD streamlit run app.py --server.port 80
And adjusting your docker-compose.yml:
version: '3.3'
services:
  db:
    image: postgis/postgis:13-master
    environment:
      - POSTGRES_DB=counterfactualcovid
      - POSTGRES_USER=django
      - POSTGRES_PASSWORD=django
  app:
    image: testazurecontainerregistryajc.azurecr.io/mcmegaapp:latest
    ports:
      - '80:80'
Ignore the fact I've used a generic Postgres docker image and my own private container registry for the images, and you'll see the key changes are:
in the Dockerfile, exposing port 80 and adding --server.port 80 to the streamlit CMD call
in the docker-compose.yml, setting the port forwarding to 80:80
I /think/ your issues arise from the preview limitations of the multi-container web apps feature, so setting everything to work via port 80 appears to resolve this.
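As a hedged alternative to the CLI flag (my suggestion, not part of the original fix): Streamlit also reads its port from the STREAMLIT_SERVER_PORT environment variable, so the same setting can be baked into the image:
FROM python:3.7
RUN pip install streamlit
COPY app.py /streamlit-docker/
WORKDIR /streamlit-docker/
EXPOSE 80
# equivalent to --server.port 80; Streamlit picks this up from the environment
ENV STREAMLIT_SERVER_PORT=80
CMD streamlit run app.py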

Docker-compose up exited with code 1 but successful docker-compose build

When trying to docker-compose up, I get errors from the frontend and backend where they exit with code 1. docker ps shows that the postgres container is still running, but the frontend and backend still exit. Using npm start, there are no errors. I don't know if this helps, but my files do not copy from my src folder to /usr/src/app/, so maybe there is an error in my docker-compose or Dockerfiles.
Docker ps shows:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
509208b2243b postgres:latest "docker-entrypoint.s…" 14 hours ago Up 11 minutes 0.0.0.0:5432->5432/tcp example_db_1
docker-compose.yml
version: '3'
services:
  frontend:
    build: ./frontend
    volumes:
      - ./data/nginx:/etc/nginx/conf.d
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
      - ./frontend/build:/usr/share/nginx/html
    ports:
      - 80:80
      - 443:443
    depends_on:
      - backend
  backend:
    build: ./backend
    volumes:
      - ./backend/src:/usr/src/app/src
      - ./data/certbot/conf:/etc/letsencrypt
    ports:
      - 3000:3000
    depends_on:
      - db
  db:
    image: postgres:latest
    restart: always
    environment:
      POSTGRES_USER: example
      POSTGRES_PASSWORD: example1234
      POSTGRES_DB: example
    ports:
      - 5432:5432
  certbot:
    image: certbot/certbot
    volumes:
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    # Automatic certificate renewal
    # entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
This is what the backend Dockerfile looks like.
FROM node:current-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app/
COPY package*.json /usr/src/app/
RUN npm install
COPY . /usr/src/app/
EXPOSE 3000
ENV NODE_ENV=production
CMD ["npm", "start"]
And the output error:
example_db_1 is up-to-date
Starting example_certbot_1 ... done
Starting example_backend_1 ... done
Starting example_frontend_1 ... done
Attaching to example_db_1, example_certbot_1, example_backend_1, example_frontend_1
backend_1 |
backend_1 | > example-backend@1.0.0 start /usr/src/app
backend_1 | > npx tsc; node ./out/
backend_1 |
certbot_1 | Saving debug log to /var/log/letsencrypt/letsencrypt.log
certbot_1 | Certbot doesn't know how to automatically configure the web server on this system. However, it can still get a certificate for you. Please run "certbot certonly" to do so. You'll need to manually configure your web server to use the resulting certificate.
frontend_1 | 2020/02/13 11:35:59 [emerg] 1#1: open() "/etc/letsencrypt/options-ssl-nginx.conf" failed (2: No such file or directory) in /etc/nginx/conf.d/app.conf:21
frontend_1 | nginx: [emerg] open() "/etc/letsencrypt/options-ssl-nginx.conf" failed (2: No such file or directory) in /etc/nginx/conf.d/app.conf:21
db_1 | The files belonging to this database system will be owned by user "postgres".
db_1 | This user must also own the server process.
db_1 |
db_1 | The database cluster will be initialized with locale "en_US.utf8".
db_1 | The default database encoding has accordingly been set to "UTF8".
db_1 | The default text search configuration will be set to "english".
db_1 |
db_1 | Data page checksums are disabled.
db_1 |
db_1 | fixing permissions on existing directory /var/lib/postgresql/data ... ok
db_1 | creating subdirectories ... ok
db_1 | selecting dynamic shared memory implementation ... posix
db_1 | selecting default max_connections ... 100
db_1 | selecting default shared_buffers ... 128MB
db_1 | selecting default time zone ... Etc/UTC
db_1 | creating configuration files ... ok
db_1 | running bootstrap script ... ok
db_1 | performing post-bootstrap initialization ... ok
db_1 | syncing data to disk ... ok
db_1 |
db_1 | initdb: warning: enabling "trust" authentication for local connections
db_1 | You can change this by editing pg_hba.conf or using the option -A, or
db_1 | --auth-local and --auth-host, the next time you run initdb.
db_1 |
db_1 | Success. You can now start the database server using:
db_1 |
db_1 | pg_ctl -D /var/lib/postgresql/data -l logfile start
db_1 |
db_1 | waiting for server to start....2020-02-12 21:51:40.137 UTC [43] LOG: starting PostgreSQL 12.1 (Debian 12.1-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db_1 | 2020-02-12 21:51:40.147 UTC [43] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2020-02-12 21:51:40.229 UTC [44] LOG: database system was shut down at 2020-02-12 21:51:39 UTC
db_1 | 2020-02-12 21:51:40.240 UTC [43] LOG: database system is ready to accept connections
db_1 | done
db_1 | server started
db_1 | CREATE DATABASE
db_1 |
db_1 |
db_1 | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
db_1 |
db_1 | 2020-02-12 21:51:40.606 UTC [43] LOG: received fast shutdown request
db_1 | waiting for server to shut down....2020-02-12 21:51:40.608 UTC [43] LOG: aborting any active transactions
db_1 | 2020-02-12 21:51:40.614 UTC [43] LOG: background worker "logical replication launcher" (PID 50) exited with exit code 1
db_1 | 2020-02-12 21:51:40.614 UTC [45] LOG: shutting down
db_1 | 2020-02-12 21:51:40.652 UTC [43] LOG: database system is shut down
db_1 | done
db_1 | server stopped
db_1 |
db_1 | PostgreSQL init process complete; ready for start up.
db_1 |
db_1 | 2020-02-12 21:51:40.728 UTC [1] LOG: starting PostgreSQL 12.1 (Debian 12.1-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db_1 | 2020-02-12 21:51:40.729 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2020-02-12 21:51:40.729 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2020-02-12 21:51:40.748 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2020-02-12 21:51:40.788 UTC [61] LOG: database system was shut down at 2020-02-12 21:51:40 UTC
db_1 | 2020-02-12 21:51:40.799 UTC [1] LOG: database system is ready to accept connections
db_1 | 2020-02-13 09:51:41.562 UTC [787] LOG: invalid length of startup packet
db_1 | 2020-02-13 11:09:27.384 UTC [865] FATAL: password authentication failed for user "postgres"
db_1 | 2020-02-13 11:09:27.384 UTC [865] DETAIL: Role "postgres" does not exist.
db_1 | Connection matched pg_hba.conf line 95: "host all all all md5"
db_1 | 2020-02-13 11:32:18.771 UTC [1] LOG: received smart shutdown request
db_1 | 2020-02-13 11:32:18.806 UTC [1] LOG: background worker "logical replication launcher" (PID 67) exited with exit code 1
db_1 | 2020-02-13 11:32:18.806 UTC [62] LOG: shutting down
db_1 | 2020-02-13 11:32:18.876 UTC [1] LOG: database system is shut down
db_1 |
db_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
db_1 |
db_1 | 2020-02-13 11:33:01.343 UTC [1] LOG: starting PostgreSQL 12.1 (Debian 12.1-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db_1 | 2020-02-13 11:33:01.343 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2020-02-13 11:33:01.343 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2020-02-13 11:33:01.355 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2020-02-13 11:33:01.427 UTC [23] LOG: database system was shut down at 2020-02-13 11:32:18 UTC
db_1 | 2020-02-13 11:33:01.466 UTC [1] LOG: database system is ready to accept connections
example_certbot_1 exited with code 1
example_frontend_1 exited with code 1
backend_1 | Authenticating with database...
backend_1 | internal/fs/utils.js:220
backend_1 | throw err;
backend_1 | ^
backend_1 |
backend_1 | Error: ENOENT: no such file or directory, open '/etc/letsencrypt/live/example.org/privkey.pem'
backend_1 | at Object.openSync (fs.js:440:3)
backend_1 | at Object.readFileSync (fs.js:342:35)
backend_1 | at Object.<anonymous> (/usr/src/app/out/index.js:68:23)
backend_1 | at Module._compile (internal/modules/cjs/loader.js:955:30)
backend_1 | at Object.Module._extensions..js (internal/modules/cjs/loader.js:991:10)
backend_1 | syscall: 'open',
backend_1 | code: 'ENOENT',
backend_1 | path: '/etc/letsencrypt/live/example.org/privkey.pem'
backend_1 | }
backend_1 | npm ERR! code ELIFECYCLE
backend_1 | npm ERR! errno 1
backend_1 | npm ERR! example-backend@1.0.0 start: `npx tsc; node ./out/`
backend_1 | npm ERR! Exit status 1
backend_1 | npm ERR!
backend_1 | npm ERR! Failed at the example-backend@1.0.0 start script.
backend_1 | npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
backend_1 | npm ERR! A complete log of this run can be found in:
backend_1 | npm ERR! /root/.npm/_logs/2020-02-13T11_36_10_330Z-debug.log
example_backend_1 exited with code 1
There are no errors with certbot when run outside this project.
Directory structure:
src/
  - docker-compose.yml
  - init.letsencrypt.sh
  - .gitignore
backend
  src
  - Dockerfile
  - package.json
  - .gitignore
data
  nginx
  - app.conf
frontend
  src
  - Dockerfile
  - package.json
  - .gitignore
Any help would be appreciated, thanks.
Updated nginx.conf:
server {
    listen 80;
    server_name example.org;

    location / {
        root /var/www/html/;
        index index.html;
        autoindex on;
    }

    location /frontend {
        proxy_pass http://example.org:8080;
        try_files $uri /public/index.html;
    }

    location /backend {
        proxy_pass http://example.org:3000;
    }

    location /db {
        proxy_pass http://example.org:5432;
    }
}
New error after changing .gitignore:
frontend_1 | 2020/02/13 16:34:58 [emerg] 1#1: cannot load certificate "/etc/letsencrypt/live/example.org/fullchain.pem":
BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/letsencrypt/live/example.org/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)
frontend_1 | nginx: [emerg] cannot load certificate "/etc/letsencrypt/live/example.org/fullchain.pem": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/letsencrypt/live/example.org/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)
example_frontend_1 exited with code 1
The setup seems very complicated. My advice: try to reduce the complicated overhead of running certbot as its own docker container.
# docker-compose.yml
version: '3'
services:
  frontend:
    build: ./frontend
    volumes:
      - ./data/nginx:/etc/nginx/conf.d
      # no source code of the frontend via volumes -
      # only an nginx image with your source.
      # the nginx conf as a volume is valid.
    ports:
      - 8080:80
    depends_on:
      - backend
  backend:
    build: ./backend
    # don't put your source in as a volume;
    # your docker image should contain the whole code
    # and no certbot magic here
    ports:
      - 3000:3000
    depends_on:
      - db
  db:
    image: postgres:latest
    restart: always
    environment:
      POSTGRES_USER: example
      POSTGRES_PASSWORD: example1234
      POSTGRES_DB: example
    ports:
      - 5432:5432
This is much cleaner and easier to read. Now you should set up a reverse proxy on your host machine (easy with nginx) and configure your published ports in your nginx reverse proxy (proxy_pass to localhost:8080 for your frontend, as an example).
After that you can install certbot and obtain your Let's Encrypt certificates. Certbot should discover your nginx endpoints automatically and can automatically renew your certificates.
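For example, a minimal sketch of such a host nginx reverse proxy; the example.org server name and the /backend location are assumptions carried over from the question, and certbot would later extend this config for HTTPS:
server {
    listen 80;
    server_name example.org;

    location / {
        # frontend container, published on host port 8080
        proxy_pass http://localhost:8080;
    }

    location /backend {
        # backend container, published on host port 3000
        proxy_pass http://localhost:3000;
    }
}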

Getting Docker image and Docker-Compose file to work correctly

I have a Dockerfile and a docker-compose file in my project directory. I am running the docker-compose file with the following command:
docker-compose up
It builds and runs the different images for the server and database, but I am getting an error saying my package.json file is not in the correct directory. I am not sure where it is going wrong.
Here is my Dockerfile:
FROM node:10
WORKDIR /app
COPY package.json ./app
RUN npm install
COPY . /app
CMD npm start
EXPOSE 5585
this is my docker compose file
web:
  image: node
  command: npm start
  ports:
    - "5585:5588"
  links:
    - db
  working_dir: /app
  environment:
    SEQ_DB: addidas
    SEQ_USER: sdfsdf
    SEQ_PW: sdfsdfs
    PORT: 4242
    DATABASE_URL: postgres://sdfsdf:sdfsdfs@localhost:5432/addidas
db:
  image: postgres
  ports:
    - "5432:5432"
  environment:
    POSTGRES_USER: sdfsdf
    POSTGRES_PASSWORD: sdfsdfs
The error that I am getting in my terminal is the following:
Attaching to addidas_db_1, addidas_web_1
db_1 | The files belonging to this database system will be owned by user "postgres".
db_1 | This user must also own the server process.
db_1 |
db_1 | The database cluster will be initialized with locale "en_US.utf8".
db_1 | The default database encoding has accordingly been set to "UTF8".
db_1 | The default text search configuration will be set to "english".
db_1 |
db_1 | Data page checksums are disabled.
db_1 |
db_1 | fixing permissions on existing directory /var/lib/postgresql/data ... ok
db_1 | creating subdirectories ... ok
db_1 | selecting default max_connections ... 100
db_1 | selecting default shared_buffers ... 128MB
db_1 | selecting dynamic shared memory implementation ... posix
db_1 | creating configuration files ... ok
db_1 | running bootstrap script ... ok
db_1 | performing post-bootstrap initialization ... ok
db_1 | syncing data to disk ... ok
db_1 |
db_1 | Success. You can now start the database server using:
db_1 |
db_1 | pg_ctl -D /var/lib/postgresql/data -l logfile start
db_1 |
db_1 |
db_1 | WARNING: enabling "trust" authentication for local connections
db_1 | You can change this by editing pg_hba.conf or using the option -A, or
db_1 | --auth-local and --auth-host, the next time you run initdb.
db_1 | waiting for server to start....2018-11-06 17:38:51.968 UTC [43] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2018-11-06 17:38:51.983 UTC [44] LOG: database system was shut down at 2018-11-06 17:38:51 UTC
db_1 | 2018-11-06 17:38:51.987 UTC [43] LOG: database system is ready to accept connections
db_1 | done
db_1 | server started
db_1 | CREATE DATABASE
db_1 |
db_1 |
db_1 | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
db_1 |
db_1 | waiting for server to shut down...2018-11-06 17:38:52.438 UTC [43] LOG: received fast shutdown request
db_1 | .2018-11-06 17:38:52.441 UTC [43] LOG: aborting any active transactions
db_1 | 2018-11-06 17:38:52.443 UTC [43] LOG: background worker "logical replication launcher" (PID 50) exited with exit code 1
db_1 | 2018-11-06 17:38:52.444 UTC [45] LOG: shutting down
db_1 | 2018-11-06 17:38:52.459 UTC [43] LOG: database system is shut down
db_1 | done
db_1 | server stopped
db_1 |
db_1 | PostgreSQL init process complete; ready for start up.
db_1 |
db_1 | 2018-11-06 17:38:52.556 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2018-11-06 17:38:52.556 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2018-11-06 17:38:52.560 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2018-11-06 17:38:52.575 UTC [61] LOG: database system was shut down at 2018-11-06 17:38:52 UTC
db_1 | 2018-11-06 17:38:52.580 UTC [1] LOG: database system is ready to accept connections
db_1 | 2018-11-06 17:46:15.922 UTC [1] LOG: received smart shutdown request
db_1 | 2018-11-06 17:46:15.926 UTC [1] LOG: background worker "logical replication launcher" (PID 67) exited with exit code 1
db_1 | 2018-11-06 17:46:15.928 UTC [62] LOG: shutting down
db_1 | 2018-11-06 17:46:15.944 UTC [1] LOG: database system is shut down
db_1 | 2018-11-06 17:46:19.284 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2018-11-06 17:46:19.284 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2018-11-06 17:46:19.288 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2018-11-06 17:46:19.309 UTC [24] LOG: database system was shut down at 2018-11-06 17:46:15 UTC
db_1 | 2018-11-06 17:46:19.316 UTC [1] LOG: database system is ready to accept connections
web_1 | npm ERR! path /app/package.json
web_1 | npm ERR! code ENOENT
web_1 | npm ERR! errno -2
web_1 | npm ERR! syscall open
web_1 | npm ERR! enoent ENOENT: no such file or directory, open '/app/package.json'
web_1 | npm ERR! enoent This is related to npm not being able to find a file.
web_1 | npm ERR! enoent
web_1 |
web_1 | npm ERR! A complete log of this run can be found in:
web_1 | npm ERR! /root/.npm/_logs/2018-11-06T17_47_14_825Z-debug.log
addidas_web_1 exited with code 254
You are not using your Docker image in your docker-compose.yml.
You should point to your Dockerfile:
web:
  build: ./path/to/Dockerfile
There are also some mistakes in your configuration. You should put the containers (your web server and the database) on the same network so that the database can be reached from the web server.
networks:
  mynetwork:
    driver: bridge
web:
  build: ./path/to/Dockerfile
  networks:
    - mynetwork
  links:
    - db
  environment:
    SEQ_DB: addidas
    SEQ_USER: sdfsdf
    SEQ_PW: sdfsdfs
    PORT: 4242
    DATABASE_URL: postgres://sdfsdf:sdfsdfs@db:5432/addidas
db:
  image: postgres
  ports:
    - "5432:5432"
  networks:
    - mynetwork
  environment:
    POSTGRES_USER: sdfsdf
    POSTGRES_PASSWORD: sdfsdfs
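As a quick sanity check (my suggestion, not part of the original answer), you can verify from the host that the web container resolves the db service name on the shared network; this assumes the web image is Debian-based so getent is available:
# prints the db container's IP if service-name DNS resolution works
docker-compose exec web getent hosts db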

Node Postgres Docker

I am trying to set up Docker with my Node.js app, which uses Sequelize to connect to Postgres.
const sequelize = new Sequelize(
  process.env.DB_NAME,
  process.env.DB_USER,
  process.env.DB_PASS,
  {
    host: process.env.DB_HOST,
    port: process.env.DB_PORT,
    dialect: 'postgres',
  },
);
In my .env file I declare
DB_HOST=postgres (which is the name of the service declared in the docker-compose.yml) and DB_PORT=5432, among all the other environment variables.
My Dockerfile looks as follows:
FROM node:8.6.0
# Working directory for application
WORKDIR /usr/src/app
EXPOSE 8080
COPY . /usr/src/app
# In this file I create a user and a DB and give him the privileges
ADD init.sql /docker-entrypoint-initdb.d/
RUN npm install
And my docker-compose.yml looks as follows:
version: "2"
services:
postgres:
image: "postgres:9.4"
restart: always
ports:
- "5432:5432"
env_file:
- .env
node:
build: .
ports:
- "8080:8080"
depends_on:
- postgres
command: ["npm", "start"]
When I run docker-compose up, I get an error that Sequelize is not able to connect to the DB:
Unhandled rejection SequelizeConnectionRefusedError: connect ECONNREFUSED 172.18.0.2:5431
Can someone help me with this error?
All the docker logs:
WARNING: Image for service node was built because it did not already exist. To rebuild this image you must use docker-compose build or docker-compose up --build.
Creating graphqlpostgrestemplate_postgres_1 ...
Creating graphqlpostgrestemplate_postgres_1 ... done
Creating graphqlpostgrestemplate_node_1 ...
Creating graphqlpostgrestemplate_node_1 ... done
Attaching to graphqlpostgrestemplate_postgres_1, graphqlpostgrestemplate_node_1
postgres_1 | The files belonging to this database system will be owned by user "postgres".
postgres_1 | This user must also own the server process.
postgres_1 |
postgres_1 | The database cluster will be initialized with locale "en_US.utf8".
postgres_1 | The default database encoding has accordingly been set to "UTF8".
postgres_1 | The default text search configuration will be set to "english".
postgres_1 |
postgres_1 | Data page checksums are disabled.
postgres_1 |
postgres_1 | fixing permissions on existing directory /var/lib/postgresql/data ... ok
postgres_1 | creating subdirectories ... ok
postgres_1 | selecting default max_connections ... 100
postgres_1 | selecting default shared_buffers ... 128MB
postgres_1 | selecting dynamic shared memory implementation ... posix
postgres_1 | creating configuration files ... ok
postgres_1 | creating template1 database in /var/lib/postgresql/data/base/1 ... ok
postgres_1 | initializing pg_authid ... ok
postgres_1 | initializing dependencies ... ok
postgres_1 | creating system views ... ok
node_1 | npm info it worked if it ends with ok
node_1 | npm info using npm@5.3.0
node_1 | npm info using node@v8.6.0
postgres_1 | loading system objects' descriptions ... ok
node_1 | npm info lifecycle graphql-postgres-template@1.0.0~prestart: graphql-postgres-template@1.0.0
node_1 | npm info lifecycle graphql-postgres-template@1.0.0~start: graphql-postgres-template@1.0.0
node_1 |
node_1 | > graphql-postgres-template@1.0.0 start /usr/src/app
node_1 | > nodemon --exec babel-node index.js
node_1 |
postgres_1 | creating collations ... ok
postgres_1 | creating conversions ... ok
postgres_1 | creating dictionaries ... ok
postgres_1 | setting privileges on built-in objects ... ok
postgres_1 | creating information schema ... ok
postgres_1 | loading PL/pgSQL server-side language ... ok
node_1 | [nodemon] 1.12.1
node_1 | [nodemon] to restart at any time, enter rs
node_1 | [nodemon] watching: .
node_1 | [nodemon] starting babel-node index.js
postgres_1 | vacuuming database template1 ... ok
postgres_1 | copying template1 to template0 ... ok
postgres_1 | copying template1 to postgres ... ok
postgres_1 | syncing data to disk ... ok
postgres_1 |
postgres_1 | Success. You can now start the database server using:
postgres_1 |
postgres_1 | postgres -D /var/lib/postgresql/data
postgres_1 | or
postgres_1 | pg_ctl -D /var/lib/postgresql/data -l logfile start
postgres_1 |
postgres_1 |
postgres_1 | WARNING: enabling "trust" authentication for local connections
postgres_1 | You can change this by editing pg_hba.conf or using the option -A, or
postgres_1 | --auth-local and --auth-host, the next time you run initdb.
postgres_1 | ****************************************************
postgres_1 | WARNING: No password has been set for the database.
postgres_1 | This will allow anyone with access to the
postgres_1 | Postgres port to access your database. In
postgres_1 | Docker's default configuration, this is
postgres_1 | effectively any other container on the same
postgres_1 | system.
postgres_1 |
postgres_1 | Use "-e POSTGRES_PASSWORD=password" to set
postgres_1 | it in "docker run".
postgres_1 | ****************************************************
postgres_1 | waiting for server to start....LOG: could not bind IPv6 socket: Cannot assign requested address
postgres_1 | HINT: Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
postgres_1 | LOG: database system was shut down at 2017-10-10 12:17:15 UTC
postgres_1 | LOG: MultiXact member wraparound protections are now enabled
postgres_1 | LOG: database system is ready to accept connections
postgres_1 | LOG: autovacuum launcher started
postgres_1 | done
postgres_1 | server started
postgres_1 | ALTER ROLE
postgres_1 |
postgres_1 |
postgres_1 | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
postgres_1 |
postgres_1 | waiting for server to shut down....LOG: received fast shutdown request
postgres_1 | LOG: aborting any active transactions
postgres_1 | LOG: autovacuum launcher shutting down
postgres_1 | LOG: shutting down
postgres_1 | LOG: database system is shut down
node_1 | Tue, 10 Oct 2017 12:17:16 GMT sequelize deprecated String based operators are now deprecated. Please use Symbol based operators for better security, read more at http://docs.sequelizejs.com/manual/tutorial/querying.html#operators at node_modules/sequelize/lib/sequelize.js:236:13
node_1 | WARNING: No configurations found in configuration directory:/usr/src/app/config
node_1 | WARNING: To disable this warning set SUPPRESS_NO_CONFIG_WARNING in the environment.
node_1 | Tue, 10 Oct 2017 12:17:17 GMT body-parser deprecated undefined extended: provide extended option at index.js:53:30
node_1 | Unhandled rejection SequelizeConnectionRefusedError: connect ECONNREFUSED 172.18.0.2:5431
node_1 | at connection.connect.err (/usr/src/app/node_modules/sequelize/lib/dialects/postgres/connection-manager.js:96:24)
node_1 | at Connection.connectingErrorHandler (/usr/src/app/node_modules/pg/lib/client.js:123:14)
node_1 | at emitOne (events.js:115:13)
node_1 | at Connection.emit (events.js:210:7)
node_1 | at Socket. (/usr/src/app/node_modules/pg/lib/connection.js:71:10)
node_1 | at emitOne (events.js:115:13)
node_1 | at Socket.emit (events.js:210:7)
node_1 | at emitErrorNT (internal/streams/destroy.js:64:8)
node_1 | at _combinedTickCallback (internal/process/next_tick.js:138:11)
node_1 | at process._tickDomainCallback (internal/process/next_tick.js:218:9)
node_1 | [nodemon] clean exit - waiting for changes before restart
postgres_1 | done
postgres_1 | server stopped
postgres_1 |
postgres_1 | PostgreSQL init process complete; ready for start up.
postgres_1 |
postgres_1 | LOG: database system was shut down at 2017-10-10 12:17:16 UTC
postgres_1 | LOG: MultiXact member wraparound protections are now enabled
postgres_1 | LOG: database system is ready to accept connections
postgres_1 | LOG: autovacuum launcher started
In your docker-compose, add the links configuration option to your node service, pointing to the postgres service, something like this:
node:
  links:
    - postgres
Then you can connect to the Postgres DB using the service name postgres as the hostname.
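Note also that depends_on only waits for the postgres container to start, not for the server inside it to accept connections, and the error message shows port 5431 while the compose file publishes 5432, so DB_PORT in your .env is worth double-checking. As a hedged sketch (my addition; the attempt count and delay are arbitrary), a simple retry loop around authenticate() covers the startup race, reusing the sequelize instance from the question:
// retry until Postgres is ready; depends_on does not wait for readiness
async function connectWithRetry(retries = 10, delayMs = 3000) {
  for (let i = 1; i <= retries; i += 1) {
    try {
      await sequelize.authenticate();
      console.log('Database connection established');
      return;
    } catch (err) {
      console.log(`Attempt ${i}/${retries} failed, retrying in ${delayMs} ms`);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw new Error('Could not connect to the database');
}

connectWithRetry();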
