Docker mount works only on localhost (Linux)

I want to create a mount with Docker. I have a .NET app where I want to mount a certs folder containing a self-signed SSL certificate. On localhost it works well, but on my VPS (where I have root permissions) it doesn't.
My operating system on the remote machine:
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 8.6 (jessie)
Release: 8.6
Codename: jessie
A simple Dockerfile in the app:
FROM microsoft/dotnet:latest
WORKDIR /app
COPY out .
EXPOSE 5000/tcp 5001/tcp
ENTRYPOINT ["dotnet", "myapp.dll"]
My docker-compose file:
version: '2'
services:
#  mssql:
#    image: iws/mssql
#    container_name: mssql-server
#    ports:
#      - "1433:1433"
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./certs:/etc/nginx/certs
  myapp:
    image: iws/cestujnakole
    container_name: myapp
    environment:
      - VIRTUAL_HOST=cestujnakole.cz,www.cestujnakole.cz
    volumes:
      - ./certs:/app/certs
      - ../myapp/wwwroot:/app/wwwroot
I have the following structure:
On my PC:
- /Users/me/projects
  - nginx-proxy
    - docker-compose.yml
    - certs
      - myapp.pfs
      - myapp.key
      - etc.
  - myapp
    - out
    - Dockerfile
And on my VPS:
- /var/www/projects
  - nginx-proxy
    - docker-compose.yml
    - certs
      - myapp.pfs
      - myapp.key
      - etc.
  - myapp
    - etc.
In my app I need to read the certs/myapp.pfs file. When the file doesn't exist, the application crashes.
I load the finished image onto the VPS (docker save/load).
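For context, the transfer presumably looks something like this (a sketch; the scp target is a placeholder, and note that docker save only packages the image layers, so anything bind-mounted, such as the certs folder, has to be copied to the VPS separately):
# on the dev machine
$ docker save iws/cestujnakole | gzip > cestujnakole.tar.gz
$ scp cestujnakole.tar.gz root@vps:/tmp/
# on the VPS
$ gunzip -c /tmp/cestujnakole.tar.gz | docker load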
On localhost everything works fine after running docker-compose up from the nginx-proxy folder, but on the remote server I get this:
docker-compose up --force-recreate cestujnakole
Recreating cestujnakole
Attaching to cestujnakole
cestujnakole | Running Environment: Production
cestujnakole |
cestujnakole | Unhandled Exception: Interop+Crypto+OpenSslCryptographicException: error:2006D080:BIO routines:BIO_new_file:no such file
cestujnakole | at Interop.Crypto.CheckValidOpenSslHandle(SafeHandle handle)
cestujnakole | at Internal.Cryptography.Pal.CertificatePal.FromFile(String fileName, String password, X509KeyStorageFlags keyStorageFlags)
cestujnakole | at System.Security.Cryptography.X509Certificates.X509Certificate..ctor(String fileName, String password, X509KeyStorageFlags keyStorageFlags)
cestujnakole | at Microsoft.AspNetCore.Hosting.KestrelServerOptionsHttpsExtensions.UseHttps(KestrelServerOptions options, String fileName, String password)
cestujnakole | at Microsoft.Extensions.Options.OptionsCache`1.CreateOptions()
cestujnakole | at System.Threading.LazyInitializer.EnsureInitializedCore[T](T& target, Boolean& initialized, Object& syncLock, Func`1 valueFactory)
cestujnakole | at Microsoft.AspNetCore.Server.Kestrel.KestrelServer..ctor(IOptions`1 options, IApplicationLifetime applicationLifetime, ILoggerFactory loggerFactory)
cestujnakole | --- End of stack trace from previous location where exception was thrown ---
cestujnakole | at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
cestujnakole | at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteRuntimeResolver.VisitConstructor(ConstructorCallSite constructorCallSite, ServiceProvider provider)
cestujnakole | at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteRuntimeResolver.VisitScoped(ScopedCallSite scopedCallSite, ServiceProvider provider)
cestujnakole | at Microsoft.Extensions.DependencyInjection.ServiceProvider.<>c__DisplayClass16_0.<RealizeService>b__0(ServiceProvider provider)
cestujnakole | at Microsoft.Extensions.DependencyInjection.ServiceProviderServiceExtensions.GetRequiredService(IServiceProvider provider, Type serviceType)
cestujnakole | at Microsoft.Extensions.DependencyInjection.ServiceProviderServiceExtensions.GetRequiredService[T](IServiceProvider provider)
cestujnakole | at Microsoft.AspNetCore.Hosting.Internal.WebHost.EnsureServer()
cestujnakole | at Microsoft.AspNetCore.Hosting.Internal.WebHost.BuildApplication()
cestujnakole | at Microsoft.AspNetCore.Hosting.WebHostBuilder.Build()
cestujnakole | at CykloWeb.Program.Main(String[] args) in /Users/petrtomasek/projects/CykloWeb/Program.cs:line 22
cestujnakole exited with code 139
"No such file" in the app means the cert file doesn't exist on the remote machine.
Interestingly, when I delete the folders to be mounted from the remote machine, they are recreated after compose up.
Here is the ls -l output for my remote structure:
ls -l
total 16
drwxrwxrwx 4 root root 4096 Jan 25 17:58 myapp
drwxr-xr-x 3 root root 4096 Jan 20 13:02 mysql
drwxr-xr-x 4 root root 4096 Jan 25 16:50 nginx-proxy
And inside nginx-proxy (my Docker startup folder):
drwxrwxrwx 2 root root 4096 Jan 25 18:03 certs
-rw-r--r-- 1 root root 883 Jan 25 18:03 docker-compose.yml
I tried chmod -R 775 on the nginx-proxy folder, and then 777, but nothing helped.
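To narrow down whether the container sees the mount at all, it can help to compare what Docker resolved ./certs to with what the app sees (a diagnostic sketch, using the container and service names from the compose file above; --entrypoint bypasses the dotnet entrypoint):
# which host path did Docker actually bind? (works even on an exited container)
$ docker inspect --format '{{ json .Mounts }}' myapp
# list the mounted folder from a throwaway container
$ docker-compose run --rm --entrypoint ls myapp -l /app/certs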
Where could the problem be?
Thank you for your time.

Related

Incorrect permissions for file with docker compose volume? 13: Permission denied

I have the following docker_compose.yaml:
version: "3.8"
services:
reverse-proxy:
image: nginx:1.17.10
container_name: reverse_proxy
volumes:
- ../nginx/nginx.conf:/etc/nginx/nginx.conf
ports:
- "8050:8050"
- "8051:8051"
webapp:
image: my-site
command: --port 8050 8051 --debug yes
volumes:
- /home/user/data:/data
ports:
- "8050:8050"
- "8051:8051"
depends_on:
- reverse-proxy
When I run via docker compose I get the following error:
$ sudo docker-compose -f /home/user/docker_compose.yaml up
...
reverse_proxy | 2022/03/09 00:49:19 [emerg] 1#1: open() "/etc/nginx/nginx.conf" failed (13: Permission denied)
reverse_proxy | nginx: [emerg] open() "/etc/nginx/nginx.conf" failed (13: Permission denied)
reverse_proxy exited with code 1
So to investigate I re-ran just the nginx container:
$ sudo docker run -v ../nginx/nginx.conf:/etc/nginx/nginx.conf -t docker.io/nginx tail -f /dev/null
ssh'd in and I see:
root@d8e84f89fcad:/# ls -la /etc/nginx/
ls: cannot access '/etc/nginx/nginx.conf': Permission denied
total 20
drwxr-xr-x. 3 root root 132 Mar 1 14:00 .
drwxr-xr-x. 1 root root 66 Mar 9 00:54 ..
drwxr-xr-x. 2 root root 26 Mar 1 14:00 conf.d
-rw-r--r--. 1 root root 1007 Jan 25 15:03 fastcgi_params
-rw-r--r--. 1 root root 5349 Jan 25 15:03 mime.types
lrwxrwxrwx. 1 root root 22 Jan 25 15:13 modules -> /usr/lib/nginx/modules
-?????????? ? ? ? ? ? nginx.conf
-rw-r--r--. 1 root root 636 Jan 25 15:03 scgi_params
-rw-r--r--. 1 root root 664 Jan 25 15:03 uwsgi_params
I consulted the following Q&A and others, and they seem to suggest just restarting the Docker service; I did, and I still get ? permissions upon re-running.
I assume this is what's causing the permission error? If so, how can I set the correct permissions on this nginx config file? Is this really a volume permission issue?
Versions:
Docker version 1.13.1, build 7d71120/1.13.1
docker-compose version 1.29.2, build 5becea4c
CentOS 7
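Since this is CentOS, it may be worth checking whether SELinux is enforcing and what label the host file carries before blaming the mount itself (a diagnostic sketch):
# is SELinux in enforcing mode?
$ getenforce
# what SELinux label does the host file carry?
$ ls -Z ../nginx/nginx.conf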
I think it was an SELinux thing; appending :z to the volume mount fixed it:
volumes:
  - ../nginx/nginx.conf:/etc/nginx/nginx.conf:z
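For reference, the same relabeling can be tried with a one-off docker run; :z applies a shared SELinux label (usable by several containers), while :Z applies a private one. A sketch, assuming the config sits at an absolute path, since docker run does not resolve relative host paths:
$ docker run --rm -v /home/user/nginx/nginx.conf:/etc/nginx/nginx.conf:z nginx:1.17.10 nginx -t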

Docker: Permission denied to local MySQL volume

I'm new to Docker and I don't know Linux well. I'm trying to build my own environment for local development with Docker, using the docker-compose utility. I want to store MySQL data in a local volume. When I run the docker-compose build and docker-compose up -d commands for the first time, there are no errors; data from the MySQL container goes into the local folder. Everything works well except one thing: when I change my docker-compose.yml file and rebuild the containers, I get an error:
vo#vo-ThinkPad-Edge-E330:~/www/test$ docker-compose build
mysql uses an image, skipping
nginx uses an image, skipping
Building app
Traceback (most recent call last):
  File "bin/docker-compose", line 3, in <module>
  File "compose/cli/main.py", line 67, in main
  File "compose/cli/main.py", line 126, in perform_command
  File "compose/cli/main.py", line 302, in build
  File "compose/project.py", line 468, in build
  File "compose/project.py", line 450, in build_service
  File "compose/service.py", line 1125, in build
  File "docker/api/build.py", line 160, in build
  File "docker/utils/build.py", line 30, in tar
  File "docker/utils/build.py", line 49, in exclude_paths
  File "docker/utils/build.py", line 214, in rec_walk
  File "docker/utils/build.py", line 214, in rec_walk
  File "docker/utils/build.py", line 214, in rec_walk
  [Previous line repeated 1 more time]
  File "docker/utils/build.py", line 184, in rec_walk
PermissionError: [Errno 13] Permission denied: '/home/vo/www/test/docker/mysql/dbdata/performance_schema'
[301838] Failed to execute script docker-compose
I found out that the owner of the folder is systemd-coredump from the root group. So I have two options:
1. Run sudo docker-compose build.
2. Delete the /home/vo/www/test/docker/mysql/dbdata folder with sudo permissions and run docker-compose build again.
So, my question: is this how it should be, or is it possible to solve the permissions problem?
My project structure:
/
├── docker
│   ├── mysql
│   │   ├── conf
│   │   │   └── my.cnf
│   │   └── dbdata
│   ├── nginx
│   │   └── conf
│   │       └── nginx.conf
│   └── php
│       ├── conf
│       │   └── local.ini
│       ├── config
│       │   └── local.ini
│       └── Dockerfile
├── docker-compose.yml
└── src
My docker-compose.yml:
version: "3.7"
services:
#PHP Service
app:
build:
args:
user: laravel
uid: 1000
context: ./
dockerfile: ./docker/php/Dockerfile
image: laravel-image
container_name: laravel
restart: unless-stopped
tty: true
environment:
SERVICE_NAME: app
SERVICE_TAGS: dev
working_dir: /var/www/
volumes:
- ./src:/var/www
- ./docker/php/config/local.ini:/usr/local/etc/php/conf.d/local.ini
networks:
- laravel
#MySQL Service
mysql:
image: mysql:5.7
container_name: mysql
restart: unless-stopped
tty: true
ports:
- "3306:3306"
environment:
MYSQL_DATABASE: laravel
MYSQL_ROOT_PASSWORD: secret
MYSQL_PASSWORD: secret
MYSQL_USER: laravel
SERVICE_TAGS: dev
SERVICE_NAME: mysql
volumes:
- ./docker/mysql/dbdata:/var/lib/mysql
- ./docker/mysql/conf/my.cnf:/etc/mysql/my.cnf
networks:
- laravel
#Nginx Service
nginx:
image: nginx:1.17-alpine
container_name: nginx
restart: unless-stopped
tty: true
ports:
- "80:80"
- "443:443"
environment:
SERVICE_NAME: nginx
SERVICE_TAGS: dev
volumes:
- ./src:/var/www
- ./docker/nginx/conf:/etc/nginx/conf.d
networks:
- laravel
#Networks
networks:
laravel:
driver: bridge
OK, I found a trick. In my docker-compose.yml, in the service's volumes section, I have to use a named volume instead of a path: for example, 'mysqldbvolume' instead of './docker/mysql/dbdata'. Then I have to define the named volume under the top-level volumes key:
services:
  #MySQL Service
  mysql:
    image: mysql:5.7
    ...
    volumes:
      - mysqldbvolume:/var/lib/mysql
      - ./docker/mysql/conf/my.cnf:/etc/mysql/my.cnf
    ...
  ...
# Volumes
volumes:
  mysqldbvolume:
    driver: local
So, where is my volume now? To see the list of my volumes, I run docker volume ls:
DRIVER    VOLUME NAME
local     test_mysqldbvolume
local     test_postgresdbvolume
Inspect volume - docker volume inspect test_mysqldbvolume:
[
    {
        "CreatedAt": "2020-12-17T21:54:53+02:00",
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "test",
            "com.docker.compose.version": "1.27.4",
            "com.docker.compose.volume": "mysqldbvolume"
        },
        "Mountpoint": "/var/lib/docker/volumes/test_mysqldbvolume/_data",
        "Name": "test_mysqldbvolume",
        "Options": null,
        "Scope": "local"
    }
]
So, path is "Mountpoint": "/var/lib/docker/volumes/test_mysqldbvolume/_data"
Running ls -la /var/lib/docker/volumes/test_mysqldbvolume/_data as a regular user says access is denied, but if I run sudo ls -la /var/lib/docker/volumes/test_mysqldbvolume/_data I see my volume data:
drwxrwxrwt 6 systemd-coredump systemd-coredump 4096 дек 17 21:54 .
drwxr-xr-x 3 root root 4096 дек 17 21:42 ..
-rw-r----- 1 systemd-coredump systemd-coredump 56 дек 17 21:42 auto.cnf
-rw------- 1 systemd-coredump systemd-coredump 1676 дек 17 21:42 ca-key.pem
-rw-r--r-- 1 systemd-coredump systemd-coredump 1112 дек 17 21:42 ca.pem
-rw-r--r-- 1 systemd-coredump systemd-coredump 1112 дек 17 21:42 client-cert.pem
-rw------- 1 systemd-coredump systemd-coredump 1680 дек 17 21:42 client-key.pem
-rw-r----- 1 systemd-coredump systemd-coredump 2 дек 17 21:54 ed50eca9e01e.pid
-rw-r----- 1 systemd-coredump systemd-coredump 6093953 дек 17 21:54 general.log
-rw-r----- 1 systemd-coredump systemd-coredump 445 дек 17 21:49 ib_buffer_pool
-rw-r----- 1 systemd-coredump systemd-coredump 79691776 дек 17 21:54 ibdata1
-rw-r----- 1 systemd-coredump systemd-coredump 50331648 дек 17 21:54 ib_logfile0
-rw-r----- 1 systemd-coredump systemd-coredump 50331648 дек 17 21:42 ib_logfile1
-rw-r----- 1 systemd-coredump systemd-coredump 12582912 дек 17 21:54 ibtmp1
drwxr-x--- 2 systemd-coredump systemd-coredump 4096 дек 17 21:47 laravel
drwxr-x--- 2 systemd-coredump systemd-coredump 4096 дек 17 21:42 mysql
drwxr-x--- 2 systemd-coredump systemd-coredump 4096 дек 17 21:42 performance_schema
-rw------- 1 systemd-coredump systemd-coredump 1680 дек 17 21:42 private_key.pem
-rw-r--r-- 1 systemd-coredump systemd-coredump 452 дек 17 21:42 public_key.pem
-rw-r--r-- 1 systemd-coredump systemd-coredump 1112 дек 17 21:42 server-cert.pem
-rw------- 1 systemd-coredump systemd-coredump 1680 дек 17 21:42 server-key.pem
drwxr-x--- 2 systemd-coredump systemd-coredump 12288 дек 17 21:42 sys
Most importantly, the permission error is gone.
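As an aside, the systemd-coredump owner in the original listing is almost certainly just the host's name for the numeric UID that mysqld uses inside the container (999 in the official images). This can be checked directly (a sketch):
# UID/GID the mysql image runs mysqld as
$ docker run --rm mysql:5.7 id mysql
# which host account happens to own that UID
$ getent passwd 999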
I had this issue too, but for a different reason than the majority of answers here.
I have a dual-boot setup with a second disk accessible to both Linux and Windows.
I had my Docker images and code on the second disk, an NTFS-3G drive. I tried all the workarounds with chmod and chown, but it just would not work.
When the penny dropped that NTFS-3G was causing the issue, I moved Docker back to the default configuration, with everything on the system disk, including my project's code.
Images and containers were once again in the default location /var/lib/docker, and I moved my code to ~/code/project.
Once I did this, all permission issues went away.

Permission denied: '/var/lib/pgadmin/sessions' in Docker

I have the same problem described in this post, but inside a Docker container.
I don't really know where my pgAdmin config file resides, so I can't edit its default path. How do I go about fixing this issue? Please be as detailed as possible, because I don't know Docker well.
Here is an excerpt from the verbatim output of the docker-compose up command:
php-worker_1 | 2020-11-11 05:50:13,700 INFO spawned: 'laravel-worker_03' with pid 67
pgadmin_1 | [2020-11-11 05:50:13 +0000] [223] [INFO] Worker exiting (pid: 223)
pgadmin_1 | WARNING: Failed to set ACL on the directory containing the configuration database:
pgadmin_1 | [Errno 1] Operation not permitted: '/var/lib/pgadmin'
pgadmin_1 | HINT : You may need to manually set the permissions on
pgadmin_1 | /var/lib/pgadmin to allow pgadmin to write to it.
pgadmin_1 | ERROR : Failed to create the directory /var/lib/pgadmin/sessions:
pgadmin_1 | [Errno 13] Permission denied: '/var/lib/pgadmin/sessions'
pgadmin_1 | HINT : Create the directory /var/lib/pgadmin/sessions, ensure it is writeable by
pgadmin_1 | 'pgadmin', and try again, or, create a config_local.py file
pgadmin_1 | and override the SESSION_DB_PATH setting per
pgadmin_1 | https://www.pgadmin.org/docs/pgadmin4/4.27/config_py.html
pgadmin_1 | /usr/local/lib/python3.8/os.py:1023: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used
pgadmin_1 | return io.open(fd, *args, **kwargs)
pgadmin_1 | [2020-11-11 05:50:13 +0000] [224] [INFO] Booting worker with pid: 224
my docker-compose.yml:
### pgAdmin ##############################################
pgadmin:
  image: dpage/pgadmin4:latest
  environment:
    - "PGADMIN_DEFAULT_EMAIL=${PGADMIN_DEFAULT_EMAIL}"
    - "PGADMIN_DEFAULT_PASSWORD=${PGADMIN_DEFAULT_PASSWORD}"
  ports:
    - "${PGADMIN_PORT}:80"
  volumes:
    - ${DATA_PATH_HOST}/pgadmin:/var/lib/pgadmin
  depends_on:
    - postgres
  networks:
    - frontend
    - backend
OK, it looks like the problem appears when you try to run the pgadmin service, in this part:
### pgAdmin ##############################################
pgadmin:
  image: dpage/pgadmin4:latest
  environment:
    - "PGADMIN_DEFAULT_EMAIL=${PGADMIN_DEFAULT_EMAIL}"
    - "PGADMIN_DEFAULT_PASSWORD=${PGADMIN_DEFAULT_PASSWORD}"
  ports:
    - "${PGADMIN_PORT}:80"
  volumes:
    - ${DATA_PATH_HOST}/pgadmin:/var/lib/pgadmin
  depends_on:
    - postgres
  networks:
    - frontend
    - backend
As you can see, you are trying to mount the local directory ${DATA_PATH_HOST}/pgadmin into the container's /var/lib/pgadmin:
volumes:
  - ${DATA_PATH_HOST}/pgadmin:/var/lib/pgadmin
As you can read in this article, your local ${DATA_PATH_HOST}/pgadmin directory's UID and GID must be 5050. Are they 5050?
You can check it by running
ls -l ${DATA_PATH_HOST}
Output will be like
drwxrwxr-x 1 5050 5050 12693 Nov 11 14:56 pgadmin
or
drwxrwxr-x 1 SOME_USER SOME_GROUP 12693 Nov 11 14:56 pgadmin
If SOME_USER's and SOME_GROUP's IDs are 5050, it is okay; a literal 5050 is also okay. If not, fix the ownership as described in the article above:
sudo chown -R 5050:5050 ${DATA_PATH_HOST}/pgadmin
You also need to check that the environment variable exists:
# run this as the same user you run docker-compose as
echo ${DATA_PATH_HOST}
If the output is empty, you need to set ${DATA_PATH_HOST} or allow Docker to read variables from a file. There are many ways to do it.
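Putting the two checks together, a minimal sequence might look like this (a sketch, assuming DATA_PATH_HOST is set in the .env file next to docker-compose.yml):
# confirm the variable is visible to docker-compose
$ echo ${DATA_PATH_HOST}
# create the host directory and hand it over to pgAdmin's UID/GID
$ mkdir -p ${DATA_PATH_HOST}/pgadmin
$ sudo chown -R 5050:5050 ${DATA_PATH_HOST}/pgadmin
$ docker-compose up -d pgadmin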
If you're on Windows, add this line to your docker-compose.yml. It gives the container access to your local folder:
version: "3.9"
services:
  postgres:
    user: root    # <- this one
    container_name: postgres_container
    image: postgres:14.2
When running in a Kubernetes environment, I had to add these values:
spec:
  containers:
    - name: pgadmin
      image: dpage/pgadmin4:5.4
      securityContext:
        runAsUser: 0
        runAsGroup: 0

docker-compose gives mongo error about not owning the data folder

Based on the following files:
Dockerfile
FROM node:8-alpine
RUN apk add --no-cache ffmpeg
RUN apk add --no-cache git
RUN apk add --no-cache tar
WORKDIR /app/
COPY package*.json /app/
COPY bower.json /app/
RUN npm i
RUN npm i -g bower
RUN bower install --allow-root
COPY . .
EXPOSE 8080
CMD ["npm", "start"]
docker-compose.yml
version: '3.1'
services:
  node:
    container_name: nodetube
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "49161:3000"
    volumes:
      - .:/app/
      - /app/node_modules
      - ./upload:/app/upload
      - ./uploads:/app/uploads
    environment:
      - REDIS_HOST=redis
      - MONGODB_DOCKER_URI=mongodb://nodetube-mongo:27017/nodetube
    depends_on:
      - redis
      - mongo
    command: npm start
    networks:
      - nodetube-network
  mongo:
    container_name: nodetube-mongo
    image: mongo:3.6
    volumes:
      - ./data/db:/data/db
    ports:
      - "27011:27017"
    networks:
      - nodetube-network
  redis:
    container_name: nodetube-redis
    image: redis
    networks:
      - nodetube-network
networks:
  nodetube-network:
    driver: bridge
.dockerignore
.*
docker-compose.yml
*.md
node_modules
npm-debug.log
Running $ docker-compose up --build gives me an error:
nodetube-mongo | 2020-01-06T19:33:08.815+0000 I STORAGE [initandlisten] exception in initAndListen: IllegalOperation: Attempted to create a lock file on a read-only directory: /data/db, terminating
I'm not a Docker expert; how can I get this to work? Thanks.
On my local machine, if I run $ ls -al /data I receive:
I receive:
total 0
drwxr-xr-x 4 root wheel 128 May 13 2018 .
drwxr-xr-x 33 root wheel 1056 Apr 11 2019 ..
drwxrwxrwx 432 anthony wheel 13824 Jan 6 12:21 db
drwxr-xr-x 385 anthony wheel 12320 May 13 2018 db2
anthony at Anthonys-MacBook-Pro in /data/db
$ ls -al
total 17565016
drwxrwxrwx 432 anthony wheel 13824 Jan 6 12:21 .
drwxr-xr-x 4 root wheel 128 May 13 2018 ..
-rw--w--w- 1 anthony wheel 48 May 13 2018 WiredTiger
-rw--w--w- 1 anthony wheel 21 May 13 2018 WiredTiger.lock
-rw--w--w- 1 anthony wheel 1088 Jan 6 12:21 WiredTiger.turtle
-rw--w--w- 1 anthony wheel 1216512 Jan 6 12:21 WiredTiger.wt
-rw--w--w- 1 anthony wheel 4096 Jan 6 12:20 WiredTigerLAS.wt
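To dig further, the ownership above can be compared with the user mongod actually runs as inside the container; the official mongo images run mongod as a dedicated mongodb user (a diagnostic sketch, run from the project folder):
# which UID/GID does the image use for mongod?
$ docker run --rm mongo:3.6 id mongodb
# how does the bind mount look from inside the container? (-n shows numeric IDs)
$ docker run --rm -v "$PWD/data/db:/data/db" mongo:3.6 ls -ldn /data/db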

Setting up Docker with Knex.js and PostgreSQL

Today I've been trying to set up Docker for my GraphQL API that runs with Knex.js, PostgreSQL and Node.js. The problem I'm facing is that Knex.js times out when trying to access data in my database. I believe it's due to my incorrect way of trying to link them together.
I've really tried to do this on my own, but I can't figure it out. I'd like to walk through each file that plays a part in making this work, so (hopefully) someone can spot my mistake(s) and help me.
knexfile.js
In my knexfile I have a connection key which used to look like this:
connection: 'postgres://localhost/devblog'
This worked just fine, but it won't work if I want to use Docker. So I modified it a bit and ended up with this:
connection: {
  host: 'db' || 'localhost',
  port: process.env.DB_PORT || 5432,
  user: process.env.DB_USER || 'postgres',
  password: process.env.DB_PASSWORD || undefined,
  database: process.env.DATABASE // DATABASE = devblog
}
I've noticed that something is wrong with host, since it always times out when I have anything other than localhost there (in this case, db).
Dockerfile
My Dockerfile looks like this:
FROM node:9
WORKDIR /app
COPY package-lock.json /app
COPY package.json /app
RUN npm install
COPY dist /app
COPY wait-for-it.sh /app
CMD node index.js
EXPOSE 3010
docker-compose.yml
And this file looks like this:
version: "3"
services:
redis:
image: redis
networks:
- webnet
db:
image: postgres
networks:
- webnet
environment:
POSTGRES_PASSWORD: password
POSTGRES_USER: martinnord
POSTGRES_DB: devblog
web:
image: node:8-alpine
command: "node /app/dist/index.js"
volumes:
- ".:/app"
working_dir: "/app"
depends_on:
- "db"
ports:
- "3010:3010"
environment:
DB_PASSWORD: password
DB_USER: martinnord
DB_NAME: devblog
DB_HOST: db
REDIS_HOST: redis
networks:
webnet:
When I try to run this with docker-compose up, I get the following output:
Starting backend_db_1 ... done
Starting backend_web_1 ... done
Attaching to backend_db_1, backend_redis_1, backend_web_1
redis_1 | 1:C 12 Feb 16:05:21.303 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1 | 1:C 12 Feb 16:05:21.303 # Redis version=4.0.8, bits=64, commit=00000000, modified=0, pid=1, just started
db_1 | 2018-02-12 16:05:21.337 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
redis_1 | 1:C 12 Feb 16:05:21.303 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1 | 1:M 12 Feb 16:05:21.311 * Running mode=standalone, port=6379.
db_1 | 2018-02-12 16:05:21.338 UTC [1] LOG: listening on IPv6 address "::", port 5432
redis_1 | 1:M 12 Feb 16:05:21.311 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_1 | 1:M 12 Feb 16:05:21.314 # Server initialized
redis_1 | 1:M 12 Feb 16:05:21.315 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
db_1 | 2018-02-12 16:05:21.348 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2018-02-12 16:05:21.367 UTC [20] LOG: database system was shut down at 2018-02-12 16:01:17 UTC
redis_1 | 1:M 12 Feb 16:05:21.317 * DB loaded from disk: 0.002 seconds
redis_1 | 1:M 12 Feb 16:05:21.317 * Ready to accept connections
db_1 | 2018-02-12 16:05:21.374 UTC [1] LOG: database system is ready to accept connections
web_1 | DB_HOST db
web_1 |
web_1 | App listening on 3010
web_1 | Env: undefined
But when I try to make a query with GraphQL I get:
"message": "Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?"
I really don't know why this is not working for me, and it's driving me nuts. If anyone could help me with this, I would be delighted. I have also added a link to my project below.
Thanks a lot for reading! Cheers.
LINK TO PROJECT: https://github.com/Martinnord/DevBlog/tree/master/backend
Updated docker-compose file:
version: "3"
services:
redis:
image: redis
networks:
- webnet
db:
image: postgres
networks:
- webnet
environment:
POSTGRES_PASSWORD: postgres
POSTGRES_USER: martinnord
POSTGRES_DB: devblog
ports:
- "15432:5432"
web:
image: devblog-server
ports:
- "3010:3010"
networks:
- webnet
environment:
DB_PASSWORD: password
DB_USER: martinnord
DB_NAME: devblog
DB_HOST: db
REDIS_HOST: redis
command: ["./wait-for-it.sh", "db:5432", "--", "node", "index.js"]
networks:
webnet:
Maybe like this:
version: "3"
services:
redis:
image: redis
db:
image: postgres
environment:
POSTGRES_PASSWORD: password
POSTGRES_USER: martinnord
POSTGRES_DB: devblog
ports:
- "15432:5432"
web:
image: node:8-alpine
command: "node /app/dist/index.js"
volumes:
- ".:/app"
working_dir: "/app"
depends_on:
- "db"
ports:
- "3010:3010"
links:
- "db"
- "redis"
environment:
DB_PASSWORD: password
DB_USER: martinnord
DB_NAME: devblog
DB_HOST: db
REDIS_HOST: redis
And you should be able to connect from the web app to Postgres with:
postgres://martinnord:password@db/devblog
or
connection: {
  host: process.env.DB_HOST,
  port: process.env.DB_PORT || 5432,
  user: process.env.DB_USER || 'postgres',
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME || 'postgres'
}
I also added a line that publishes the Postgres instance running in Docker on port 15432, so you can first try to connect to it directly from your host machine with:
psql postgres://martinnord:password@localhost:15432/devblog
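If the query still times out after this, it can help to rule out service discovery before digging into Knex pool settings (a quick check, assuming the service names above):
# from inside the web container, does the db hostname resolve and answer?
$ docker-compose exec web ping -c 1 db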
