I have the same problem described in this post, but inside a Docker container.
I don't really know where my pgAdmin configuration file resides in order to edit its default paths. How do I go about fixing this issue? Please be as detailed as possible, because I'm new to Docker.
Here is an excerpt of the verbatim output of the docker-compose up command:
php-worker_1 | 2020-11-11 05:50:13,700 INFO spawned: 'laravel-worker_03' with pid 67
pgadmin_1 | [2020-11-11 05:50:13 +0000] [223] [INFO] Worker exiting (pid: 223)
pgadmin_1 | WARNING: Failed to set ACL on the directory containing the configuration database:
pgadmin_1 | [Errno 1] Operation not permitted: '/var/lib/pgadmin'
pgadmin_1 | HINT : You may need to manually set the permissions on
pgadmin_1 | /var/lib/pgadmin to allow pgadmin to write to it.
pgadmin_1 | ERROR : Failed to create the directory /var/lib/pgadmin/sessions:
pgadmin_1 | [Errno 13] Permission denied: '/var/lib/pgadmin/sessions'
pgadmin_1 | HINT : Create the directory /var/lib/pgadmin/sessions, ensure it is writeable by
pgadmin_1 | 'pgadmin', and try again, or, create a config_local.py file
pgadmin_1 | and override the SESSION_DB_PATH setting per
pgadmin_1 | https://www.pgadmin.org/docs/pgadmin4/4.27/config_py.html
pgadmin_1 | /usr/local/lib/python3.8/os.py:1023: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used
pgadmin_1 | return io.open(fd, *args, **kwargs)
pgadmin_1 | [2020-11-11 05:50:13 +0000] [224] [INFO] Booting worker with pid: 224
my docker-compose.yml:
### pgAdmin ##############################################
pgadmin:
  image: dpage/pgadmin4:latest
  environment:
    - "PGADMIN_DEFAULT_EMAIL=${PGADMIN_DEFAULT_EMAIL}"
    - "PGADMIN_DEFAULT_PASSWORD=${PGADMIN_DEFAULT_PASSWORD}"
  ports:
    - "${PGADMIN_PORT}:80"
  volumes:
    - ${DATA_PATH_HOST}/pgadmin:/var/lib/pgadmin
  depends_on:
    - postgres
  networks:
    - frontend
    - backend
Okay, it looks like the problem appears when you try to run the pgadmin service.
This part:
### pgAdmin ##############################################
pgadmin:
  image: dpage/pgadmin4:latest
  environment:
    - "PGADMIN_DEFAULT_EMAIL=${PGADMIN_DEFAULT_EMAIL}"
    - "PGADMIN_DEFAULT_PASSWORD=${PGADMIN_DEFAULT_PASSWORD}"
  ports:
    - "${PGADMIN_PORT}:80"
  volumes:
    - ${DATA_PATH_HOST}/pgadmin:/var/lib/pgadmin
  depends_on:
    - postgres
  networks:
    - frontend
    - backend
As you can see, you are trying to mount the local directory ${DATA_PATH_HOST}/pgadmin into the container's /var/lib/pgadmin:
volumes:
  - ${DATA_PATH_HOST}/pgadmin:/var/lib/pgadmin
As you can read in this article, your local ${DATA_PATH_HOST}/pgadmin directory must be owned by UID and GID 5050. Is it?
You can check by running:
ls -l ${DATA_PATH_HOST}
The output will look something like:
drwxrwxr-x 1 5050 5050 12693 Nov 11 14:56 pgadmin
or
drwxrwxr-x 1 SOME_USER SOME_GROUP 12693 Nov 11 14:56 pgadmin
If SOME_USER's and SOME_GROUP's IDs are 5050, that's okay; a literal 5050 is also okay. If not, change the ownership as described in the article above:
sudo chown -R 5050:5050 ${DATA_PATH_HOST}/pgadmin
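If you are not sure which UID the pgAdmin process actually runs as, here is a quick check. This is only a sketch: it assumes the service is named pgadmin as in the compose file above and is currently running; the official dpage/pgadmin4 image documents that it runs as UID 5050.
# run from the directory containing docker-compose.yml
docker-compose exec pgadmin id
# expected to print something like: uid=5050(pgadmin) gid=5050(pgadmin)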
You also need to check whether the environment variable exists:
# run this as the same user you use to run docker-compose
echo ${DATA_PATH_HOST}
If the output is empty, you need to set ${DATA_PATH_HOST} or let Docker Compose read variables from a file. There are many ways to do it, for example an .env file as sketched below.
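One common approach is an .env file next to docker-compose.yml, which Compose reads automatically for variable substitution. A minimal sketch; the values below are placeholders, not from the original post:
# .env (same directory as docker-compose.yml)
DATA_PATH_HOST=./data
PGADMIN_PORT=5050
PGADMIN_DEFAULT_EMAIL=admin@example.com
PGADMIN_DEFAULT_PASSWORD=change-me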
If you're on Windows, add this line to your docker-compose.yml. It gives the container access to your local folder:
version: "3.9"
services:
postgres:
user: root <- this one
container_name: postgres_container
image: postgres:14.2
When running in a Kubernetes environment, I had to add these values:
spec:
  containers:
    - name: pgadmin
      image: dpage/pgadmin4:5.4
      securityContext:
        runAsUser: 0
        runAsGroup: 0
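An alternative that avoids running pgAdmin itself as root is to fix the ownership of the mounted volume with an init container before the main container starts. A rough sketch under assumed names: the volume and claim are called pgadmin-data here, which is not from the original post and should be adjusted to your setup.
spec:
  initContainers:
    - name: fix-permissions
      image: busybox
      # chown the data directory to the UID/GID the pgAdmin image runs as
      command: ["sh", "-c", "chown -R 5050:5050 /var/lib/pgadmin"]
      volumeMounts:
        - name: pgadmin-data
          mountPath: /var/lib/pgadmin
  containers:
    - name: pgadmin
      image: dpage/pgadmin4:5.4
      volumeMounts:
        - name: pgadmin-data
          mountPath: /var/lib/pgadmin
  volumes:
    - name: pgadmin-data
      persistentVolumeClaim:
        claimName: pgadmin-data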
I am trying to build a Docker image from my project and run it in a container.
The project is a Keystone 6 project connecting to a Postgres database. Everything worked well when I ran the project normally, and it connected successfully to the database.
Here is my Dockerfile:
FROM node:18.13.0-alpine3.16
ENV NODE_VERSION 18.13.0
ENV NODE_ENV=development
LABEL Name="di-wrapp" Version="1"
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . .
RUN npm install
COPY .env .
EXPOSE 9999
CMD ["npm", "run", "dev"]
I am building an image using the command docker build -t di-wrapp:1.0 .
After that, I run a docker-compose file which contains the following:
version: "3.8"
services:
postgres:
image: postgres:15-alpine
container_name: localhost
restart: always
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- POSTGRES_DB=di_wrapp
ports:
- "5432:5432"
volumes:
- postgres-data:/var/lib/postgresql/data
dashboard:
image: di-wrapp:1.0
container_name: di-wrapp-container
restart: always
environment:
- DB_CONNECTION=postgres
- DB_PORT=5432
- DB_HOST=localhost
- DB_USER=postgres
- DB_PASSWORD=postgres
- DB_NAME=di_wrapp
tty: true
depends_on:
- postgres
ports:
- 8273:9999
links:
- postgres
command: "npm run dev"
volumes:
- /usr/src/app
volumes:
postgres-data:
And this is the connection URI used to connect my project to postgres:
DATABASE_URL=postgresql://postgres:postgres@localhost:5432/di_wrapp
which I use to configure the db setting in my Keystone config file like this:
export const db: DatabaseConfig<BaseKeystoneTypeInfo> = {
  provider: "postgresql",
  url: String(DATABASE_URL!),
};
When I run the command docker-compose -f docker-compose.yaml up, this is what I receive:
localhost | 2023-02-03 13:43:35.034 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
localhost | 2023-02-03 13:43:35.034 UTC [1] LOG: listening on IPv6 address "::", port 5432
localhost | 2023-02-03 13:43:35.067 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
localhost | 2023-02-03 13:43:35.121 UTC [24] LOG: database system was shut down at 2023-02-03 13:43:08 UTC
localhost | 2023-02-03 13:43:35.155 UTC [1] LOG: database system is ready to accept connections
di-wrapp-container | > keystone-app#1.0.2 dev
di-wrapp-container | > keystone dev
di-wrapp-container |
di-wrapp-container | ✨ Starting Keystone
di-wrapp-container | ⭐️ Server listening on :8273 (http://localhost:8273/)
di-wrapp-container | ⭐️ GraphQL API available at /api/graphql
di-wrapp-container | ✨ Generating GraphQL and Prisma schemas
di-wrapp-container | Error: P1001: Can't reach database server at `localhost`:`5432`
di-wrapp-container |
di-wrapp-container | Please make sure your database server is running at `localhost`:`5432`.
di-wrapp-container | at Object.createDatabase (/usr/src/app/node_modules/@prisma/internals/dist/migrateEngineCommands.js:115:15)
di-wrapp-container | at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
di-wrapp-container | at async ensureDatabaseExists (/usr/src/app/node_modules/@keystone-6/core/dist/migrations-e3b5740b.cjs.dev.js:262:19)
di-wrapp-container | at async Object.pushPrismaSchemaToDatabase (/usr/src/app/node_modules/@keystone-6/core/dist/migrations-e3b5740b.cjs.dev.js:68:3)
di-wrapp-container | at async Promise.all (index 1)
di-wrapp-container | at async setupInitialKeystone (/usr/src/app/node_modules/@keystone-6/core/scripts/cli/dist/keystone-6-core-scripts-cli.cjs.dev.js:984:3)
di-wrapp-container | at async initKeystone (/usr/src/app/node_modules/@keystone-6/core/scripts/cli/dist/keystone-6-core-scripts-cli.cjs.dev.js:762:35)
di-wrapp-container exited with code 1
Even though the logs say the database server is up on port 5432, my app container can't connect to it.
Any help is appreciated.
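For context on what the error points at: inside the dashboard container, localhost refers to that container itself, not to the Postgres container, so a connection string like the one above would normally point at the Compose service name instead. A hedged sketch of what that could look like, reusing the service name postgres from the compose file above; everything else is unchanged:
# hypothetical value for the dashboard container; "postgres" is the Compose service name
DATABASE_URL=postgresql://postgres:postgres@postgres:5432/di_wrapp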
I am new to writing my own docker-compose.yml files because I previously had coworkers around to write them for me. Not the case now.
I am trying to connect an Express server with TypeORM to a Postgres database inside of Docker containers.
I see there are two separate containers running: one for the Express server and another for Postgres.
I expected publishing port 5432 to be enough for container A to connect to container B. It's not.
I read somewhere that 0.0.0.0 refers to the host machine's actual network, not the internal network of the containers. Hence I am trying to change 0.0.0.0 in the Postgres output to something else.
Here's what I have so far:
docker-compose.yml
version: "3.1"
services:
app:
container_name: express_be
build:
context: .
dockerfile: Dockerfile.dev
volumes:
- ./src:/app/src
depends_on:
- postgres_db
ports:
- "8000:8000"
networks:
- efinternal
postgres_db:
container_name: postgres_be
image: postgres:15.1
restart: always
environment:
- POSTGRES_USERNAME=postgres
- POSTGRES_PASSWORD=postgres
ports:
- "5432:5432"
volumes:
- postgresDB:/var/lib/postgresql/data
networks:
- efinternal
volumes:
postgresDB:
driver: local
networks:
efinternal:
driver: bridge
Here's my console log output
postgres_be | PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres_be |
postgres_be | 2022-12-13 04:09:51.628 UTC [1] LOG: starting PostgreSQL 15.1 (Debian 15.1-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
postgres_be | 2022-12-13 04:09:51.628 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 // look here
postgres_be | 2022-12-13 04:09:51.628 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres_be | 2022-12-13 04:09:51.636 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_be | 2022-12-13 04:09:51.644 UTC [28] LOG: database system was shut down at 2022-12-13 04:09:48 UTC
postgres_be | 2022-12-13 04:09:51.650 UTC [1] LOG: database system is ready to accept connections
express_be |
express_be | > efinternal@1.0.1 dev
express_be | > nodemon ./src/server.ts
express_be |
express_be | [nodemon] 2.0.20
express_be | [nodemon] starting `ts-node ./src/server.ts`
express_be | /task ... is running
express_be | /committee ... is running
express_be | App has started on port 8000
express_be | Database connection failed Failed to create connection with database
express_be | Error: getaddrinfo ENOTFOUND efinternal // look here
express_be | at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:107:26) {
express_be | errno: -3008,
express_be | code: 'ENOTFOUND',
express_be | syscall: 'getaddrinfo',
express_be | hostname: 'efinternal'
express_be | }
What I'm trying to do is change the line "listening on IPv4 address "0.0.0.0", port 5432" to say something like "listening on ... efinternal, port 5432", so that "Error: getaddrinfo ENOTFOUND efinternal" has something to resolve to.
This link seemed to have the answer but I couldn't decipher it.
Same story here, I can't decipher what I'm doing wrong.
A helpful person told me about "external_links" which sounds like the wrong tool for the job: "Link to containers started outside this docker-compose.yml or even outside of Compose"
I suspect this is The XY Problem where I'm trying to Y (create an "efinternal" network in my containers) so I can X (connect express to postgres) when in fact Y is the wrong solution to X.
In case it matters, here's my TypeORM Postgres connection settings:
export const AppDataSource = new DataSource({
  type: "postgres",
  host: "efinternal",
  port: 5432,
  username: "postgres",
  password: "postgres",
  database: "postgres",
  synchronize: true,
  logging: true,
  entities: [User, Committee, Task, OnboardingStep, RefreshToken, PasswordToken],
  subscribers: [],
  migrations: [],
});
The special IPv4 address 0.0.0.0 means "all interfaces", in whichever context it's invoked in. In a Docker container you almost always want to listen to 0.0.0.0, which will listen to all "interfaces" in the isolated Docker container network environment (not the host network environment). So you don't need to change the PostgreSQL configuration here.
If you read through Networking in Compose in the Docker documentation, you'll find that each container is addressable by its Compose service name. The network name itself is not usable as a host name.
I'd strongly suggest making your database location configurable. Even in this setup, it will have a different hostname in your non-Docker development environment (localhost) vs. running in a container (postgres_db).
export const AppDataSource = new DataSource({
  host: process.env.PGHOST || 'localhost',
  ...
});
Then in your Compose setup, you can specify that environment-variable setting.
version: '3.8'
services:
  app:
    build: .                 # and name the Dockerfile just "Dockerfile"
    depends_on: [postgres_db]
    ports: ['8000:8000']
    environment:             # add
      PGHOST: postgres_db
  postgres_db: {...}
Compose also provides you a network named default, and for simplicity you can delete all of the networks: blocks in the entire Compose file. You also do not need to manually specify container_name:, and I'd avoid overwriting the image's code with volumes:.
This setup also supports using Docker for your database but plain Node for ordinary development. (Also see near-daily SO questions about live reloading not working in Docker.) You can start only the database, and since the code defaults the database location to a developer-friendly localhost, you can just develop as normal.
docker-compose up -d postgres_db
yarn dev
I did the following:
git clone https://github.com/dockersamples/example-voting-app
cd example-voting-app
Inside it there are a number of files/folders:
MAINTAINERS
LICENSE
Jenkinsfile
ExampleVotingApp.sln
README.md
docker-stack-windows-1809.yml
docker-stack-simple.yml
docker-compose.yml
docker-compose-windows.yml
docker-compose-windows-1809.yml
docker-compose-simple.yml
docker-compose-k8s.yml
docker-compose-javaworker.yml
architecture.png
kube-deployment.yml
k8s-specifications
docker-stack.yml
docker-stack-windows.yml
result
vote
worker
I did cd vote and executed the following commands:
docker build . -t voting-app
docker run -p 5000:80 voting-app
After I run the docker run command, I see the following output, and then nothing happens. I am clueless, as there are no error messages.
[root@osboxes vote]# docker run -p 5000:80 voting-app
[2020-06-16 17:59:27 +0000] [1] [INFO] Starting gunicorn 19.10.0
[2020-06-16 17:59:27 +0000] [1] [INFO] Listening at: http://0.0.0.0:80 (1)
[2020-06-16 17:59:27 +0000] [1] [INFO] Using worker: sync
[2020-06-16 17:59:27 +0000] [9] [INFO] Booting worker with pid: 9
[2020-06-16 17:59:27 +0000] [10] [INFO] Booting worker with pid: 10
[2020-06-16 17:59:27 +0000] [11] [INFO] Booting worker with pid: 11
[2020-06-16 17:59:27 +0000] [12] [INFO] Booting worker with pid: 12
Please advise how to fix this issue and how to get the vote app running in a container.
My OS details are as follows:
NAME="CentOS Linux"
VERSION="7 (Core)"
Thanks
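As a side note, the gunicorn log above ("Listening at: http://0.0.0.0:80") suggests the app did start inside the container; with -p 5000:80 it should be reachable from the host while the container keeps running in the foreground (it was started without -d). A quick check, assuming you are on the Docker host itself:
# in a second terminal on the machine running Docker
curl -I http://localhost:5000/
# or open http://localhost:5000/ in a browser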
In my earlier answer, I got the app working by building and running each image individually.
After spending a few hours, I was finally able to create a docker-compose.yml file and run the entire application using the following command:
docker-compose up
Hope it helps others who are struggling to make this application work.
docker-compose.yml
version: "3"
services:
redis:
image: redis
db:
image: postgres:9.4
environment:
- POSTGRES_PASSWORD=postgres
- POSTGRES_HOST_AUTH_METHOD=trust
vote:
image: voting-app
ports:
- 5000:80
links:
- redis
worker:
image: worker-app
links:
- db
- redis
result:
image: result-app
ports:
- 5001:80
links:
- db
After checking out the code, I followed these steps and got the voting application to run:
change to vote directory
docker run -d --name=redis redis
docker build . -t voting-app
docker run -p 5000:80 --link redis:redis voting-app
docker run -d --name=db -e POSTGRES_PASSWORD=postgres postgres:9.4
change to worker directory
docker build . -t worker-app
docker run --link redis:redis --link db:db worker-app
change to result directory
docker build . -t result-app
docker run -p 5001:80 --link db:db result-app
Access the URLs
http://<IP>:5000/
http://<IP>:5001/
Replace the IP with the IP of your machine. Now I can access both URLs.
Today I've been trying to set up Docker for my GraphQL API that runs with Knex.js, PostgreSQL and Node.js. The problem I'm facing is that Knex.js times out when trying to access data in my database. I believe it's due to my incorrect way of trying to link them together.
I've really tried to do this on my own, but I can't figure it out. I'd like to walk through each file that plays a part in making this work so (hopefully) someone can spot my mistake(s) and help me.
knexfile.js
In my knexfile I have a connection key which used to look like this:
connection: 'postgres://localhost/devblog'
This worked just fine, but it won't work if I want to use Docker, so I modified it a bit and ended up with this:
connection: {
  host: 'db' || 'localhost',
  port: process.env.DB_PORT || 5432,
  user: process.env.DB_USER || 'postgres',
  password: process.env.DB_PASSWORD || undefined,
  database: process.env.DATABASE // DATABASE = devblog
}
I've noticed that something is wrong with host, since it always times out when it is set to anything other than localhost (in this case, db).
Dockerfile
My Dockerfile looks like this:
FROM node:9
WORKDIR /app
COPY package-lock.json /app
COPY package.json /app
RUN npm install
COPY dist /app
COPY wait-for-it.sh /app
CMD node index.js
EXPOSE 3010
docker-compose.yml
And this file looks like this:
version: "3"
services:
redis:
image: redis
networks:
- webnet
db:
image: postgres
networks:
- webnet
environment:
POSTGRES_PASSWORD: password
POSTGRES_USER: martinnord
POSTGRES_DB: devblog
web:
image: node:8-alpine
command: "node /app/dist/index.js"
volumes:
- ".:/app"
working_dir: "/app"
depends_on:
- "db"
ports:
- "3010:3010"
environment:
DB_PASSWORD: password
DB_USER: martinnord
DB_NAME: devblog
DB_HOST: db
REDIS_HOST: redis
networks:
webnet:
When I try to run this with docker-compose up I get the following output:
Starting backend_db_1 ... done
Starting backend_web_1 ... done
Attaching to backend_db_1, backend_redis_1, backend_web_1
redis_1 | 1:C 12 Feb 16:05:21.303 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1 | 1:C 12 Feb 16:05:21.303 # Redis version=4.0.8, bits=64, commit=00000000, modified=0, pid=1, just started
db_1 | 2018-02-12 16:05:21.337 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
redis_1 | 1:C 12 Feb 16:05:21.303 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1 | 1:M 12 Feb 16:05:21.311 * Running mode=standalone, port=6379.
db_1 | 2018-02-12 16:05:21.338 UTC [1] LOG: listening on IPv6 address "::", port 5432
redis_1 | 1:M 12 Feb 16:05:21.311 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_1 | 1:M 12 Feb 16:05:21.314 # Server initialized
redis_1 | 1:M 12 Feb 16:05:21.315 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
db_1 | 2018-02-12 16:05:21.348 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2018-02-12 16:05:21.367 UTC [20] LOG: database system was shut down at 2018-02-12 16:01:17 UTC
redis_1 | 1:M 12 Feb 16:05:21.317 * DB loaded from disk: 0.002 seconds
redis_1 | 1:M 12 Feb 16:05:21.317 * Ready to accept connections
db_1 | 2018-02-12 16:05:21.374 UTC [1] LOG: database system is ready to accept connections
web_1 | DB_HOST db
web_1 |
web_1 | App listening on 3010
web_1 | Env: undefined
But when I try to make a query with GraphQL I get:
"message": "Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?"
I really don't know why this is not working for me and it's driving me nuts. If anyone could help me with this I would be delighted. I have also added a link to my project below.
Thanks a lot for reading! Cheers.
LINK TO PROJECT: https://github.com/Martinnord/DevBlog/tree/master/backend
Updated docker-compose file:
version: "3"
services:
redis:
image: redis
networks:
- webnet
db:
image: postgres
networks:
- webnet
environment:
POSTGRES_PASSWORD: postgres
POSTGRES_USER: martinnord
POSTGRES_DB: devblog
ports:
- "15432:5432"
web:
image: devblog-server
ports:
- "3010:3010"
networks:
- webnet
environment:
DB_PASSWORD: password
DB_USER: martinnord
DB_NAME: devblog
DB_HOST: db
REDIS_HOST: redis
command: ["./wait-for-it.sh", "db:5432", "--", "node", "index.js"]
networks:
webnet:
Maybe like this:
version: "3"
services:
redis:
image: redis
db:
image: postgres
environment:
POSTGRES_PASSWORD: password
POSTGRES_USER: martinnord
POSTGRES_DB: devblog
ports:
- "15432:5432"
web:
image: node:8-alpine
command: "node /app/dist/index.js"
volumes:
- ".:/app"
working_dir: "/app"
depends_on:
- "db"
ports:
- "3010:3010"
links:
- "db"
- "redis"
environment:
DB_PASSWORD: password
DB_USER: martinnord
DB_NAME: devblog
DB_HOST: db
REDIS_HOST: redis
And you should be able to connect from the web app to Postgres with:
postgres://martinnord:password@db/devblog
or
connection: {
  host: process.env.DB_HOST,
  port: process.env.DB_PORT || 5432,
  user: process.env.DB_USER || 'postgres',
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME || 'postgres'
}
I also added a line that exposes Postgres running in Docker on port 15432, so you can first try connecting to it directly from your host machine with:
psql postgres://martinnord:password@localhost:15432/devblog
I want to set up bind mounts with Docker. I have a dotnet app where I want to mount a certs folder containing a self-signed SSL certificate. On localhost it works well, but on my VPS (where I have root permissions) it doesn't.
My operating system on the remote machine:
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 8.6 (jessie)
Release: 8.6
Codename: jessie
A simple Dockerfile in the app:
FROM microsoft/dotnet:latest
WORKDIR /app
COPY out .
EXPOSE 5000/tcp 5001/tcp
ENTRYPOINT ["dotnet", "myapp.dll"]
My docker compose file:
version: '2'
services:
  # mssql:
  #   image: iws/mssql
  #   container_name: mssql-server
  #   ports:
  #     - "1433:1433"
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./certs:/etc/nginx/certs
  myapp:
    image: iws/cestujnakole
    container_name: myapp
    environment:
      - VIRTUAL_HOST=cestujnakole.cz,www.cestujnakole.cz
    volumes:
      - ./certs:/app/certs
      - ../myapp/wwwroot:/app/wwwroot
I have the following structure.
On my PC:
/Users/me/projects
  - nginx-proxy
    - docker-compose.yml
    - certs
      - myapp.pfs
      - myapp.key
      - etc.
  - myapp
    - out
    - Dockerfile
And on my VPS:
/var/www/projects
  - nginx-proxy
    - docker-compose.yml
    - certs
      - myapp.pfs
      - myapp.key
      - etc.
  - myapp
    - etc.
In my app, I need to read the certs/myapp.pfs file. When the file doesn't exist, the application crashes.
I load the finished image onto the VPS with docker save/load.
On localhost everything works fine after running docker-compose up from the nginx-proxy folder, but on the remote server I get this:
docker-compose up --force-recreate cestujnakole
Recreating cestujnakole
Attaching to cestujnakole
cestujnakole | Running Environment: Production
cestujnakole |
cestujnakole | Unhandled Exception: Interop+Crypto+OpenSslCryptographicException: error:2006D080:BIO routines:BIO_new_file:no such file
cestujnakole | at Interop.Crypto.CheckValidOpenSslHandle(SafeHandle handle)
cestujnakole | at Internal.Cryptography.Pal.CertificatePal.FromFile(String fileName, String password, X509KeyStorageFlags keyStorageFlags)
cestujnakole | at System.Security.Cryptography.X509Certificates.X509Certificate..ctor(String fileName, String password, X509KeyStorageFlags keyStorageFlags)
cestujnakole | at Microsoft.AspNetCore.Hosting.KestrelServerOptionsHttpsExtensions.UseHttps(KestrelServerOptions options, String fileName, String password)
cestujnakole | at Microsoft.Extensions.Options.OptionsCache`1.CreateOptions()
cestujnakole | at System.Threading.LazyInitializer.EnsureInitializedCore[T](T& target, Boolean& initialized, Object& syncLock, Func`1 valueFactory)
cestujnakole | at Microsoft.AspNetCore.Server.Kestrel.KestrelServer..ctor(IOptions`1 options, IApplicationLifetime applicationLifetime, ILoggerFactory loggerFactory)
cestujnakole | --- End of stack trace from previous location where exception was thrown ---
cestujnakole | at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
cestujnakole | at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteRuntimeResolver.VisitConstructor(ConstructorCallSite constructorCallSite, ServiceProvider provider)
cestujnakole | at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteRuntimeResolver.VisitScoped(ScopedCallSite scopedCallSite, ServiceProvider provider)
cestujnakole | at Microsoft.Extensions.DependencyInjection.ServiceProvider.<>c__DisplayClass16_0.<RealizeService>b__0(ServiceProvider provider)
cestujnakole | at Microsoft.Extensions.DependencyInjection.ServiceProviderServiceExtensions.GetRequiredService(IServiceProvider provider, Type serviceType)
cestujnakole | at Microsoft.Extensions.DependencyInjection.ServiceProviderServiceExtensions.GetRequiredService[T](IServiceProvider provider)
cestujnakole | at Microsoft.AspNetCore.Hosting.Internal.WebHost.EnsureServer()
cestujnakole | at Microsoft.AspNetCore.Hosting.Internal.WebHost.BuildApplication()
cestujnakole | at Microsoft.AspNetCore.Hosting.WebHostBuilder.Build()
cestujnakole | at CykloWeb.Program.Main(String[] args) in /Users/petrtomasek/projects/CykloWeb/Program.cs:line 22
cestujnakole exited with code 139
"No such file" in the app means the cert file doesn't exist on the remote machine.
Interestingly, when I delete the folders to be mounted on the remote machine, they are recreated (empty) after docker-compose up.
Here is the ls -l output for my remote structure:
ls -l
total 16
drwxrwxrwx 4 root root 4096 Jan 25 17:58 myapp
drwxr-xr-x 3 root root 4096 Jan 20 13:02 mysql
drwxr-xr-x 4 root root 4096 Jan 25 16:50 nginx-proxy
And nginx-proxy (my docker startup folder):
drwxrwxrwx 2 root root 4096 Jan 25 18:03 certs
-rw-r--r-- 1 root root 883 Jan 25 18:03 docker-compose.yml
I tried chmod -R 775 on the nginx-proxy folder, and then 777, but nothing helped.
Where could the problem be?
Thank you for your time.
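One way to narrow this down is to compare what the container sees in the mounted path with what is actually on the host; when a bind-mount source directory does not exist, Docker creates it as an empty directory, which matches the behaviour described above. A quick diagnostic sketch, assuming the service/container name myapp from the compose file and that you run it from the nginx-proxy folder on the VPS:
# what the application sees inside the container
docker-compose exec myapp ls -l /app/certs
# what actually exists on the host side
ls -l ./certs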