Running commands on a docker container from the host - node.js

Code first, it will be easier to explain what I'm after.
docker-compose.yml
version: '3.4'
services:
  db:
    user: '${UID}:${GID}'
    image: postgres
    container_name: postgres
    ports:
      - '5432:5432'
    restart: always
    environment:
      POSTGRES_HOST: db
      POSTGRES_USER: root
      POSTGRES_PASSWORD: secret
      POSTGRES_DATABASE: foo
      PGDATA: /var/lib/postgresql/data/db
    volumes:
      - ./db/data:/var/lib/postgresql/data/db
      - ./db-init.sh:/docker-entrypoint-initdb.d/db-init.sh:ro
  cache:
    image: redis:alpine
    container_name: redis
    sysctls:
      net.core.somaxconn: '511'
    ports:
      - '6379:6379'
    command: ['--requirepass "secret"']
  api:
    image: node:alpine
    container_name: api
    working_dir: /var/www/app
    command: sh -c "npm start"
    ports:
      - '5000:5000'
    volumes:
      - node_modules:/var/www/app/node_modules
      - .:/var/www/app
    env_file: .env
    depends_on:
      - db
      - cache
volumes:
  node_modules:
Postgres connection settings for the node.js app:
export const {
  POSTGRES_USER = 'root',
  POSTGRES_PASSWORD = 'secret',
  POSTGRES_HOST = 'db',
  POSTGRES_PORT = 5432,
  POSTGRES_DATABASE = 'foo',
} = process.env
issue:
When using the service or container name (db or postgres) for the POSTGRES_HOST setting of the node app:
I can successfully connect to, and query, the database.
I'm not able to run commands from the host which affect the container. For example, seeding the db won't work:
npx knex --esm seed:run
This makes sense: the DNS resolution for db / postgres is taken care of by docker, and those names only have meaning on the network connecting the containers. Commands run from the host and targeting that container will fail, as the host doesn't know how to resolve those names.
On the other hand, when using localhost for the POSTGRES_HOST setting of the node app:
Queries to postgres from api will fail.
Commands run from the host, like npx knex --esm seed:run, will succeed.
Again, this makes perfect sense. Addressing the container as localhost from the host works thanks to the port forwarding in docker-compose.yml. But in the context of a container, localhost refers to that very container: for api, localhost means itself, and it ends up looking for a database at localhost:5432, i.e. api:5432.
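To make the two cases concrete, a quick sketch (assuming a psql client on the host, and using the credentials from the compose file above):

# From the host: localhost works thanks to the published port
psql postgresql://root:secret@localhost:5432/foo -c 'SELECT 1'

# From inside a container: only the service name resolves, via Docker's DNS
docker exec -it api ping -c 1 db         # works
docker exec -it api ping -c 1 localhost  # the api container itself, not the db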
I want to have a working inter-container network and also run commands from the host addressing said containers. I'm aware of two approaches to achieve that:
Use container / service name as POSTGRES_HOST, and run commands against the containers with:
docker exec -it <container_name> <command>
Assign static IPs to the containers and use those instead of service / container names (see the sketch below).
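For reference, a minimal sketch of the second approach with a hypothetical subnet (the addresses are illustrative, not taken from my setup):

networks:
  backend:
    ipam:
      config:
        - subnet: 172.28.0.0/16

services:
  db:
    networks:
      backend:
        ipv4_address: 172.28.0.10

POSTGRES_HOST would then be 172.28.0.10 everywhere; on a Linux host the container IP is also reachable directly from the host.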
Do I have any other options here?

Since you are exposing the database port on the host machine, you can do the following.
Use the service or container name (db or postgres) for POSTGRES_HOST; this way it will work for the Docker containers.
When you run the seed command from the host, override POSTGRES_HOST. This can be done in this way:
$ export POSTGRES_HOST=127.0.0.1
$ npx knex --esm seed:run
or in one step
$ POSTGRES_HOST=127.0.0.1 npx knex --esm seed:run
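The override only works if the seed runner reads the same variable as the app. A minimal knexfile.js sketch along those lines, assuming the exported settings from the question live in a hypothetical ./src/config.js:

// Sketch only: the import path and seeds directory are assumptions.
import {
  POSTGRES_USER,
  POSTGRES_PASSWORD,
  POSTGRES_HOST,
  POSTGRES_PORT,
  POSTGRES_DATABASE,
} from './src/config.js'

export default {
  client: 'pg',
  connection: {
    host: POSTGRES_HOST,            // 'db' in containers, 127.0.0.1 when overridden
    port: Number(POSTGRES_PORT),
    user: POSTGRES_USER,
    password: POSTGRES_PASSWORD,
    database: POSTGRES_DATABASE,
  },
  seeds: { directory: './seeds' },
}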

Related

I can't connect a nodejs app to a redis server using docker

Good morning guys.
I'm having a problem connecting a nodejs application, in a container, to another container that contains a redis server. On my local machine I can connect the application to this redis container without any problem. However, when trying to run this application in a container, a timeout error is returned.
I'm new to docker and I don't understand why I can connect to this docker container from the application running locally on my machine, but that same connection doesn't work when I run the application in a container.
I tried using docker-compose, but from what I understand it will bring up another container for the redis server, instead of using the redis container that is already in docker.
To connect to redis I'm using the following code:
createClient({
  socket: {
    host: process.env.REDIS_HOST,
    port: Number(process.env.REDIS_PORT)
  }
});
Where REDIS_HOST is the address of my container running on the server and REDIS_PORT is the port where this container is running on my server.
To run redis on docker I used the following guide: https://redis.io/docs/stack/get-started/install/docker/
I apologize if my problem was not very clear, I'm still studying docker.
You mentioned you are using Docker Compose. Here's an example that starts Redis in a container, makes your Node application wait for that container, and then uses an environment variable in your Node application to specify the host name to connect to Redis on. In this example it connects to the container running Redis that I've called "redis":
version: "3.9"
services:
  redis:
    container_name: redis_kaboom
    image: "redislabs/redismod"
    ports:
      - 6379:6379
    volumes:
      - ./redisdata:/data
    entrypoint:
      redis-server
      --loadmodule /usr/lib/redis/modules/rejson.so
      --appendonly yes
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
  node:
    container_name: node_kaboom
    build: .
    volumes:
      - .:/app
      - /app/node_modules
    command: sh -c "npm run load && npm run dev"
    depends_on:
      - redis
    ports:
      - 8080:8080
    environment:
      - REDIS_HOST=redis
So in your Node code you'd then use the value of process.env.REDIS_HOST to connect to the right Redis host. Here I'm not using a password or a non-standard port, but you could also supply those as environment variables that match the configuration of the Redis container in Docker Compose, if you needed to.
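For instance, a minimal sketch with a password added (REDIS_PASSWORD and REDIS_PORT here are hypothetical variables you would add to the compose environment, not part of the file above):

import { createClient } from 'redis';

const client = createClient({
  socket: {
    host: process.env.REDIS_HOST || 'redis',
    port: Number(process.env.REDIS_PORT || 6379)
  },
  password: process.env.REDIS_PASSWORD // omit if the server has no password
});

client.on('error', (err) => console.error('Redis error', err));
await client.connect(); // top-level await, assumes an ES module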
Disclosure: I work for Redis.

Docker - Redis connect ECONNREFUSED 127.0.0.1:6379

I know this is a common error, but I literally spent the entire day trying to get past this error, trying everything I could find online. But I can't find anything that works for me.
I am very new to Docker and using it for my NodeJS + Express + Postgresql + Redis application.
Here is what I have for my docker-compose file:
version: "3.8"
services:
  db:
    image: postgres:14.1-alpine
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=admin
    ports:
      - "5432:5432"
    volumes:
      - db:/var/lib/postgresql/data
      - ./db/init.sql:/docker-entrypoint-initdb.d/create_tables.sql
  cache:
    image: redis:6.2-alpine
    restart: always
    ports:
      - "6379:6379"
    command: redis-server --save 20 1 --loglevel warning
    volumes:
      - cache:/data
  api:
    container_name: api
    build:
      context: .
      # target: production
    # image: api
    depends_on:
      - db
      - cache
    ports:
      - 3000:3000
    environment:
      NODE_ENV: production
      DB_HOST: db
      DB_PORT: 5432
      DB_USER: postgres
      DB_PASSWORD: admin
      DB_NAME: postgres
      REDIS_HOST: cache
      REDIS_PORT: 6379
    links:
      - db
      - cache
    volumes:
      - ./:/src
volumes:
  db:
    driver: local
  cache:
    driver: local
Here is my app.js upper part:
const express = require('express')
const app = express()
const cors = require('cors')
const redis = require('redis')

const client = redis.createClient({
  host: 'cache',
  port: 6379,
  legacyMode: true // Also tried without this line, same behavior
})
client.connect()
client.on('connect', () => {
  log('Redis connected')
})

app.use(cors())
app.use(express.json())
And my Dockerfile:
FROM node:16.15-alpine3.14
WORKDIR ./
COPY package.json ./
RUN npm install
COPY ./ ./
EXPOSE 3000 6379
CMD [ "npm", "run", "serve" ]
npm run serve is nodemon ./app.js.
I also already tried to prune the system and network.
What am I missing? Help!
There are two things to keep in mind here.
First of all, Docker networking:
Containers are exposed to your localhost system, so as a "server" you can access each of them directly through the browser or the command line. But take for granted that you can only access the containers because they are exposed on a default network that is accessible by the root of the system - the docker user - which you can inspect, by the way.
The deployed containers are not exposed to each other by default, so you need to define a virtual network and attach them to it so they can talk to each other through their ports or host names -- which will be the container_name.
So you need to do two things:
Add a container name to the redis service in the compose file, just like you did on the API.
Create a network and bind all the services to it; one way of doing that would be:
version: "3.8"
networks:
  my-network:
    name: my-network
services:
  ....
  cache:
    container_name: cache
    image: redis:6.2-alpine
    restart: always
    ports:
      - "6379:6379"
    command: redis-server --save 20 1 --loglevel warning
    volumes:
      - cache:/data
    networks: # add it in all containers that communicate together
      - my-network
Then, and only then, can you use the redis container name as the host, since the docker network creates a host name for the service from the container name.
When you later deploy the whole compose file, the containers will be created and all of them will join the network by default on startup, which allows your API app to communicate with the Redis container via its container name as the host name.
Refer to these resources for more details:
Networking on Docker Compose
Docker Network Overview
A side, unrelated note:
I personally used redis from npm for some testing projects, but I found that ioredis was much better with TypeScript projects and more predictable in its behavior.
To avoid problems with Redis, make sure to create a password and use it to connect; sometimes redis randomly considers the client a read-only client and fails to find a read replica, and adding the password solved that for me.
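For example, a minimal ioredis sketch (the cache host name matches the compose file above; REDIS_PASSWORD is hypothetical, since the question's compose file doesn't set one):

const Redis = require('ioredis');

const redis = new Redis({
  host: process.env.REDIS_HOST || 'cache',      // compose service / container name
  port: Number(process.env.REDIS_PORT || 6379),
  password: process.env.REDIS_PASSWORD          // set via --requirepass on the server
});

redis.on('connect', () => console.log('Redis connected'));
redis.on('error', (err) => console.error('Redis error', err));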

How to connect to Node API docker container from Angular Nginx container

I am currently working on an angular app using a REST API (Express, Nodejs) and Postgresql. Everything worked well when hosted on my local machine. After testing, I moved the images to an Ubuntu server so the app can be hosted on an external port. I am able to access the angular frontend at https://server-external-ip:80, but when trying to log in, Nginx is not connecting to the Node API. Here is my docker-compose file:
version: '3.0'
services:
  db:
    image: postgres:9.6-alpine
    environment:
      POSTGRES_DB: myDb
      POSTGRES_PASSWORD: myPwd
    ports:
      - 5432:5432
    restart: always
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    networks:
      - my-network
  backend: # name of the second service
    image: myId/mynodeapi
    ports:
      - 3000:3000
    environment:
      POSTGRES_DB: myDb
      POSTGRES_PASSWORD: myPwd
      POSTGRES_PORT: 5432
      POSTGRES_HOST: db
    depends_on:
      - db
    networks:
      - my-network
    command: bash -c "sleep 20 && node server.js"
  myapp:
    image: myId/myangularapp
    ports:
      - "80:80"
    depends_on:
      - backend
    networks:
      - my-network
networks:
  my-network:
I am not sure what the apiUrl should be. I have tried the following and nothing worked:
apiUrl: "http://backend:3000/api"
apiUrl: "http://server-external-ip:3000/api"
apiUrl: "http://server-internal-ip:3000/api"
apiUrl: "http://localhost:3000/api"
I think you should use the docker-compose service name as the DNS name. It seems you have several docker hosts/ports available; these are the ones in your docker-compose structure:
db:5432
http://backend:3000
http://myapp
Make sure to use db as POSTGRES_HOST in the environment section of the backend service.
Take a look at my repo; I think it is the best way to learn how a similar project works and how to build several apps with nginx. You can also check my docker-compose.yml: it uses several services that are proxied by nginx and work together.
In that repo you'll find an nginx/default.conf file containing several nginx upstream configurations; please take a look at how I used docker-compose service references there as hosts.
Inside the client/ directory, I also have another nginx acting as the web server of a react.js project.
The server/ directory holds a Node.js API; it connects to Redis and a PostgreSQL database, also built from the docker-compose.yml.
If you need to send or redirect traffic to /api, you can use some nginx config like the sketch below.
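For illustration, a minimal sketch (not the repo's actual config), assuming the service names from the compose file above:

# Proxy /api requests from the Angular container's nginx to the Node API.
upstream api {
    server backend:3000;  # compose service name, resolved by Docker's DNS
}

server {
    listen 80;

    location /api {
        proxy_pass http://api;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    location / {
        root /usr/share/nginx/html;  # Angular build output
        try_files $uri $uri/ /index.html;
    }
}

With a location block like this, the Angular app can use a relative apiUrl such as /api and let nginx forward the call to the backend service.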
I think this use case can be useful for you and other users!

How to connect graphql and prisma in docker containers?

I am trying to build a docker-compose file that runs a node.js graphql api which uses prisma and mongodb.
But I get the error request to http://localhost:4466/ failed, reason: connect ECONNREFUSED 127.0.0.1:4466 whenever I try to send requests from the graphql playground, and the same error when I run prisma deploy or just try to ping http://localhost:4466 from inside the graphql container.
I have tried using the default network and creating a new network, but I got the same error.
I have tried using links (which are deprecated) in version 3, but I also got the same error.
P.S. I can open the prisma playground normally in the browser at http://localhost:4466.
This is my docker-compose file:
version: '3'
services:
  web:
    build: .
    networks:
      net:
    ports:
      - "80:4000"
    command: wait-for-it/wait-for-it.sh http://localhost:4466 -t 30 -- ./run.sh
  prisma:
    image: prismagraphql/prisma:1.34
    restart: always
    networks:
      net:
    ports:
      - "4466:4466"
    environment:
      PRISMA_CONFIG: |
        port: 4466
        # uncomment the next line and provide the env var PRISMA_MANAGEMENT_API_SECRET=my-secret to activate cluster security
        # managementApiSecret: my-secret
        databases:
          default:
            connector: mongo
            uri: mongodb://prisma:prisma@mongo
    command: /bin/sh.sh
  mongo:
    image: mongo:3.6
    restart: always
    networks:
      net:
    environment:
      MONGO_INITDB_ROOT_USERNAME: prisma
      MONGO_INITDB_ROOT_PASSWORD: prisma
    ports:
      - "27017:27017"
    volumes:
      - mongo:/var/lib/mongo
volumes:
  mongo:
networks:
  net:
And this is the dockerfile of the web service:
FROM node:10
WORKDIR /app
COPY . /app/
RUN yarn install --pure-lockfile
RUN yarn global add prisma
And this is the run.sh file:
echo "prisma deploy command "
prisma deploy
echo "get-schema command"
yarn run get-schema
echo "starting command"
yarn run start
Is there anything that I misunderstand, or what do I need to fix to make it work?
You should use http://prisma:4466 as the connection URL in your web container. As your containers are connected to the same network, the container name acts as a DNS name and is resolved to the IP of the concrete container.
The localhost in your Node application points to the container running Node itself, not your host machine. Replace http://localhost:4466 with http://prisma:4466 or http://<host-machine-local-ip>:4466
To get the host IP on a Unix machine, run:
hostname -i
Or
ifconfig | awk '/broadcast/ {print $2}'
Change the content of your prisma.yml from
endpoint: http://localhost:4466
datamodel: datamodel.prisma
to
endpoint: http://192.168.99.100:4466
datamodel: datamodel.prisma
This worked for me.
Run your docker containers by typing docker-compose up -d (the -d flag runs the containers in detached mode).
Instead of using http://localhost:4466 as the endpoint, use http://127.0.1.1:4466, or check your local address with the command hostname -i.
In prisma-binding your constructor should then have the endpoint http://127.0.1.1:4466.
const prisma = new Prisma({
  typeDefs: './src/generated/prisma.graphql',
  endpoint: 'http://127.0.1.1:4466'
});

Can't connect to postgres database inside docker container

My problem is I have a script that should scrape data and put it inside a postgres database, however it has a problem reaching the postgres container.
When I run docker-compose, here is the result:
Name                 Command                         State   Ports
------------------------------------------------------------------------------------------
orcsearch_dev-db_1   docker-entrypoint.sh postgres   Up      0.0.0.0:5432->5432/tcp
orcsearch_flask_1    gunicorn wsgi:application ...   Up      0.0.0.0:80->80/tcp, 8000/tcp
We can clearly see that postgres is on port 5432.
This is my python script's database setting (of course I removed the password for obvious reasons):
class App():
    settings = {
        'db_host': 'db',
        'db_user': 'postgres',
        'db_pass': '',
        'db_db': 'orc',
    }
    db = None
    proxies = None
and this is my docker-compose.yml
version: '2'
services:
  flask:
    build:
      context: ./backend
      dockerfile: Dockerfile.dev
    volumes:
      - ./backend:/app
      - pip-cache:/root/.cache
    ports:
      - "80:80"
    links:
      - "dev-db:db"
    environment:
      - DATABASE_URL=postgresql://postgres@db:5432/postgres
    stdin_open: true
    command: gunicorn wsgi:application -w 1 --bind 0.0.0.0:80 --log-level debug --reload
    networks:
      app:
        aliases:
          - flask
  dev-db:
    image: postgres:9.5
    ports:
      - "5432:5432"
    networks:
      app:
        aliases:
          - dev-db
volumes:
  pip-cache:
    driver: local
networks:
  app:
When I exec into the flask container with bash and run the script command, I get this error:
psycopg2.OperationalError: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Obviously postgres is running on this port, and I can't figure out what I'm doing wrong. Any help would be nice!
Probably you are using a DSN instead of a URI, and PostgreSQL thinks that "db" is not a host, because it's hard to tell whether "db" is a host or a path to a socket. To fix it, use a URI instead of a DSN if you are on PostgreSQL >= 9.2.
Example of URI:
postgresql://[user[:password]@][netloc][:port][/dbname][?param1=value1&...]
https://www.postgresql.org/docs/9.2/static/libpq-connect.html#LIBPQ-CONNSTRING
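For illustration, a minimal psycopg2 sketch of the two formats (the database name orc comes from the question's settings; both connect() call forms are standard psycopg2):

import psycopg2

# Keyword/value DSN form:
conn = psycopg2.connect("host=db port=5432 user=postgres dbname=orc")

# URI form, supported since PostgreSQL 9.2:
conn = psycopg2.connect("postgresql://postgres@db:5432/orc")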
In your App class it should be 'db_host': 'dev-db'. It seems that hostname is exposed, not db.
I think the problem is related to the fact that you're using the network and the link together. Try removing the link and changing the postgres address to dev-db, or change the alias to:
networks:
  app:
    aliases:
      - dev-db
      - db
