The API does not find the DB when using docker compose.
I already configured the DATABASE_URL and it doesn't work.
.env file:
JWT_SECRET="palavrasecreta"
NODE_ENV=production
DATABASE_URL=postgres://postgres:1234@db:5432/postgres
docker-compose file:
version: "3"
services:
app-front-end:
build: charllenger-front/.
container_name: front-end-ui
expose:
- 3000
ports:
- 3000:3000
links:
- api
api:
container_name: charllenger-back-end-Api
build: Full-Stack-charlenger-API/.
volumes:
- ./src:/app/src
expose:
- 3001
ports:
- 3001:3001
depends_on:
- db
command: bash -c 'yarn migration:run && yarn dev'
links:
- db
db:
container_name: charllenger-Api-postgres
image: "postgres"
env_file:
- Full-Stack-charlenger-API/.env
expose:
- 5432
ports:
- 5432:5432
Data source:
import { DataSource } from "typeorm";
require("dotenv").config();

export const AppDataSource = new DataSource({
  type: "postgres",
  host: "database",
  url: process.env.DATABASE_URL,
  ssl:
    process.env.NODE_ENV === "production"
      ? { rejectUnauthorized: false }
      : false,
  synchronize: false,
  logging: true,
  entities:
    process.env.NODE_ENV === "production"
      ? ["src/entities/*.js"]
      : ["src/entities/*.ts"],
  migrations:
    process.env.NODE_ENV === "production"
      ? ["src/migrations/*.js"]
      : ["src/migrations/*.ts"],
});

AppDataSource.initialize()
  .then(() => {
    console.log("Data Source Initialized");
  })
  .catch((err) => {
    console.error("Error during Data Source initialization", err);
  });
Migration:
import { MigrationInterface, QueryRunner } from "typeorm";

export class initialMigration1673714934213 implements MigrationInterface {
  name = 'initialMigration1673714934213'

  public async up(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(`CREATE TABLE "transaction" ("transaction_id" uuid NOT NULL, "value" numeric NOT NULL, "createdAt" TIMESTAMP NOT NULL DEFAULT now(), "debitedAccountAccountId" uuid, "creditedAccountAccountId" uuid, CONSTRAINT "PK_6e02e5a0a6a7400e1c944d1e946" PRIMARY KEY ("transaction_id"))`);
    await queryRunner.query(`CREATE TABLE "user" ("user_id" uuid NOT NULL, "username" character varying NOT NULL, "password" character varying NOT NULL, "account" uuid, CONSTRAINT "REL_4ab2df0a57a74fdf904e0e2708" UNIQUE ("account"), CONSTRAINT "PK_758b8ce7c18b9d347461b30228d" PRIMARY KEY ("user_id"))`);
    await queryRunner.query(`CREATE TABLE "account" ("account_id" uuid NOT NULL, "balance" double precision NOT NULL, CONSTRAINT "PK_ea08b54a9d7322975ffc57fc612" PRIMARY KEY ("account_id"))`);
    await queryRunner.query(`ALTER TABLE "transaction" ADD CONSTRAINT "FK_bbbfcdb3330cc4e5846f2d23200" FOREIGN KEY ("debitedAccountAccountId") REFERENCES "account"("account_id") ON DELETE NO ACTION ON UPDATE NO ACTION`);
    await queryRunner.query(`ALTER TABLE "transaction" ADD CONSTRAINT "FK_4ece0117a7c2689832bab37209b" FOREIGN KEY ("creditedAccountAccountId") REFERENCES "account"("account_id") ON DELETE NO ACTION ON UPDATE NO ACTION`);
    await queryRunner.query(`ALTER TABLE "user" ADD CONSTRAINT "FK_4ab2df0a57a74fdf904e0e27086" FOREIGN KEY ("account") REFERENCES "account"("account_id") ON DELETE NO ACTION ON UPDATE NO ACTION`);
  }

  public async down(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(`ALTER TABLE "user" DROP CONSTRAINT "FK_4ab2df0a57a74fdf904e0e27086"`);
    await queryRunner.query(`ALTER TABLE "transaction" DROP CONSTRAINT "FK_4ece0117a7c2689832bab37209b"`);
    await queryRunner.query(`ALTER TABLE "transaction" DROP CONSTRAINT "FK_bbbfcdb3330cc4e5846f2d23200"`);
    await queryRunner.query(`DROP TABLE "account"`);
    await queryRunner.query(`DROP TABLE "user"`);
    await queryRunner.query(`DROP TABLE "transaction"`);
  }
}
Error: (error screenshot not included)
It's working on another computer. I've already changed localhost to db and tried other ports, and it still doesn't work.
In the docker-compose file you define your database container as db:

db:
  container_name: charllenger-Api-postgres
  image: "postgres"

db: is the service name.
If you want to connect from your API to the database container inside your Docker network, you need to use this service name as the DB host.
The section:

ports:
  - 5432:5432

means: map port 5432 inside the container to my host (your PC, where you installed Docker).
So you can access the container on its port 5432 from your PC with localhost:5432, and this works fine if you execute your code locally, even if Postgres is running inside a container.
If your code runs in a container, localhost refers to that container.
Let's say each container has its own localhost, so you can only reach the other containers by their service names. There are a few other possibilities, but let's stick with the service name.
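To see this in action, here is a quick sketch (not from the original answer; it assumes you run it inside the api container from the compose file above, where the database service is named db):

// Minimal sketch: inside a container on the compose network, the service
// name "db" resolves via Docker's embedded DNS; "localhost" never will.
import { lookup } from "node:dns/promises";

lookup("db")
  .then(({ address }) => console.log(`service "db" resolves to ${address}`))
  .catch(() => console.log(`"db" not resolvable -- are we on the compose network?`));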
Inside your TypeORM configuration, make sure the connection uses the service name as host when the code runs inside the container, or localhost if you run it directly on your PC without Docker.
When you deploy to production, again you must check where the DB host is. That depends on your deployment: if you use an external DB from a provider, it should be a fully qualified domain name; if you deploy the DB yourself, it depends on whether you use Docker, Kubernetes, or something else.
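Putting that together for the question above, one way is to drive everything through DATABASE_URL and drop the hard-coded host, switching the URL's host between db (inside Docker) and localhost (on your PC). A minimal sketch, assuming the service name and credentials from the compose file above:

import { DataSource } from "typeorm";
import "dotenv/config";

// DATABASE_URL=postgres://postgres:1234@db:5432/postgres        <- inside the compose network
// DATABASE_URL=postgres://postgres:1234@localhost:5432/postgres <- directly on your PC
export const AppDataSource = new DataSource({
  type: "postgres",
  url: process.env.DATABASE_URL, // no separate host: option, so nothing conflicts with the URL
  synchronize: false,
  logging: true,
});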
Related
I have a web app, and I've written a migration to create all my tables and relations. Recently, no matter what I try, TypeORM does not appear to find this migration and hence does not run it.
My file structure (just the migrations)
src > Databas > Migrations > 1663525805095-add_users.ts, 1663529676790-make_institute_nullable.ts
ormconfig.ts
import { DataSource } from 'typeorm';
import { ConfigService } from '@nestjs/config';
import { config } from 'dotenv';

config();

const configService = new ConfigService();

const source = new DataSource({
  type: 'postgres',
  host: configService.get('POSTGRES_HOST'),
  port: configService.get('POSTGRES_PORT'),
  username: configService.get('POSTGRES_USER'),
  password: configService.get('POSTGRES_PASSWORD'),
  database: configService.get('POSTGRES_DB'),
  synchronize: false,
  logging: false,
  migrations: ['src/database/migrations/*.ts'],
  migrationsTableName: 'migrations',
  entities: ['src/**/*.entity.ts'],
});

export default source;
In order to run this, I type yarn start:dev to get my server started.
Then I run yarn migrations:run, which gives me:
query: SELECT * FROM current_schema()
query: SELECT version();
query: SELECT * FROM "information_schema"."tables" WHERE "table_schema" = 'public' AND "table_name" = 'migrations'
query: CREATE TABLE "migrations" ("id" SERIAL NOT NULL, "timestamp" bigint NOT NULL, "name" character varying NOT NULL, CONSTRAINT "PK_8c82d7f526340ab734260ea46be" PRIMARY KEY ("id"))
query: SELECT * FROM "migrations" "migrations" ORDER BY "id" DESC
No migrations are pending
When I look at my db, I see a migrations table with no entries.
I have tried to delete my migrator file and create it again with a more recent timestamp and that does not work either.
scripts from my package.json
"migrations:run": "yarn typeorm migration:run"
"typeorm": "typeorm-ts-node-commonjs -d ./ormconfig.ts"
"start:dev": "nest start --watch"
Other info
I'm using Docker for the Postgres DB and pgAdmin; it connects with no problem.
Any help would be greatly appreciated.
I am trying to run a Node.js server and Postgres inside Docker, using Sequelize for the DB connection. However, it seems like my Node.js server is not able to communicate with the Postgres DB inside Docker.
Before someone marks it as a duplicate, please note that I have already checked other answers and none of them worked for me.
I have already tried implementing a retry strategy for the Sequelize connection.
Here's my docker-compose file:
version: "3.8"
services:
rapi:
container_name: rapi
image: rapi/latest
build: .
ports:
- "3001:3001"
environment:
- EXTERNAL_PORT=3001
- PGUSER=rapiuser
- PGPASSWORD=12345
- PGDATABASE=postgres
- PGHOST=rapi_db # NAME OF THE SERVICE
depends_on:
- rapi_db
rapi_db:
container_name: rapi_db
image: "postgres:12"
ports:
- "5432:5432"
environment:
- POSTGRES_USER=rapiuser
- POSTGRES_PASSWORD=12345
- POSTGRES_DB=postgres
volumes:
- rapi_data:/var/lib/postgresql/data
volumes:
rapi_data: {}
Here's my Dockerfile:
FROM node:16
EXPOSE 3000
# Use latest version of npm
RUN npm i npm@latest -g
COPY package.json package-lock.json* ./
RUN npm install --no-optional && npm cache clean --force
# copy in our source code last, as it changes the most
WORKDIR /
COPY . .
CMD [ "node", "index.js" ]
My DB Credentials:
credentials = {
  PGUSER: process.env.PGUSER,
  PGDATABASE: process.env.PGNAME,
  PGPASSWORD: process.env.PGPASSWORD,
  PGHOST: process.env.PGHOST,
  PGPORT: process.env.PGPORT,
  PGNAME: 'postgres'
}

console.log("env Users: " + process.env.PGUSER + " env Database: " + process.env.PGDATABASE + " env PGHOST: " + process.env.PGHOST + " env PORT: " + process.env.EXTERNAL_PORT)

//else credentials = {}

module.exports = credentials;
Sequelize DB code:
const db = new Sequelize(credentials.PGDATABASE, credentials.PGUSER, credentials.PGPASSWORD, {
  host: credentials.PGHOST,
  dialect: credentials.PGNAME,
  port: credentials.PGPORT,
  protocol: credentials.PGNAME,
  dialectOptions: {},
  logging: false,
  define: {
    timestamps: false
  },
  pool: {
    max: 10,
    min: 0,
    acquire: 100000,
  },
  retry: {
    match: [/Deadlock/i, Sequelize.ConnectionError], // Retry on connection errors
    max: 3, // Maximum retry 3 times
    backoffBase: 3000, // Initial backoff duration in ms. Default: 100
    backoffExponent: 1.5, // Exponent to increase backoff each try. Default: 1.1
  },
});

module.exports = db;
Your process.env.PGPORT does not exist. Add an environment variable in the docker-compose file for the rapi service, or set it to 5432 in your credentials file, as in the sketch below.
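For example, a sketch of the credentials file with a fallback (the 5432 default matches the port mapping in the compose file above; it also reads PGDATABASE rather than PGNAME, since only PGDATABASE is set there):

// credentials.ts -- fall back to Postgres' default port when PGPORT is unset
export const credentials = {
  PGUSER: process.env.PGUSER,
  PGDATABASE: process.env.PGDATABASE, // set in docker-compose.yml; PGNAME is not
  PGPASSWORD: process.env.PGPASSWORD,
  PGHOST: process.env.PGHOST,
  PGPORT: Number(process.env.PGPORT ?? 5432),
  PGNAME: "postgres",
};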
I am trying to connect the containers for postgres and node. Here is my setup:
yml file:
version: "3"
services:
postgresDB:
image: postgres:alpine
container_name: postgresDB
ports:
- "5432:5432"
environment:
- POSTGRES_DB=myDB
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=Thisisngo1995!
express-server:
build: ./
environment:
- DB_SERVER=postgresDB
links:
- postgresDB
ports:
- "3000:3000"
Dockerfile:
FROM node:12
WORKDIR /usr/src/app
COPY package.json ./
RUN npm install
COPY . .
COPY ormconfig.docker.json ./ormconfig.json
EXPOSE 3000
CMD ["npm", "start"]
connect to postgres:
let { Pool, Client } = require("pg");

let postgres = new Pool({
  host: "postgresDB",
  port: 5432,
  user: "postgres",
  password: "Thisisngo1995!",
  database: "myDB",
});

module.exports = postgres;
and here is how I handled my endpoint:
exports.postgres_get_controller = (req, resp) => {
  console.log("Reached Here");
  postgres
    .query('SELECT * FROM public."People"')
    .then((results) => {
      console.log(results);
      resp.send({ allData: results.rows });
    })
    .catch((e) => console.log(e));
};
Whenever I try to touch the endpoint above, I get an error in the container (error output not included). Reasons why?
Note: I am able to have everything functioning on my local machine (without Docker) simply by changing the host to "localhost".
Your postgres database name and username should be the same
You can use docker-compose-wait to make sure interdependent services are launched in proper order.
See below on how to use it for your case.
Update the final part of your Dockerfile as below:
# ...
# this will be used to check if DB is up
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.7.3/wait ./wait
RUN chmod +x ./wait
CMD ./wait && npm start
Update some parts of your docker-compose.yml as below:
express-server:
  build: ./
  environment:
    - DB_SERVER=postgresDB
    - WAIT_HOSTS=postgresDB:5432
    - WAIT_BEFORE_HOSTS=4
  links:
    - postgresDB
  depends_on:
    - postgresDB
  ports:
    - "3000:3000"
I have set up an elasticsearch/kibana docker configuration and I want to connect to elasticsearch from inside of a docker container using the #elastic/elasticsearch client for node. However, the connection is "timing out".
The project takes inspiration from Patrick Triest: https://blog.patricktriest.com/text-search-docker-elasticsearch/
However, I have made some modifications in order to connect Kibana, use a newer ES image, and use the new Elasticsearch Node client.
I am using the following docker-compose file:
version: "3"
services:
api:
container_name: mp-backend
build: .
ports:
- "3000:3000"
- "9229:9229"
environment:
- NODE_ENV=local
- ES_HOST=elasticsearch
- PORT=3000
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
container_name: elasticsearch
environment:
- node.name=elasticsearch
- cluster.name=es-docker-cluster
- discovery.type=single-node
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- "http.cors.allow-origin=*"
- "http.cors.enabled=true"
- "http.cors.allow-headers=X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization"
- "http.cors.allow-credentials=true"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- data01:/usr/share/elasticsearch/data
ports:
- 9200:9200
networks:
- elastic
kibana:
image: docker.elastic.co/kibana/kibana:7.5.1
ports:
- "5601:5601"
links:
- elasticsearch
networks:
- elastic
depends_on:
- elasticsearch
volumes:
data01:
driver: local
networks:
elastic:
driver: bridge
When building/bringing the containers up, I am able to get a response from ES with curl -XGET "localhost:9200" ("You Know, for Search"...), and Kibana is running and able to connect to the index.
I have the following file located in the backend container (connection.js):
const { Client } = require("@elastic/elasticsearch");
const client = new Client({ node: "http://localhost:9200" });

/* Check the elasticsearch connection */
async function health() {
  let connected = false;
  while (!connected) {
    console.log("Connecting to Elasticsearch");
    try {
      const health = await client.cluster.health({});
      connected = true;
      console.log(health.body);
      return health;
    } catch (err) {
      console.log("ES Connection Failed", err);
    }
  }
}

health();
If I run it outside of the container then I get the expected response:
node server/connection.js
Connecting to Elasticsearch
{
  cluster_name: 'es-docker-cluster',
  status: 'yellow',
  timed_out: false,
  number_of_nodes: 1,
  number_of_data_nodes: 1,
  active_primary_shards: 7,
  active_shards: 7,
  relocating_shards: 0,
  initializing_shards: 0,
  unassigned_shards: 3,
  delayed_unassigned_shards: 0,
  number_of_pending_tasks: 0,
  number_of_in_flight_fetch: 0,
  task_max_waiting_in_queue_millis: 0,
  active_shards_percent_as_number: 70
}
However, if I run it inside of the container:
docker exec mp-backend "node" "server/connection.js"
Then I get the following response:
Connecting to Elasticsearch
ES Connection Failed ConnectionError: connect ECONNREFUSED 127.0.0.1:9200
at onResponse (/usr/src/app/node_modules/#elastic/elasticsearch/lib/Transport.js:214:13)
at ClientRequest.<anonymous> (/usr/src/app/node_modules/#elastic/elasticsearch/lib/Connection.js:98:9)
at ClientRequest.emit (events.js:223:5)
at Socket.socketErrorListener (_http_client.js:415:9)
at Socket.emit (events.js:223:5)
at emitErrorNT (internal/streams/destroy.js:92:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
at processTicksAndRejections (internal/process/task_queues.js:81:21) {
  name: 'ConnectionError',
  meta: {
    body: null,
    statusCode: null,
    headers: null,
    warnings: null,
    meta: {
      context: null,
      request: [Object],
      name: 'elasticsearch-js',
      connection: [Object],
      attempts: 3,
      aborted: false
    }
  }
}
So, I tried changing the client connection to (I read somewhere that this might help):
const client = new Client({ node: "http://172.24.0.1:9200" });
Then I am just "stuck" waiting for a response, with only one console.log of "Connecting to Elasticsearch".
I am using the following version:
"#elastic/elasticsearch": "7.5.1"
As you probably see, I do not have a full grasp of what is happening here... I have also tried to add:
links:
  - elasticsearch
networks:
  - elastic

to the api service, without any luck.
Does anyone know what I am doing wrong here? Thank you in advance :)
EDIT:
I did a "docker network inspect" on the network with *_elastic. There I see the following:
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.22.0.0/16",
"Gateway": "172.22.0.1"
}
]
},
Changing the client to connect to the "Gateway" IP:
const client = new Client({ node: "http://172.22.0.1:9200" });
Then it works! I am still wondering why, as this was just trial and error. Is there any way to obtain this IP without having to inspect the network?
In Docker, localhost (or the corresponding IPv4 address 127.0.0.1, or the corresponding IPv6 address ::1) generally means "this container"; you can't use that host name to access services running in another container.
In a Compose-based setup, the names of the services: blocks (api, elasticsearch, kibana) are usable as host names. The caveat is that all of the services have to be on the same Docker-internal network. Compose creates one for you and attaches containers to it by default. (In your example api is on the default network but the other two containers are on a separate elastic network.) Networking in Compose in the Docker documentation has some more details.
So to make this work, you need to tell your client code to honor the environment variable you're setting that points at Elasticsearch:
const esHost = process.env.ES_HOST || 'localhost';
const esUrl = 'http://' + esHost + ':9200';
const client = new Client({ node: esUrl });
In your docker-compose.yml file delete all of the networks: blocks to use the provided default network. (While you're there, links: is unnecessary and Compose provides reasonable container_name: for you; api can reasonably depends_on: [elasticsearch].)
Since we've provided a fallback for $ES_HOST, if you're working in a host development environment, it will default to using localhost; outside of Docker where it means "the current host" it will reach the published port of the Elasticsearch container.
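Applied to the connection.js from the question, the result looks roughly like this (a sketch; ES_HOST is the variable already set in the api service's environment):

import { Client } from "@elastic/elasticsearch";

// Inside the api container, ES_HOST=elasticsearch (the service name);
// outside Docker the localhost fallback hits the published port 9200.
const esHost = process.env.ES_HOST || "localhost";
const client = new Client({ node: `http://${esHost}:9200` });

async function health(): Promise<void> {
  const res = await client.cluster.health({});
  console.log(res.body); // cluster name, status, shard counts, ...
}

health().catch((err) => console.error("ES Connection Failed", err));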
I'm trying to get my Node.js application up and running using a Docker container. I have no clue what might be wrong. The credentials seem to be passed correctly when I debug them with the console. Also, firing up Sequel Pro and connecting directly with the same username and password seems to work. When Node starts in the container I get the error message:
SequelizeConnectionRefusedError: connect ECONNREFUSED 127.0.0.1:3306
The application itself is loading correctly on port 3000; however, no data is retrieved from the database. I have also tried adding the environment variables directly to the docker-compose file, but this also doesn't seem to work.
My project code is hosted over here: https://github.com/pietheinstrengholt/rssmonster
The following database.js configuration is used. When I add console.log(config) the correct credentials from the .env file are displayed.
require('dotenv').load();

const Sequelize = require('sequelize');
const fs = require('fs');
const path = require('path');

const env = process.env.NODE_ENV || 'development';
const config = require(path.join(__dirname + '/../config/config.js'))[env];

if (config.use_env_variable) {
  var sequelize = new Sequelize(process.env[config.use_env_variable], config);
} else {
  var sequelize = new Sequelize(config.database, config.username, config.password, config);
}

module.exports = sequelize;
When I do a console.log(config) inside the database.js I get the following output:
{
  username: 'rssmonster',
  password: 'password',
  database: 'rssmonster',
  host: 'localhost',
  dialect: 'mysql'
}
Following .env:
DB_HOSTNAME=localhost
DB_PORT=3306
DB_DATABASE=rssmonster
DB_USERNAME=rssmonster
DB_PASSWORD=password
And the following docker-compose.yml:
version: '2.3'
services:
  app:
    depends_on:
      mysql:
        condition: service_healthy
    build:
      context: ./
      dockerfile: app.dockerfile
    image: rssmonster/app
    ports:
      - 3000:3000
    environment:
      NODE_ENV: development
      PORT: 3000
      DB_USERNAME: rssmonster
      DB_PASSWORD: password
      DB_DATABASE: rssmonster
      DB_HOSTNAME: localhost
    working_dir: /usr/local/rssmonster/server
    env_file:
      - ./server/.env
    links:
      - mysql:mysql
  mysql:
    container_name: mysqldb
    image: mysql:5.7
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_RANDOM_ROOT_PASSWORD: "yes"
      MYSQL_DATABASE: "rssmonster"
      MYSQL_USER: "rssmonster"
      MYSQL_PASSWORD: "password"
    ports:
      - "3307:3306"
    volumes:
      - /var/lib/mysql
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      timeout: 5s
      retries: 10
volumes:
  dbdata:
Error output:
{ SequelizeConnectionRefusedError: connect ECONNREFUSED 127.0.0.1:3306
app_1 | at Promise.tap.then.catch.err (/usr/local/rssmonster/server/node_modules/sequelize/lib/dialects/mysql/connection-manager.js:128:19)
app_1 | From previous event:
app_1 | at ConnectionManager.connect (/usr/local/rssmonster/server/node_modules/sequelize/lib/dialects/mysql/connection-manager.js:125:13)
app_1 | at sequelize.runHooks.then (/usr/local/rssmonster/server/node_modules/sequelize/lib/dialects/abstract/connection-manager.js:306:50)
app_1 | From previous event:
app_1 | at ConnectionManager._connect (/usr/local/rssmonster/server/node_modules/sequelize/lib/dialects/abstract/connection-manager.js:306:8)
app_1 | at ConnectionManager.getConnection (/usr/local/rssmonster/server/node_modules/sequelize/lib/dialects/abstract/connection-manager.js:247:46)
app_1 | at Promise.try (/usr/local/rssmonster/server/node_modules/sequelize/lib/sequelize.js:564:34)
app_1 | From previous event:
app_1 | at Promise.resolve.retryParameters (/usr/local/rssmonster/server/node_modules/sequelize/lib/sequelize.js:464:64)
app_1 | at /usr/local/rssmonster/server/node_modules/retry-as-promised/index.js:60:21
app_1 | at new Promise (<anonymous>)
Instead of localhost, point to mysql, which is the service name (DNS) that Node.js will resolve to the MySQL container:
DB_HOSTNAME: mysql
And
{
  ...
  host: 'mysql',
  ...
}
Inside the container, you should reference the database container by the name you gave it in your docker-compose.yml file.
In this case you should use
DB_HOSTNAME: mysql
After searching and digging through several Googling attempts, the culprit of the problem soon appeared. In this context, the database server is not on the same machine; in other words, the MySQL database server address is not localhost. So why does the above MySQL database configuration point to the localhost address by default? It seems that if there is no further definition of the host address, it will connect to localhost by default. Read the article for further reference about the Sequelize syntax pattern in this link.
So, in order to solve the problem, just modify the file with the right database configuration. The following is the corrected database configuration:
const sequelize = require("sequelize");

const db = new sequelize("db_master", "db_user", "password", {
  host: "10.0.2.2",
  dialect: "mysql"
});

db.sync({});

module.exports = db;
Actually, the Node.js application is running in a virtual server: a guest machine run in a VirtualBox application. The MySQL database server, on the other hand, exists outside the guest machine; it is available on the host machine where VirtualBox is running. From the guest, the host machine's IP address is 10.0.2.2, so in order to connect to the MySQL database server on the host machine, use 10.0.2.2 as the host address.
Use your connection string as:

mysql://username:password@mysql:(port_running_on_container)or(exposed_port)/db_name
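With the values from the compose file in this thread, that would look something like this (a sketch; Sequelize accepts a connection URI directly, and inside the network you use the container port 3306, not the published 3307):

import { Sequelize } from "sequelize";

// Service name "mysql" as host, container port 3306,
// credentials as defined in docker-compose.yml.
const db = new Sequelize("mysql://rssmonster:password@mysql:3306/rssmonster");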
Answers already exist, but to provide some further explanation:
You can't use 127.0.0.1 (localhost) to access other services/containers, since each container views that as itself. When running docker-compose, all your services are placed on the same Docker network, and all services inside the same Docker network can reach each other by service name.
Hence, as already stated in previous answers: in your configuration, change the DB hostname from localhost to mysql.
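In other words, the config printed earlier would become (only the host changes):

{
  username: 'rssmonster',
  password: 'password',
  database: 'rssmonster',
  host: 'mysql', // the service name from docker-compose.yml, not localhost
  dialect: 'mysql'
}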
Three things to check first:
1. Make sure your service name is mysql.
2. Make sure DB_HOST is also configured as mysql.
3. Make sure your backend service depends on mysql in docker-compose.yml.
Here is my working code:
export const db = new Sequelize(
  process.env.DB_NAME,
  process.env.DB_USER,
  process.env.DB_PASSWORD,
  {
    port: process.env.DB_PORT,
    host: 'mysql',
    dialect: "mysql",
    logging: false,
    pool: {
      max: 5,
      min: 0,
      acquire: 30000,
      idle: 10000
    },
  }
);