TypeORM migration:run not working, errno: -3008, getaddrinfo ENOTFOUND - node.js

Do you know why I get the following error when I try to run migration:run to execute a migration?
node --require ts-node/register ./node_modules/typeorm/cli.js migration:run
Error during migration run:
Error: getaddrinfo ENOTFOUND users-service-db
at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:69:26) {
errno: -3008,
code: 'ENOTFOUND',
syscall: 'getaddrinfo',
hostname: 'users-service-db',
fatal: true
}
error Command failed with exit code 1.
My config is:
users-service-db:
  environment:
    - MYSQL_ROOT_PASSWORD=password
    - MYSQL_DATABASE=db
  image: mysql:5.7.20
  ports:
    - "7201:3306"
The users-service-db container is running, so does this Error: getaddrinfo ENOTFOUND users-service-db mean the host can't be resolved? Can you help?
After trying answers 1 and 2 I am still getting the same error and don't know what to do; it worked before. Here is my full docker-compose.yml:
version: "3"
services:
api-gateway:
build:
context: "."
dockerfile: "./api-gateway/Dockerfile"
depends_on:
- chat-service
- users-service
ports:
- "7000:7000"
volumes:
- ./api-gateway:/opt/app
chat-service:
build:
context: "."
dockerfile: "./chat-service/Dockerfile"
depends_on:
- chat-service-db
ports:
- "7100:7100"
volumes:
- ./chat-service:/opt/app
chat-service-db:
environment:
- MYSQL_ROOT_PASSWORD=password
- MYSQL_DATABASE=db
image: mysql:5.7.20
ports:
- "7200:3306"
phpmyadmin:
image: phpmyadmin/phpmyadmin
ports:
- "7300:80"
volumes:
- ./phpmyadmin/config.user.inc.php:/etc/phpmyadmin/config.user.inc.php
users-service:
build:
context: "."
dockerfile: "./users-service/Dockerfile"
depends_on:
- users-service-db
ports:
- "7101:7101"
volumes:
- ./users-service:/opt/app
users-service-db:
environment:
- MYSQL_ROOT_PASSWORD=password
- MYSQL_DATABASE=db
image: mysql:5.7.20
ports:
- "7201:3306"
hostname: 'localhost'
Finally, I resolved the error thanks to @Eranga Heshan. I created an additional ormConfig.ts file and pasted this:
export = {
    "type": "mysql",
    "host": "localhost",
    "port": 7201,
    "username": "root",
    "password": "password",
    "database": "db",
    "synchronize": true,
    "logging": false,
    "entities": [
        "src/entities/**/*.ts"
    ],
    "migrations": [
        "./src/db/migrations/**/*.ts"
    ],
    "cli": {
        "entitiesDir": "src/db/entities",
        "migrationsDir": "src/db/migrations"
    }
}
Then I ran:
node --require ts-node/register ./node_modules/typeorm/cli.js migration:run --config src/db/migrations/ormConfig
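For reference, this works because localhost:7201 is the published side of the container's port mapping ("7201:3306"). If the same migration were instead run from inside the users-service container, the config would point at the service name and the internal port; a minimal sketch, assuming the same credentials:

export = {
    "type": "mysql",
    "host": "users-service-db", // Docker DNS name, resolvable only inside the compose network
    "port": 3306,               // the container's internal port, not the published 7201
    "username": "root",
    "password": "password",
    "database": "db"
};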

Your VS Code terminal is running on your host machine, not inside the Docker network, so it can't resolve the users-service-db hostname.
You can work around this in two ways.
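You can confirm the diagnosis from that terminal with a quick DNS check; a minimal sketch using Node's built-in dns module (the file name check-dns.js is just an example):

// check-dns.js - run with: node check-dns.js
const dns = require('dns');

// 'users-service-db' is a Docker service name, resolvable only inside the compose network
dns.lookup('users-service-db', (err, address) => {
    if (err) {
        console.error(err.code); // prints ENOTFOUND when run on the host machine
    } else {
        console.log(address);    // prints the container IP when run inside the network
    }
});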
1. Use a new config file and execute migrations from your localhost
Create a new TypeORM connection config file migrationsOrmConfig.ts and put it inside your project (let's say you put it in the src/migrations directory):
export = {
    host: 'localhost',
    port: 7201,
    type: 'mysql',
    username: 'root',
    password: 'password',
    database: 'db',
};
Now you can modify the command you used earlier to run migrations:
node --require ts-node/register ./node_modules/typeorm/cli.js migration:run --config src/migrations/migrationsOrmConfig
2. Execute migrations from a terminal within the container
In your VS Code terminal, type:
docker ps -a
Get the CONTAINER ID of users-service (let's say it is CONTAINER_ID).
Open up a terminal inside the container:
docker exec -it CONTAINER_ID /bin/bash
Execute the command you used earlier to run migrations (if the following command complains about the typeorm node module not being found, you can install it inside the container):
node --require ts-node/register ./node_modules/typeorm/cli.js migration:run
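If the migration still fails inside the container, it can help to rule out a raw TCP problem first; a minimal sketch using Node's net module, with the hostname and internal port taken from the compose file above:

// check-db.js - run inside the users-service container: node check-db.js
const net = require('net');

const socket = net.connect(3306, 'users-service-db', () => {
    console.log('TCP connection to users-service-db:3306 OK');
    socket.end();
});
socket.on('error', (err) => console.error('connect failed:', err.code));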

Related

Docker - Files Contain Bad Line Terminators

I set up a development environment using Docker on Windows 10. My Dockerfile and docker-compose.yml file use php:8.2.2-apache, mysql:8.0.32, composer:2.5.3, and phpmyadmin:5.2.1.
I will admit that getting Docker up and running to basically mimic my old xampp development environment has been incredibly frustrating.
Recently, I added robmorgan/phinx 0.13 to my composer.json. Initially, I ran vendor/bin/phinx init from the Docker container's terminal and it successfully created a phinx.php file. I stopped the container and modified my phinx.php file to use values from my .env file. When I reran Docker and went back into the container's terminal to run vendor/bin/phinx create <name>, I got this error:
/usr/bin/env: 'php\r': No such file or directory
I have read in several places that this is because files have the Windows line terminators instead of the Unix line terminators.
The issue is that I do not understand which file is affected. How can I audit my files to find out what is the culprit?
In case you are curious, these are my docker-compose.yml and phinx.php:
version: '3.9'
services:
  webserver:
    build: ./docker
    image: -redacted-
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./www:/var/www/html
    links:
      - db
  db:
    image: mysql:8.0.32
    ports:
      - "3306:3306"
    volumes:
      - ./database:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - MYSQL_DATABASE=${MYSQL_DATABASE}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
  composer:
    image: composer:2.5.3
    command: ["composer", "install"]
    volumes:
      - ./www:/app
  phpmyadmin:
    depends_on:
      - db
    image: phpmyadmin:5.2.1
    restart: always
    ports:
      - 8080:80
    environment:
      PMA_HOST: db
<?php
$dotenv = Dotenv\Dotenv::createImmutable(__DIR__);
$dotenv->load();

$databaseName = $_ENV['MYSQL_DATABASE'];
$username = $_ENV['MYSQL_USER'];
$password = $_ENV['MYSQL_PASSWORD'];

return [
    'paths' => [
        'migrations' => '%%PHINX_CONFIG_DIR%%/db/migrations',
        'seeds' => '%%PHINX_CONFIG_DIR%%/db/seeds'
    ],
    'environments' => [
        'default_migration_table' => 'phinxlog',
        'default_environment' => 'development',
        'production' => [
            'adapter' => 'mysql',
            'host' => 'localhost',
            'name' => $databaseName,
            'user' => $username,
            'pass' => $password,
            'port' => '3306',
            'charset' => 'utf8',
        ]
    ],
    'version_order' => 'creation'
];
And I am running this to load Docker: docker-compose --env-file=./www/.env up
This should find files with CRLF line terminators:
find . -type f -print0 | xargs -0 file | grep CRLF
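The 'php\r' in the error message suggests the shebang line of the script being executed (here vendor/bin/phinx) ends with a carriage return. If you prefer staying in Node, the same audit can be done with a short script; a sketch that prints every file containing CRLF line terminators, recursing from the current directory and skipping node_modules and .git:

// find-crlf.js - run with: node find-crlf.js
const fs = require('fs');
const path = require('path');

function scan(dir) {
    for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
        if (entry.name === 'node_modules' || entry.name === '.git') continue;
        const full = path.join(dir, entry.name);
        if (entry.isDirectory()) {
            scan(full);
        } else if (fs.readFileSync(full).includes('\r\n')) {
            console.log(full); // file contains Windows line terminators
        }
    }
}
scan('.');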

In Sequelize connection I am getting operation timeout error. How to fix this issue?

I am trying to run a Node.js server and Postgres inside Docker, using Sequelize for the DB connection. However, it seems my Node.js server is not able to communicate with the Postgres DB inside Docker.
Before someone marks this as a duplicate, please note that I have already checked other answers and none of them worked for me.
I already tried implementing a retry strategy for the Sequelize connection.
Here's my docker-compose file:
version: "3.8"
services:
rapi:
container_name: rapi
image: rapi/latest
build: .
ports:
- "3001:3001"
environment:
- EXTERNAL_PORT=3001
- PGUSER=rapiuser
- PGPASSWORD=12345
- PGDATABASE=postgres
- PGHOST=rapi_db # NAME OF THE SERVICE
depends_on:
- rapi_db
rapi_db:
container_name: rapi_db
image: "postgres:12"
ports:
- "5432:5432"
environment:
- POSTGRES_USER=rapiuser
- POSTGRES_PASSWORD=12345
- POSTGRES_DB=postgres
volumes:
- rapi_data:/var/lib/postgresql/data
volumes:
rapi_data: {}
Here's my Dockerfile:
FROM node:16
EXPOSE 3000
# Use latest version of npm
RUN npm i npm@latest -g
COPY package.json package-lock.json* ./
RUN npm install --no-optional && npm cache clean --force
# copy in our source code last, as it changes the most
WORKDIR /
COPY . .
CMD [ "node", "index.js" ]
My DB Credentials:
let credentials = {
    PGUSER: process.env.PGUSER,
    PGDATABASE: process.env.PGNAME,
    PGPASSWORD: process.env.PGPASSWORD,
    PGHOST: process.env.PGHOST,
    PGPORT: process.env.PGPORT,
    PGNAME: 'postgres'
};
console.log("env Users: " + process.env.PGUSER + " env Database: " + process.env.PGDATABASE + " env PGHOST: " + process.env.PGHOST + " env PORT: " + process.env.EXTERNAL_PORT);
//else credentials = {}
module.exports = credentials;
Sequelize DB code:
const db = new Sequelize(credentials.PGDATABASE, credentials.PGUSER, credentials.PGPASSWORD, {
    host: credentials.PGHOST,
    dialect: credentials.PGNAME,
    port: credentials.PGPORT,
    protocol: credentials.PGNAME,
    dialectOptions: {},
    logging: false,
    define: {
        timestamps: false
    },
    pool: {
        max: 10,
        min: 0,
        acquire: 100000,
    },
    retry: {
        match: [/Deadlock/i, Sequelize.ConnectionError], // Retry on connection errors
        max: 3, // Maximum retry 3 times
        backoffBase: 3000, // Initial backoff duration in ms. Default: 100
        backoffExponent: 1.5, // Exponent to increase backoff each try. Default: 1.1
    },
});

module.exports = db;
Your process.env.PGPORT does not exist. Add an environment variable in the docker-compose file for the rapi service, or set it to 5432 in your credentials file.
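A minimal sketch of the in-code fallback (note it also reads PGDATABASE rather than PGNAME, since only PGDATABASE is set in the compose file):

// credentials.js - fall back to Postgres' default internal port when PGPORT is unset
const credentials = {
    PGUSER: process.env.PGUSER,
    PGDATABASE: process.env.PGDATABASE,
    PGPASSWORD: process.env.PGPASSWORD,
    PGHOST: process.env.PGHOST,
    PGPORT: process.env.PGPORT || 5432,
    PGNAME: 'postgres'
};
module.exports = credentials;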

MongoDB cluster timeout while connecting to Node-RED

I am facing trouble while trying to connect my MongoDB:3.4 cluster to Node-RED:2 using Docker Swarm.
My environment consists of one leader machine, two workers with one Mongo node on each (mongo1 and mongo2), and the Node-RED container on one of the workers.
I successfully initiated the replica set with the command below:
rs.initiate({
    _id: "rs1",
    members: [
        { _id: 1, host: "mongo1:27017" },
        { _id: 2, host: "mongo2:27017" }
    ]
})
A connection with Mongo Express was successful on both the primary and secondary nodes of my cluster.
But when I tried to connect to the cluster from Node-RED using the node-red-node-mongodb module, I got the following error:
MongoNetworkError: failed to connect to server [mongo2:27017] on first connect [MongoNetworkTimeoutError: connection timed out
at connectionFailureError (/data/node_modules/mongodb/lib/core/connection/connect.js:362:14)
at Socket.<anonymous> (/data/node_modules/mongodb/lib/core/connection/connect.js:330:16)
at Object.onceWrapper (events.js:519:28)
at Socket.emit (events.js:400:28)
at Socket._onTimeout (net.js:495:8)
at listOnTimeout (internal/timers.js:557:17)
at processTimers (internal/timers.js:500:7)]
This is how the MongoDB node was configured:
Host: mongo1,mongo2
Connection topology: ReplicaSet/Cluster (mongodb://)
Connection options: replicaSet=rs1&tls=true&tlsAllowInvalidCertificates=true&wtimeoutMS=10000&slaveOk=true
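Assuming the node-red-node-mongodb module assembles a standard MongoDB connection string from the host list and those options, the resulting URI would look roughly like this sketch:

mongodb://mongo1:27017,mongo2:27017/?replicaSet=rs1&tls=true&tlsAllowInvalidCertificates=true&wtimeoutMS=10000&slaveOk=true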
And these are the relevant parts of the docker-compose.yml file:
version: '3.4'
services:
NodeRed:
user: root
networks:
- mynetwork
volumes:
- /home/ssmanager/nfsdata/nodered:/data
- /home/ssmanager/nfsdata/records:/data/records
- /home/ssmanager/nfsdata/cdr:/data/cdr
- /home/ssmanager/nfsdata/html/decrypted_temp:/data/records/decrypted
image: nodered/node-red:2
deploy:
placement:
constraints:
- "node.hostname!=ssmanager3"
endpoint_mode: dnsrr
mode: replicated
replicas: 1
update_config:
delay: 10s
restart_policy:
condition: any
max_attempts: 5
mongo1:
image: mongo:3.4
command: mongod --replSet rs1 --noauth --oplogSize 3
environment:
TERM: xterm
volumes:
- /etc/localtime:/etc/localtime:ro
networks:
- mynetwork
deploy:
replicas: 1
placement:
constraints:
- node.labels.mongo.replica == 1
- "node.hostname!=ssmanager3"
mongo2:
image: mongo:3.4
command: mongod --replSet rs1 --noauth --oplogSize 3
environment:
TERM: xterm
volumes:
- /etc/localtime:/etc/localtime:ro
networks:
- mynetwork
deploy:
replicas: 1
placement:
constraints:
- node.labels.mongo.replica == 2
- "node.hostname!=ssmanager3"
express:
container_name: express
image: mongo-express:0.54.0
environment:
ME_CONFIG_BASICAUTH_USERNAME: admin
ME_CONFIG_BASICAUTH_PASSWORD: password
ME_CONFIG_MONGODB_ENABLE_ADMIN: "true"
ME_CONFIG_MONGODB_PORT: 27017
ME_CONFIG_MONGODB_SERVER: mongo1
ME_CONFIG_MONGODB_URL: mongodb://mongo:27017
ME_CONFIG_REQUEST_SIZE: 100Mb
command:
- "mongo-express"
networks:
- mynetwork
deploy:
mode: replicated
replicas: 1
placement:
constraints:
- "node.hostname!=dcsynmgr01"
- "node.hostname!=ssmanager3"
ports:
- target: 8081
published: 8081
protocol: tcp
mode: host
networks:
host_mode:
external:
name: 'host'
mynetwork:
attachable: true

ECONNREFUSED at TCPConnectWrap.afterConnect NodeJS

I created a Node.js app which should use a URI to connect to RabbitMQ. Both are containerized with Docker and are created by a docker-compose file. After running docker-compose up, the Node.js app returns this error:
Error: connect ECONNREFUSED X.X.X:X:5672
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1133:16) {
errno: -111,
code: 'ECONNREFUSED',
syscall: 'connect',
address: '172.26.0.4',
port: 5672
}
When I start the API server locally (as a plain Node application, not as a container), the connection to the containerized RabbitMQ server is established without any problems.
My rabbitmq.conf file looks like:
default_vhost = /
default_user = guest
default_pass = guest
default_user_tags.administrator = true
default_permissions.configure = .*
default_permissions.read = .*
default_permissions.write = .*
loopback_users = none
listeners.tcp.default = 5672
management.listener.port = 15672
management.listener.ssl = false
management.load_definitions = /etc/rabbitmq/definitions.json
URI for connecting:
{
    "mongoURI": "mongodb://mongo:27017",
    "amqpURI": "amqp://guest:guest@rabbitmq:5672"
}
As you can see, the hostname matches the service name in the docker-compose file.
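For context, this is roughly how such a URI is consumed; a minimal sketch assuming the amqplib client (the question doesn't name the library):

// connect.js - attempt a connection with the URI from the config above
const amqp = require('amqplib');

amqp.connect('amqp://guest:guest@rabbitmq:5672')
    .then((conn) => {
        console.log('connected to rabbitmq');
        return conn.close();
    })
    .catch((err) => console.error(err.code)); // ECONNREFUSED if the broker isn't accepting connections yet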
Finally, the docker-compose file:
version: "3.8"
services:
react-app:
image: react-app
stdin_open: true
ports:
- "3000:3000"
networks:
- mern-app
api-server:
image: api-server
ports:
- "5000:5000"
networks:
- mern-app
depends_on:
- mongo
- rabbitmq
process-schedular:
image: process-schedular
ports:
- "5005:5005"
networks:
- mern-app
depends_on:
- mongo
- rabbitmq
mongo:
image: mongo:3.6.19-xenial
ports:
- "27017:27017"
networks:
- mern-app
volumes:
- mongo-data:/data/db
rabbitmq:
image: rabbitmq:3-management
hostname: rabbitmq
volumes:
- ./server/amqp/docker/enabled_plugins:/etc/rabbitmq/enabled_plugins
- ./server/amqp/docker/definitions.json:/etc/rabbitmq/definitions.json
- ./server/amqp/docker/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf
ports:
- "5672:5672"
- "15672:15672"
networks:
- mern-app
networks:
mern-app:
driver: bridge
volumes:
mongo-data:
driver: local

(Docker-Compose) UnhandledPromiseRejectionWarning when connecting node and postgres

I am trying to connect the containers for postgres and node. Here is my setup:
docker-compose.yml file:
version: "3"
services:
postgresDB:
image: postgres:alpine
container_name: postgresDB
ports:
- "5432:5432"
environment:
- POSTGRES_DB=myDB
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=Thisisngo1995!
express-server:
build: ./
environment:
- DB_SERVER=postgresDB
links:
- postgresDB
ports:
- "3000:3000"
Dockerfile:
FROM node:12
WORKDIR /usr/src/app
COPY package.json ./
RUN npm install
COPY . .
COPY ormconfig.docker.json ./ormconfig.json
EXPOSE 3000
CMD ["npm", "start"]
Connecting to postgres:
let { Pool, Client } = require("pg");

let postgres = new Pool({
    host: "postgresDB",
    port: 5432,
    user: "postgres",
    password: "Thisisngo1995!",
    database: "myDB",
});

module.exports = postgres;
and here is how I handled my endpoint:
exports.postgres_get_controller = (req, resp) => {
    console.log("Reached Here");
    postgres
        .query('SELECT * FROM public."People"')
        .then((results) => {
            console.log(results);
            resp.send({ allData: results.rows });
        })
        .catch((e) => console.log(e));
};
Whenever I try to hit the endpoint above, I get an error in the container. Any idea why?
Note: I am able to have everything functioning on my local machine (without Docker) simply by changing the host to "localhost".
Your postgres database name and username should be the same.
You can use docker-compose-wait to make sure interdependent services are launched in the proper order. See below for how to use it in your case.
Update the final part of your Dockerfile as below:
# ...
# this will be used to check if DB is up
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.7.3/wait ./wait
RUN chmod +x ./wait
CMD ./wait && npm start
Update some parts of your docker-compose.yml as below:
express-server:
  build: ./
  environment:
    - DB_SERVER=postgresDB
    - WAIT_HOSTS=postgresDB:5432
    - WAIT_BEFORE_HOSTS=4
  links:
    - postgresDB
  depends_on:
    - postgresDB
  ports:
    - "3000:3000"
