(Docker-Compose) UnhandledPromiseRejectionWarning when connecting Node and Postgres

I am trying to connect the Postgres and Node containers. Here is my setup:
docker-compose.yml:
version: "3"
services:
  postgresDB:
    image: postgres:alpine
    container_name: postgresDB
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_DB=myDB
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=Thisisngo1995!
  express-server:
    build: ./
    environment:
      - DB_SERVER=postgresDB
    links:
      - postgresDB
    ports:
      - "3000:3000"
Dockerfile:
FROM node:12
WORKDIR /usr/src/app
COPY package.json ./
RUN npm install
COPY . .
COPY ormconfig.docker.json ./ormconfig.json
EXPOSE 3000
CMD ["npm", "start"]
connect to postgres:
let { Pool, Client } = require("pg");

let postgres = new Pool({
  host: "postgresDB",
  port: 5432,
  user: "postgres",
  password: "Thisisngo1995!",
  database: "myDB",
});

module.exports = postgres;
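Since docker-compose.yml already passes DB_SERVER=postgresDB into the express-server container, the host can also be read from the environment instead of being hardcoded, so the same file works both inside and outside Docker. A minimal sketch (the localhost fallback is an assumption):

let { Pool } = require("pg");

// DB_SERVER is set in docker-compose.yml; fall back to localhost for
// runs outside Docker (assumed default, adjust as needed).
let postgres = new Pool({
  host: process.env.DB_SERVER || "localhost",
  port: 5432,
  user: "postgres",
  password: "Thisisngo1995!",
  database: "myDB",
});

module.exports = postgres;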
And here is how I handled my endpoint:
exports.postgres_get_controller = (req, resp) => {
  console.log("Reached Here");
  postgres
    .query('SELECT * FROM public."People"')
    .then((results) => {
      console.log(results);
      resp.send({ allData: results.rows });
    })
    .catch((e) => console.log(e));
};
Whenever I hit the endpoint above, I get this error in the container:
Reasons why?
Note: I am able to have everything functioning on my local machine (without docker) simply by changing "host: localhost"
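To see the underlying error instead of just the warning, Node's standard unhandledRejection hook can log the rejected reason. A small sketch (placement in the app entry file is an assumption):

// Log the actual error object behind any UnhandledPromiseRejectionWarning.
process.on("unhandledRejection", (reason) => {
  console.error("Unhandled rejection:", reason);
});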

Your postgres database name and username should be the same

You can use docker-compose-wait to make sure interdependent services are launched in the proper order. See below for how to use it in your case.
Update the final part of your Dockerfile as below:
# ...
# this will be used to check if DB is up
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.7.3/wait ./wait
RUN chmod +x ./wait
CMD ./wait && npm start
Update some parts of your docker-compose.yml as below:
express-server:
  build: ./
  environment:
    - DB_SERVER=postgresDB
    - WAIT_HOSTS=postgresDB:5432
    - WAIT_BEFORE_HOSTS=4
  links:
    - postgresDB
  depends_on:
    - postgresDB
  ports:
    - "3000:3000"


Docker - Files Contain Bad Line Terminators

I set up a development environment using Docker on Windows 10. My Dockerfile and docker-compose.yml use php:8.2.2-apache, mysql:8.0.32, composer:2.5.3, and phpMyAdmin:5.2.1.
I will admit that getting Docker up and running to basically mimic my old XAMPP development environment has been incredibly frustrating.
Recently, I added robmorgan/phinx 0.13 to my composer.json. Initially, I ran vendor/bin/phinx init from the Docker container's terminal, and it successfully created a phinx.php file. I then stopped the container and modified phinx.php to use values from my .env file. When I started Docker again and went back into the container's terminal to run vendor/bin/phinx create <name>, I got this error:
usr/bin/env: 'php\r': No such file or directory
I have read in several places that this is because files have the Windows line terminators instead of the Unix line terminators.
The issue is that I do not understand which file is affected. How can I audit my files to find out what is the culprit?
In case you are curious, here are my docker-compose.yml and phinx.php:
version: '3.9'
services:
  webserver:
    build: ./docker
    image: -redacted-
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./www:/var/www/html
    links:
      - db
  db:
    image: mysql:8.0.32
    ports:
      - "3306:3306"
    volumes:
      - ./database:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - MYSQL_DATABASE=${MYSQL_DATABASE}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
  composer:
    image: composer:2.5.3
    command: ["composer", "install"]
    volumes:
      - ./www:/app
  phpmyadmin:
    depends_on:
      - db
    image: phpmyadmin:5.2.1
    restart: always
    ports:
      - 8080:80
    environment:
      PMA_HOST: db
<?php
$dotenv = Dotenv\Dotenv::createImmutable(__DIR__);
$dotenv->load();

$databaseName = $_ENV['MYSQL_DATABASE'];
$username = $_ENV['MYSQL_USER'];
$password = $_ENV['MYSQL_PASSWORD'];

return
[
    'paths' => [
        'migrations' => '%%PHINX_CONFIG_DIR%%/db/migrations',
        'seeds' => '%%PHINX_CONFIG_DIR%%/db/seeds'
    ],
    'environments' => [
        'default_migration_table' => 'phinxlog',
        'default_environment' => 'development',
        'production' => [
            'adapter' => 'mysql',
            'host' => 'localhost',
            'name' => $databaseName,
            'user' => $username,
            'pass' => $password,
            'port' => '3306',
            'charset' => 'utf8',
        ]
    ],
    'version_order' => 'creation'
];
And I am running this to load Docker: docker-compose --env-file=./www/.env up
This should find files with CRLF line endings:
find . -type f -print0 | xargs -0 file | grep CRLF
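Once the culprit is found (the usr/bin/env: 'php\r' error points at the shebang line of the phinx launcher script, typically vendor/bin/phinx or the file it symlinks to), the endings can be converted. A sketch, assuming dos2unix or GNU sed is available in the container:

# Convert the reported file(s) to LF endings.
dos2unix vendor/bin/phinx
# Without dos2unix, GNU sed can strip the trailing \r in place:
sed -i 's/\r$//' vendor/bin/phinx
# To keep git on Windows from reintroducing CRLFs, a .gitattributes rule helps:
echo '* text=auto eol=lf' >> .gitattributes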

I am getting an operation timeout error on the Sequelize connection. How do I fix this issue?

I am trying to run a Node.js server and Postgres inside Docker, using Sequelize for the DB connection. However, it seems my Node.js server is not able to communicate with the Postgres DB inside Docker.
Before someone marks this as a duplicate, please note that I have already checked other answers and none of them worked for me.
I have already tried implementing a retry strategy for the Sequelize connection.
Here's my docker-compose file:
version: "3.8"
services:
rapi:
container_name: rapi
image: rapi/latest
build: .
ports:
- "3001:3001"
environment:
- EXTERNAL_PORT=3001
- PGUSER=rapiuser
- PGPASSWORD=12345
- PGDATABASE=postgres
- PGHOST=rapi_db # NAME OF THE SERVICE
depends_on:
- rapi_db
rapi_db:
container_name: rapi_db
image: "postgres:12"
ports:
- "5432:5432"
environment:
- POSTGRES_USER=rapiuser
- POSTGRES_PASSWORD=12345
- POSTGRES_DB=postgres
volumes:
- rapi_data:/var/lib/postgresql/data
volumes:
rapi_data: {}
Here's my Dockerfile:
FROM node:16
EXPOSE 3000
# Use latest version of npm
RUN npm install npm@latest -g
COPY package.json package-lock.json* ./
RUN npm install --no-optional && npm cache clean --force
# copy in our source code last, as it changes the most
WORKDIR /
COPY . .
CMD [ "node", "index.js" ]
My DB Credentials:
credentials = {
  PGUSER: process.env.PGUSER,
  PGDATABASE: process.env.PGNAME,
  PGPASSWORD: process.env.PGPASSWORD,
  PGHOST: process.env.PGHOST,
  PGPORT: process.env.PGPORT,
  PGNAME: 'postgres'
}

console.log("env Users: " + process.env.PGUSER + " env Database: " + process.env.PGDATABASE + " env PGHOST: " + process.env.PGHOST + " env PORT: " + process.env.EXTERNAL_PORT)
//else credentials = {}

module.exports = credentials;
Sequelize DB code:
const db = new Sequelize(credentials.PGDATABASE, credentials.PGUSER, credentials.PGPASSWORD, {
  host: credentials.PGHOST,
  dialect: credentials.PGNAME,
  port: credentials.PGPORT,
  protocol: credentials.PGNAME,
  dialectOptions: {},
  logging: false,
  define: {
    timestamps: false
  },
  pool: {
    max: 10,
    min: 0,
    acquire: 100000,
  },
  retry: {
    match: [/Deadlock/i, Sequelize.ConnectionError], // Retry on connection errors
    max: 3, // Maximum retry 3 times
    backoffBase: 3000, // Initial backoff duration in ms. Default: 100
    backoffExponent: 1.5, // Exponent to increase backoff each try. Default: 1.1
  },
});

module.exports = db;
Your process.env.PGPORT does not exist. Add an environment variable in the docker-compose file for the rapi service, or default it to 5432 in your credentials file.
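A minimal sketch of that fix, showing only the environment block of the rapi service (everything else unchanged):

rapi:
  environment:
    - EXTERNAL_PORT=3001
    - PGUSER=rapiuser
    - PGPASSWORD=12345
    - PGDATABASE=postgres
    - PGHOST=rapi_db
    - PGPORT=5432 # the missing variable

Alternatively, default it in the credentials file with PGPORT: process.env.PGPORT || 5432.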

My Puppeteer is not working in a Docker container

It works fine in my plain development environment, but not in a Docker container. How do I set up Puppeteer in a Docker container?
I have looked at a lot of questions and answers, but nothing has worked so far. I think my code is wrong.
My environment:
OS: macOS, M1
Here is my Dockerfile:
FROM node:18-alpine
WORKDIR /usr/src/app
RUN apk update && apk upgrade
RUN apk add --no-cache udev ttf-freefont chromium
ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true \
    CHROME_PATH=/usr/bin/chromium-browser \
    PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 8120
CMD ["npm", "run", "start:dev"]
And docker-compose.yml:
version: "3.7"
services:
node:
container_name: node-test
build:
context: .
dockerfile: Dockerfile
restart: always
platform: linux/amd64
extra_hosts:
- "host.docker.internal:host-gateway"
networks:
- test-net
env_file:
- .env
ports:
- "8010:8010"
depends_on:
- mongo
mongo:
image: mongo
container_name: mongo
restart: always
networks:
- test-net
ports:
- "27017:27017"
environment:
MONGO_INITDB_ROOT_USERNAME: test
MONGO_INITDB_ROOT_PASSWORD: test
MONGO_INITDB_DATABASE: test
volumes:
mongo:
networks:
test-net:
name: test-net
external: true
crawler.ts:
import puppeteer from 'puppeteer';

export const crawler = async (url: string) => {
  console.log(url); // <- [v] working
  const browser = await puppeteer.launch({
    ignoreHTTPSErrors: false,
    executablePath: '/usr/bin/chromium-browser',
    headless: true,
    args: [
      '--disable-dev-shm-usage',
      '--no-sandbox',
      '--disable-setuid-sandbox'
    ],
    slowMo: 30
  });
  console.log(browser); // <- [v] prints something.
  const page = await browser.newPage();
  console.log(page); // <- [x] not working. prints nothing.
  await page.setViewport({ width: 1920, height: 1080 });
  await page.goto("https://www.example.com");
  await page.close();
  await browser.close();
};
The error message changes whenever I modify my code, but this is the latest one from the terminal:
node-test | ProtocolError: Protocol error (Target.createTarget): Target closed.
node-test | at /usr/src/app/node_modules/puppeteer/src/common/Connection.ts:119:16
node-test | at new Promise (<anonymous>)
node-test | at Connection.send (/usr/src/app/node_modules/puppeteer/src/common/Connection.ts:115:12)
node-test | at Browser._createPageInContext (/usr/src/app/node_modules/puppeteer/src/common/Browser.ts:525:47)
node-test | at BrowserContext.newPage (/usr/src/app/node_modules/puppeteer/src/common/Browser.ts:886:26)
node-test | at Browser.newPage (/usr/src/app/node_modules/puppeteer/src/common/Browser.ts:518:33)
node-test | at crawler (/usr/src/app/src/crawler/crawler.ts:16:30)
node-test | at processTicksAndRejections (node:internal/process/task_queues:95:5)
node-test | at async CrawlerController (/usr/src/app/src/crawler/crawler.controller.ts:10:5) {
node-test | originalMessage: ''
node-test | }
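One way to narrow down Target closed errors like this is to let Chromium's own output reach the container logs via Puppeteer's dumpio launch option; the browser process usually prints the reason it died. A debugging sketch (other options as in crawler.ts):

const browser = await puppeteer.launch({
  executablePath: '/usr/bin/chromium-browser',
  headless: true,
  dumpio: true, // pipe the browser's stdout/stderr into this process
  args: ['--disable-dev-shm-usage', '--no-sandbox', '--disable-setuid-sandbox'],
});

Note also that platform: linux/amd64 in docker-compose.yml forces qemu emulation on an M1 host, which is a common source of Chromium crashes; building for linux/arm64 may be worth trying.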

amqplib error: "Frame size exceeds frame max" inside Docker container

I am trying to build a simple application with a backend on Node.js + TS and RabbitMQ, based on Docker. There are 2 containers: a rabbitmq container and a backend container with 2 servers running, a producer and a consumer. When I try to access the RabbitMQ server, I get the error "Frame size exceeds frame max".
My producer server code is:
import express from 'express';
import amqplib, { Connection, Channel, Options } from 'amqplib';

const producer = express();

const sendRabbitMq = () => {
  amqplib.connect('amqp://localhost', function(error0: any, connection: any) {
    if (error0) {
      console.log('Some error...');
      throw error0;
    }
  });
};

producer.post('/send', (_req, res) => {
  sendRabbitMq();
  console.log('Done...');
  res.send("Ok");
});

export { producer };
It is imported into the main file, index.ts, and runs from there.
Maybe I also have some bad configuration inside Docker. My Dockerfile is:
FROM node:16
WORKDIR /app/backend/src
COPY *.json ./
RUN npm install
COPY . .
And my docker-compose.yml contains this:
version: '3'
services:
  backend:
    build: ./backend
    container_name: 'backend'
    command: npm run start:dev
    restart: always
    volumes:
      - ./backend:/app/backend/src
      - ./conf/myrabbit.conf:/etc/rabbitmq/rabbitmq.config
    ports:
      - 3000:3000
    environment:
      - PRODUCER_PORT=3000
      - CONSUMER_PORT=5672
    depends_on:
      - rabbitmq
  rabbitmq:
    image: rabbitmq:3.9.13
    container_name: 'rabbitmq'
    ports:
      - 5672:5672
      - 15672:15672
    environment:
      - RABBITMQ_DEFAULT_USER=user
      - RABBITMQ_DEFAULT_PASS=user
I would really appreciate your help.
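Two things stand out, offered as suggestions rather than a confirmed fix: inside the backend container, amqp://localhost points at the backend container itself rather than at RabbitMQ (the compose service name rabbitmq resolves to it), and the top-level amqplib export is promise-based, while the callback signature used above belongs to require('amqplib/callback_api'). A sketch combining both (the queue name is made up):

import amqplib from 'amqplib';

const sendRabbitMq = async () => {
  // Use the compose service name and the credentials from docker-compose.yml.
  const connection = await amqplib.connect('amqp://user:user@rabbitmq:5672');
  const channel = await connection.createChannel();
  await channel.assertQueue('demo-queue'); // hypothetical queue name
  channel.sendToQueue('demo-queue', Buffer.from('hello'));
  await channel.close();
  await connection.close();
};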

elasticsearch indices.exists returns false on first run

I have a docker-compose setup running 2 containers, one per service: node and elasticsearch.
app.js
...
const isElasticReady = await elastic.checkConnection();

if (isElasticReady) {
  const elasticIndex = await elastic.esclient.indices.exists({ index: elastic.index });

  if (!elasticIndex.body) {
    await elastic.createIndex(elastic.index);
    await elastic.setMapping();
    await data.populateDatabase();
  }
}
...
Whenever I run docker-compose up, esclient.indices.exists always returns false, even though the index already exists. As a result, a resource_already_exists_exception is always thrown.
The strange thing is that I am using nodemon for development, and whenever I make changes during development, esclient.indices.exists returns true. So the problem only happens on docker-compose up. I suspect something is happening asynchronously, but I am not sure what.
docker-compose.yml (depends_on has been set):
version: '3.6'
services:
  api:
    image: nodeservice/node:10.15.3-alpine
    container_name: nodeservice
    build: .
    ports:
      - 3000:3000
    environment:
      - NODE_ENV=local
      - ES_HOST=elasticsearch
      - NODE_PORT=3000
      - ELASTIC_URL=http://elasticsearch:9200
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    command: npm run dev
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
    networks:
      - esnet
  elasticsearch:
    container_name: my_elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:7.0.1
    volumes:
      - esdata:/usr/share/elasticsearch/data
    environment:
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.type=single-node
    logging:
      driver: none
    ports:
      - 9300:9300
      - 9200:9200
    networks:
      - esnet
volumes:
  esdata:
networks:
  esnet:
Any hints?
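One thing to rule out: depends_on only waits for the elasticsearch container to start, not for the cluster to be ready to answer index lookups, which would fit the symptom that a nodemon restart moments later sees the index. A hedged sketch that blocks until the cluster reports at least yellow health before the exists check, assuming the @elastic/elasticsearch 7.x client that the .body access suggests:

// Wait for the cluster to be queryable before checking indices.
const health = await elastic.esclient.cluster.health({
  wait_for_status: 'yellow',
  timeout: '30s',
});
console.log('cluster status:', health.body.status);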
