Node.js Sequelize instance timeout when connecting to MySQL container

I have a Node.js server running on an Ubuntu-20.04 Virtual Machine.
I'm using Docker Compose to set up a MySQL container with a production database.
My docker-compose.yml file is like so:
prod_db:
  image: mysql:latest
  restart: always
  environment:
    MYSQL_ROOT_PASSWORD: ${MYSQL_PRODUCTION_PASSWORD}
    MYSQL_DATABASE: ${MYSQL_PRODUCTION_DATABASE}
  ports:
    - ${MYSQL_PRODUCTION_PORT}:3306
Running docker compose up on it appears to work fine,
lockers-prod_db-1 | 2022-08-08T19:18:03.005576Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.30' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server - GPL.
And docker container list yields the following,
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
33289149af9f mysql:latest "docker-entrypoint.s…" 37 seconds ago Up 35 seconds 33060/tcp, 0.0.0.0:3308->3306/tcp, :::3308->3306/tcp lockers-prod_db-1
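The host side of that mapping (3308 here) is whatever ${MYSQL_PRODUCTION_PORT} expanded to. For reference, one way to confirm what Compose actually published for the service (a command sketch; the service name comes from the compose file above):
docker compose port prod_db 3306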
Yet when attempting to connect via Sequelize with the following code,
config = {
    id: 'production',
    port: process.env.NODE_PORT,
    sqlConfig: {
        port: parseInt(process.env.MYSQL_PRODUCTION_PORT),
        host: process.env.MYSQL_PRODUCTION_HOST,
        user: process.env.MYSQL_PRODUCTION_USER,
        password: process.env.MYSQL_PRODUCTION_PASSWORD,
        database: process.env.MYSQL_PRODUCTION_DATABASE,
        locationsCsvPath: process.env.LOCATIONS_CSV_ABSOLUTE_PATH,
        lockersCsvPath: process.env.LOCKERS_CSV_ABSOLUTE_PATH,
        contractsCsvPath: process.env.CONTRACTS_CSV_ABSOLUTE_PATH
    }
};
const sequelize = new Sequelize({
    dialect: 'mysql',
    host: config.sqlConfig.host,
    port: config.sqlConfig.port,
    password: config.sqlConfig.password,
    username: config.sqlConfig.user,
    database: config.sqlConfig.database,
    models: [Contract, Location, Locker],
    logging: false
});
I get the following error,
/home/freemaa7/lockers/node_modules/sequelize/lib/dialects/mysql/connection-manager.js:102
throw new SequelizeErrors.ConnectionError(err);
^
ConnectionError [SequelizeConnectionError]: connect ETIMEDOUT
at ConnectionManager.connect (/home/freemaa7/lockers/node_modules/sequelize/lib/dialects/mysql/connection-manager.js:102:17)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async ConnectionManager._connect (/home/freemaa7/lockers/node_modules/sequelize/lib/dialects/abstract/connection-manager.js:220:24)
at async /home/freemaa7/lockers/node_modules/sequelize/lib/dialects/abstract/connection-manager.js:174:32
at async ConnectionManager.getConnection (/home/freemaa7/lockers/node_modules/sequelize/lib/dialects/abstract/connection-manager.js:197:7)
at async /home/freemaa7/lockers/node_modules/sequelize/lib/sequelize.js:301:26
at async MySQLQueryInterface.tableExists (/home/freemaa7/lockers/node_modules/sequelize/lib/dialects/abstract/query-interface.js:102:17)
at async Function.sync (/home/freemaa7/lockers/node_modules/sequelize/lib/model.js:939:21)
at async Sequelize.sync (/home/freemaa7/lockers/node_modules/sequelize/lib/sequelize.js:373:9) {
parent: Error: connect ETIMEDOUT
at Connection._handleTimeoutError (/home/freemaa7/lockers/node_modules/mysql2/lib/connection.js:189:17)
at listOnTimeout (node:internal/timers:559:17)
at processTimers (node:internal/timers:502:7) {
errorno: 'ETIMEDOUT',
code: 'ETIMEDOUT',
syscall: 'connect',
fatal: true
},
original: Error: connect ETIMEDOUT
at Connection._handleTimeoutError (/home/freemaa7/lockers/node_modules/mysql2/lib/connection.js:189:17)
at listOnTimeout (node:internal/timers:559:17)
at processTimers (node:internal/timers:502:7) {
errorno: 'ETIMEDOUT',
code: 'ETIMEDOUT',
syscall: 'connect',
fatal: true
}
}
I'm running this on a virtual machine; it works perfectly locally though. The main difference is that on the VM an apache2 instance is running. I'm starting to think it may be redirecting the TCP requests to the container to another address, because it's set up as a reverse proxy. Could this be a possibility?
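One way to check whether the VM can reach the published port at all, independent of Sequelize, is a bare TCP probe (a minimal sketch for illustration, reusing the same environment variables; the 5-second timeout is an arbitrary choice):

const net = require('net');

// Probe the published MySQL port directly. If this also times out,
// the problem is network routing or firewalling, not Sequelize.
const socket = net.connect({
    host: process.env.MYSQL_PRODUCTION_HOST,
    port: parseInt(process.env.MYSQL_PRODUCTION_PORT)
}, () => {
    console.log('TCP connect OK');
    socket.end();
});
socket.setTimeout(5000);
socket.on('timeout', () => {
    console.error('TCP connect timed out');
    socket.destroy();
});
socket.on('error', (err) => console.error('TCP error:', err.code));

If the probe succeeds, the timeout is happening above the transport layer; if it also times out, the VM's firewall rules are a more likely culprit than Apache, since a plain HTTP reverse proxy would not normally intercept traffic to port 3308.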

Related

Postgres: Error 28P01 and a user login I never asked for

I have a Node.js app that runs fine on my dev machine, but in production it has a weird behaviour: it asks for a user that I never specified!
Here is my .env file:
PGUSER=postgres
PGHOST=my.domain
PGPASSWORD=my.passwd
PGDATABASE=my.dbase
PGPORT=5432
As I said, it runs fine on my machine, but when I try to run it on my AWS Lightsail VPS it crashes:
/home/ubuntu/apps/bounce/node_modules/pg-protocol/dist/parser.js:287
const message = name === 'notice' ? new messages_1.NoticeMessage(length, messageValue) : new messages_1.DatabaseError(messageValue, length, name);
^
error: password authentication failed for user "ubuntu"
at Parser.parseErrorMessage (/home/ubuntu/apps/bounce/node_modules/pg-protocol/dist/parser.js:287:98)
at Parser.handlePacket (/home/ubuntu/apps/bounce/node_modules/pg-protocol/dist/parser.js:126:29)
at Parser.parse (/home/ubuntu/apps/bounce/node_modules/pg-protocol/dist/parser.js:39:38)
at Socket.<anonymous> (/home/ubuntu/apps/bounce/node_modules/pg-protocol/dist/index.js:11:42)
at Socket.emit (node:events:527:28)
at addChunk (node:internal/streams/readable:315:12)
at readableAddChunk (node:internal/streams/readable:289:9)
at Socket.Readable.push (node:internal/streams/readable:228:10)
at TCP.onStreamRead (node:internal/stream_base_commons:190:23) {
length: 102,
severity: 'FATAL',
code: '28P01',
detail: undefined,
hint: undefined,
position: undefined,
internalPosition: undefined,
internalQuery: undefined,
where: undefined,
schema: undefined,
table: undefined,
column: undefined,
dataType: undefined,
constraint: undefined,
file: 'auth.c',
line: '330',
routine: 'auth_failed'
}
The weird thing: I never called the "ubuntu" user in my code - I'm using "postgres", as in my .env file. I tried setting up an "ubuntu" user/password and tested it with pgAdmin and Beekeeper - in both apps I can access Postgres, but I couldn't do it through my Node.js app hosted online.
My pg_hba.conf is here:
local   all   postgres                peer
local   all   ubuntu                  trust
local   all   all                     md5
host    all   all       0.0.0.0/0     md5
And my connection file is here:
require('dotenv').config();
const Pool = require('pg').Pool;

const conecta = new Pool({
    user: process.env.PGUSER,
    password: process.env.PGPASSWORD,
    database: process.env.PGDATABASE,
    host: process.env.PGHOST,
    port: process.env.PGPORT
});

module.exports = conecta;
Why does it insist on the "ubuntu" user? And why can't my Node.js app connect if pgAdmin and Beekeeper can?
Simple answer: pass an explicit path to the .env file inside config():
require('dotenv').config({ path: '/path/to/your/.env' });
When dotenv can't find the .env file (for example because the process is started from a different working directory in production), PGUSER comes back undefined and pg falls back to the operating-system user - on your VPS, "ubuntu".
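A quick way to verify this (a minimal sketch; the path is the same placeholder as above): dotenv's config() returns an object carrying an error property when the file could not be read, so you can log the outcome before the Pool is created:

const result = require('dotenv').config({ path: '/path/to/your/.env' });
if (result.error) {
    console.error('dotenv failed to load:', result.error);
}
// If this prints undefined, pg falls back to the OS user ("ubuntu" on the VPS)
console.log('PGUSER =', process.env.PGUSER);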

Executing a KSQL query using the node npm ksqldb-client package throws a timeout error

Hi, I am trying to execute a KSQL query using the npm package ksqldb-client, and it throws a timeout error. I have attached the code as well; please let me know if there are any issues in it.
When I hit the GET URL https://localhost:5000/testKsql, the below method executes:
exports.getKSQLStream = async () => {
    const options = {
        authorization: {
            username: "admin",
            password: "pw",
            ssl: {
                ca: CAs,
                crt: myClientCert,
                key: myClientKey,
            }
        },
        host: 'https://ixxxx.xcxx.net',
        timeout: 20000
    };
    const client = new KsqldbClient(options);
    await client.connect();
    const streamRes = await client.query('list streams;');
    console.log("streams", streamRes);
    await client.disconnect();
};
When I hit that URL from Postman, I get the below response in the Node console:
Error on Http2 client. Error: connect ETIMEDOUT 16.2XX.X4.xxx:8088
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1159:16) {
errno: -4039,
code: 'ETIMEDOUT',
syscall: 'connect',
address: '16.2XX.X4.xxx',
port: 8088
}
I recently faced the same issue and found that the library has a bug with authorization. I pushed a PR to the repo, but you can patch your local copy in the same way:
https://github.com/Streaminy/ksqldb-client/pull/4

connecting to postgres via ssh on nodejs

I have a connection on DBeaver using an ssh tunnel as follows:
sshHostname;
sshPort;
sshUser;
sshPassword;
on the actual connection to the database I have:
dbHost;
dbPort;
dbName;
dbUsername;
dbPassword;
My Node.js code looks something like this:
const { Pool, Client } = require('pg');
const ssh2 = require('ssh2');

const dbServer = {
    host: dbHost,
    port: dbPort,
    database: dbName,
    username: dbUser,
    password: dbPassword
};

var c = new ssh2();
c.connect({
    host: sshHostname,
    port: 22,
    username: sshUser,
    password: sshPassword
});

c.on('ready', function () {
    c.forwardOut(sshHostname, '22', dbHost, dbPort, function (err, data) {
        const client = new Client({
            host: 'localhost',
            port: dbPort,
            database: dbName,
            user: dbUser,
            password: dbPassword,
        });
        client.connect(function (err) {
            if (err) { console.log(err); }
            else { console.log('connected...'); }
        });
        client.end();
    });
});
I get the following error:
Error: connect ECONNREFUSED 127.0.0.1:dbPort
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1133:16) {
errno: -4078,
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: dbPort
}
I tried various configurations and various libraries with no success.
Do you have any idea how to connect via Node.js to a database over a tunnel?
I have a feeling that I am actually not connecting to the SSH tunnel.
There could be various reasons for the connection refusal.
When making connections via SSH, you should have an SSH key ready on your computer and make it part of your SSH whitelist. This is a common error many developers run into, but since your critical information is hidden, we may need you to provide more details in that regard.
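That said, one concrete issue stands out in the snippet above: forwardOut() hands the tunnel to its callback as a stream, but the code then dials localhost:dbPort, where nothing is listening (which matches the ECONNREFUSED on 127.0.0.1), and client.end() runs before the connection attempt resolves. A sketch of the stream-based wiring instead (an illustration, assuming a pg version whose Client config accepts a stream instance; check the pg docs for your version):

const { Client } = require('pg');
const { Client: SSHClient } = require('ssh2');

const ssh = new SSHClient();
ssh.on('ready', function () {
    // Source address/port are nominal; the destination is the DB as seen from the SSH server.
    ssh.forwardOut('127.0.0.1', 12345, dbHost, dbPort, function (err, stream) {
        if (err) throw err;
        const client = new Client({
            user: dbUser,
            password: dbPassword,
            database: dbName,
            stream, // pg speaks over the forwarded channel; no local listener needed
        });
        client.connect(function (err) {
            if (err) { console.log(err); }
            else { console.log('connected through the tunnel...'); }
            client.end(); // end only after the connect attempt resolves
        });
    });
});
ssh.connect({
    host: sshHostname,
    port: 22,
    username: sshUser,
    password: sshPassword
});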

mongoose failing to connect in docker-compose

I've got a simple docker-compose.yml that looks like this:
services:
  mongodb-service:
    image: mongo:latest
    command: "mongod --bind_ip_all"
  mongodb-seeder:
    image: mongo:latest
    depends_on:
      - mongodb-service
    command: "mongoimport --uri mongodb://mongodb-service:27017/auth --drop --file /tmp/users.json"
  myapp:
    image: myapp:latest
    environment:
      DATABASEURL: mongodb://mongodb-service:27017/auth
    depends_on:
      - mongodb-service
myapp is a Node.js app that uses mongoose to connect to a MongoDB database like so:
const databaseURL = process.env.DATABASEURL;

async function connectToMongo() {
    try {
        return await mongoose.connect(databaseURL, {
            useUnifiedTopology: true,
            useNewUrlParser: true,
            useCreateIndex: true,
        });
    }
    catch (error) {
        logger.error('MongoDB connect failure.', error);
    }
}
mongodb-seeder works perfectly fine. I can kick it off; it connects to mongodb-service and runs the import without any problems. However, myapp starts up, tries to connect to mongodb-service for 30 seconds, then dies:
ERROR 2020-09-16T12:13:21.427 [MAIN] MongoDB connection error! [Arguments] {
'0': MongooseError [MongooseTimeoutError]: Server selection timed out after 30000 ms
...stacktrace snipped...
at Function.Module._load (internal/modules/cjs/loader.js:724:14) {
message: 'Server selection timed out after 30000 ms',
name: 'MongooseTimeoutError',
reason: Error: connect ECONNREFUSED 127.0.0.1:27017
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1128:14) {
name: 'MongoNetworkError',
[Symbol(mongoErrorContextSymbol)]: {}
},
[Symbol(mongoErrorContextSymbol)]: {}
}
}
Note: The IP address in this log says it tried to connect to 127.0.0.1, not mongodb://mongodb-service:27017/auth. No matter what value I put in for DATABASEURL, it keeps printing 127.0.0.1. Any ideas why I can't get mongoose to recognize the hostname I'm giving it? And why would mongoose not be able to connect to a service that's clearly network-visible, since another container (mongodb-seeder) can see it without any problems?
Edit: I'm using mongoose 5.8.7
I was able to solve this on my own; it turns out it was a pretty stupid miss on my part.
The Dockerfile for my app defined an entrypoint that executed a script that exported DATABASEURL prior to running. Removing that export allowed my environment: settings to flow down to the Node.js app.
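For illustration, an entrypoint script along these lines (a hypothetical reconstruction, not the actual file; the script and server filenames are made up) would shadow the compose-level value, because a shell export simply reassigns the variable Docker injected:

#!/bin/sh
# entrypoint.sh - the export below overwrites the DATABASEURL that
# docker-compose passed in via the environment: block, so the app
# only ever sees the hard-coded localhost URL.
export DATABASEURL="mongodb://127.0.0.1:27017/auth"
exec node server.js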

Couldn't connect to kafka instance in node?

I have two computers running on the local network; one has a Kafka instance installed at 192.168.1.3:9092. On the other computer, a short test program runs a Kafka client that connects to the Kafka instance and subscribes to a topic.
My kafka-node is the latest version, v4.1.3.
const kafka = require('kafka-node');
const bp = require('body-parser');
//const config = require('./config');

try {
    const Consumer = kafka.HighLevelConsumer;
    const client = new kafka.KafkaClient("kafkaHost: '192.168.1.3:9092'");
    let consumer = new kafka.Consumer(
        client,
        [{ topic: "dbserver1", partition: 0 }],
        {
            autoCommit: true,
            fetchMaxWaitMs: 1000,
            fetchMaxBytes: 1024 * 1024,
            encoding: 'utf8',
            fromOffset: false
        }
    );
    consumer.on('message', async function (message) {
        console.log('here');
        console.log('kafka-> ', message.value);
    });
    consumer.on('error', function (err) {
        console.log('error', err);
    });
}
catch (e) {
    console.log(e);
}
The code is shown above. However, it always tells me:
{ Error: connect ECONNREFUSED 127.0.0.1:9092
at Object._errnoException (util.js:992:11)
at _exceptionWithHostPort (util.js:1014:20)
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1186:14)
code: 'ECONNREFUSED',
errno: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 9092 }
Why is it showing 127.0.0.1 and not 192.168.1.3?
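One thing worth double-checking in the snippet, independent of the answer below: kafka-node's KafkaClient takes an options object, not a string. As written, the string argument is ignored and kafkaHost falls back to its documented default of localhost:9092, which by itself would explain a 127.0.0.1 error. The object form would be:

const client = new kafka.KafkaClient({ kafkaHost: '192.168.1.3:9092' });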
Based on the comments, you're running the Debezium Kafka docker container... From the Debezium Docker tutorial...
If we wanted to connect to Kafka from outside of a Docker container, then we’d want Kafka to advertise its address via the Docker host, which we could do by adding -e ADVERTISED_HOST_NAME= followed by the IP address or resolvable hostname of the Docker host, which on Linux or Docker on Mac this is the IP address of the host computer (not localhost).
Sounds like your node code is not running in a container, or at least not on the same machine / Docker network
So you'll have to add -e ADVERTISED_HOST_NAME=192.168.1.3 to your docker run command
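For example, something like this (a sketch modeled on the Debezium tutorial's run command; the image name and zookeeper link are assumptions about your setup):

docker run -it --rm --name kafka -p 9092:9092 \
  -e ADVERTISED_HOST_NAME=192.168.1.3 \
  --link zookeeper:zookeeper \
  debezium/kafka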
