Unable to retrieve data from PG DB (in Azure) using Sequelize - node.js

I am unable to retrieve data from a PostgreSQL database hosted in Azure, using Sequelize and Node.
I can connect to the Azure database from the terminal and from a GUI client, and I created a new database with a table and some prepopulated rows as a proof of concept.
However, when I connect from my local environment and request the data, I get an empty array ([ ]) in response. If I hit the same endpoint in production, I eventually get a 502 with the following message displayed on the client:
Server Error.
There was an unexpected error in the request processing.
Some code below (it works with my local db configured the same way):
This is my DB config:
'use strict';

var Sequelize = require('sequelize');
var cfg = require('../config');

var sequelize = new Sequelize(cfg.db, cfg.username, cfg.password, {
  define: {
    timestamps: false
  },
  host: cfg.host,
  dialect: 'postgres',
  port: 5432
});
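As a side note, Sequelize can verify the connection up front: authenticate() returns a promise that rejects if the database is unreachable. A minimal sketch using the sequelize instance above:

// Fail fast and loudly if the configured DB cannot be reached,
// logging the host actually used so a bad config is easy to spot.
sequelize.authenticate()
  .then(() => console.log('DB connection OK, host: ' + cfg.host))
  .catch(err => console.error('DB connection failed:', err.message));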
And this is my router code:
'use strict';

const express = require('express');
const router = express.Router();
var User = require('../../models/users-model');

router.get('/', (req, res) => {
  User.findAll().then(user => {
    res.json(user);
  });
});

module.exports = router;
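Side note: the production log further down reports an unhandled promise rejection. A variant of this route with an explicit catch (a sketch, not the original code) would surface DB failures as HTTP errors instead:

// Same route, but DB errors produce a 500 response rather than
// an unhandled rejection in the logs.
router.get('/', (req, res) => {
  User.findAll()
    .then(users => res.json(users))
    .catch(err => {
      console.error(err);
      res.status(500).json({ error: 'Database query failed' });
    });
});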
Both locally and in production I expect a JSON response with an array of User objects.
Locally, as explained, I get an empty array.
In production, as mentioned, the request seems to time out and I finally get a 502 error response.
Any help is much appreciated!
Update: I managed to activate the app logs on Azure (it took me a while to find them, as I'm quite new to the platform!) and now get this when I hit the endpoint in prod:
2019-08-12T12:52:06.355595892Z Unhandled rejection SequelizeConnectionRefusedError: connect ECONNREFUSED 127.0.0.1:5432
2019-08-12T12:52:06.355632393Z at connection.connect.err (/usr/src/app/server/node_modules/sequelize/lib/dialects/postgres/connection-manager.js:170:24)
2019-08-12T12:52:06.355637793Z at Connection.connectingErrorHandler (/usr/src/app/server/node_modules/pg/lib/client.js:191:14)
2019-08-12T12:52:06.355641493Z at emitOne (events.js:116:13)
2019-08-12T12:52:06.355645293Z at Connection.emit (events.js:211:7)
2019-08-12T12:52:06.355648693Z at Socket.reportStreamError (/usr/src/app/server/node_modules/pg/lib/connection.js:72:10)
2019-08-12T12:52:06.355652093Z at emitOne (events.js:116:13)
2019-08-12T12:52:06.355655393Z at Socket.emit (events.js:211:7)
2019-08-12T12:52:06.355658393Z at emitErrorNT (internal/streams/destroy.js:64:8)
2019-08-12T12:52:06.355661493Z at _combinedTickCallback (internal/process/next_tick.js:138:11)
2019-08-12T12:52:06.355664693Z at process._tickCallback (internal/process/next_tick.js:180:9)

After hours and hours, I hardcoded the connection data rather than reading it dynamically from my config files; it seems I did not set up my Dockerfile properly and was not setting the ENV variable correctly.
With the hardcoded values, hitting the production DB from my local environment works! I would really appreciate it if someone could confirm that my problem lies at the configuration level, with the NODE_ENV environment variable.
Dockerfile
# Node server serving Angular App
FROM node:8.11-alpine as node-server
WORKDIR /usr/src/app
COPY /server /usr/src/app/server
WORKDIR /usr/src/app/server
ENV NODE_ENV=prod
RUN npm install --production --silent
EXPOSE 80 443
CMD ["node", "index.js"]
Then in /config/index.js I have:
var env = process.env.NODE_ENV || 'global'
  , cfg = require('./config.' + env);

module.exports = cfg;
So I understand that by setting NODE_ENV to prod in the Dockerfile, the Node app started on Azure should pick up the config.prod.js file rather than the config.global.js file, right?
You can see how I use this in the db.js file in the question.
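For reference, a minimal sketch of what the environment-specific config files could look like (the file names follow the pattern above; the keys mirror the db.js config shown earlier, and all values here are placeholders):

// config/config.prod.js (hypothetical values)
module.exports = {
  db: 'mydb',
  username: 'pgadmin@my-azure-server',  // Azure PG logins take the form user@servername
  password: process.env.DB_PASSWORD,    // read from the environment instead of hardcoding
  host: 'my-azure-server.postgres.database.azure.com'
};

// config/config.global.js (hypothetical values): local defaults. If NODE_ENV
// is not set in the container, this file wins and the app tries localhost,
// which matches the ECONNREFUSED 127.0.0.1:5432 in the production logs.
module.exports = {
  db: 'mydb',
  username: 'postgres',
  password: 'postgres',
  host: 'localhost'
};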

Related

Connection timed out while connecting to AWS DocumentDB outside the VPC

I'm trying to create a very simple Node app that can use DocumentDB. I'm not using Cloud9 or Lambda; I'm coding locally. I was following this link https://docs.aws.amazon.com/documentdb/latest/developerguide/connect-from-outside-a-vpc.html and this link https://docs.aws.amazon.com/documentdb/latest/developerguide/connect-ec2.html
I created a poorly secured EC2 instance with the following inbound rule:

port range: 22, protocol: TCP, source: 0.0.0.0/0, security group: demoEC2
This demoEC2 security group has the following inbound rule:

type: SSH, protocol: TCP, port range: 22, source: 0.0.0.0/0
Then I created a DocumentDB cluster with 1 instance available, belonging to a security group with the following inbound rule:

type: custom TCP, protocol: TCP, port range: 27017, source: demoEC2
After that, I opened my terminal and created a tunnel:

ssh -i "mykeypair.pem" -L 27017:<CLUSTER ENDPOINT>:27017 ec2-user@<EC2 PUBLIC IPV4 DNS> -N

And to test whether the tunnel is working, I connected using the mongo shell:

> mongo "mongodb://<MASTER USERNAME>:<MASTER PASSWORD>@localhost:27017/<DATABASE>" --tls --tlsAllowInvalidHostnames --tlsCAFile rds-combined-ca-bundle.pem
MongoDB shell version v4.2.13
connecting to: mongodb://localhost:27017/<DATABASE>?compressors=disabled&gssapiServiceName=mongodb
2021-07-29T10:10:59.309+0200 W NETWORK [js] The server certificate does not match the host name. Hostname: localhost does not match docdb-2021-07-27-10-32-49.ctuxybn342pe.eu-central-1.docdb.amazonaws.com docdb-2021-07-27-10-32-49.cluster-ctuxybn342pe.eu-central-1.docdb.amazonaws.com docdb-2021-07-27-10-32-49.cluster-ro-ctuxybn342pe.eu-central-1.docdb.amazonaws.com , Subject Name: C=US,ST=Washington,L=Seattle,O=Amazon.com,OU=RDS,CN=docdb-2021-07-27-10-32-49.ctuxybn342pe.eu-central-1.docdb.amazonaws.com
Implicit session: session { "id" : UUID("63340995-54ad-471b-aa8d-85763f3c7281") }
MongoDB server version: 4.0.0
WARNING: shell and server versions do not match
Warning: Non-Genuine MongoDB Detected
This server or service appears to be an emulation of MongoDB rather than an official MongoDB product.
Some documented MongoDB features may work differently, be entirely missing or incomplete, or have unexpected performance characteristics.
To learn more please visit: https://dochub.mongodb.org/core/non-genuine-mongodb-server-warning.
rs0:PRIMARY>
However, when I try to connect from my Node app:

const mongoose = require('mongoose');
const fs = require('fs');
const path = require('path');

const username = ...
const password = ...
const database = ...

const connstring = `mongodb://${username}:${password}@localhost:27017/${database}?tls=true&replicaSet=rs0&readPreference=secondaryPreferred`;
const certFile = path.resolve(__dirname, './rds-combined-ca-bundle.pem');
const certFileBuf = fs.readFileSync(certFile); // I tried this one in the tlsCAFile option as well

mongoose.connect(connstring,
  {
    tlsCAFile: certFile,
    useNewUrlParser: true,
    tlsAllowInvalidHostnames: true,
  }
).then(() => console.log('Connection to DB successful'))
 .catch((err) => console.error(err, 'Error'));
I get a connection timeout error after a while:
> node .\index.js
(node:12388) [MONGODB DRIVER] Warning: Current Server Discovery and Monitoring engine is deprecated, and will be removed in a future version. To use the new Server Discover and Monitoring engine, pass option { useUnifiedTopology: true } to the MongoClient constructor.
MongoNetworkError: failed to connect to server [<CLUSTER ENDPOINT WITHOUT HAVING .cluster->:27017] on first connect [MongoNetworkTimeoutError: connection timed out
at connectionFailureError (D:\projects\documentdb-connect\node_modules\mongoose\node_modules\mongodb\lib\core\connection\connect.js:345:14)
at TLSSocket.<anonymous> (D:\projects\documentdb-connect\node_modules\mongoose\node_modules\mongodb\lib\core\connection\connect.js:313:16)
at Object.onceWrapper (events.js:421:28)
at TLSSocket.emit (events.js:315:20)
at TLSSocket.Socket._onTimeout (net.js:481:8)
at listOnTimeout (internal/timers.js:549:17)
at processTimers (internal/timers.js:492:7)]
at Pool.<anonymous> (D:\projects\documentdb-connect\node_modules\mongoose\node_modules\mongodb\lib\core\topologies\server.js:441:11)
at Pool.emit (events.js:315:20)
at D:\projects\documentdb-connect\node_modules\mongoose\node_modules\mongodb\lib\core\connection\pool.js:564:14
at D:\projects\documentdb-connect\node_modules\mongoose\node_modules\mongodb\lib\core\connection\pool.js:1013:9
at D:\projects\documentdb-connect\node_modules\mongoose\node_modules\mongodb\lib\core\connection\connect.js:32:7
at callback (D:\projects\documentdb-connect\node_modules\mongoose\node_modules\mongodb\lib\core\connection\connect.js:283:5)
at TLSSocket.<anonymous> (D:\projects\documentdb-connect\node_modules\mongoose\node_modules\mongodb\lib\core\connection\connect.js:313:7)
at Object.onceWrapper (events.js:421:28)
at TLSSocket.emit (events.js:315:20)
at TLSSocket.Socket._onTimeout (net.js:481:8)
at listOnTimeout (internal/timers.js:549:17)
at processTimers (internal/timers.js:492:7) Error
Since I could connect using the mongo shell, I think the tunnel is working; I can even do some inserts through it. So why can't Mongoose connect? I also tried using MongoClient (const MongoClient = require('mongodb').MongoClient with everything else the same), but it didn't work; I still get the same timeout error.
It turns out all I needed to do was pass the username and password through the options, not in the connection string:
const connstring = `mongodb://localhost:27017/${database}`;
const certFile = path.resolve(__dirname, './rds-combined-ca-bundle.pem');
const certFileBuf = fs.readFileSync(certFile);

mongoose.connect(connstring,
  {
    tls: true,
    tlsCAFile: certFile,
    useNewUrlParser: true,
    tlsAllowInvalidHostnames: true,
    auth: {
      username,
      password
    }
  }
)

Managed DigitalOcean Redis instance giving Redis AbortError

I set up managed Redis and managed Postgres on DigitalOcean. DigitalOcean gave me a .crt file; I didn't know what to do with it, so I didn't do anything with it. Could this be the root of the problem below?
Or do I have to allow the Docker container to reach outside of the container over the rediss protocol?
I dockerized a Node app and put the container onto my droplet. The droplet and the managed Redis and Postgres are all in the same region (SFO2). The app connects to Redis using this URL:
url: 'rediss://default:REMOVED_THIS_PASSWORD@my-new-app-sfo2-do-user-5053627-0.db.ondigitalocean.com:25061/0',
I then ran my Docker container with docker run.
It then gives me this error:
node_redis: WARNING: You passed "rediss" as protocol instead of the "redis" protocol!
events.js:186
throw er; // Unhandled 'error' event
^
AbortError: Connection forcefully ended and command aborted. It might have been processed.
at RedisClient.flush_and_error (/opt/apps/mynewapp/node_modules/redis/index.js:362:23)
at RedisClient.end (/opt/apps/mynewapp/node_modules/redis/lib/extendedApi.js:52:14)
at RedisClient.onPreConnectionEnd (/opt/apps/mynewapp/node_modules/machinepack-redis/machines/get-connection.js:157:14)
at RedisClient.emit (events.js:209:13)
at RedisClient.connection_gone (/opt/apps/mynewapp/node_modules/redis/index.js:590:14)
at Socket.<anonymous> (/opt/apps/mynewapp/node_modules/redis/index.js:293:14)
at Object.onceWrapper (events.js:298:28)
at Socket.emit (events.js:214:15)
at endReadableNT (_stream_readable.js:1178:12)
at processTicksAndRejections (internal/process/task_queues.js:80:21)
Emitted 'error' event on RedisClient instance at:
at /opt/apps/mynewapp/node_modules/redis/index.js:310:22
at Object.callbackOrEmit [as callback_or_emit] (/opt/apps/mynewapp/node_modules/redis/lib/utils.js:89:9)
at Command.callback (/opt/apps/mynewapp/node_modules/redis/lib/individualCommands.js:199:15)
at RedisClient.flush_and_error (/opt/apps/mynewapp/node_modules/redis/index.js:374:29)
at RedisClient.end (/opt/apps/mynewapp/node_modules/redis/lib/extendedApi.js:52:14)
[... lines matching original stack trace ...]
at processTicksAndRejections (internal/process/task_queues.js:80:21) {
code: 'NR_CLOSED',
command: 'AUTH',
args: [ 'REMOVED_I_DONT_KNOW_IF_THIS_IS_SENSITIVE' ]
The redis protocol is different from rediss: the latter uses a TLS connection. DigitalOcean Managed Redis requires connections to be made over TLS, so you have to use rediss. However, I couldn't find any info about the TLS certificate DigitalOcean provides for connecting to the Managed Redis service.
Based on your error message, I presume you're using this redis package. If that's the case, you can pass an empty TLS object in the options alongside the connection string, like so:
const Redis = require('redis')

const host = 'db-redis.db.ondigitalocean.com'
const port = '25061'
const username = 'user'
const password = 'secret'
const url = `${username}:${password}@${host}:${port}`

const client = Redis.createClient(url, {tls: {}})
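As a quick sanity check (assuming the node_redis callback API), you can round-trip a key once the client is created:

// If the TLS connection works, this logs 'hello'; otherwise the
// 'error' handler fires with the underlying connection failure.
client.on('error', (err) => console.error('Redis error:', err))
client.set('greeting', 'hello', (err) => {
  if (err) throw err
  client.get('greeting', (err, value) => console.log(value))
})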
Further reading/source:
SSL connections arrive for Redis on Compose
Connecting to IBM Cloud Databases for Redis from Node.js
I solved this. Below are snippets from config/env/production.js
Sockets
For sockets, to enable rediss you have to pass all the options through adapterOptions, like this:
sockets: {
  onlyAllowOrigins: ['https://my-website.com'],

  // Pass everything in via adapterOptions so it gets through to redis-adapter.
  // I need "rediss", but that URL scheme is not supported and raises an error,
  // so I pass an empty `tls` object instead. sails-hook-sockets moves these into
  // `adapterOptions` here - https://github.com/balderdashy/sails-hook-sockets/blob/master/lib/configure.js#L128
  adapterOptions: {
    user: 'username',
    pass: 'password',
    host: 'host',
    port: 9999,
    db: 2, // pick a number
    tls: {},
  },

  adapter: '@sailshq/socket.io-redis',
},
Session
For session, pass an empty tls: {} object to the config:
session: {
  pass: 'password',
  host: 'host',
  port: 9999,
  db: 1, // pick a number not used by sockets
  tls: {},
  cookie: {
    secure: true,
    maxAge: 24 * 60 * 60 * 1000, // 24 hours
  },
},

MongoDB Auth Fails to find username on Bitnami MEAN Stack Image

Trying to run a web app (MEAN) on an Amazon EC2 instance, but I'm encountering the following problem. Can anyone help me with this?

$ node app.js
The Server has started on 9091
/opt/bitnami/apps/YelpCamp/node_modules/mongodb-core/lib/auth/scram.js:128
username = username.replace('=', "=3D").replace(',', '=2C');
^
TypeError: Cannot read property 'replace' of undefined
at executeScram (/opt/bitnami/apps/SomeApp/node_modules/mongodb-core/lib/auth/scram.js:128:24)
at /opt/bitnami/apps/SomeApp/node_modules/mongodb-core/lib/auth/scram.js:277:7
at _combinedTickCallback (internal/process/next_tick.js:73:7)
at process._tickCallback (internal/process/next_tick.js:104:9)
Mongoose can do auth in two ways:
1. Connection string:
mongoose.connect('mongodb://username:password@host:port/db')
where username and password are the credentials for that specific db, host is the host your db runs on (so localhost or some domain/IP), port is the port mongo listens on (usually 27017), and db is the name of the database you want to connect to.
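For instance (purely hypothetical values), connecting to a local database named yelpcamp would look like:

var mongoose = require('mongoose');

// 'myUser' / 'myPassword' are placeholders; 27017 is the default mongod port.
mongoose.connect('mongodb://myUser:myPassword@localhost:27017/yelpcamp');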
2. Using options. From the docs:
var options = {
  useMongoClient: true,
  auth: { authdb: 'admin' },
  user: 'myUsername',
  pass: 'myPassword',
}
mongoose.connect(uri, options);
I also faced the 'username undefined' error with the first approach, but I succeeded with the second one.
Reference: https://github.com/Automattic/mongoose/issues/4891
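Putting the second approach together (a sketch with placeholder credentials; note the URI itself carries no credentials when auth goes through the options):

var mongoose = require('mongoose');

var options = {
  useMongoClient: true,        // mongoose 4.11+ connection logic
  auth: { authdb: 'admin' },   // authenticate against the admin database
  user: 'myUsername',
  pass: 'myPassword',
};

// Credentials are supplied via options, so the URI stays clean.
mongoose.connect('mongodb://localhost:27017/yelpcamp', options);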

Error while testing NodeJS and MongoDB stack using Mocha and Chai

Right now, I'm running Mocha tests and am getting the following error:
Error: connect ECONNREFUSED 127.0.0.1:27017
at Object.exports._errnoException (util.js:873:11)
at exports._exceptionWithHostPort (util.js:896:20)
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1077:14)
I assume it's because I am unable to connect to port 27017 because I did not include:
var express = require('express')
var app = express()
However, what is particularly confusing to me is how to connect my tests to MongoDB so I can create fake records for testing and then destroy them. If anyone can show me how to do it (with an example, please), that would be awesome!
Thanks again.
The error may mean the mongo server is not running, or that more than one server is trying to listen on the same port. For the test environment you can use a separate data folder and a different port, so that the folder can be deleted once the test run is over.
In server.js:

var mongoport;
if (process.env.NODE_ENV === 'test') {
  mongoport = 57017;
} else {
  mongoport = 27017;
}

// use the mongodb url
var mongoUrl = "mongodb://localhost:" + mongoport + "/student";
In test.js:

// on start of test case
var fs = require('fs-extra');
fs.removeSync("test/db/");
fs.ensureDirSync("test/db/");

// your test case definition
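To address the "fake records" part of the question directly: a minimal sketch of a Mocha test that connects with Mongoose, seeds a record, and destroys it afterwards. It assumes a MongoDB instance is listening on the test port (57017, per the snippet above) and a recent Mongoose; the Student model is made up for illustration:

var mongoose = require('mongoose');
var expect = require('chai').expect;

// Hypothetical model, for illustration only.
var Student = mongoose.model('Student', new mongoose.Schema({ name: String }));

describe('students', function () {
  before(function (done) {
    mongoose.connect('mongodb://localhost:57017/student', done);
  });

  beforeEach(function (done) {
    Student.create({ name: 'Fake Student' }, done); // seed a fake record
  });

  afterEach(function (done) {
    Student.deleteMany({}, done); // destroy fake records after each test
  });

  after(function (done) {
    mongoose.disconnect(done);
  });

  it('finds the seeded record', function (done) {
    Student.find({ name: 'Fake Student' }, function (err, docs) {
      expect(err).to.equal(null);
      expect(docs).to.have.length(1);
      done();
    });
  });
});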

node.js and Redis on Heroku for IODocs

I'm trying to get IODocs running on Heroku. It requires Node.js and Redis. Admittedly, I'm new to all of these technologies; nonetheless, I've managed to get it running locally. However, I receive the following error when deploying to Heroku:
2011-12-01T11:55:18+00:00 app[web.1]: Redis To Go - port: 9030 hostname: dogfish.redistogo.com
2011-12-01T11:55:18+00:00 app[web.1]: Express server listening on port 9694
2011-12-01T11:55:19+00:00 heroku[web.1]: State changed from starting to up
2011-12-01T11:55:21+00:00 app[web.1]: ^
2011-12-01T11:55:21+00:00 app[web.1]: Error: Redis connection to localhost:6379 failed - ECONNREFUSED, Connection refused
2011-12-01T11:55:21+00:00 app[web.1]: at Socket.<anonymous> (/app/node_modules/redis/index.js:123:28)
2011-12-01T11:55:21+00:00 app[web.1]: at Socket.emit (events.js:64:17)
2011-12-01T11:55:21+00:00 app[web.1]: at Array.<anonymous> (net.js:828:27)
2011-12-01T11:55:21+00:00 app[web.1]: at EventEmitter._tickCallback (node.js:126:26)
2011-12-01T11:55:23+00:00 heroku[web.1]: State changed from up to crashed
The only time I received a similar warning on my local machine was when Redis was not running. From what I can tell, the Redis add-on is enabled for my app and running:
$ heroku config --long
NODE_ENV => production
PATH => bin:node_modules/.bin:/usr/local/bin:/usr/bin:/bin
REDISTOGO_URL => redis://redistogo:52847221366cb677460c306e4f482c5b@dogfish.redistogo.com:9030/
I've also tried some configuration suggestions, but neither seems to work.
// redis connection in app.js
var db;

if (process.env.REDISTOGO_URL) {
  var rtg = require("url").parse(process.env.REDISTOGO_URL);
  // tried this line as well... gave a different error on .connect();
  // db = require('redis-url').connect(process.env.REDISTOGO_URL);
  db = redis.createClient(rtg.port, rtg.hostname);
  db.auth(rtg.auth.split(":")[1]);
  // debug
  sys.puts('Redis To Go - port: ' + rtg.port + ' hostname: ' + rtg.hostname);
} else {
  db = redis.createClient(config.redis.port, config.redis.host);
  db.auth(config.redis.password);
}
Given the difference between my Redis To Go debug line and the error, I'm sure this is a configuration issue, but I don't know how to fix it. Any help is greatly appreciated.
According to this line:
2011-12-01T11:55:21+00:00 app[web.1]: Error: Redis connection to localhost:6379 failed - ECONNREFUSED, Connection refused
You are trying to connect to localhost:6379, but the redis server is running at redis://redistogo:52847221366cb677460c306e4f482c5b@dogfish.redistogo.com:9030/. Can you try connecting to that URL manually and see if that works?
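One way to test that manually (assuming redis-cli is installed; host, port, and password are taken from the REDISTOGO_URL above):

# Should reply PONG if the server is reachable and the password is correct.
redis-cli -h dogfish.redistogo.com -p 9030 -a 52847221366cb677460c306e4f482c5b ping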
This indeed had to do with the configuration for Redis on Heroku. There were additional lines that required updates in I/O Docs' app.js.
In the end, I piggy-backed on the global config object near the top (~ line 60) after sniffing out the production (Heroku) environment.
if (process.env.REDISTOGO_URL) {
  // use production (Heroku) redis configuration
  // overwrite config to keep it simple
  var rtg = require('url').parse(process.env.REDISTOGO_URL);
  config.redis.port = rtg.port;
  config.redis.host = rtg.hostname;
  config.redis.password = rtg.auth.split(":")[1];
}
I created a blog post about installing, configuring, and deploying I/O Docs, which includes this as well as other changes that were required to run it. I recommend reviewing it if you're interested in this project.
Thanks to Jan Jongboom and Kirsten Jones for helping me get started. In addition, I believe the project has since been updated on GitHub to include the Heroku configuration out of the box, but I've yet to test it.
I actually have a blog post about how to get IODocs working on Heroku. It has the config changes needed to get Redis working with IODocs on Heroku.
http://www.princesspolymath.com/princess_polymath/?p=489
Here are the code changes needed:
Add the following block under "var db;" in app.js:
if (process.env.REDISTOGO_URL) {
  var rtg = require("url").parse(process.env.REDISTOGO_URL);
  db = require("redis").createClient(rtg.port, rtg.hostname);
  db.auth(rtg.auth.split(":")[1]);
} else {
  db = redis.createClient(config.redis.port, config.redis.host);
  db.auth(config.redis.password);
}
And then this in the Load API Configs section, after reading the config file:
var app = module.exports = express.createServer();
var hostname, port, password;

if (process.env.REDISTOGO_URL) {
  var rtg = require("url").parse(process.env.REDISTOGO_URL);
  hostname = rtg.hostname;
  port = rtg.port;
  password = rtg.auth.split(":")[1];
} else {
  hostname = config.redis.host;
  port = config.redis.port;
  password = config.redis.password;
}
More recently, a cleaner way is to use the redis-url module, which handles the configuration for you.
I'm personally using Express with Redis (via the Redis To Go add-on) as a session store, and it works well on Heroku.
Example:
const express = require('express')
  , redis = process.env.REDISTOGO_URL
      ? require('redis-url').connect(process.env.REDISTOGO_URL)
      : require('redis').createClient()
  , RedisStore = require('connect-redis')(express)
  , sessionStore = new RedisStore({ client: redis })
  , app = express.createServer();

[...]

app.configure(function() {
  this
    .use(express.session({
      secret: 'mySecretHash',
      store: sessionStore // Set redis as the sessionStore for Express
    }));
});
