Creating sub connections with azure-mobile-apps and NodeJS

I'm trying to create an API using Node.js, Express and azure-mobile-apps to do some data synchronisation between an Ionic 3 mobile app (which uses a SQLite local database) and a Microsoft SQL database.
My API has to create a synchronisation connection for each mobile application. Each application will be linked to a distant database. For example, if user_01 wants to synchronise his data, he's going to be linked to his client_01 database. So each time it's needed, the API will create a new process running on a different port.
Here is an example: https://zupimages.net/up/19/36/szhu.png
The problem is that I'm not able to create more than one connection with azure-mobile-apps. The first one always works, but the second, third and so on still use the first connection that I instantiated. I've looked into the app stack and everything seems fine.
Is this an issue with azure-mobile-apps, or did I misunderstand something with Express?
Thanks for your responses!
var azureMobileApps = require('azure-mobile-apps');
var express = require('express');

module.exports = {
    async createConnection(client) {
        try {
            let app = express();
            // Declared with let so it is not an implicit global shared
            // between concurrent calls to createConnection.
            let mobileApp = azureMobileApps({
                homePage: true,
                swagger: true,
                data: {
                    server: '****',
                    user: client.User,
                    password: client.Password,
                    port: '1443', // note: the default MSSQL port is 1433
                    database: client.Database,
                    provider: 'mssql',
                    dynamicSchema: false,
                    options: {
                        encrypt: false
                    }
                }
            });
            await mobileApp.tables.import('./tables');
            await mobileApp.tables.initialize();
            app.use(mobileApp);
            app.listen(global.portCounter);
            console.log(app._router.stack); // debug: inspect the middleware stack
            console.log('Listening on port', global.portCounter);
            global.portCounter++;
        } catch (error) {
            console.log(error);
        }
    }
}

It's working now. The thing is, it's impossible to create multiple connections with the azure-mobile-apps SDK for Node.js within a single process.
I had to use worker_threads, which seems to isolate the memory in a sub-process, so each client gets its own azure-mobile-apps instance.
Hope it can help somebody one day.
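For reference, here is a minimal sketch of that approach; the file name connection-worker.js and the payload shape are my own illustration, not from the original code, and it assumes Node 12+ where worker_threads is available without a flag:

// main.js -- spawn one worker per client so each azure-mobile-apps
// instance lives in its own isolated module registry.
const { Worker } = require('worker_threads');

function startClientWorker(client, port) {
    const worker = new Worker('./connection-worker.js', {
        workerData: { client, port }
    });
    worker.on('error', err => console.error('worker failed:', err));
    return worker;
}

// connection-worker.js -- runs in its own thread; it would call the
// createConnection() function from the question, reading its port from
// workerData instead of a global counter:
// const { workerData } = require('worker_threads');
// require('./connection').createConnection(workerData.client, workerData.port);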

Related

Is declaring a Node.js redis client as a const in multiple helpers a safe way to use it?

This is a little hard to articulate, so I hope my title isn't too terrible.
I have a frontend/backend React/Node.js (REST API) web app that I want to add Redis support to, for storing and retrieving global app settings and per-user settings (like language preference, last login, etc.; simple stuff). So I was considering adding a /settings branch to my backend REST API to push/pull this information from a Redis instance.
This is where my Node.js inexperience comes through. I'm looking at using the ioredis client and it seems too easy. If I have a couple of helpers (more than one .js file that will call upon Redis), is constructing the client as a const in each safe to do? Or is reusing a single instance the recommended approach?
Here's a sample of what I'm thinking of doing. Imagine I had three helper modules that require access to the Redis client. Should I declare the client as a const in each? Or centralize it in a single helper module and get the client from that? Is there a disadvantage to doing either?
const config = require('config.json');
const redis_url = config.redis_url;

// redis setup
const Redis = require('ioredis');
const redis = new Redis(redis_url);

module.exports = {
    test
}

async function test(id) {
    // Note: returning inside the callback does not return from test();
    // ioredis also supports promises, so `return redis.get(id);` would work.
    redis.get(id, function (err, result) {
        if (err) {
            console.error(err);
            throw (err);
        } else {
            return result;
        }
    });
}
Thank you.
If no redis conflicts...
If the different "helper" modules you are referring to have no conflicts when interacting with Redis, such as overwriting or using the same Redis keys, then I can't see any reason not to use the same Redis instance (as outlined by garlicman) and export it to the different modules in which it is used.
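For instance, a minimal shared-instance setup might look like this (the file names are illustrative):

// redis-client.js -- construct the client once and export it.
const config = require('./config.json');
const Redis = require('ioredis');

module.exports = new Redis(config.redis_url);

// settings-helper.js (or any other helper) -- every require() of
// redis-client.js receives the same instance, because Node caches
// a module after its first load.
const redis = require('./redis-client');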
Otherwise use separate redis databases...
If you do require separate Redis database connections, Redis ships with 16 databases, so you can specify which one to connect to when creating a new instance, as below:
const redis = new Redis({ // set up config for connection to Redis
    port: 6379,        // Redis port
    host: '127.0.0.1', // Redis host (must be a string)
    family: 4,         // 4 (IPv4) or 6 (IPv6)
    db: 10,            // Redis database to connect to
});
Normally what I would do (in Java, say) is implement an explicit class with singleton access to hold the connection and any connection error/reconnect handling.
All modules in Node.js are already singletons, I believe, but what I will probably go with is a client class to hold the connection and my own access-related methods. Something like:
const config = require('config.json');
const Redis = require("ioredis");

// Named RedisClient so it does not shadow the ioredis import.
class RedisClient {
    constructor() {
        this.client = new Redis(config.redis_url);
    }

    get(key) {
        return this.client.get(key);
    }

    set(key, value, ttl) {
        let rp;
        if (ttl === 0) {
            rp = this.client.set(key, value);
        } else {
            // Arrow function keeps `this` bound to the RedisClient instance.
            rp = this.client.set(key, value)
                .then((res) => {
                    this.client.expire(key, ttl);
                    return res;
                });
        }
        return rp;
    }
}

// Exporting an instance makes this module an effective singleton.
module.exports = new RedisClient();
I'll probably include a data_init() method to check and preload an initial key/value structure on first connect.
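Usage from a helper would then look something like this (the file names are illustrative):

// settings-helper.js
const cache = require('./redis-client'); // the exported RedisClient instance

async function getSetting(id) {
    return cache.get(id); // resolves with the stored value, or null
}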

MongoDB queries are taking 2-3 seconds from Node.js app on Heroku

I am having major performance problems with MongoDB. Simple find() queries are sometimes taking 2,000-3,000 ms to complete in a database with less than 100 documents.
I am seeing this both with a MongoDB Atlas M10 instance and with a cluster that I setup on Digital Ocean on VMs with 4GB of RAM. When I restart my Node.js app on Heroku, the queries perform well (less than 100 ms) for 10-15 minutes, but then they slow down.
Am I connecting to MongoDB incorrectly or querying incorrectly from Node.js? Please see my application code below. Or is this a lack of hardware resources in a shared VM environment?
Any help will be greatly appreciated. I've done all the troubleshooting I know how to do with explain() and the Mongo shell.
var Koa = require('koa'); //v2.4.1
var Router = require('koa-router'); //v7.3.0
var MongoClient = require('mongodb').MongoClient; //v3.1.3

var app = new Koa();
var router = new Router();
app.use(router.routes());

//Connect to MongoDB
async function connect() {
    try {
        var client = await MongoClient.connect(process.env.MONGODB_URI, {
            readConcern: { level: 'local' }
        });
        var db = client.db(process.env.MONGODB_DATABASE);
        return db;
    } catch (error) {
        console.log(error);
    }
}

//Add MongoDB to Koa's ctx object
connect().then(db => {
    app.context.db = db;
});

//Get company's collection in MongoDB
router.get('/documents/:collection', async (ctx) => {
    try {
        var query = { company_id: ctx.state.session.company_id };
        var res = await ctx.db.collection(ctx.params.collection).find(query).toArray();
        ctx.body = { ok: true, docs: res };
    } catch (error) {
        ctx.status = 500;
        ctx.body = { ok: false };
    }
});

app.listen(process.env.PORT || 3000);
UPDATE
I am using MongoDB Change Streams and standard Server Sent Events to provide real-time updates to the application UI. I turned these off and now MongoDB appears to be performing well again.
Are MongoDB Change Streams known to impact read/write performance?
Change Streams do indeed affect the performance of your server, as noted in this SO question.
As mentioned in the accepted answer there:
The default connection pool size in the Node.js client for MongoDB is 5. Since each change stream cursor opens a new connection, the connection pool needs to be at least as large as the number of cursors.
const mongoConnection = await MongoClient.connect(URL, {poolSize: 100});
(Thanks to MongoDB Inc. for investigating this issue.)
You need to increase your pool size to get back your normal performance.
I'd suggest you do more logging work. Queries that slow down a while after a restart might be a worse problem than you think.
For a modern database/web app running on a normal machine, it's not easy to run into performance issues like this if you are doing things right. There might be a memory leak, other unreleased resources, or network congestion.
IMHO, you should determine whether it's a network problem first. You can do this by enabling the slow query log on MongoDB and logging in your code where each query begins and ends.
If the network is totally fine and you see no MongoDB slow queries, that means something is going wrong in your own application. Detailed logging might really help pinpoint where a query goes slow, as in the sketch below.
Hope this helps.
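A minimal sketch of that application-side timing, based on the route from the question (the log format is my own):

//Get company's collection in MongoDB, logging how long the query takes
router.get('/documents/:collection', async (ctx) => {
    var started = Date.now();
    var query = { company_id: ctx.state.session.company_id };
    var docs = await ctx.db.collection(ctx.params.collection).find(query).toArray();
    console.log('find() on ' + ctx.params.collection + ' took ' + (Date.now() - started) + ' ms');
    ctx.body = { ok: true, docs: docs };
});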

How does one correctly set up a server based deepstream RPC provider?

I am building an SOA with deepstream, and I want to use a deepstream client on the server to perform API-key-based lookups that the user should not know about. How do I actually set up an RPC client provider? I have looked in the deepstream docs and on Google, but there is no full code example of how to do this. I have created a file like the one below and run it with node. The output I get is below it:
var deepstream = require('deepstream.io-client-js')
const client = deepstream('localhost:6020').login()

console.log('Starting up')

client.on('error', (error, event, topic) => {
    console.log(error, event, topic);
})

client.on('connectionStateChanged', connectionState => {
    console.log(connectionState);
})

client.login({ username: 'USER', password: 'PASSWORD' }, (success, data) => {
    if (success) {
        client.rpc.provide('the-rpc', function (data, response) {
            response.send(data);
        });
    } else {
        console.log(data);
    }
})
--
Starting up
AWAITING_CONNECTION
As you can see it runs the code, but does not actually connect to the deepstream server. I already have the deepstream server running, and a browser client that connects to it, so the config is correct. Please help!
I think your issue is caused by the fact that you're trying to connect Node via the web port. Try using port 6021 instead for TCP (used by the Node client).
const client = deepstream('localhost:6021').login()
You should also only call .login() once, so the line would be:
const client = deepstream('localhost:6021')
We are working on a 2.0 release, coming out very soon, which will remove TCP entirely and only require a single port, to make life easier in terms of deployment and performance.
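Putting both fixes together, the connection setup from the question would look like this (credentials are placeholders, as in the original):

const deepstream = require('deepstream.io-client-js');

// Connect over TCP on 6021 and call .login() exactly once,
// passing the credentials to the callback-based form.
const client = deepstream('localhost:6021');

client.login({ username: 'USER', password: 'PASSWORD' }, (success, data) => {
    if (success) {
        client.rpc.provide('the-rpc', (data, response) => {
            response.send(data);
        });
    } else {
        console.log(data);
    }
});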

Hapi.js Catbox Redis returning "server.cache is not a function"

So I'm like 99% sure I'm just screwing up something dumb here.
I'm trying to set up catbox to cache objects to Redis. I have Redis up and running and I can hit it with RDM (a Sequel Pro-like utility for Redis), but Hapi is not cooperating.
I register the redis catbox cache like so:
const server = new Hapi.Server({
    cache: [
        {
            name: 'redisCache',
            engine: require('catbox-redis'),
            host: 'redis',
            partition: 'cache',
            password: 'devpassword'
        }
    ]
});
I am doing this in server.js. After this block of code I go on to register some more plugins and start the server. I also export the server at the end of the file:
module.exports = server;
Then in my routes file, I am attempting to set up a testing route like so:
{
    method: 'GET',
    path: '/cacheSet/{key}/{value}',
    config: { auth: false },
    handler: function (req, res) {
        const testCache = server.cache({
            cache: 'redisCache',
            expireIn: 1000
        });
        testCache.set(req.params.key, req.params.value, 1000, function (e) {
            if (e) {
                console.log(e);
                // Reply with the error; return so we don't reply twice below.
                return res(Boom.create(e.http_code, e.message));
            }
            res(req.params.key + " " + req.params.value);
        });
    }
},
Note: My routes are in an external file, and are imported into server.js where I register them.
If I comment out all the cache stuff on this route, the route runs fine and returns my params.
If I run this with the cache stuff, at first I got "server not defined". So I then added
const server = require('./../server.js');
to import the server.
Now when I run this, I get "server.cache is not a function" and a 500 error.
I don't understand what I'm doing wrong. My guess is that I'm importing server, but perhaps it's the object without all the configs set, so it's unable to use the .cache method. However, this seems wrong, because .cache should always be a default method with the default memory cache, so even if my cache registration isn't active yet, server.cache should theoretically still be a method.
I know it has to be something basic I'm messing up, but what?
I was correct. I was doing something stupid. It had to do with how I was exporting my server. I modified my structure to pull out the initial server creation and make it more modular. Now I am simply exporting JUST the server like so:
'use strict';

const Hapi = require('hapi');

const server = new Hapi.Server({
    cache: [
        {
            name: 'redisCache',
            engine: require('catbox-redis'),
            host: 'redis',
            partition: 'cache',
            password: 'devpassword'
        }
    ]
});

module.exports = server;
I then import that into my main server file (now index.js, previously server.js) and everything runs fine. I can also import it into any other file (in this case my routes file) and access the server for the appropriate methods.
Redis is happily storing keys and Hapi is happily not giving me errors.
Leaving here in case anyone else runs into a dumb mistake like this.
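For completeness, a sketch of how the exported server can be consumed in the main file (the registration calls are placeholders; this assumes the pre-v17 Hapi callback style used above):

// index.js -- import the shared server instance, register plugins
// and routes, then start it.
const server = require('./server');

// ... server.register(...) and server.route(...) go here ...

server.start(function (err) {
    if (err) {
        throw err;
    }
    console.log('Server running at:', server.info.uri);
});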

Where do I put database connection information in a Node.js app?

Node.js is my first backend language and I am at the point where I am asking myself "where do I put the database connection information?".
There is a lot of good information regarding this issue. Unfortunately for me, all the examples are in PHP. I get the ideas but I am not confident enough to replicate them in Node.js.
In PHP you would put the information in a config file outside the web root, and include it when you need database data.
How would you do this in Node.js, using the Express.js framework?
So far I have this:
var express = require('express'), app = express();
var mysql = require('mysql');

app.get('/', function (req, res) {
    var connection = mysql.createConnection({
        host: 'localhost',
        user: 'root',
        password: 'password',
        database: 'store'
    });
    var query = connection.query('SELECT * from customers where email = "deelo42@gmail.com"');
    query.on('error', function (err) {
        throw err;
    });
    query.on('fields', function (fields) {
        console.log('this is fields');
    });
    query.on('result', function (row) {
        var first = row.first_name;
        var last = row.last_name;
        res.render('index.jade', {
            title: "My first name is " + first,
            category: "My last name is " + last
        });
    });
});

app.listen(80, function () {
    console.log('we are logged in');
});
As you can see I have a basic express application with 1 GET route. This route sets off the function to go to the database and pull out information based on an email address.
At the top of the GET route is the database connection information. Where do I put that? How do I call it? How do I keep it out of the web root and include it like in PHP? Can you please show me a working example? Thanks!
I use the Express middleware concept for this, and it gives me nice flexibility to manage files.
I am writing a detailed answer, which includes how I use the config params in app.js to connect to the DB.
So my app structure looks something like this:

project-name/
    app.js
    config/
        config.js
        env/
            development.js
            test.js
            production.js

How do I connect to the DB? (I am using MongoDB; mongoose is the ORM: npm install mongoose)
var config = require('./config/config');
var mongoose = require("mongoose");

var connect = function () {
    var options = {
        server: {
            socketOptions: {
                keepAlive: 1
            }
        }
    };
    mongoose.connect(config.db, options);
};

connect();
Under the config folder I also have an 'env' folder, which stores the environment-related configurations in separate files such as development.js, test.js and production.js.
Now, as the names suggest, development.js stores the configuration params related to my development environment, and the same applies in the case of test and production. If you wish, you can add some more configuration settings, such as a 'staging' environment.
project-name/config/config.js
var path = require("path");
var extend = require("util")._extend;

var development = require("./env/development");
var test = require("./env/test");
var production = require("./env/production");

var defaults = {
    root: path.normalize(__dirname + '/..')
};

// Export the config matching NODE_ENV, falling back to development.
module.exports = {
    development: extend(development, defaults),
    test: extend(test, defaults),
    production: extend(production, defaults)
}[process.env.NODE_ENV || "development"]
project-name/config/env/test.js
module.exports = {
    db: 'mongodb://localhost/mongoExpress_test'
};
Now you can make it even more descriptive by breaking the URL into username, password, port, database and hostname.
For more details have a look at my repo, where you can find this implementation; in fact, I now use the same configuration in all of my projects.
If you are more interested, have a look at Mean.js and Mean.io; they have some better ways to manage all such things. If you are a beginner, I would recommend keeping it simple and getting things going; once you are comfortable, you can perform magic on your own. Cheers
I recommend the 12-factor app style, http://12factor.net, which keeps all of this in env vars. You should never have this kind of information hard-coded or in the app source code/repo, so you can reuse the code in different environments or even share it publicly without breaking security.
However, since there are lots of environment vars, I tend to keep them together in a single env.js like the previous responder wrote (although it is not in the source code repo) and then source it with https://www.npmjs.org/package/dotenv
An alternative is to do it manually and keep it in, e.g., ./env/dev.json, and just require() the file.
Any of these works; the important point is to keep all configuration information separate from code.
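For illustration, a minimal dotenv setup might look like this (the variable names are my own examples; dotenv reads a plain-text .env file, kept out of the repo via .gitignore):

# .env (not committed)
DB_HOST=localhost
DB_USER=root
DB_PASSWORD=password
DB_NAME=store

// app.js -- load .env into process.env before anything reads config.
require('dotenv').config();

var mysql = require('mysql');
var connection = mysql.createConnection({
    host: process.env.DB_HOST,
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    database: process.env.DB_NAME
});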
I agree with the commenter: put it in a config file. There is no ultimate way, but nconf is also one of my favourites.
The important best practice is to keep the config separate if you have a semi-public project, so your config file will not overwrite other developers' configs:
config-sample.json (has to be renamed; tracked with, for example, git)
config.json (not tracked / ignored by git)
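A minimal nconf sketch of that layout (the key names are illustrative):

// config.js -- command-line args take precedence over env vars,
// which take precedence over the untracked config.json
// (copied and renamed from config-sample.json).
var nconf = require('nconf');

nconf.argv()
     .env()
     .file({ file: './config.json' });

module.exports = nconf;

// elsewhere: var config = require('./config');
// config.get('database:host');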
