keep fetching data up to date - node.js

I have a question I'd like to ask, in case anybody has an idea about it.
I'm building a full-stack application backed by Node.js and written in TypeScript. In my Node.js app I fetch data from an API that I later serve to the user, but I have one small issue: I'm using node-fetch for now, and the fetched data changes all the time, e.g. right now there are 10 entries, and 5 seconds later there are 30. Is there a way or mechanism to keep the fetched data in my Node.js app up to date by fetching it in the background?
Thanks in advance!

The easiest solution to implement, and a genuinely good one for making your web app realtime, is https://pusher.com/
This is how you can handle Pusher within your Node.js app:
import Pusher from 'pusher'

// Below are the keys that you will get from Pusher when you go to
// "Getting Started" within your dashboard
const pusher = new Pusher({
  appId: "<Your app id provided by pusher>",
  key: "<Key id provided by pusher>",
  secret: "<Secret key given by pusher>",
  cluster: "<cluster given by pusher>",
  useTLS: true
});
Now you want to set up a change stream for your collection in MongoDB:
const db = mongoose.connection;
db.once('open', () => {
  // This will depend on the collection you want to watch;
  // make sure the collection name is accurate
  const postCollection = db.collection('posts');
  const changeStream = postCollection.watch();
  changeStream.on('change', (change) => {
    // The change event carries the content that changed in the DB collection
    const post = change.fullDocument;
    if (change.operationType === 'insert') {
      pusher.trigger('<channel name for your pusher>', '<event name, in this case insert>', {
        newPost: post
      });
    }
  });
});
With that set up, your Pusher and backend are working; now it's time to set up the frontend.
If you're using vanilla JS, the Pusher getting-started guide has code for you.
If you're using React, here's the code below:
import Pusher from 'pusher-js'

useEffect(() => {
  Pusher.logToConsole = true;
  const pusher = new Pusher('<Key received from pusher>', {
    cluster: '<cluster received from pusher>'
  });
  const channel = pusher.subscribe('<channel name that you wrote in server>');
  channel.bind('<event that you wrote in server>', (data) => {
    // These are the data entries arriving as soon as they enter the DB;
    // you can then update your state with the spread operator to keep what
    // you have and also add the new content
    alert(JSON.stringify(data));
  });
  // Very important to have a clean-up function, and an empty dependency
  // array so this effect runs only once
  return () => {
    channel.unbind();
    pusher.unsubscribe('<channel name that you wrote in server>');
  };
}, []);
With that, everything in your app is realtime.
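If you don't need push delivery, a simpler alternative (a sketch of mine, not part of the Pusher answer above) is to poll the upstream API in the background and serve a cached copy; the URL and the 5-second interval here are illustrative:

import fetch from 'node-fetch'

// Hypothetical upstream endpoint; replace with the API you actually fetch from
const UPSTREAM_URL = 'https://api.example.com/entries'
let cache = []

async function refresh() {
  try {
    const res = await fetch(UPSTREAM_URL)
    if (res.ok) cache = await res.json()
  } catch (err) {
    console.error('background fetch failed:', err)
  }
}

refresh()                  // fetch once at startup
setInterval(refresh, 5000) // then re-fetch every 5 seconds

// Route handlers read from `cache`, so users always get the latest copy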

Related

optimize number of redis connections with a node.js-application

I have a question about Redis connections.
I'm developing an app in React Native which will use WebSockets for chat messages. My backend consists of a Node.js app with Redis as the pub/sub mechanism for socket.io.
I'm planning on deploying on Heroku. I'm currently on the free hobby plan, which has a limit of 20 connections to Redis.
My question now is: how can I optimize my code so that a minimum of connections is used? I'm of course planning to upgrade my Heroku plan once I launch, but even then I want to optimize.
My Node.js code looks like this (simplified):
const Redis = require('ioredis');

const pubClient = new Redis(/* redis url */);
const subClient = new Redis(/* redis url */);
const socketClient = new Redis(/* redis url */);

const io = require('socket.io')(server);

io.on('connection', async (socket) => {
  // store socket.id in redis so I can send messages to individual users
  // based on the user ID
  const userId = socket.handshake.query.userId;
  await socketClient.hset('socketIds', userId, socket.id);

  socket.on('message', async (data) => {
    /**
     * data {
     *   userId,
     *   message
     * }
     */
    const data2 = JSON.parse(data);
    // get the socket.id based on the user ID
    const socketId = await socketClient.hget('socketIds', data2.userId);
    // send the message to the correct socket.id
    io.to(socketId).emit('message', data2.message);
  });
});
So when I deploy this code to Heroku, it will create 3 connections to the same Redis server on startup. But what if 2, 3, 4, ... people connect to this Node.js server? If 2 people connect, will there be 6 Redis connections, or only 3? In other words: will the Node.js server initiate 3 new Redis connections every time a user accesses the server, or will it always be 3 connections?
I'm trying to track all connections with CLIENT LIST in redis-cli, but it does not seem to give me the correct picture. I was testing my code with only one user connected to the socket server, and it showed me 1 client in Redis (instead of 3 connections).
Thanks in advance.
It doesn't matter how many people are using the app: each client instance holds only 1 socket at any time, which means you'll see at most 3 clients per Node process.
You see only 1 connection because by default ioredis initiates the connection when the first command is executed, and not when the client is created. You can call client.connect() in order to initiate the socket without executing a command.
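As an aside (a sketch of mine, not from the answer above): a connection in subscriber mode can only issue subscribe-related commands, but the publish connection is an ordinary client, so you could reuse it for the hash lookups and drop socketClient, going from 3 connections to 2:

const Redis = require('ioredis');

// one ordinary connection for PUBLISH plus regular commands (HSET/HGET),
// one dedicated connection for SUBSCRIBE, as the Redis protocol requires
const pubClient = new Redis(/* redis url */);
const subClient = new Redis(/* redis url */);

io.on('connection', async (socket) => {
  const userId = socket.handshake.query.userId;
  // reuse pubClient where the original code used socketClient
  await pubClient.hset('socketIds', userId, socket.id);
});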

Can you keep a PostgreSQL connection alive from within a Next.js API?

I'm using Next.js for my side project. I have a PostgreSQL database hosted on ElephantSQL. Inside the Next.js project, I have a GraphQL API set up using the apollo-server-micro package.
Inside the file where the GraphQL API is set up (/api/graphql), I import a database helper module. In that module, I set up a pool connection and export a function which uses a client from the pool to execute a query and return the result. It looks something like this:
// import node-postgres module
import { Pool } from 'pg'

// set up pool connection using environment variables, with a maximum of
// three active clients at a time
const pool = new Pool({ max: 3 })

// query function which uses the next available client to execute a single
// query and return the results on success
export async function queryPool(query) {
  let payload
  try {
    // pool.query checks out a client, runs the query, and releases the client
    const res = await pool.query(query)
    payload = res.rows
  } catch (e) {
    console.error(e)
  }
  return payload
}
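For context, a call site might look like this (a hypothetical resolver; the import path is an assumption):

// e.g. inside the /api/graphql resolvers
import { queryPool } from '../../lib/db'

const resolvers = {
  Query: {
    users: async () => queryPool('SELECT * FROM users')
  }
}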
The problem I'm running into is that the Next.js API doesn't (always) keep the connection alive, but rather opens up a new one (either for every connected user, or maybe even for every API query), which results in the database quickly running out of connections.
I believe that what I'm trying to achieve is possible for example in AWS Lambda (by setting context.callbackWaitsForEmptyEventLoop to false).
It is very possible that I don't have a proper understanding of how serverless functions work and this might not be possible at all but maybe someone can suggest me a solution.
I have found a package called serverless-postgres, and I wonder if that might solve it, but I'd prefer to use the node-postgres package instead, as it has much better documentation. Another option would probably be to move away from the integrated API functionality entirely and build a dedicated backend server which maintains the database connection, but obviously this would be a last resort.
I haven't stress-tested this yet, but it appears that the mongodb Next.js example solves this problem by attaching the database connection to global in a helper function. The important bit in their example is here. This works because a warm serverless instance reuses the same Node process across invocations, so anything stashed on global survives between requests.
Since the pg connection is a bit more abstract than mongodb, it appears this approach takes just a few lines for us pg enthusiasts:
// eg, lib/db.js
const { Pool } = require("pg");

if (!global.db) {
  global.db = { pool: null };
}

export function connectToDatabase() {
  if (!global.db.pool) {
    console.log("No pool available, creating new pool.");
    global.db.pool = new Pool();
  }
  return global.db;
}
then in, eg, our API route, we can just:
// eg, pages/api/now
export default async (req, res) => {
  const { pool } = connectToDatabase();
  try {
    const time = (await pool.query("SELECT NOW()")).rows[0].now;
    res.end(`time: ${time}`);
  } catch (e) {
    console.error(e);
    res.status(500).end("Error");
  }
};

How can I verify I don't need the mLab add-on for my Heroku node.js app?

After reading through the mLab -> Atlas migration plan a few times, I decided I'd try a different way. My coding background is mainly asm on the MCS-51, so I'm something of a n00b in the Node.js/Mongo/Heroku world. I barely understood half of the migration process.
So I wrote a small test app following this blog entry and then used what I'd learned to modify my actual app to talk to Atlas directly. I exported the collections from the old db to JSON, then imported them into the Atlas version to recreate the database. Everything appears to be working correctly; I don't see any data going into the old db and it looks like the new Atlas db is getting all the action.
But I'm leery of deleting the mLab add-on from Heroku until I've verified that it's truly not needed any more, because I'm pretty sure that I won't be able to recreate it if it turns out I've missed something.
So my question is, how can I ensure I'm no longer using the mLab add-on? I don't really understand what it was doing for me in the first place so I'm not sure how to verify I'm not using it any more.
Here are the relevant code snippets I'm using to access the Atlas db...
function myEncode(str) { // https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/encodeURIComponent
  return encodeURIComponent(str).replace(/[!'()*]/g, function(c) {
    return '%' + c.charCodeAt(0).toString(16);
  });
}

const ATLASURI = process.env.ATLASURI;
const ATLASDB = process.env.ATLASDB;
const ATLASUSER = process.env.ATLASUSER;
const ATLASPW = myEncode(process.env.ATLASPW); // wrapper needed to handle strong passwords...
// note the "@" separating the credentials from the host
const dbURL = "mongodb+srv://"+ATLASUSER+":"+ATLASPW+"@"+ATLASURI+"/"+ATLASDB+"?retryWrites=true&w=majority";

var GoogleStrategy = require('passport-google-oauth20').Strategy;
const {MongoClient} = require('mongodb');
const client = new MongoClient(dbURL, { useNewUrlParser: true, useUnifiedTopology: true });
var store = new MongoDBStore({ uri: dbURL, collection: 'Sessions' });

var db = undefined;
client.connect(async function(err) {
  if (err) { console.log("Error:\n" + String(err)); }
  db = await client.db(ATLASDB);
  console.log("Connected to db!");
  banner();
});

Can't determine Firebase Database URL when trying to read Firebase Database from within Node.js Firebase function

I am using Flutter to build an app that accesses the Firebase database. All good from the Flutter side... but I am new to Node.js and Cloud Functions. I am trying to create a Node.js function that reacts to a deletion event of a record on one Firebase Database node and then deletes records from two other Firebase Database nodes and image files from Firestore.
I am reacting to the trigger event with a functions.database.onDelete call, no problem, but falling at the very next hurdle, i.e. trying to read admin.database to get a snapshot.
I have created a dummy function that uses .onUpdate to pick up a trigger event (I don't want to keep having to recreate my data, as I would if I used .onDelete) and then tries to read my Firebase Database to access a different node. The trigger event is picked up fine, but I don't seem to have a database reference URL to do the read... yet it is the same database. The output of a call to process.env.FIREBASE_CONFIG on the console log shows the URL is present.
The included function code also has comments showing the various outputs I get on the console log.
I am going crazy over this... PLEASE can anyone tell me where I am going wrong? I've been searching Google, Stack Overflow, and the Firebase docs for the last two days :-(
const admin = require("firebase-admin"); // Import Admin SDK
const functions = require("firebase-functions"); // Import Cloud Functions

admin.initializeApp({
  credential: admin.credential.cert(
    require("./user-guy-firebase-adminsdk.json")
  )
});

exports.testDeleteFunction = functions.database
  .ref("/user-guys/{userGuyId}")
  // Using .onUpdate as I don't want to keep restoring my data if I use .onDelete
  .onUpdate((snapshot, context) => {
    const userData = snapshot.after.val();
    const userId = userData.userId;
    console.log('userId: ' + userId); // Gives correct console log output: userId: wajzviEcUvPZEcMUbbkPzcw64MB2

    console.log(process.env.FIREBASE_CONFIG);
    // Gives correct console log output:
    // {"projectId":"user-guy","databaseURL":"https://user-guy.firebaseio.com","storageBucket":"user-guy.appspot.com","cloudResourceLocation":"us-central"}

    // I have tried each of the four following calls and received the console log message as indicated after each.
    //
    // var root = admin.database.ref();   // Console Log Message: TypeError: admin.database.ref is not a function
    // var root = admin.database().ref(); // Console Log Message: Error: Can't determine Firebase Database URL.
    // var root = admin.database.ref;     // Fails at the console.log below with the message indicated.
    // var root = admin.database().ref;   // Console Log Message: Error: Can't determine Firebase Database URL.

    console.log(root.toString); // Console Log Message: TypeError: Cannot read property 'toString' of undefined.

    // This is intended to read a chat thread for two users but processing never gets here.
    var database = root.child('chats').child('0cSLt3Sa0FS26QIvOLbin6MFsL43GUPYmmAg9UUlRLnW97jpMCAkEHE3');
    database
      .once("value")
      .then(snapshot => {
        console.log(snapshot.val());
      }, function (errorObject) {
        console.log("The read failed: " + errorObject.code);
      });

    return null; // Will do stuff here once working.
  });
Error messages shown in code comments.
If you want to use the configuration in FIREBASE_CONFIG, you should initialize the Admin SDK with no parameters:
admin.initializeApp();
This will use the default service account for your project, which should have full read and write access to the database.
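Put together, a minimal sketch of the corrected function (the paths come from the question; treat this as an illustration rather than the definitive fix):

const admin = require("firebase-admin");
const functions = require("firebase-functions");

// No parameters: picks up FIREBASE_CONFIG, including databaseURL
admin.initializeApp();

exports.testDeleteFunction = functions.database
  .ref("/user-guys/{userGuyId}")
  .onUpdate((snapshot, context) => {
    const root = admin.database().ref(); // now resolves without the URL error
    return root
      .child("chats")
      .child("0cSLt3Sa0FS26QIvOLbin6MFsL43GUPYmmAg9UUlRLnW97jpMCAkEHE3")
      .once("value")
      .then((chat) => console.log(chat.val()));
  });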
You need to add your database URL in admin.initializeApp:
admin.initializeApp({
  databaseURL: "your_database_url"
});
Select Realtime Database in Firebase, copy your URL, and add it to the settings in your Firebase config, or watch this video: https://www.youtube.com/watch?v=oOm_9y3vb80
Config example:
const config = {
  apiKey: "",
  authDomain: "",
  projectId: "",
  databaseURL: "https://youUrl.firebaseio.com/",
  storageBucket: "",
  messagingSenderId: "",
  appId: "",
  measurementId: ""
};
See:
https://firebase.google.com/docs/admin/setup#initialize_without_parameters
https://firebase.google.com/docs/functions/database-events
Try initializing without parameters.
The SDK can also be initialized with no parameters. In this case, the SDK uses Google Application Default Credentials and reads options from the FIREBASE_CONFIG environment variable. If the content of the FIREBASE_CONFIG variable begins with a { it will be parsed as a JSON object. Otherwise the SDK assumes that the string is the name of a JSON file containing the options.
const admin = require("firebase-admin"); // Import Admin SDK
const functions = require("firebase-functions"); // Import Cloud Functions
admin.initializeApp();
Same issue 3 years later...
After DAYS, I found out that all of the Flutter documents have code for Firestore instead of the Realtime Database.
Basically, there are two products: Cloud Firestore and the Realtime Database. You are calling the Realtime Database methods, but probably have a Firestore database.
Try admin.firestore().collection('some_collection').add(...);
Basically, everywhere you're calling .database(), replace it with .firestore() if you are using Firestore. Yes, the Flutter tutorials are misleading!!!
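For illustration, a hedged side-by-side of the two write APIs in firebase-admin (the collection and field names are made up):

const admin = require("firebase-admin");
admin.initializeApp();

// Realtime Database: push a new child under a path
admin.database().ref("/posts").push({ title: "hello" });

// Cloud Firestore: add a new document to a collection
admin.firestore().collection("posts").add({ title: "hello" });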

Can't receive redis data from socket io

I'm building a realtime visualization using Redis as a pub/sub messenger between Python and Node. There's a Python script that is always running and sets a Redis hash with HMSET. That side of the app is working fine: if I enter the example command "HGETALL 'sellers-80183917'" in a Redis client, I get the proper data back.
The problem is on the JS side. I'm using the socket.io and redis Node.js libraries to listen to the Redis instance and publish the results online through a d3.js viz.
I run the following code with Node:
var express = require('express');
var app = express();
var redis = require('redis');

app.use(express.static(__dirname + '/public'));

var http = require('http').Server(app);
var io = require('socket.io')(http);
var sredis = require('socket.io-redis');
io.adapter(sredis({ host: 'localhost', port: 6379 }));

redisSubscriber = redis.createClient(6379, 'localhost', {});

redisSubscriber.on('message', function(channel, message) {
  io.emit(channel, message);
});

app.get('/sellers/:seller_id', function(req, res){
  var seller_id = req.params.seller_id;
  redisSubscriber.subscribe('sellers-'.concat(seller_id));
  res.render('seller.ejs', { seller: seller_id });
});

http.listen(3000, '127.0.0.1', function(){
  console.log('listening on *:3000');
});
And this is the relevant part of the seller.ejs file that's receiving the user requests and outputting the viz:
var socket = io('http://localhost:3000');
var stats;
var seller_key = 'sellers-'.concat(<%= seller %>);

socket.on(seller_key, function(msg){
  stats = [];
  console.log('Im in');
  var seller = $.parseJSON(msg);
  var items = seller['items'];
  for (item in items) {
    var item_data = items[item];
    stats.push({
      'title': item_data['title'],
      'today_visits': item_data['today_visits'],
      'sold_today': item_data['sold_today'],
      'conversion_rate': item_data['conversion_rate']
    });
  }
  setupData(stats);
});
The problem is that the socket.on() handler never receives anything, and I don't see where the problem is, as everything else seems to be working fine.
I think that you might be confused as to what Pub/Sub in Redis actually is. It's not a way to listen to changes on hashes; you can have a Pub/Sub channel called sellers-1, and you can have a hash with the key sellers-1, but those are unrelated to each other.
As documented here:
Pub/Sub has no relation to the key space.
There is a thing called keyspace notifications that can be used to listen to changes in the key space (through Pub/Sub channels); however, this feature isn't enabled by default because it'll take up more resources.
Perhaps an easier method would be to publish a message after the HMSET, so any subscribers would know that the hash got changed (they would then retrieve the hash contents themselves, or the published message would contain the relevant data).
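For illustration, a sketch of that publish-after-write pattern (shown here with the Node redis client; the always-running Python writer would do the equivalent with its own client; the key, fields, and payload are made up):

var redis = require('redis');
var publisher = redis.createClient(6379, 'localhost');

// write the hash, then tell subscribers that this seller changed
publisher.hmset('sellers-80183917', { items: '...' }, function(err) {
  if (err) return console.error(err);
  publisher.publish('sellers-80183917', JSON.stringify({ updated: true }));
});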
This brings us to the next possible issue: you only have one subscriber connection, redisSubscriber.
From what I understand from the Node.js Redis driver, calling .subscribe() on such a connection would remove any previous subscriptions in favor of the new one. So if you were previously subscribed to the sellers-1 channel and subscribe to sellers-2, you wouldn't be receiving messages from the sellers-1 channel anymore.
You can listen on multiple channels by either passing an array of channels, or by passing them as arguments:
redisSubscriber.subscribe([ 'sellers-1', 'sellers-2', ... ])
// Or:
redisSubscriber.subscribe('sellers-1', 'sellers-2', ... )
You would obviously have to track each "active" seller subscription. Either that, or create a new connection for each subscription, which also isn't ideal.
It's probably a better idea to have a single Pub/Sub channel on which all changes would get published, instead of a separate channel for each seller.
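A hedged sketch of that single-channel idea, built on the question's server code (the channel name and payload shape are illustrative):

// one subscription covers every seller
redisSubscriber.subscribe('sellers');

redisSubscriber.on('message', function(channel, message) {
  // the publisher would send e.g. {"sellerId": "80183917", "items": {...}}
  var payload = JSON.parse(message);
  // fan out under the per-seller event name the frontend already listens for
  io.emit('sellers-' + payload.sellerId, message);
});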
Finally: if your seller id's aren't hard to guess (for instance, if it's based on an incremental integer value), it would be trivial for someone to write a client that would make it possible to listen in on any seller channel they'd like. It might not be a problem, but it is something to be aware of.
