Creating a Dashboard with a Livestream option - node.js

As the title says, I am creating a dashboard.
The dashboard should include an option to view data inserted into a database live, or at least "live" with minimal delay.
I was thinking about two approaches:
1. When the option is used, the back-end creates a trigger in the database (it is only certain data, so I would have to change the trigger according to the data). Said trigger should then send the new data via HTTP to the back-end.
What I see as a problem is that the delay of sending the data, and possible errors, could block the whole database.
1.1. Same as 1., but the trigger puts the new data into a separate table where I can then query and delete the data.
2. Just query for the newest data every 1-5 seconds or so. This just seems extremely bad and avoidable.
Which of these is the best way to do this? Am I missing something? How is this usually done?
The database is PostgreSQL; back-end and front-end are in Node.js.
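
One common way to implement approach 1 without HTTP calls from the trigger is Postgres LISTEN/NOTIFY: the trigger calls pg_notify() and the Node back-end simply listens on that channel, so nothing in the database ever waits on the application. A minimal sketch of the Node side, assuming the pg npm package, a channel named new_dashboard_data and some broadcast mechanism such as Socket.IO (the channel and event names here are made up):

// listener.js - receives Postgres NOTIFY events and forwards them to dashboard clients
const { Client } = require('pg');

const client = new Client({ connectionString: 'postgres://user:pass@localhost/dashboard' });

async function start(io) {
  await client.connect();

  // The database fires these whenever a trigger runs something like
  // PERFORM pg_notify('new_dashboard_data', row_to_json(NEW)::text);
  await client.query('LISTEN new_dashboard_data');

  client.on('notification', (msg) => {
    const row = JSON.parse(msg.payload);   // payload is whatever the trigger passed to pg_notify
    io.emit('dashboard:update', row);      // push it straight to the connected dashboards
  });
}

module.exports = { start };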

Related

Node.js: Is there an advantage to populating page data using Socket.io vs res.render(), or vice-versa?

Let's say, hypothetically, I am working on a website which provides live score updates for sporting fixtures.
A script checks an external API for updates every few seconds. If there is a new update, the information is saved to a database, and then pushed out to the user.
When a new user accesses the website, a script queries the database and populates the page with all the information ingested so far.
I am using socket.io to push live updates. However, when someone accesses the page for the first time, I have a couple of options:
I could use the existing socket.io infrastructure to populate the page
I could request the information when routing the user, pass it into res.render() as an argument and render the data using, for example, Pug.
In this circumstance, my instinct would be to utilise the existing socket.io infrastructure, purely because it would save me writing additional code. However, I am curious to know whether there are any other reasons for, or against, using either approach. For example, would it be more performant to render the data initially using one approach or the other?
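
For comparison, the two options might look roughly like this; a sketch only, assuming Express, Pug and Socket.IO, with hypothetical names for the route, template, events and the db helper:

const express = require('express');
const http = require('http');
const { Server } = require('socket.io');

const app = express();
app.set('view engine', 'pug');
const server = http.createServer(app);
const io = new Server(server);

const db = require('./db');   // db.getAllUpdates() stands in for "everything ingested so far"

// Option 1: serve a bare page and replay the backlog over the existing socket
io.on('connection', async (socket) => {
  socket.emit('backlog', await db.getAllUpdates());
});

// Option 2: query when routing the user and let Pug render the initial page
app.get('/scores', async (req, res) => {
  res.render('scores', { updates: await db.getAllUpdates() });
});

// Subsequent live updates are pushed the same way in either case
function broadcastUpdate(update) {
  io.emit('score-update', update);
}

server.listen(3000);

In practice the second option gives the user (and crawlers) meaningful HTML on first paint, while the first keeps a single data path but shows an empty shell until the socket connects and the backlog arrives.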

Logic App to push data from Cosmosdb into CRM and perform an update

I have created a logic app with the goal of pulling data from a container within Cosmos DB (with a query), looping over the results and then pushing this data into CRM (or the Common Data Service). When the data is pushed to CRM, an ID will be generated. I wish to then update Cosmos DB with this new ID. Here is what I have so far:
The next step queries the data within our Cosmos DB database and selects all IDs whose length is greater than 15 (this tells us that the ID is not yet in the CRM database).
Then we loop over the results and push each record into CRM (Dynamics 365 or the Common Data Service).
Dilemma: the first part of this process appears to be correct; however, I want to make sure that I am on the right track with this. Furthermore, once the data is successfully pushed to CRM, CRM automatically generates an ID for each record. How would I then update Cosmos DB with the newly generated IDs?
Any suggestion is appreciated
Thanks
I see a red flag in your approach here with this query with length(c.id) > 15. This is not something I would do. I don't know how big your database is going to be, but it is generally not very performant to run high volumes of cross-partition queries, especially if the database is going to keep growing.
Cosmos DB already provides an awesome streaming capability, so rather than doing this in a batch I would use Change Feed to accomplish whatever you're doing here in your Logic App. This will likely give you better control of the process and allow you to get the ID back out of your CRM app to insert back into Cosmos DB.
Because you will be writing back to Cosmos DB, you will need a flag so that the Change Feed consumer ignores the item when your own update comes around again.
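
As a rough illustration of that (not a drop-in solution), the Change Feed can be consumed by an Azure Function with a Cosmos DB trigger; the crmId property used as the ignore-flag, the pushToCrm() helper and the outputDocuments binding name below are all hypothetical:

// index.js - invoked by a cosmosDBTrigger binding (with a lease container) declared in function.json
module.exports = async function (context, documents) {
  const updates = [];

  for (const doc of documents) {
    // The flag: items that already carry a CRM ID are our own write-backs
    // re-appearing in the Change Feed, so skip them to avoid a loop.
    if (doc.crmId) {
      continue;
    }

    const crmId = await pushToCrm(doc);   // create the record in CRM / Dataverse, get its ID back
    updates.push({ ...doc, crmId });      // queue the item for upsert back into Cosmos DB
  }

  // "outputDocuments" would be a cosmosDB output binding pointing at the same container
  context.bindings.outputDocuments = updates;
};

// Placeholder for the actual CRM call, e.g. a POST to the Dataverse Web API
async function pushToCrm(doc) {
  throw new Error('not implemented');
}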

Need suggestions on the right way to use Redis for social network posts

I want to use Redis so that every time a user views posts, they are served through Redis instead of coming straight from the database.
I have multiple posts in a database. On the front-end, we load the data in chunks.
The front-end calls the API with time == null, so the back-end understands that it needs the freshest 20 posts. It picks the latest posts and uses limit() to send the first 20.
When the user scrolls down, the front-end calls the API again and sends time == the created date-time of the last post it received. The back-end finds posts whose date is less than that and sends the next 20 to the front-end.
Now I want to do this with Redis. I am confused about whether to store the complete set of 20 posts or one post at a time.
The problem is: if I store the data in blocks, like an array of objects holding 20 posts, then how do I modify a single post? Redis will not update a single entity inside a block; it will replace the whole block.
If I go with single entries, where each key is bound to a single post, then how do I send 20 posts at a time to the front-end?
Please tell me how social networks like Facebook, Instagram or Twitter handle this.
Also, please suggest whether using Redis for posts is beneficial at all. Any help or suggestion is really appreciated.
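
One common pattern that gives you both (pages of 20 and single-post updates) is to keep a sorted set of post IDs scored by creation time, plus one key per post; a page is then an ID range plus an MGET, while editing a post only rewrites its own key. A sketch, assuming ioredis and a made-up feed / post:<id> key layout:

const Redis = require('ioredis');
const redis = new Redis();   // localhost:6379 by default

// Called whenever a post is written to the database
async function cachePost(post) {
  // Sorted set "feed": member = post id, score = created-at timestamp in ms
  await redis.zadd('feed', new Date(post.createdAt).getTime(), post.id);
  // One key per post, so a single post can change without touching any "block"
  await redis.set(`post:${post.id}`, JSON.stringify(post));
}

// Editing one post only rewrites that post's key
async function updatePost(post) {
  await redis.set(`post:${post.id}`, JSON.stringify(post));
}

// time === null -> newest 20 posts; otherwise the next 20 strictly older than "time"
async function getPage(time) {
  const max = time === null ? '+inf' : `(${time}`;   // "(" makes the bound exclusive
  const ids = await redis.zrevrangebyscore('feed', max, '-inf', 'LIMIT', 0, 20);
  if (ids.length === 0) return [];
  const raw = await redis.mget(...ids.map((id) => `post:${id}`));
  return raw.filter(Boolean).map((s) => JSON.parse(s));
}

Whether Redis is worth it here mostly depends on read volume, but this layout at least avoids the whole-block rewrite problem described above.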

Scan AWS DynamoDB records only when there is new information

I am struggling to work out something that seems like it would be so simple.
Here is some context:
I have a web app which has 6 graphs powered by D3, and this data is stored in one table in DynamoDB. I am using AWS and Node.js with the aws-sdk.
I need to have the graphs updating in real-time when new information is added.
I currently have it set so that the scan function runs every 30 seconds for each graph; however, when I have multiple users, the database is hit so many times that it maxes out the read capacity.
What I want is that when data in the database is updated, the server saves that data to a document which users can poll instead of the database itself, and that document simply updates whenever new information is added to the database.
Basically, I want some way for DynamoDB to be scanned only when there is new information.
I was looking into using streams however I am completely lost on where to start and if that is the best approach to take.
You would want to configure a DynamoDB Stream on your table to trigger something like an AWS Lambda function. That function could then scan the table, generate your new file, and store it somewhere like S3.
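
A minimal sketch of such a Lambda, assuming the Node.js aws-sdk v2 and placeholder names for the table and bucket:

// Triggered by the table's DynamoDB Stream, so it only runs when data actually changes
const AWS = require('aws-sdk');

const dynamo = new AWS.DynamoDB.DocumentClient();
const s3 = new AWS.S3();

exports.handler = async (event) => {
  // "event" holds the stream records that fired this invocation; here it is only
  // used as a signal that something changed before re-reading the table.
  const result = await dynamo.scan({ TableName: 'GraphData' }).promise();
  // (for tables larger than 1 MB per scan, page with ExclusiveStartKey)

  // Publish a snapshot the dashboards can poll instead of hitting DynamoDB directly
  await s3.putObject({
    Bucket: 'my-dashboard-bucket',
    Key: 'graph-data.json',
    Body: JSON.stringify(result.Items),
    ContentType: 'application/json',
  }).promise();
};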

Real-Time Database Messaging

We've got an application in Django running against a PGSQL database. One of the functions we've grown to support is real-time messaging to our UI when data is updated in the backend DB.
So... for example we show the contents of a customer table in our UI, as records are added/removed/updated from the backend customer DB table we echo those updates to our UI in real-time via some redis/socket.io/node.js magic.
Currently we've rolled our own solution for this entire thing using overloaded save() methods on the Django table models. That actually works pretty well for our current functions, but as tables continue to grow into GBs of data, it is starting to slow down on some larger tables as our engine digs through the currently 'subscribed' UIs and works out which updates need to be sent to which clients.
Curious what other options might exist here. I believe MongoDB and other NoSQL engines support constructs like this out of the box, but I'm not finding an exact hit when Googling for better solutions.
"Currently we've rolled our own solution for this entire thing using overloaded save() methods on the Django table models."
Instead of working at the app level, you might want to work at the lower, database level.
Add a PostgreSQL trigger after row insertion, and use pg_notify to notify external apps of the change.
Then in NodeJS:
var PGPubsub = require('pg-pubsub');
var pubsubInstance = new PGPubsub('postgres://username@localhost/database');

pubsubInstance.addChannel('channelName', function (channelPayload) {
  // Handle the notification and its payload
  // If the payload was JSON it has already been parsed for you
});
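
For the database side that the listener above depends on, the trigger plus pg_notify could look roughly like this; a one-off setup sketch, assuming a customer table and the same channelName channel, with the SQL issued from Node through the pg module (any Postgres client would do just as well):

const { Client } = require('pg');

async function installTrigger() {
  const client = new Client({ connectionString: 'postgres://username@localhost/database' });
  await client.connect();

  // Function that pushes the inserted row as JSON onto the "channelName" channel
  await client.query(`
    CREATE OR REPLACE FUNCTION notify_customer_change() RETURNS trigger AS $$
    BEGIN
      PERFORM pg_notify('channelName', row_to_json(NEW)::text);
      RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;
  `);

  // Fire it after every insert on the customer table
  await client.query(`
    DROP TRIGGER IF EXISTS customer_notify ON customer;
    CREATE TRIGGER customer_notify
      AFTER INSERT ON customer
      FOR EACH ROW EXECUTE PROCEDURE notify_customer_change();
  `);

  await client.end();
}

installTrigger().catch(console.error);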
And you will be able to do the same in Python: https://pypi.python.org/pypi/pgpubsub/0.0.2.
Finally, you might want to use data partitioning in PostgreSQL. Long story short, PostgreSQL already has everything you need :)
