I have a very large Node.js controller that runs through huge tables and checks data against other huge tables. This takes quite a while to run.
Today my page only shows the final result after 20 minutes.
Is there any way to stream the result out continuously from the controller to the web page, like a real-time scrolling log of what's going on?
(Or is Socket.IO my only option?)
You could try node-cron with a dynamic query that fetches data in batches with different limits, and append each batch on the front-end side.
I am not sure whether this is the proper way or not.
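A minimal sketch of that idea (here the browser drives the batches rather than node-cron), assuming a hypothetical Express route /check-progress, a hypothetical runBatch() helper wrapping your existing checking logic with LIMIT/OFFSET, and that each batch returns printable strings:

    // server: an Express route that checks one batch of rows per call
    app.get('/check-progress', async (req, res) => {
        const offset = parseInt(req.query.offset, 10) || 0;
        const limit = 1000;                            // rows per batch; tune to your tables
        const results = await runBatch(offset, limit); // hypothetical: your checking logic, limited to one batch
        res.json({ results, nextOffset: offset + limit, done: results.length < limit });
    });

    // browser: keep requesting batches and append each one like a scrolling log
    async function poll(offset = 0) {
        const res = await fetch(`/check-progress?offset=${offset}`);
        const { results, nextOffset, done } = await res.json();
        results.forEach((line) => logElement.append(line + '\n')); // logElement: e.g. a <pre> on the page
        if (!done) poll(nextOffset);
    }

This polls over plain HTTP; Socket.IO, which you mentioned, would push each batch instead, but it is not required for this pattern.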
Let's say, hypothetically, I am working on a website which provides live score updates for sporting fixtures.
A script checks an external API for updates every few seconds. If there is a new update, the information is saved to a database, and then pushed out to the user.
When a new user accesses the website, a script queries the database and populates the page with all the information ingested so far.
I am using socket.io to push live updates. However, when someone is accessing the page for the first time, I have a couple of options:
I could use the existing socket.io infrastructure to populate the page
I could request the information when routing the user, pass it into res.render() as an argument and render the data using, for example, Pug.
In this circumstance, my instinct would be to utilise the existing socket.io infrastructure, purely because it would save me writing additional code. However, I am curious to know whether there are any other reasons for, or against, either approach. For example, would it be more performant to render the initial data using one approach or the other?
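For reference, a minimal sketch of the second option, assuming Express with Pug as the view engine and a hypothetical getScoresSoFar() query helper:

    // render the initial state server-side; socket.io then takes over for live updates
    app.get('/fixtures/:id', async (req, res) => {
        const updates = await getScoresSoFar(req.params.id); // hypothetical DB query helper
        res.render('fixture', { updates });                  // views/fixture.pug iterates over `updates`
    });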
As the title says, I am creating a dashboard.
The dashboard should include an option to view data inserted into a database, live, or at least "live" with minimal delay.
I was thinking about two approaches:
1. When the option is used, the back-end creates a trigger in the database (it is only certain data, so I would have to change the trigger according to the data). Said trigger would then send the new data via HTTP to the back-end. What I see as a problem is that the delay of sending the data, and possible errors, could block the whole database.
1.1. Same as 1., but the trigger puts the new data into a separate table, which I can then query and delete from.
2. Just query for the newest data every 1-5 seconds or so. This just seems extremely bad and avoidable.
Which of those is the best way to do this? Am I missing something? How is this usually done?
The database is PostgreSQL; the back-end and front-end are in Node.js.
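For illustration, a minimal sketch of the Node side of option 1.1, assuming the trigger already copies new rows into a hypothetical dashboard_queue table, using node-postgres and socket.io:

    // poller.js — assumes `io` is your socket.io server instance
    const { Pool } = require('pg');
    const pool = new Pool(); // reads PG* environment variables

    async function drainQueue(io) {
        // DELETE ... RETURNING atomically takes everything the trigger has queued
        const { rows } = await pool.query('DELETE FROM dashboard_queue RETURNING *');
        if (rows.length > 0) io.emit('dashboard-update', rows);
    }

    function startPolling(io, intervalMs = 1000) {
        setInterval(() => drainQueue(io).catch(console.error), intervalMs);
    }

This keeps the trigger itself trivial (an INSERT into the queue table), so a slow or failing consumer cannot block the main tables.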
I was wondering if it would be possible to save large amounts of data on the client side with node.js.
The data contain some images or videos. On the first page visit, all the data should be stored in the browser cache, so that if there is no internet connection it would not affect the displayed content.
Is there a way to do this?
Thanks for your help :D
You can use the Web Storage API from your client-side (browser-side) JavaScript.
If your application stores a megabyte or so, it should work fine. It has limits on size that vary by browser. But beware: treat the data in Web Storage as ephemeral, that is, like a cache rather than permanent storage. (You could seriously annoy your users if you rely on local storage to hold hours of their work.)
You can't really get a node.js program itself to store this information, because those programs run on servers, not in browsers. But you can write your browser-side JavaScript code to get data from node.js and then store it.
From browser JavaScript you can say things like
window.localStorage.setItem('itemName', value)
and
var retrievedValue = window.localStorage.getItem('itemName')
window.localStorage items survive browser exits and restarts. window.sessionStorage items last as long as the browser session continues.
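For example, a minimal sketch of fetching data from a node.js server and caching it for offline use, assuming a hypothetical /api/content endpoint:

    // browser-side: fetch from the node.js server, fall back to the cached copy offline
    async function loadContent() {
        try {
            const res = await fetch('/api/content'); // hypothetical endpoint
            const data = await res.json();
            window.localStorage.setItem('content', JSON.stringify(data)); // cache for next time
            return data;
        } catch (err) {
            // no connection: fall back to whatever an earlier visit cached
            const cached = window.localStorage.getItem('content');
            return cached ? JSON.parse(cached) : null;
        }
    }

Given the size limits above, this suits text and small assets rather than large videos.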
The goal of the app is to generate a PDF using Puppeteer: we fetch the data, build the HTML template, then generate the PDF using headless Chrome, and finally return a link to the newly generated PDF.
The issue is that it takes about 7000 ms to generate a PDF, mainly because of three Puppeteer functions: launch() (launches the headless browser), goto() (navigates to the HTML template), and pdf() (generates the PDF).
So, at around 7-8 seconds to answer one request, more incoming requests or a sudden spike could easily push the total to 40-50 seconds for 30 simultaneous requests, which I find unacceptable.
After much time spent on research, I will implement the cluster module to take advantage of multiple processes.
But besides clustering, are there any other possible options to optimize the time on a single instance?
There are a few things to consider...
Consider calling puppeteer.launch() once per application start. Your conversion script can then check whether a browser instance already exists and reuse it by calling newPage(), which basically creates a new tab, instead of launching a new browser every time.
You may consider intercepting requests with page.on('request', this.onPageRequest) before calling goto(), and filtering out certain types of files the page is loading that you don't need for PDF rendering; you may filter out external resources as well, if that applies to your case.
When using pdf(), you may return the Buffer from your service instead of writing to the file system and returning a link to the location of the created PDF file. This may or may not speed things up, depending on your service setup; in any case, less I/O should be better.
This is probably all you can do for a single instance of your app; with the implementation above, a regular PDF (a couple of pages) with a few images renders for me in 1-2 seconds.
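Putting those points together, a minimal sketch, assuming a hypothetical renderPdf() service function and that fonts and media are safe to drop for your templates:

    const puppeteer = require('puppeteer');

    // Launch the browser once per application start and reuse it across requests.
    const browserPromise = puppeteer.launch();

    async function renderPdf(url) {               // hypothetical service function
        const browser = await browserPromise;
        const page = await browser.newPage();     // a new tab, not a new browser

        // Skip resources the PDF does not need (adjust the list to your templates).
        await page.setRequestInterception(true);
        page.on('request', (req) => {
            const type = req.resourceType();
            if (type === 'font' || type === 'media') req.abort();
            else req.continue();
        });

        try {
            await page.goto(url, { waitUntil: 'networkidle0' });
            return await page.pdf({ format: 'A4' }); // resolves to a Buffer; no file system involved
        } finally {
            await page.close();                      // close the tab, keep the browser
        }
    }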
To speed things up further, use clustering. Rather than embedding it inside your application, you may consider using the PM2 process manager to start and scale multiple instances of your service.
I have a website which can have up to 500 concurrent viewers, with data updated every three seconds. Currently, each user has an AJAX object which calls a web page every three seconds; that call queries a DB and returns the results.
What I would love to do is have each client get a socket to a node.js process; that process would poll the DB every three seconds for updated data, and if there were updated data, it would be announced (ideally as JSON) and pushed to each client, which would update the page accordingly.
If this is possible, does anyone have a recommendation as to where I start? I am fairly familiar with JS but node.js seems to confuse me.
Thanks
I have some experience with node.js myself.
It is absolutely doable and looks like the perfect use case for node.js.
I recommend starting with an Express tutorial and later moving on to socket.io.
I don't know which DBMS you are using, but there probably is a nice package for that as well. Just look through this list.
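A minimal sketch of the poll-and-broadcast pattern from the question, assuming socket.io and a hypothetical fetchLatestData() query helper:

    // server.js — one poller serves every connected client
    const http = require('http');
    const { Server } = require('socket.io');

    const httpServer = http.createServer();
    const io = new Server(httpServer);

    let lastPayload = null;

    setInterval(async () => {
        const data = await fetchLatestData(); // hypothetical DB query helper
        const payload = JSON.stringify(data);
        if (payload !== lastPayload) {        // push only when something changed
            lastPayload = payload;
            io.emit('update', data);          // one query, pushed to all 500 viewers
        }
    }, 3000);

    httpServer.listen(3000);

The key win over per-client AJAX is that the database sees one query every three seconds instead of one per viewer.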