show progress during processing big data - node.js

I have a question. I have a web app that needs to process a big file before it can run. This can take about 5 seconds, so in the meantime I want to show the user that the file is being processed, or better yet, how much time is left. This is the first page, and I cannot call res.render twice, so how do I do this?
var fileinarray = require('./readfile');
app.get('/', function(req, res){
    var dataready = fileinarray;
    res.render('index', {data: dataready});
});
So how can I do this? I read a little about socket.io, but I don't know how to use it in my case.
Thank you for your help

If you are loading a page (which it looks like) with this request, then you can't show progress the way you have it structured, because the browser is waiting to download your page and thus can't show anything in that window until you render the page for it. But you want to show progress before the page gets there.
So, you have to restructure the way things work. First off, in order to show progress, the browser has to have a working page that you can show progress in. That means the browser can't be waiting for the page to arrive. The most straightforward way I can think of to do this is to render a shell page initially that your server returns immediately (no waiting for the original content). In that shell page, you have code to show progress and you initiate a request to your server to fetch the long running data. There are several ways this could be done using Ajax and webSockets. I'll outline some combinations:
All with Ajax
Browser requests the / page and your server renders the shell page
Meanwhile, after rendering the page, the server starts the long running process of fetching the desired data.
Rendered inside the shell page is a Javascript variable that contains a transaction ID
Meanwhile, client-side Javascript can regularly send Ajax requests to the server to check on the progress of the given transaction id (say every 10 seconds). The server returns whatever useful progress info and client-side Javascript displays the progress info.
At some point the server is done with the data and one of the regular Ajax requests that was checking on progress returns with all the data. The client-side Javascript then inserts this data into the current page and the operation is complete.
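Here is a minimal sketch of that Ajax-polling flow, assuming an Express app; the /progress route, the in-memory jobs store, and the startLongRunningFetch/showData helpers are made-up names for illustration:

// server.js (sketch)
var express = require('express');
var crypto = require('crypto');
var app = express();

var jobs = {};    // transactionId -> { progress, data }

app.get('/', function(req, res) {
    var id = crypto.randomBytes(8).toString('hex');
    jobs[id] = { progress: 0, data: null };
    startLongRunningFetch(jobs[id]);                  // placeholder: starts the slow work and updates the job
    res.render('shell', { transactionId: id });       // shell page returns immediately
});

app.get('/progress/:id', function(req, res) {
    res.json(jobs[req.params.id] || { progress: 0, data: null });
});

// client-side script in the shell page (TRANSACTION_ID is rendered into the page)
var timer = setInterval(function() {
    fetch('/progress/' + TRANSACTION_ID)
        .then(function(r) { return r.json(); })
        .then(function(job) {
            document.getElementById('status').textContent = job.progress + '%';
            if (job.data) {
                clearInterval(timer);
                showData(job.data);                   // placeholder: inserts the final data into the page
            }
        });
}, 10000);                                            // poll every 10 seconds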
Mostly with WebSocket
Browser requests the / page and your server renders the shell page
Client-side code inside the shell page makes a webSocket or socket.io connection to the server and sends an initial request for the data that belongs with this page.
The server receives the webSocket connection and the initial request for the data and starts the long running process of fetching the data.
As the server has meaningful progress information, it sends that to the client over the webSocket/socket.io connection and when the client receives that information, it renders appropriate progress in the page.
At some point the server is done fetching the data and sends the client a message containing the final data. The client-side Javascript receives this data and inserts it into the page.
The client can then close the webSocket/socket.io connection.
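A rough socket.io sketch of the same flow; the event names ('getData', 'progress', 'data') and the doLongFetch/showData helpers are assumptions, not anything from your code:

// server.js (sketch)
var app = require('express')();
var http = require('http').createServer(app);
var io = require('socket.io')(http);

io.on('connection', function(socket) {
    socket.on('getData', function() {
        doLongFetch(function onProgress(percent) {    // placeholder long-running fetch
            socket.emit('progress', { percent: percent });
        }, function onDone(result) {
            socket.emit('data', result);
        });
    });
});

http.listen(3000);

// client-side script in the shell page
var socket = io();
socket.emit('getData');
socket.on('progress', function(p) {
    document.getElementById('status').textContent = p.percent + '%';
});
socket.on('data', function(result) {
    showData(result);                                 // placeholder: inserts the final data into the page
    socket.close();
});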

Related

Update HTML page on mongodb data change

I want to update the HTML/pug page of multiple users when particular data changes in my MongoDB database. A user (A) follows another user (B) (whose data is stored in a different collection). When user (B) updates something, user (A) should see the change. There can be multiple people following the same user, and all of them should see live updates.
I tried looking into socket.io but it doesn't look like it is the right thing for my purpose.
Your main options are either websockets (socket.io), server side push notifications with http2, or polling with http.
If socket.io seems overkill, server push notifications probably will too.
You can poll instead, i.e. send an HTTP request from the client at regular intervals (say every 10 seconds, or whatever seems suitable) and update the page based on the response data.
You’ll need to use JavaScript on the client for this. Pug templates render just once on page load, so if you want dynamic updates after the initial render, you need client-side JavaScript in all cases.
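A minimal sketch of that polling approach, assuming a /followed-updates route that returns the latest data for the users the viewer follows (the route name and response shape are made up):

// client-side; runs in the followers' browsers
setInterval(function() {
    fetch('/followed-updates')
        .then(function(res) { return res.json(); })
        .then(function(updates) {
            updates.forEach(function(u) {
                var el = document.getElementById('user-' + u.userId);
                if (el) el.textContent = u.latestValue;    // patch whatever part of the page shows user (B)'s data
            });
        });
}, 10000);    // every 10 seconds, or whatever seems suitable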

Is there a difference in a fetch request body when it's created from a user typing the URL and when it's created from a clicked link?

I have a project that uses React but is server-rendered. When a client requests an initial page of the website by typing the URL in the address bar, server.js uses renderToString() to stringify the React App.js (with some initial data added) and sends it to the client along with the bundle.js and CSS. From this point onwards React takes over, and the client navigates through the app without needing the initial data anymore. Whenever they navigate to a different component, componentDidMount() requests the corresponding data from the same server.js.
The problem is that I cannot distinguish between a GET request from componentDidMount() and one from the user typing the URL in the address bar. This is crucial for knowing when to send new markup with initial data and when to just send a response object.
Right now I am using a very crude method of attaching a query string to the GET request from componentDidMount() to indicate that it requires non-initial data, i.e. that the request is not an initial request made on entering the website.
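For illustration, that query-string workaround might look roughly like this on the server (the /articles route, the partial parameter, and the getData/renderFullPage helpers are made-up names):

app.get('/articles', function(req, res) {
    if (req.query.partial === 'true') {
        res.json(getData());                     // request from componentDidMount(): data only
    } else {
        res.send(renderFullPage(getData()));     // typed URL / refresh: full markup with initial data
    }
});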
This method is very messy. In one instance, for example, upon refreshing, the non-initial query string stays in the URL, but since the page is refreshed, the cached React app is thrown away and the non-initial data ends up displayed raw in the browser.
Is there a better way of doing this? Maybe there is some information attached to the fetch GET request that shows where the request was generated (whether from a clicked link or a typed URL)?

Is it possible to load an offline version of a site while sending a request to the server?

I have a single-page app written in node.js which has a fair amount of JavaScript and CSS.
Is it possible to load an offline version of the webpage as soon as the URL is entered, and at the same time send the request to the server and wait for the response while the offline version shows a nice splash screen?
In other words, instead of waiting for the response from the server and then rendering the application, I would prefer the browser to render the app while the request is being sent.
This way the page loads instantly (with the splash screen), and while the requests are being sent and the responses returned, the JavaScript and CSS can already be loading, which saves some time.
Is this possible with modern technologies? And is it even a good idea?

Prevent post request timeout

I'm making a POST request to a node.js server that contains just a file name in the body. Upon receiving the request, node uploads the corresponding local file to an Amazon S3 bucket. The files can sometimes take a while to upload, and sometimes the request times out. How can I lengthen or prevent the timeout?
You can send the response back to the browser while you still continue to work on the upload. You don't have to wait until the upload is done before finishing the response. For example, you can do:
res.end("done"); // or put whatever HTML you want in the response
And, that will end the post response and the browser will be happy. Meanwhile, your server continues to finish the upload.
The upside to this is that the browser and user goes on their merry way to the next thing they want to do without waiting for the server to finish its business. The only downside I'm aware of is that if the upload fails, you don't have a simple means of informing the browser that it failed (because the response is already done).
The timeout itself is controlled by the browser. It is not something that you can directly control from the server. If you are continually sending data to the browser, then it will likely not timeout. So, as long as the upload was continuing, you could do something like this:
res.write(" ");
every 15 seconds or so as sort of a keep-alive that keeps the browser happy and keeps it from timing out. You'd probably set an interval timer for 15 seconds to do the small send of data and then when the upload finishes you would stop the timer.
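A minimal sketch of that keep-alive idea; uploadToS3 is a placeholder for your actual S3 upload code and the route name is an assumption:

app.post('/upload', function(req, res) {
    // write a little data every 15 seconds so the browser doesn't time out the response
    var keepAlive = setInterval(function() {
        res.write(' ');
    }, 15000);

    uploadToS3(req.body.fileName, function(err) {    // placeholder for the actual S3 upload
        clearInterval(keepAlive);
        res.end(err ? 'upload failed' : 'done');
    });
});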
If you control the Javascript in the browser that makes the post, then you can set the timeout value on an Ajax call from the Javascript in the browser. When you create an XMLHttpRequest object in the browser, it has a property called timeout which you can set for asynchronous ajax calls. See MDN for more info.
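For example (the 10-minute value and the /upload URL are arbitrary):

var xhr = new XMLHttpRequest();
xhr.open('POST', '/upload');
xhr.setRequestHeader('Content-Type', 'application/json');
xhr.timeout = 10 * 60 * 1000;                                     // allow up to 10 minutes
xhr.ontimeout = function() { console.log('request timed out'); };
xhr.send(JSON.stringify({ fileName: 'bigfile.dat' }));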

How do multiple requests happen from a web browser for a simple URL?

While trying to serve a page from a node.js app, I ran into this question: how are multiple files served from a server for a single request from the user?
For example:
A user enters www.google.co.in in the address bar.
The browser makes a request to that URL, and it should end there with a response. But what's happening is that a few more requests are sent from that page to the server, like a chain.
What I am wondering now is how my web browser (Chrome) is sending those extra requests, or who is prompting Chrome to do it, and of course, how can I do the same for my node.js app?
Once Chrome gets the HTML back from the website, it will try to get all the resources (images, JavaScript files, stylesheets) mentioned on the page, as well as favicon.ico. That's what I think you are seeing.
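To get the same behavior from a node.js app, the HTML you serve just needs to reference other resources, and the server needs to be able to serve them; a minimal Express sketch (file and folder names are assumptions):

// server.js
var express = require('express');
var app = express();

app.use(express.static('public'));    // serves /style.css, /app.js, /favicon.ico, images, etc.

app.get('/', function(req, res) {
    res.sendFile(__dirname + '/public/index.html');
});

app.listen(3000);

// public/index.html references other files, and each reference triggers another browser request:
//   <link rel="stylesheet" href="/style.css">
//   <script src="/app.js"></script>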
You can have as many requests as you want, from the client side (browser), using ajax. Here's a basic example using jQuery.get, an abstraction of an ajax call:
$.get(
    "http://site.com",
    function(data) { alert(data); }
);
