is it possible to load offline version of site, while sending request to server - node.js

I have a single-page app written in node.js which has a fair amount of JavaScript and CSS.
Is it possible to load an offline version of the webpage as soon as the URL is entered, and at the same time send the request to the server and wait for the response while the offline version shows a nice splash screen?
In other words, instead of waiting for the response from the server and then rendering the application, I would prefer that the browser render the app while the request is being sent.
This way the page loads instantly (with the splash page), and while requests are being sent and responses returned, the JavaScript and CSS are being loaded, which saves some time.
Is this possible with modern technologies? And is it even a good idea?

Related

Using NodeJS functions in html

So I have made a back-end in NodeJS, but I ran into one problem: how can I link my back-end to my front-end HTML/CSS page and use my NodeJS functions as scripts?
In case this wasn't clear to you, your nodejs back-end runs on your server. The server's job (in a webapp) is to deliver data to the browser. It delivers HTML pages. It delivers resources referenced in those HTML pages such as scripts, images, fonts, style sheets, etc. It can also answer programmatic requests for data.
The scripts in those web pages run inside the browser, which is nearly always (except for some developer testing scenarios) running on a completely different computer on a completely different local network, connected to the server only via some network - usually the internet.
As such, a script in the browser cannot directly reference variables that exist in the server or call functions that exist on the server. They are completely different computers.
The kinds of things you can do in order to work-around this architectural limitation are as follows:
The server can dynamically modify the web page it is sending the browser and it can insert data into that web page. That data can be in the form of already rendered HTML or it can be variables inside of script tags that your web page Javascript can then use.
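An illustrative sketch of that first approach (the `injectData` function and `serverData` variable are made up for this example; a real app would normally use a template engine such as EJS or Pug):

```javascript
// Insert server-side data into the page as a JavaScript variable that
// the page's own scripts can then read.
function injectData(html, data) {
  // Note: for untrusted data you would also escape '<' so the payload
  // cannot break out of the script tag.
  const script = `<script>var serverData = ${JSON.stringify(data)};</script>`;
  return html.replace('</head>', script + '</head>');
}

const page = '<html><head><title>demo</title></head><body></body></html>';
console.log(injectData(page, { user: 'alice', items: 3 }));
// Page scripts can now read serverData.user and serverData.items directly.
```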
The javascript in the web page can make network requests to your server asking it for data. These are often called AJAX calls. In this scenario, some Javascript in your page sends a request to the server to retrieve some data or cause some action on the server. The server receives that request, carries out the desired operation and then returns a result back to the client Javascript running in the browser. That client Javascript receives the result and can then act on it, inserting data into the page, making the browser go to a new web page, prompting the user, etc...
There are some other ways that the web page javascript can communicate with the server such as webSocket connections, but we'll put those aside for now as they are just more ways for remote communication to happen - the structure of the communication doesn't really change.
how is it possible to link my back-end to my front end html/css page and use my NodeJS functions as scripts?
You can't directly use your nodejs functions as scripts in the front-end. You can make Ajax calls to the server and ask the server to execute its own server code on your behalf to carry out some operation or retrieve some data.
If appropriate, you can also insert scripts into the web page and run Javascript directly in the browser, but whether you can do that for your particular situation depends entirely upon what the scripts are doing. If the scripts are accessing some resource that is only available from the server (like a database or a server storage system), then you won't be able to run those types of scripts in the browser. You will have to use ajax calls to ask the server to run them for you and then retrieve the results.

show progress during processing big data

I have a question. I have a web app that needs to process some big file before it can run. This can take around 5 seconds, so in the meantime I want to show the user that the file is processing, or better still, send information about how much time is left. This is the first page, and I cannot call res.render twice, so how do I do this?
var fileinarray = require('./readfile');

app.get('/', function(req, res){
    var dataready = fileinarray;
    res.render('index', {data: dataready});
});
So how can I do this? I read a little about socket.io, but I don't know how to use it in my case.
Thank you for your help
If you are loading a page with this request (which it looks like you are), then you can't show progress the way you have it structured, because the browser is waiting to download your page and thus can't show anything in that window until you render the page for it. But you want to show progress before the page gets there.
So, you have to restructure the way things work. First off, in order to show progress, the browser has to have a working page that you can show progress in. That means the browser can't be waiting for the page to arrive. The most straightforward way I can think of to do this is to render a shell page initially that your server returns immediately (no waiting for the original content). In that shell page, you have code to show progress and you initiate a request to your server to fetch the long running data. There are several ways this could be done using Ajax and webSockets. I'll outline some combinations:
All with Ajax
Browser requests the / page and your server renders the shell page
Meanwhile, after rendering the page, the server starts the long running process of fetching the desired data.
Rendered inside the shell page is a Javascript variable that contains a transaction ID
Meanwhile, client-side Javascript can regularly send Ajax requests to the server to check on the progress of the given transaction id (say every 10 seconds). The server returns whatever useful progress info and client-side Javascript displays the progress info.
At some point the server is done with the data and one of the regular Ajax requests that was checking on progress returns with all the data. The client-side Javascript then inserts this data into the current page and the operation is complete.
Mostly with WebSocket
Browser requests the / page and your server renders the shell page
Client-side code inside the shell page makes a webSocket or socket.io connection to the server and sends an initial request for the data that belongs with this page.
The server receives the webSocket connection and the initial request for the data and starts the long running process of fetching the data.
As the server has meaningful progress information, it sends that to the client over the webSocket/socket.io connection and when the client receives that information, it renders appropriate progress in the page.
At some point the server is done fetching the data and sends the client a message containing the final data. The client-side Javascript receives this data and inserts it into the page.
The client can then close the webSocket/socket.io connection.
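A sketch of the server side of that flow (socket.io is assumed to be installed; the event names `fetch-data` and `progress` are invented for illustration):

```javascript
// Shape of the messages sent over the socket; keeping this in one place
// means client and server agree on the format.
function progressMessage(percent, result) {
  return { percent: percent, done: percent >= 100, result: result || null };
}

// Attach socket.io to an existing http server.
function attachProgressSocket(httpServer) {
  const { Server } = require('socket.io'); // lazy require: only needed at runtime
  const io = new Server(httpServer);
  io.on('connection', function (socket) {
    socket.on('fetch-data', function () {
      let percent = 0;
      const timer = setInterval(function () {
        percent += 25; // simulated long-running fetch
        if (percent >= 100) {
          clearInterval(timer);
          socket.emit('progress', progressMessage(100, 'the final data'));
        } else {
          socket.emit('progress', progressMessage(percent));
        }
      }, 50);
    });
  });
  return io;
}
```

On the client, `socket.emit('fetch-data')` kicks things off, and a `socket.on('progress', ...)` handler updates the page as each message arrives.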

Submit form via headless browser - NodeJS

Due to an archaic stack, the response from a form submission is HTML. When rendered on my client's domain, the relevant information is injected in via a portlet. If you render the markup locally, the necessary content is missing. This makes it impossible for me to simply post data to the relevant form endpoint.
As a result of this, I need to submit a form and scrape the success/fail page for the necessary information in a headless browser.
I'm planning on wiring an API endpoint in my NodeJS application that I can post the form data to which in turn will submit the form in the headless browser and respond with the scraped content.
Are there any frameworks that would support this? I've looked at Nightwatch and WebDriver, but they all seem to be aimed at automated testing rather than what I'm after.
Try using CasperJS.
It is a scripting and testing utility for use with PhantomJS or SlimerJS (headless browsers).

How web browsers decide which resource should be requested

I have a fundamental question that I have been searching about for a long time, but I still don't know the exact answer.
I am working with browsers and web applications. I am wondering how, and based on what, a web browser decides to send a particular request to the web server.
For example, when you enter http://www.google.com in the address bar of your web browser, the browser will send a bunch of requests to the web server to render the web page properly.
Now, my question is: how does the web browser decide which requests it needs to send to the web server?
Is it related to tags like 'link' or 'script' inside the body of the responses?
Does the browser parse the javascript functions to see if it should send a request based on those functions?
Let's take an example to explain this one.
Consider you want to search for something and you hit http://www.google.com on your browser. These are the events that unfold to fetch you the page that will let you type in your query.
First, the networking stack on your machine will try to figure out which actual internet address matches www.google.com. This is called a DNS lookup. Once it receives a response for this lookup in form of an IP address, it can make a connection to the actual server that is serving google.com.
The machine makes a socket connection and uses the HTTP protocol to communicate with the server. It queries for the resource at / (which is the root) of the address you are trying to reach. This is called a GET request. The request is normally described like so: GET /
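The raw text the browser writes to that socket looks something like this; a small sketch that builds it (the header set is trimmed down from what a real browser actually sends):

```javascript
// Build the raw request line + headers a browser would write to the
// socket for "GET /". Real browsers add many more headers (User-Agent,
// Accept, cookies, ...), but this is the minimal shape.
function buildGetRequest(host, path = '/') {
  return [
    `GET ${path} HTTP/1.1`,
    `Host: ${host}`,
    'Connection: close',
    '',
    '',
  ].join('\r\n');
}

console.log(buildGetRequest('www.google.com'));
```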
Google will respond with an HTML page, normally "index.html", which gets downloaded by the browser.
Once the HTML is downloaded, all linked resources, such as images needed to render the HTML, as well as javascript referenced by the HTML page, get downloaded.
The downloaded HTML page is parsed and an in-memory tree is created called the "DOM Tree". This tree contains the elements of the HTML page in a hierarchy. Once the DOM is created, you can see the page being rendered on the browser.
During this parsing, the browser discovers more resources to be downloaded, such as images, stylesheets, javascript files. The HTML page references these resources via different tags such as <img> for images, <script> for javascript.
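A toy illustration of that discovery step (real browsers use a streaming HTML parser, not a regex, but the idea is the same: scan the markup for tags that reference other files):

```javascript
// Toy resource discovery: find the URLs referenced by <img>, <script>,
// and <link> tags in a chunk of HTML.
function findResources(html) {
  const urls = [];
  const tagRe = /<(img|script|link)\b[^>]*?(?:src|href)="([^"]+)"/g;
  for (const match of html.matchAll(tagRe)) {
    urls.push(match[2]);
  }
  return urls;
}

const sample = '<img src="logo.png"><script src="app.js"></script>' +
  '<link rel="stylesheet" href="style.css">';
console.log(findResources(sample)); // [ 'logo.png', 'app.js', 'style.css' ]
```

Each URL found this way becomes another request back to the server, which is why a single page load produces a chain of requests.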
All detected resources are downloaded. Browsers download many of these resources in parallel, but apply them (javascript and stylesheets) sequentially in the order they were found on the page.
Stylesheets are parsed, and the styles are applied to the DOM of the HTML page. Sometimes, if stylesheets take longer to download, you can see the "raw" HTML page being rendered before the styles are applied. This happens sometimes over a slow connection.
Once the HTML page and related javascript files have been downloaded, the browser calls the "onload" callback function of javascript. Most Javascript heavy applications are started at this time.
Once onload is called, Javascript takes over and can attach handlers for different elements on the web page. Once the handlers have all been installed, interacting with the webpage could call one or more javascript functions that are listening for these events.
Javascript can also manipulate the DOM (the elements on the page), which results in UI updates (what the user sees) and therefore can be used to build a complete app on a single page.
Here is some more reading on the process: http://friendlybit.com/css/rendering-a-web-page-step-by-step/
The best way to examine this interaction is to use the developer tools in Chrome/Firefox or IE and view the network activity when you visit a web page.

How multiple requests happens from a web browser for a simple URL?

While trying to serve a page from a node.js app, I hit upon this question: how are multiple files served from a server after a single request from the user?
For eg:
A user enters www.google.co.in in the address bar.
The browser makes a request to that URL, and it should end there with a response. But what's happening is that a few more requests are sent from that page to the server, like a chain.
What I am wondering now is: how is my web browser (Chrome) sending those extra requests? Or who is prompting Chrome to do it? And of course, how can I do the same for my node.js app?
Once Chrome gets the HTML back from the website, it will try to get all resources (images, javascript files, stylesheets) mentioned on the page, as well as favicon.ico. That's what I think you are seeing.
You can have as many requests as you want, from the client side (browser), using ajax. Here's a basic example using jQuery.get, an abstraction of an ajax call:
$.get(
    "http://site.com",
    function(data) { alert(data); }
);
