Using NodeJS functions in html - node.js

So I have made a back-end in NodeJS but I ran into one problem, how is it possible to link my back-end to my front end html/css page and use my NodeJS functions as scripts?

In case this wasn't clear to you, your Node.js back-end runs on your server. The server's job (in a web app) is to deliver data to the browser: it delivers HTML pages, it delivers resources referenced in those HTML pages such as scripts, images, fonts, and style sheets, and it can also answer programmatic requests for data.
The scripts in those web pages run inside the browser, which is nearly always (except for some developer testing scenarios) running on a completely different computer, usually on a different local network, connected to the server only via some network - typically the internet.
As such, a script in the browser cannot directly reference variables that exist in the server or call functions that exist on the server. They are completely different computers.
The kinds of things you can do in order to work-around this architectural limitation are as follows:
The server can dynamically modify the web page it is sending the browser and it can insert data into that web page. That data can be in the form of already rendered HTML or it can be variables inside of script tags that your web page Javascript can then use.
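For example, here is a minimal sketch of that first option, assuming an Express server and a hypothetical template file index.template.html that contains a {{USER_DATA}} placeholder inside a script tag (all the names here are made up for illustration):

    // server.js - runs on the server only
    const express = require('express');
    const fs = require('fs');
    const app = express();

    app.get('/', (req, res) => {
      // Data that only the server knows (e.g. the result of a database lookup)
      const userData = { name: 'Alice', unreadMessages: 3 };
      // Inject the data into the page before sending it to the browser
      const template = fs.readFileSync('index.template.html', 'utf8');
      res.send(template.replace('{{USER_DATA}}', JSON.stringify(userData)));
    });

    app.listen(3000);

In the template, something like <script>const userData = {{USER_DATA}};</script> would then make that data available to the page's own Javascript.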
The javascript in the web page can make network requests to your server asking it for data. These are often called AJAX calls. In this scenario, some Javascript in your page sends a request to the server to retrieve some data or cause some action on the server. The server receives that request, carries out the desired operation and then returns a result back to the client Javascript running in the browser. That client Javascript receives the result and can then act on it, inserting data into the page, making the browser go to a new web page, prompting the user, etc...
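A similarly minimal sketch of the Ajax approach, again assuming Express on the server; the endpoint name /api/time and the element id are placeholders made up for illustration:

    // --- Server side (Node.js with Express, assumed) ---
    const express = require('express');
    const app = express();

    app.get('/api/time', (req, res) => {
      // Any server-only work (database queries, file access, ...) happens here
      res.json({ serverTime: new Date().toISOString() });
    });

    app.listen(3000);

    // --- Browser side (script running in the web page) ---
    fetch('/api/time')
      .then(response => response.json())
      .then(data => {
        // Act on the result, e.g. insert it into the page
        document.getElementById('server-time').textContent = data.serverTime;
      })
      .catch(err => console.error('Request failed', err));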
There are some other ways that the web page javascript can communicate with the server such as webSocket connections, but we'll put those aside for now as they are just more ways for remote communication to happen - the structure of the communication doesn't really change.
how is it possible to link my back-end to my front end html/css page and use my NodeJS functions as scripts?
You can't directly use your Node.js functions as scripts in the front-end. You can make Ajax calls to the server and ask the server to execute its own server code on your behalf to carry out some operation or retrieve some data.
If appropriate, you can also insert scripts into the web page and run Javascript directly in the browser, but whether you can do that for your particular situation depends entirely upon what the scripts are doing. If the scripts are accessing some resource that is only available from the server (like a database or a server storage system), then you won't be able to run those types of scripts in the browser. You will have to use ajax calls to ask the server to run them for you and then retrieve the results.

Related

Is code within the #code block compiled and visible through browser debugging?

I'm relatively new to Blazor WASM. I've used Blazor Server quite a bit, and have a ton of experience using ASP.NET.
Blazor WASM seems to be divided into two separate projects - OOTB, a fresh project spawns with ProjectName.Client and ProjectName.Server. I would expect that everything under the Server project resides on the server, and is thus not visible to the browser.
I'm unsure about the code in the Client project. Specifically, the code within the #code blocks. I would expect that those blocks of code execute on the client browser, but are they visible to the client, or are they encrypted/hidden/etc and thus not visible?
Specifically, I'm looking to create a page with a form that is submitted to an HTTP endpoint for processing. The contents of that form are, of course, plainly visible to the client, as they just filled it in, but specifically, I'd like to hide the outgoing HTTP call.
So, tl;dr:
If I put code within the #code block, can the client somehow decompile and see it?
If I make an outgoing HTTP request within the #code block, can the client see the request using tools like Fiddler or within the browser Network tab?
Just to be sure: if I put the code within the Server project instead, and have the Client project send the request to the Server project with the form data, will the outgoing HTTP request from the Server project be completely hidden from the client?
In Blazor WASM, all the code required by the WASM project is downloaded to the browser, i.e. treat it as public code. It needs decompiling, but with the right tools everything in there is visible.
Any calls from the client code to an API are also viewable.
Any code within the API controllers is pure ASPNetCore server side code and can't be seen unless compromised.
So, in order:
1. Yes
2. Yes
3. Yes

(Design question) How to decouple front- and back-end to protect routing (backend) code? (Node.js - Express - React)

Context:
I'm making a React website that draws information from the Google Sheets API and formats specific rows into a data visualization. There are columns I don't want to share because of the sensitivity of the information, and fortunately there are ways to share only specified columns, but that isn't why I'm asking the following:
Problem:
I want to have a Node API that handles requests from a React front-end, but whose code isn't available on the client's browser (for example, in the bundle.js file created during build).
Clarification: I have noticed that when running most Node-React application examples locally and when building them with webpack, you end up with one bundle.js file that contains Node request-handling code being delivered to the browser on page load.
Proposal:
Do I need to deploy two separate apps (one for Node, the other for React), or can I keep them together without the server code being visible to the client?
EDIT POST ANSWER:
you end up with one bundle.js file that contains Node request-handling code being delivered to the browser on page load.
This was untrue. The code I had assumed to be request-handling code was client side request-calling code.
It is already decoupled. There is nothing you need to do.
Note that the security of your node.js server code depends on your server configuration, not node.js. If you access your server via unencrypted file sharing or FTP then your node server code is still not safe.
Even when using encryption, avoid compromised protocols such as SSL or TLSv1.0 (use TLSv1.3 instead for things like FTPS)
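As a rough sketch of what "already decoupled" looks like in practice (assuming Express and a React build output in a build/ folder; the route name is made up), only the static files are ever sent to the browser, while the route handlers stay on the server:

    // server.js - never shipped to the browser
    const express = require('express');
    const path = require('path');
    const app = express();

    // API route: this code runs only on the server
    app.get('/api/sheet-data', (req, res) => {
      // e.g. call the Google Sheets API here with a private key,
      // strip the sensitive columns, and return only what the client may see
      res.json({ rows: [] }); // placeholder response
    });

    // Static files: only the compiled React bundle in build/ is delivered to browsers
    app.use(express.static(path.join(__dirname, 'build')));

    app.listen(3000);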
You can add a simple authentication system. There are plenty of packages out there for Node already, so there is no need to implement it yourself.
Specifically, this would prevent the backend from sending sensitive data to an unauthorized request.
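As a very rough sketch of the idea (assuming Express; in practice use an established package rather than rolling your own token handling):

    // Hypothetical middleware that guards an endpoint with a shared token
    function requireAuth(req, res, next) {
      const token = req.get('Authorization');
      if (token && token === process.env.API_TOKEN) { // secret assumed to be configured on the server
        return next();
      }
      res.status(401).json({ error: 'Unauthorized' });
    }

    app.get('/api/sensitive-rows', requireAuth, (req, res) => {
      res.json({ rows: [] }); // only reached for authorized requests
    });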
EDIT: Just for clarification, code run on a Node.js server is not sent out publicly, it will run on your server and send its output to the frontend.
EDIT 2: Looks like I misunderstood your question.
If your code is not decoupled at the moment it will need to be. All code of a React.js project is sent to the browser. Since there is no backend to handle any kind of access logic, any such logic would have to be in the frontend (React.js), where it could easily be circumvented.

What is the purpose of React Router?

Given that we can do routing with Express on the server, why do we need a client-side router?
What are the benefits, and is it only significant for SPAs?
Client side routing is required to keep your application in sync with the browser URL.
It is mainly useful for Single Page Applications, where the backend is used for RESTful API calls made via XHR/AJAX.
Being an SPA, users can bookmark your URL, and when they hit that URL again your application should load that page with its data and state.
The main differences between server-side routing and client-side routing:
1. In server-side routing you download (serve) the entire page for every URL.
2. In client-side routing, after the initial page load you can render just a specific portion of the page, reuse the DOM, and manually manage the URL and history state. e.g.
www.something.com/page1/tab1 will show tab1 in the UI
www.something.com/page1/tab2 will show tab2 in the UI
In this way the URL can get more complex and you can have sub-routes with state (see the sketch below).
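A minimal sketch of that tab example with React Router (the v6 API is assumed here; the component names are made up):

    import { BrowserRouter, Routes, Route, useParams } from 'react-router-dom';

    const Tab1 = () => <div>Tab 1 content</div>;
    const Tab2 = () => <div>Tab 2 content</div>;

    function Page1() {
      const { tab } = useParams();            // "tab1" or "tab2", taken from the URL
      return tab === 'tab2' ? <Tab2 /> : <Tab1 />;
    }

    function App() {
      return (
        <BrowserRouter>
          <Routes>
            <Route path="/page1/:tab" element={<Page1 />} />
          </Routes>
        </BrowserRouter>
      );
    }

Switching between /page1/tab1 and /page1/tab2 swaps the tab component without a full page reload.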
Those who need a client-side router need it for state management. Say you have server-rendered pages, but with some client-side widgets - e.g. a calendar, a set of filters, or a collapsed or open sidebar. A router helps you initialize these components of the page in the exact state you want them. Granted, you could do most of it, and all of the use cases I've named, on the server too. But it's usually a lot easier to handle these on the client. You might render it faster on the server, but sometimes, especially when doing partial page updates, it's cheaper and faster to handle that client-side.

How web browsers decide which resource should be requested

I have a fundamental question that I have been researching for a long time, but I still don't know the exact answer.
I am working with browsers and web applications. I am wondering how, and based on what, a web browser decides to send a particular request to the web server.
For example, when you enter http://www.google.com into the address bar of your web browser, the browser will send a bunch of requests to the web server in order to render the web page properly.
Now, my question is: how does the web browser decide which requests it needs to send to the web server?
Is it related to tags like 'link' or 'script' inside the body of the responses?
Does the browser parse the JavaScript functions to see if it should send a request based on those functions?
Let's take an example to explain this one.
Consider you want to search for something and you hit http://www.google.com on your browser. These are the events that unfold to fetch you the page that will let you type in your query.
First, the networking stack on your machine will try to figure out which actual internet address matches www.google.com. This is called a DNS lookup. Once it receives a response for this lookup in form of an IP address, it can make a connection to the actual server that is serving google.com.
The machine makes a socket connection and uses the HTTP protocol to communicate with the server. It queries for the resource at / (which is the root) of the address you are trying to reach. This is called a GET request. The request is normally described like so: GET /
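Purely as an illustration (this is not what the browser literally runs), the same two steps can be mimicked with Node's built-in dns and http modules:

    const dns = require('dns');
    const http = require('http');

    dns.lookup('www.google.com', (err, address) => {
      if (err) throw err;
      console.log('DNS lookup result:', address);   // the IP address the browser would connect to

      // Open a connection and issue "GET /" using the HTTP protocol
      http.get({ host: 'www.google.com', path: '/' }, (res) => {
        console.log('Status:', res.statusCode);     // e.g. 200, or a redirect to HTTPS
        res.resume();                               // discard the body in this sketch
      });
    });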
Google will respond with an HTML page, normally "index.html", which gets downloaded by the browser.
Once the HTML is downloaded, all linked resources, such as images needed to render the HTML as well as JavaScript referenced by the page, get downloaded.
The downloaded HTML page is parsed and an in-memory tree is created called the "DOM Tree". This tree contains the elements of the HTML page in a hierarchy. Once the DOM is created, you can see the page being rendered on the browser.
During this parsing, the browser discovers more resources to be downloaded, such as images, stylesheets, javascript files. The HTML page references these resources via different tags such as <img> for images, <script> for javascript.
All detected resources are downloaded. Browsers download many of these resources in parallel, but apply them (javascript and stylesheets) sequentially in the order they were found on the page.
Stylesheets are parsed, and the styles are applied to the DOM of the HTML page. Sometimes, if stylesheets take longer to download, you can see the "raw" HTML page being rendered before the styles are applied. This happens sometimes over a slow connection.
Once the HTML page and related javascript files have been downloaded, the browser calls the "onload" callback function of javascript. Most Javascript heavy applications are started at this time.
Once onload is called, Javascript takes over and can attach handlers for different elements on the web page. Once the handlers have all been installed, interacting with the webpage could call one or more javascript functions that are listening for these events.
Javascript can also manipulate the DOM (the elements on the page), which results in UI updates (what the user sees) and therefore can be used to build a complete app on a single page.
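A tiny sketch of those last steps (the element ids are made up):

    // Runs once the page and its resources have loaded
    window.addEventListener('load', () => {
      const button = document.getElementById('search-button');
      button.addEventListener('click', () => {
        // Manipulating the DOM updates what the user sees without a new page load
        document.getElementById('results').textContent = 'Searching...';
      });
    });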
Here is some more reading on the process: http://friendlybit.com/css/rendering-a-web-page-step-by-step/
The best way to examine this interaction is to use Developer tools on Chrome/FireFox or IE and view the network activity when you visit a web page.

Pitfalls of accessing a webserver on 127.0.0.1 from js with a public site

I'm thinking about exploring the idea of having our client software run as a service on a high port and listen for simple http GET requests from 127.0.0.1. The theory is that I would be able to access this service via js from a web page that is served from my site.
1) User installs client software that installs itself as a service and waits for authenticated requests on 127.0.0.1:8080
2) When the user hits my home page, js on the page makes an XMLHttpRequest to 127.0.0.1:8080 and asks for the status
3) The home page then makes another js request back to my web server sending the status that it received.
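Roughly, the page script would do something like the following (endpoint names are placeholders; whether the cross-origin request to 127.0.0.1 is even allowed is part of what I'm unsure about):

    // Step 2: ask the local service for the device status
    fetch('http://127.0.0.1:8080/status')
      .then(response => response.json())
      .then(status => {
        // Step 3: report that status back to the public web server
        return fetch('/api/report-status', {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify(status)
        });
      })
      .catch(err => console.error('Local service not reachable', err));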
This would allow my users to upload/download and edit files on a USB attached device in real-time from a browser. Polling could be the fallback method which is close to what we do today.
Has anyone done this and what potential pitfalls are there? Will this even work?
I can't see any potential pitfalls. I do have a couple of points however.
1/ You probably want to make sure your service only accepts incoming connections from the local machine (127.0.0.1) - a short sketch follows these points. Otherwise, anyone could look at your JavaScript and figure out that it's talking to [your-ip]:8080. They could then try that themselves from a remote site (security hole).
2/ I wouldn't use port 8080 as it's commonly used for other things (alternate HTTP servers, etc.). Make it configurable and choose a nice high random-type value.
3/ I'm not sure what you're trying to do with point 3 but I think you're trying to send the status back to the user. In which case, why wouldn't the JavaScript on your home page just get the status in a single session and output/update the HTML to be presented to the user? Your "another js request back to my web server" doesn't make sense to me.
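Here is the kind of thing I mean for point 1, assuming the local service happens to be written in Node.js - binding to 127.0.0.1 means remote machines cannot connect at all:

    const http = require('http');

    http.createServer((req, res) => {
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ status: 'idle' })); // placeholder status payload
    }).listen(8080, '127.0.0.1'); // loopback only; make the port configurable (see point 2)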
You may not be able to make an XMLHttpRequest to 127.0.0.1, as XMLHttpRequest is usually limited to the same domain the main content is served from. I'm not sure if this restriction applies if the server is on the client's machine. That being said, you could still create a <script> tag whose src points to 127.0.0.1 and have that server return some JavaScript to run. If you only need a simple response, this could work well.
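A sketch of that <script> fallback, assuming the local service supports returning JavaScript that calls a named callback (the callback query parameter is an assumption):

    // The local service would respond with something like: handleStatus({"status":"idle"});
    function handleStatus(status) {
      console.log('Device status:', status);
    }

    const tag = document.createElement('script');
    tag.src = 'http://127.0.0.1:8080/status?callback=handleStatus';
    document.head.appendChild(tag);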
I think it is much better to avoid implementing application logic in JavaScript and HTML. Once the user clicks a button on the web page, JavaScript should send a request to your service and allow it to do the rest of the work.
You could have problems with step 1 (Client installs itself) depending on your target user base.
You will need a customised install for each supported environment (Win2K, Vista, Linux, MAC OS 9.0/10.0 etc.).
If your user is on a locked-down work PC this simply won't be allowed.
To some users this might look distressingly similar to a trojan unless you explicitly point out you will be installing software that runs as a service.
You didn't mention an uninstall procedure. Users resent "Adobe"-like software which installs itself and provides no sensible uninstall options.
Otherwise the approach is sound, and there are a couple of commercial products out there that use exactly this approach!
