I have been studying and doing hands-on practice with Node.js for a few weeks now.
I understand that it is a single-threaded, event-based JavaScript runtime environment.
It uses an event loop to process JavaScript statements and any incoming I/O requests from clients.
But I am having a hard time understanding what happens when a request comes to Node.js from an external client like a React app or Postman: how it reaches the event loop, and in which phase of the event loop it gets processed. So far I have read many articles that repeat the same things about the various phases, but I am not convinced that my understanding of how a request is handled by Node.js is correct.
So here is my understanding:
Node.js listens on a port for any incoming requests.
When a client sends a request to that port, the request is picked up by Node.js.
It then goes to the event queue, where it waits until it is picked up by the event loop in the poll phase.
Also, there are a couple of doubts I have about my understanding:
I know that we set up various routes in our Node app that trigger certain logic when a request comes in. But at what point/phase does Node.js do this route matching of incoming requests?
What happens after the event loop picks up a request from the event queue? How does it get processed by the V8 engine?
In my current project, my Node.js/Express app will receive an HTTP request through a route.
Once received, Node will use NightmareJS to perform web scraping and subsequently execute a Python script that further processes the data.
Lastly, it updates this data in MongoDB.
The whole process takes about 5 minutes.
What I am trying to achieve is to allow my front-end to receive an acknowledgement that the request was accepted, and also to receive an update when the above process is completed and the database is updated.
I have looked into using long polling or socket.io, but I don't know which one I should use, or how. Or should I use RabbitMQ instead, putting a completion message on the queue while my front-end constantly queries it?
1. Long polling and socket.io are similar; socket.io falls back to long polling if WebSockets are not supported.
2. RabbitMQ is quite different: you cannot use the RabbitMQ protocol in a browser, so you would need a client app, not a web page.
3. socket.io is excellent and goes well with Express. Still, there are other options: SSE (server-sent events), Firebase. You need to try them before you choose one; they are not that hard if you follow their official guides.
4. Some of my open-source projects might help:
https://github.com/postor/sse-notify-suite
https://github.com/postor/node-realtime-db
Benefits of each solution:
ajax + server cache: simple
long polling: low latency
SSE: low latency, event-based
socket.io: low latency, event-based, high throughput, bidirectional, long-polling fallback
I have a Windows service that has a self-hosted Web API. There is a single thread outside of the Web API (call it the Main thread) whose sole responsibility is to check (somewhere) whether there is data to be processed, send it downstream to be processed, then put a response (somewhere) about the success (or failure) of the processing. That part of the design cannot be changed.
What I need to do is accept an HTTP request on the Web API that will put the incoming data somewhere for the Main thread to pick up (I was thinking ConcurrentQueue), and then wait for that data to be processed before sending an HTTP response.
The part I'm stuck on is how best for the HTTP request thread to wait for the data to be processed before responding. A while loop watching for a response doesn't seem very efficient. My current thought is to set up a message bus where the HTTP request thread subscribes by passing a reference to its ManualResetEvent and then calls WaitOne(). The Main thread will then publish to the bus after it is done, which will set the ManualResetEvent, allowing the HTTP request thread to grab the result and send an HTTP response.
Is using ManualResetEvents this way a good idea? It feels like there should already be a way to handle this in Web API, but I haven't found anything.
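The same request-waits-for-worker pattern, sketched here in Node terms (the language the rest of this thread uses) rather than .NET: a per-job promise replaces the ManualResetEvent, and the worker loop resolves it when processing finishes. All names (queue, pending, workerTick) are illustrative:

```javascript
// Sketch: the HTTP handler awaits a per-job promise instead of spinning in
// a while loop; the worker resolves that promise when the job is done.
const pending = new Map(); // jobId -> resolve function for that job's promise
const queue = [];          // jobs waiting for the worker to pick up
let nextId = 0;

// Called from the HTTP handler: enqueue the data and await completion.
function submitAndWait(data) {
  const id = nextId++;
  const promise = new Promise((resolve) => pending.set(id, resolve));
  queue.push({ id, data });
  return promise;
}

// The "Main thread" analogue: drain the queue and signal completion.
function workerTick() {
  while (queue.length > 0) {
    const { id, data } = queue.shift();
    const result = { ok: true, echoed: data }; // stand-in for real processing
    pending.get(id)(result); // resolving the promise "sets the event"
    pending.delete(id);
  }
}

const timer = setInterval(workerTick, 50);
```

In .NET the idiomatic equivalent of this promise is a TaskCompletionSource that the handler awaits, which avoids blocking a thread-pool thread the way WaitOne() does.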
I'm really new to Node.js and I have a beginner's question.
I plan to create a Node server that will execute an HTTP request for a JSON file every 1-2 seconds.
The reason for requesting so frequently is that the JSON file I'm requesting changes constantly.
What is the correct way of doing that without blocking the event loop?
Is it safe to put the request code in a function and call it via setTimeout()?
Should I run the requests in a child process?
The whole point of asynchronicity is that it is asynchronous. When you issue the HTTP request, it is sent off more or less instantly and node returns to other business, waking up only when the response is received. The only thing that could cause a "blocking" of the event loop in your case is very intensive processing either preparing the request or processing the response.
That is why on the front page of the node website it says "node uses an event-driven, non-blocking I/O model".
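So yes, setTimeout() is safe for this, and no child process is needed. A common sketch (the startPolling/fetchJson names are illustrative, not a library API) schedules the next request only after the current one completes, so slow responses never stack up:

```javascript
// Non-blocking periodic polling via recursive setTimeout: each tick awaits
// the HTTP call, hands the data to a callback, then schedules the next tick.
function startPolling(fetchJson, intervalMs, onData) {
  let stopped = false;

  async function tick() {
    if (stopped) return;
    try {
      onData(await fetchJson()); // the await yields to the event loop
    } catch (err) {
      console.error('poll failed:', err.message);
    }
    if (!stopped) setTimeout(tick, intervalMs);
  }

  tick();
  return () => { stopped = true; }; // call the returned function to stop
}
```

On Node 18+ you could pass something like `() => fetch(url).then((r) => r.json())` as `fetchJson`; the event loop stays free between requests either way.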
I am using Sails.js (a Node.js framework) and running it on Heroku and locally.
The API function reads from an external file and performs long computations on the queries it reads, which might take hours.
My concern is that after a few minutes the request returns with a timeout.
I have 2 questions:
How do I control the HTTP request/response timeout (and what do I really need to control here)?
Is an HTTP request considered best practice for this use case, or should I use Socket.IO? (I have no experience with Socket.IO, so I am not sure whether I'm talking nonsense.)
You should use the worker pattern to accomplish any work that would take more than a second or so:
"Web servers should focus on serving users as quickly as possible. Any non-trivial work that could slow down your user’s experience should be done asynchronously outside of the web process."
"The Flow
Web and worker processes connect to the same message queue.
A process adds a job to the queue and gets a url.
A worker process receives and starts the job from the queue.
The client can poll the provided url for updates.
On completion, the worker stores results in a database."
https://devcenter.heroku.com/articles/asynchronous-web-worker-model-using-rabbitmq-in-node
I want to understand more exactly what happens when a Node.js server receives a client request. With a more traditional server, a new thread is created to handle the new client session.
But in Node.js and other event-loop style servers, what exactly happens? What part of the codebase first gets executed? With node, I am almost certain something in the http module handles the new request first.
I want to know a little more about the details of how this works in a sort of compare and contrast style between the two types of handling of client connections.
In a nutshell:
Node uses libuv to manage incoming connections and data events
Events are placed in a queue to be handled on the next tick of the event loop
When bytes start arriving, they are fed into the native-code HTTP parser
The parser calls a callback in JS-land with the header contents
The rest of the JS HTTP code dispatches the request to user code, which may be Express