When we hit an app server (Apache Tomcat), it creates a thread to accept our request and build the connection; Tomcat then creates another thread to process the request and hand the result to the connection, and the connection thread delivers it to the client.
Node.js, on the other hand, has an event loop (one task at a time, run to completion). When a request comes to the Node.js server, the event loop picks the request from the listener queue and delegates the task to worker threads that run in the background.
The event loop is then free to pick up other requests. When a worker thread has completed its processing, it hands the data to a callback, and the event loop picks that callback from the callback queue if there is nothing else to do on the main stack.
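A minimal sketch of this flow, using fs.readFile as a stand-in for work delegated to the background (libuv) worker threads:

```js
const fs = require('fs');

console.log('request received');
fs.readFile('/etc/hosts', 'utf8', (err, data) => {
  // Runs only after the worker thread finishes and the event loop picks
  // this callback from the callback queue (once the main stack is empty).
  console.log('file read complete');
});
console.log('event loop is already free for the next request');
```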
I want to clear up my doubts regarding the app server and the Node server:
App server: the thread created by the server to handle the connection with Tomcat is responsible for delivering the data to the client for that particular request? Am I right?
But how does Node.js know which request it needs to deliver a response to? How does it maintain the connection for every request?
Is my understanding of request processing right for both kinds of servers?
The Node.js server is where your Node program runs, whereas Apache/nginx is just a reverse proxy server. A reverse proxy server is often used in front of a Node.js server.
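As for how Node.js knows which request a response belongs to: each incoming request invokes your handler with its own req/res pair, bound to the socket the request arrived on, so the association is kept by the closure rather than by a dedicated thread. A minimal sketch:

```js
const http = require('http');

http.createServer((req, res) => {
  // Simulate work handed off to the background.
  setTimeout(() => {
    // Even if other requests arrived in the meantime, this `res` still
    // refers to the socket of the request that started this timer.
    res.end(`handled ${req.url}\n`);
  }, 1000);
}).listen(3000);
```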
As Node.js is a single-threaded runtime platform, how do I run the following servers in parallel from within a single Node.js app:
Node.js's HTTP server: to serve the HTML5 app
A WebSocket server: to serve WebSocket connections to the HTML5 app, using the same HTTP connection opened at the HTTP server.
A UDP server: to expose a service-discovery endpoint for other independently running Node.js apps on the same machine or on other machines/Docker containers.
I was thinking about somehow achieving the above using RxJS, but would rather hear from the community about their solutions/experiences.
Node.js is not single-threaded. The developer only has access to one thread, but under the hood, Node.js is multi-threaded.
Specifically for your question: you can start multiple servers in the same process. The Socket.IO getting-started example shows running WebSockets alongside an HTTP server. The same thing can also be done with UDP.
Hope that helps.
First off, you can have as many listening servers as you want in your node.js process. As long as you write proper asynchronous code in your handlers and don't have any CPU-hogging algorithms to run, you should be just fine.
Second, your WebSocket and HTTP server can be the exact same server process, as that's how WebSocket was designed to work.
Your UDP listener then just needs to be on some different port from your web server.
The single-threaded aspect of node.js applies only to your Javascript. You can run multiple server listeners just fine. If two requests on different servers come in at the same time, the one that arrives slightly before the other will get its handler called, and the one arriving just a bit later will be queued until the handler for the first is done or returns while waiting for an asynchronous operation itself. In this way, single-threaded node.js can handle many requests.
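A minimal sketch of all three listeners in one process (assuming the third-party ws package for WebSockets; ports and messages are placeholders):

```js
const http = require('http');
const dgram = require('dgram');
const WebSocket = require('ws');

// 1. HTTP server for the HTML5 app
const server = http.createServer((req, res) => {
  res.end('hello from HTTP');
});

// 2. WebSocket server sharing the same HTTP server, and therefore the
//    same port: the WebSocket handshake is an HTTP connection upgrade.
const wss = new WebSocket.Server({ server });
wss.on('connection', (ws) => ws.send('hello from WebSocket'));

server.listen(8080);

// 3. UDP service-discovery endpoint on its own port
const udp = dgram.createSocket('udp4');
udp.on('message', (msg, rinfo) => {
  // Answer whoever probed us with where the service lives
  udp.send('node-app:8080', rinfo.port, rinfo.address);
});
udp.bind(41234);
```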
I am using ServiceStack to build RESTful services, though I don't have deep knowledge of it. It works by sending a request and getting a response back. I have a scenario, and my questions depend on it.
Scenario: I am sending a request from a browser or any client from which I am able to send requests to the server. Suppose the server takes 3 seconds to process a single request and send the response back to the browser. After one second, I send another request to the server from the same browser (client). Now I am getting the response to the second request, which I sent later.
Question 1: What happens behind the scenes with the first request, for which I did not get a response?
Question 2: How can I stop the processing of the orphaned request?
Edit: I am using IIS to host the services.
ServiceStack executes requests concurrently on multithreaded web servers, whether you're hosting on ASP.NET/IIS or self-hosting, so 2 concurrent requests run concurrently on different threads. Different scenarios are possible if you're executing async tasks in your Services, in which case the thread is freed up to execute different tasks, but the implementation details are largely irrelevant here.
HTTP web requests are each executed to their end; even when the client connection is lost, your Services are never notified and no exceptions are raised.
But for long-running Services you can enable ServiceStack's high-level Cancellable Requests feature, which gives clients a way to cancel long-running requests.
I am going to design a system where there is two-way communication between clients and a web application. The web application can receive data from the client so it can persist it to a DB and so forth, while it can also send instructions to the client. For this reason, I am going to use Node.JS and Socket.IO.
I also need to use RabbitMQ since I want that if the web application sends an instruction to a client, and the client is down (hence the socket has dropped), I want it to be queued so it can be sent whenever the client connects again and creates a new socket.
From the client to the web application it should be pretty straightforward: the client uses the socket to send the data to the Node.JS app, which in turn sends it to the queue so it can ultimately be forwarded to the web application. In this direction, if the socket is down, there is no internet connection, and hence the data is not sent in the first place, or is cached on the client.
My concern lies with the other direction, and I would like an answer before I design it this way and actually implement it, so I can avoid hitting any brick walls. Let's say that the web application tries to send an instruction to the client. If the socket is available, the web app forwards the instruction to the queue, which in turn forwards it to the Node.JS app, which in turn uses the socket to forward it to the client. So far so good.

If, on the other hand, the internet connection from the client has dropped, and hence the socket is currently down, the web app will still send the instruction to the queue. My question is: when the queue forwards the instruction to Node.JS, and Node.JS figures out that the socket does not exist and hence cannot send the instruction, will the queue receive a reply from Node.JS that it could not forward the data, and hence that it should remain in the queue? If that is the case, it would be perfect. When the client manages to connect to the internet, it will perform a handshake once again, the queue will once again try to send to Node.JS, and this time Node.JS manages to send the instruction to the client.
Is this the correct reasoning of how those components would interact together?
This won't work the way you want it to.
When the Node process receives the message from RabbitMQ and sees the socket is gone, you can easily nack the message back to the queue.
However, that message will be processed again immediately. It won't sit there doing nothing; the Node process will just pick it up again. You'll end up with your Node process and RabbitMQ thrashing as they nack the same message over and over, waiting for the socket to come back online.
If you have dozens or hundreds of messages for a client that isn't connected, you'll have dozens or hundreds of messages thrashing around in circles like this. It will destroy the performance of both your Node process and RabbitMQ.
My recommendation:
When the Node app receives the message from RabbitMQ and the socket is not available for that client, put the message in a database table and mark it as waiting for that client.
When the client re-connects, check the database for any pending messages and forward them all at that point.
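A sketch of that recommendation, assuming amqplib and Socket.IO; clientSockets, savePendingMessage, and loadPendingMessages are hypothetical helpers (a map of connected clients plus two database calls you would implement):

```js
const amqp = require('amqplib');

async function startConsumer(clientSockets, savePendingMessage) {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue('instructions');

  ch.consume('instructions', async (msg) => {
    const { clientId, payload } = JSON.parse(msg.content.toString());
    const socket = clientSockets.get(clientId);

    if (socket && socket.connected) {
      socket.emit('instruction', payload);         // client online: deliver now
    } else {
      await savePendingMessage(clientId, payload); // offline: park it in the DB
    }
    ch.ack(msg); // ack either way, so the message never thrashes in the queue
  });
}

// On re-connect, flush whatever was parked while the client was offline
function onConnection(socket, clientSockets, loadPendingMessages) {
  const clientId = socket.handshake.query.clientId;
  clientSockets.set(clientId, socket);
  loadPendingMessages(clientId).then((pending) =>
    pending.forEach((p) => socket.emit('instruction', p))
  );
}
```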
I have 2 servers running 24/7 in a single process (file server.js):
TCP Socket Server (from NodeJS's net module)
ExpressJS HTTP Server
My TCP Socket Server receives messages from multiple GPS trackers and inserts them into the DB (MySQL). It does that by instantiating a worker (a child process created by .fork() --> file worker.js) whenever it receives a socket connection; the worker will then take care of every following message coming over that socket.
My HTTP Server receives HTTP/AJAX requests from browser clients and answers them. Some of these requests require that my http server send the request to a worker so it can handle it better. The worker will then, later on, send back some information to the server.
Now, here lies my problem. How can I send the response to the initial request from the browser? Should I just hold on to the (req, res) objects from Express, and when the worker's information comes back to the server, send whatever I want to the browser?
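A minimal sketch of that idea, keeping the res object in the server process (it cannot cross the process boundary) and matching the worker's reply to it with a request id; the worker file and message shape are assumptions for illustration:

```js
const { fork } = require('child_process');
const express = require('express');

const app = express();
const worker = fork('./worker.js');
const pending = new Map(); // request id -> res object
let nextId = 0;

app.get('/data', (req, res) => {
  const id = nextId++;
  pending.set(id, res);                  // res stays here; only plain data
  worker.send({ id, query: req.query }); // crosses the IPC channel
});

// worker.js is expected to answer with { id, result } via process.send()
worker.on('message', ({ id, result }) => {
  const res = pending.get(id);
  if (res) {
    pending.delete(id);
    res.json(result); // answer the original browser request
  }
});

app.listen(3000);
```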
I am using Sails.js (a Node.js framework) and running it on Heroku and locally.
The API function reads from an external file and performs long computations that might take hours on the queries it reads.
My concern is that after a few minutes it returns with a timeout.
I have 2 questions:
How do I control the HTTP request/response timeout (what do I really need to control here)?
Is an HTTP request considered best practice for this, or should I use Socket.IO? (Well, I have no experience with Socket.IO and am not sure if I'm talking bullshit.)
You should use the worker pattern to accomplish any work that would take more than a second or so:
"Web servers should focus on serving users as quickly as possible. Any non-trivial work that could slow down your user’s experience should be done asynchronously outside of the web process."
"The Flow
Web and worker processes connect to the same message queue.
A process adds a job to the queue and gets a url.
A worker process receives and starts the job from the queue.
The client can poll the provided url for updates.
On completion, the worker stores results in a database."
https://devcenter.heroku.com/articles/asynchronous-web-worker-model-using-rabbitmq-in-node
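A condensed sketch of that flow, assuming amqplib and Express. In the real pattern the web and worker are separate processes sharing a database; here an in-memory Map stands in for that database so the sketch stays self-contained:

```js
const amqp = require('amqplib');
const express = require('express');
const crypto = require('crypto');

const results = new Map(); // stand-in for the shared database (step 5)

async function main() {
  const ch = await (await amqp.connect('amqp://localhost')).createChannel();
  await ch.assertQueue('jobs');

  // Worker side (steps 3 & 5): receive the job, do the long work, store the result
  ch.consume('jobs', (msg) => {
    const { id } = JSON.parse(msg.content.toString());
    results.set(id, 'computed-value'); // the hours-long computation happens here
    ch.ack(msg);
  });

  const app = express();

  // Web side (step 2): add a job to the queue and hand back a URL to poll
  app.post('/jobs', (req, res) => {
    const id = crypto.randomUUID();
    ch.sendToQueue('jobs', Buffer.from(JSON.stringify({ id })));
    res.status(202).json({ url: `/jobs/${id}` });
  });

  // Step 4: the client polls the provided URL for updates
  app.get('/jobs/:id', (req, res) => {
    const result = results.get(req.params.id);
    res.json(result ? { done: true, result } : { done: false });
  });

  app.listen(3000);
}

main();
```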