handling nodejs http requests in a separate process - node.js

I want to handle specific HTTP requests in a child process, with the requests identified by their URL path. There are several examples in the Node documentation and elsewhere online that almost do this, or that simply do not work.
The reason is that the main server must stay reliable, while certain requests may be handled by code that is not necessarily of the same quality. The entire request should therefore be handed over to an external process that can be resurrected if it dies.
Ideally the external process should look as much like a normal Node HTTP server as possible, and the connection between the parent and child processes should not be over a socket.
It seems that the fork function and messages might do what I require, but I cannot see any way to pass the request and response to the child process for handling.

Have you looked at the Node.js cluster module?
It is not for specific requests, but basically the master forks multiple workers that then handle HTTP requests (in general, one worker per CPU core). If a worker dies, the master forks a new one.

Related

Can I run socket and HTTP tests in parallel in JMeter, and change the HTTP request body depending on the last socket message?

I need to write a load test for a web application that uses the WebSocket protocol to push server state to users, while in parallel the users send requests containing data from that server state.
I need a socket client that listens on the socket and updates the current server state, and at the same time I need to send HTTP requests with data from the current state.
Can I implement this in Apache JMeter? If so, do you know of any useful articles (other than ones about the BlazeMeter parallel controller) or examples?
Or can you advise tools better suited to this goal that can generate a concurrent load of at least 20k users (we currently use a JMeter cluster)? Thank you in advance!
One JMeter thread (virtual user) can only execute one Sampler at a time. If you need to run two requests at the same moment, you either need to use two different Thread Groups, or, if you prefer to stay with one Thread Group, you can use e.g. an If Controller so that "even" users run the HTTP requests and "odd" users run the WebSocket requests, or vice versa.
If you need to pass data between threads (virtual users), use either JMeter Properties or the Inter-Thread Communication Plugin.

Does a web application built on Go's http package work as a single process using multiple threads to deal with incoming requests?

I read that a Go application receives connections directly from clients using a built-in web server, rather than running behind a web server such as Apache. I also read that network servers such as Apache deal with incoming requests using multiple processes created by fork().
Is this also true for a Go application, or does it operate on a single process and handle incoming requests by multiple threads?
Go applications typically use the net/http package to implement a web server. The documentation for that package says:
Serve accepts incoming HTTP connections on the listener l, creating a new service goroutine for each. The service goroutines read requests and then call handler to reply to them.
Goroutines are scheduled on one or more OS threads.
The package does not use fork.

node js on heroku - request timeout issue

I am using Sails.js (a Node.js framework) and running it on Heroku and locally.
The API function reads from an external file and performs long computations on the queries it reads, which might take hours.
My concern is that after a few minutes the request returns with a timeout.
I have 2 questions:
How do I control the HTTP request/response timeout (what do I actually need to control here)?
Is an HTTP request considered best practice for this, or should I use Socket.IO? (I have no experience with Socket.IO, so I am not sure whether the question even makes sense.)
You should use the worker pattern to accomplish any work that would take more than a second or so:
"Web servers should focus on serving users as quickly as possible. Any non-trivial work that could slow down your user’s experience should be done asynchronously outside of the web process."
"The Flow
Web and worker processes connect to the same message queue.
A process adds a job to the queue and gets a url.
A worker process receives and starts the job from the queue.
The client can poll the provided url for updates.
On completion, the worker stores results in a database."
https://devcenter.heroku.com/articles/asynchronous-web-worker-model-using-rabbitmq-in-node

multiple child_process with node.js / socket.io

This is more of a design question than an implementation question, but I am wondering whether I can design something like this. I have an interactive app (similar to the Python shell). I want to host a server (let's say using either the node.js http server or socket.io, since I am not sure which would be better) that spawns a new child_process for every client that connects to it and maintains a separate context for that particular client. I am a complete noob in terms of node.js and socket.io. The most I have managed is to have one child process on a socket.io server and connect the client to it.
So the question is, would this work? If not, is there any other way in Node to get it to work, or am I better off with a local server?
Thanks
Node.js is a single-process web platform. With clustering (child_process), you create independent copies of the same application, each running as a separate process.
Each process costs memory, which is generally why traditional thread-per-client systems do not scale well; for Node, spawning a child process per client would be extremely inefficient in terms of hardware resources.
Node is event-based, and you don't need to worry much about scope as long as your application logic does not exploit it.
The recommended number of workers is the number of CPU cores on the hardware.
There is always a master process that creates the workers. Each worker creates HTTP and socket.io listeners, which are technically bound to the master's socket and routed from there.
HTTP requests are routed to different workers, while sockets are routed at connection time; from then on, that worker handles the socket until it disconnects.

Can I make a HTTP server listen on a shared socket, to allow no-downtime code upgrades?

I want to be able to kill my old process after a code update without any downtime.
In Ruby, I do this by using Unicorn. When you send the Unicorn master process a USR1 kill signal, it spawns a new copy of itself, which means all the libraries get loaded from file again. There's a callback available when a Unicorn master has loaded its libraries and its worker processes are ready to handle requests; you can then put code in here to kill the old master by its PID. The old master will then shut down its worker processes systematically, waiting for them to conclude any current requests. This means you can deploy code updates without dropping a single request, with 0 downtime.
Is it possible to do this in Node? There seem to be a lot of libraries vaguely to do with this sort of thing - frameworks that seem to just do mindless restarting of the process after a crash, and stuff like that - but I can't find anything that's a stripped-down implementation of this basic pattern. If possible I'd like to do it myself, and it wouldn't be that hard - I just need to be able to do http.createServer().listen() and specify a socket file (which I'll configure nginx to send requests to), rather than a port.
Both the net and http modules have versions of listen that take a path to a socket and fire their callback once the server has been bound.
Furthermore, you can use the new child_process.fork to launch a new Node process. This new process has a communication channel built in to its parent, so it could easily tell its parent to exit once it has initialized.
net documentation
http documentation
child_process.fork documentation
(For the first two links, look just under the linked-to method, since they are all the same method name.)
