node.exe process with iisnode stops running - node.js

I am using iisnode to run my node.js app. However, after about an hour, the node.exe process stops running (I need it running since I have a setInterval() method that pulls data from the database every few seconds). Any advice?
Also, if I set up my server with process.env.PORT, how do I connect to it using socket.io on the client-side? I understand that I have to use
io.configure(function () {
  io.set("transports", ["xhr-polling"]); // no websockets
  io.set("polling duration", 10);
  io.set("log level", 1); // no debug msg
});

This is a correct configuration for socket.io when the application is hosted in IIS using iisnode. On the client side, you connect to the server using the regular HTTP address of the endpoint exposed by IIS. Note that in order for socket.io to work out of the box, you need to host your node.js application as an IIS WebSite rather than a virtual directory within a web site: your app should be addressable with http://foobar.com/ rather than http://foobar.com/myapp/.
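For reference, a minimal client-side sketch, assuming the socket.io 0.9-era client script that the server exposes at /socket.io/socket.io.js and an app addressable at the site root (the 'news' event name is just a placeholder):
<script src="/socket.io/socket.io.js"></script>
<script>
  // connect to the same origin IIS exposes; no explicit port needed
  var socket = io.connect('http://foobar.com/');
  socket.on('news', function (data) {
    console.log(data); // handle whatever the server pushes
  });
</script>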
When you say the node.exe process stops running, do you mean it is terminated and disappears, or that it hangs? IIS will terminate worker processes (including any child processes they spawned, which is node.exe in this case) after a period of inactivity (when no HTTP requests arrive that target this server). The duration of that time period is configurable in the Application Pool settings.
If you rely on logic in your application that requires code to be run at intervals, IIS itself does not provide the best hosting model, as process lifetime is tied closely to HTTP messaging. You really need either a durable server (e.g. Windows Service, check out http://nssm.cc/), or some form of a web cron.

Related

How is a nodejs server different from an apache tomcat server (app server)?

When we hit an app server (Apache Tomcat), it creates a thread for the connection, and Tomcat creates another thread to process the request; the result is handed to the connection thread, which delivers it to the client.
Node.js, by contrast, has an event loop (one task at a time, run to completion). When a request comes to the node.js server, the event loop picks it up from the listener queue and delegates the work to worker threads that run in the background.
The event loop is then free to pick up other requests; when a worker thread has finished its processing, it hands the result to a callback, and the event loop picks that callback from the callback queue once there is nothing else to do on the main stack.
I want to clear up my doubts regarding the app server and the node server:
App server: the thread created by the server for the connection is responsible for delivering data to the client for that particular request? Am I right?
But how does node.js know which request it needs to deliver the response to? How does it maintain the connection for every request?
Is my understanding of request processing right for both kinds of servers?
The node.js server is where your node program runs, whereas Apache/nginx is just a reverse proxy server; a reverse proxy is often used in front of a node.js server.
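On the question of how node.js knows which request a response belongs to: every incoming request gets its own req/res pair, and the handler closes over them, so writing to res always targets the connection that request arrived on. A minimal sketch (the port and delay are arbitrary):
var http = require('http');

http.createServer(function (req, res) {
  // req and res are created per request; this closure keeps them paired
  setTimeout(function () {
    // even though this runs later, res still refers to the right connection
    res.end('Handled ' + req.url);
  }, 1000);
}).listen(3000);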

How to run an HTTP server, UDP server and WebSocket server from a single NodeJS app?

As NodeJS is a single-threaded runtime platform, how can I run the following servers in parallel from within a single NodeJS app:
NodeJS's http server: to serve the HTML5 app
A WebSocket server: to serve WebSocket connections to the HTML5 app using the same http connection opened at the http server.
A UDP server: to expose a service discovery endpoint for other independently running NodeJS apps on the same machine or on other machines/docker containers.
I was thinking about somehow achieving the above by using RxJS, but would rather hear from the community about their solutions/experiences.
Node.js is not single-threaded. The developer only has access to a single thread, but under the hood node.js is multi-threaded.
Specifically for your question, you can start multiple servers in the same process. The Socket.io getting started example shows running websockets with the http server. The same thing can also be done with UDP.
Hope that helps.
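A rough sketch of that getting-started pattern, with the WebSocket server (socket.io here) attached to the same http listener (the port and event name are placeholders):
var http = require('http');

var server = http.createServer(function (req, res) {
  res.end('hello'); // regular HTTP traffic
});
var io = require('socket.io')(server); // websockets share the same listener

io.on('connection', function (socket) {
  socket.emit('greeting', 'connected over the same port as HTTP');
});

server.listen(process.env.PORT || 3000);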
First off, you can have as many listening servers as you want in your node.js process. As long as you write proper asynchronous code in your handlers and don't have any CPU-hogging algorithms to run, you should be just fine.
Second, your webSocket and http server can be the exact same server process, as that's how webSocket was designed to work.
Your UDP listener then just needs to be on a different port from your web server.
The single-threaded aspect of node.js applies only to your Javascript. You can run multiple server listeners just fine. If two requests on different servers come in at the same time, the one that arrives slightly before the other will get its handler called first, and the one that arrives just a bit later will be queued until the handler for the first is done or returns while waiting for an asynchronous operation of its own. In this way, the single-threaded node.js can handle many requests.
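And a sketch of the UDP discovery endpoint on its own port, using the built-in dgram module (the port number and reply payload are made up for illustration):
var dgram = require('dgram');
var udp = dgram.createSocket('udp4');

udp.on('message', function (msg, rinfo) {
  // reply to whoever asked, e.g. with this service's HTTP address
  var reply = Buffer.from(JSON.stringify({ service: 'my-app', httpPort: 3000 }));
  udp.send(reply, rinfo.port, rinfo.address);
});

udp.bind(41234); // a different port from the HTTP/WebSocket server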

I'm confused about the meaning of WAS (Web Application Server) and web framework

I think I know what a framework is and some famous frameworks like Ruby on Rails and Spring, and I think I can distinguish between the meaning of web server and web application server,
but I don't know what is different between a WAS and a framework. To me, a framework seems like a kind of WAS, because the framework does a lot of dynamic work related to the database while handling requests from the web server (Apache or nginx).
I'm confused about the relationship between these two parts of web programming.
Could you explain it?
Basically the framework is only responsible for providing a response to an http request (that includes handling the database, as you said). But Rails isn't responsible for opening a new thread (or, in some implementations, a process) whenever a new http request arrives - this is done by the application server (such as Puma, Webrick, Unicorn etc). This is called concurrency (the ability to serve the app to multiple requests at the same time, in a nutshell) and is purely the job of the app server. Another thing is understanding (and parsing) the http request - Rails doesn't implement http; it receives a ready request from the app server, which does implement http.
In ruby land the job of each part is defined by the rack protocol https://rack.github.io/. Rails, as a rack application, simply waits for "something" (the web application server) to 'call' it (with an http request), and it returns to it the response.
So to sum up: the application server needs to handle threading or multi-processing to serve http requests to Rails (the app server is basically always listening on some socket for new requests, and provides concurrency either by forking processes, opening new threads, or both; it depends on the app server). The app server therefore also needs to understand http (be able to parse an http request) so it can serve that to Rails.
Rails, the web framework, only needs to handle an http request and return the response.
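A rough node.js analogy of the same division of labour, if that helps (Express standing in for the framework, Node's built-in http server for the part that accepts connections, parses HTTP and "calls" your code):
var express = require('express');
var app = express(); // the "framework" part: maps a parsed request to a response

app.get('/', function (req, res) {
  res.send('hello'); // business logic only; no sockets or parsing here
});

// the server part accepts connections, parses HTTP, and invokes the app
require('http').createServer(app).listen(3000);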
For those who want to understand the difference between a web server and an app server, refer to What is the difference between application server and web server?

Keep a node application running on Azure App Service

I have deployed a node.js web application on App Service in Azure. The issue is that my application occasionally gets killed for an unknown reason. I have done an exhaustive search through all the log files using Kudu.
If I restart app service, application starts working.
Is there any way I can restart my node application once it has crashed, so that it keeps running forever no matter what? For example, if an error happens in ASP.NET code deployed in IIS, IIS never crashes; it keeps serving other incoming requests.
Something like using forever/pm2 in Azure App Service.
node.js in Azure App Services is powered by IISNode, which takes care of everything you described, including monitoring your process for failures and restarting it.
Consider the following POC:
var http = require('http');

http.createServer(function (req, res) {
  if (req.url == '/bad') {
    // simulate an unhandled exception that crashes the process
    throw 'bad';
  }
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('bye');
}).listen(process.env.PORT || 1337);
If I host this in a Web App and issue the following sequence of requests:
GET /
GET /bad
GET /
Then the first will yield HTTP 200, the second will throw on the server and yield HTTP 500, and the third will yield HTTP 200 without me having to do anything. IISNode will just detect the crash and restart the process.
So you shouldn't need PM2 or a similar solution, because this is built in with App Services. However, if you really want to, there is now App Services Preview on Linux, which is powered by PM2 and lets you configure PM2. More on this here. But again, you get this out of the box already.
Another thing to consider is the Always On setting, which is off by default:
By default, web apps are unloaded if they are idle for some period of time. This lets the system conserve resources. In Basic or Standard mode, you can enable Always On to keep the app loaded all the time. If your app runs continuous web jobs, you should enable Always On, or the web jobs may not run reliably.
This is another possible root cause for your issue, and the solution is to enable Always On for your Web App (see the link above).
I really want to thank itaysk for the support on this issue.
The issue was not what I suspected; the node server was actually getting restarted correctly on failure.
My website was becoming unresponsive for a different reason. Here is what was happening:
We used rethinkdbdash to connect to the RethinkDB database with its connection pool, and there was a coding/design issue. We have around 15 changefeeds implemented along with socket.io, and a changefeed was getting initialised for every logged-in user. This kept increasing the number of active connections in the pool. rethinkdbdash has a default limit of 1000 connections in the pool, and with so many live changefeeds all the available connections were eventually exhausted. New requests then waited for an open connection that never became available, so they hung forever and blocked any further requests from being served.
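For anyone hitting the same pattern, a rough sketch of the kind of fix involved, assuming rethinkdbdash and socket.io (the table and event names are made up): open one changefeed per socket and close its cursor on disconnect so the pooled connection is released.
var r = require('rethinkdbdash')(); // connection pool, default max of 1000
var io = require('socket.io')(require('http').createServer().listen(3000));

io.on('connection', function (socket) {
  var feed; // one changefeed cursor per connected user

  r.table('items').changes().run().then(function (cursor) {
    feed = cursor;
    cursor.each(function (err, change) {
      if (!err) socket.emit('item-changed', change);
    });
  });

  socket.on('disconnect', function () {
    // without closing, every login leaks a pooled connection to its changefeed
    if (feed) feed.close();
  });
});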

Stopping a node server that is listening on a specific port

Is it possible to stop an express server that's listening on a specific port on the same machine, from a different script (not the same script)?
Let's say I start the server in one terminal window, either by directly calling the node executable and passing it the script, or via a Grunt/Gulp task.
Now, is it possible to kill the same server by running another Grunt/Gulp task in a different terminal window?
One option is to have a path that causes the server to exit. This isn't too secure (you can lock it down, but the mere existence of such a path is a liability). I'd never recommend such a thing for anything facing the real world, but it can suffice for local grunt tasks or development.
app.get('/thisShouldBeLongAndComplicated', function (req, res) { process.exit(); });
Then send a request in your other task (using request)
var request = require('request');
request.get('http://localhost:3000/thisShouldBeLongAndComplicated');
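To wire that into a separate Gulp task, something like this should work (the task name and port are placeholders; since the server exits before replying, the error from the dropped request is simply ignored):
var gulp = require('gulp');
var request = require('request');

gulp.task('stop-server', function (done) {
  request.get('http://localhost:3000/thisShouldBeLongAndComplicated', function () {
    done(); // the server usually dies before answering, so ignore any error
  });
});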
