How to inform Windows that the service has started? - node.js

I am running a Node.js HTTP server packaged as an exe binary with the pkg npm module. I need to run it as a Windows service. It works perfectly when started normally, but here is what happens when I run it as a Windows service:
I start the service.
Windows attempts to start the service.
During this time, the HTTP server is accessible and works perfectly.
After 30 seconds, Windows times out with the error: "The service did not respond to the start or control request in a timely fashion."
It seems that I somehow have to inform Windows that my service has started and should simply keep running.
How can I do that?

A regular application can't work as a Windows service. As the reference states, the implementation must satisfy the interface requirements of the service control manager (SCM) that a service program has to include:
Service Entry Point
Service ServiceMain Function
Service Control Handler Function
There is the os-service package, which allows you to install a service that starts a Node.js script. The current script is considered the entry point by default:
const osService = require('os-service');

const [action] = process.argv.slice(2);

function errorHandler(err) {
  if (!err) return;
  console.error(err);
  process.exit(1);
}

if (action === '--install') {
  osService.add('Foo', errorHandler);
} else if (action === '--uninstall') {
  osService.remove('Foo', errorHandler);
} else {
  // report the service as running to the SCM
  osService.run('Foo', () => {
    osService.stop();
  });
  // app entry point
}

Related

Nodejs on azure : server not starting before the first request

I have a weird problem: when the server starts listening, I run the following function:
app.listen(process.env.PORT || 3000, async function () {
  await db.init().catch(err => {
    console.error(err);
    console.error("Shutting down because there was an error setting up the database.");
    process.exit(1);
  });
  await recordsDb.init().catch(err => {
    console.error(err);
    console.error("Shutting down because there was an error setting up the records database.");
    process.exit(1);
  });
  // this db object will be used by controllers..
  app.set('db', db);
});
The problem is that the function doesn't run before the first request (and so the first request always fails). Am I missing something?
Thank you!
You will need to first scale up your Web App Service plan, then you can enable Always On. By default, apps are unloaded if they are idle for some period of time. This lets the system conserve resources. In Basic or Standard mode, you can enable Always On to keep the app loaded all the time. If your app runs continuous WebJobs or runs WebJobs triggered using a CRON expression, you should enable Always On, or the web jobs may not run reliably. Free Azure web apps do not support Always On.
https://learn.microsoft.com/en-us/azure/app-service/web-sites-configure

Node JS message queue on Heroku

I need to move my Node JS server running on Heroku to a message queue architecture. Currently, the server receives an HTTP request, does some processing, and responds. The problem is that the processing takes some time, especially when there are lots of requests. This lengthy processing time causes the server to time out, overload, and crash! My reading tells me I need a background worker to do the processing.
I have zero experience with message queues and background workers and I'm looking for a very simple example to get started. Can anyone suggest a simple, understandable module or example to get started?
I found some examples but they are complex and I'm getting lost! I want a barebones example I can build from.
Let's see how to do this with RabbitMQ.
First, you will need a RabbitMQ server to work with in your development environment.
If you don't already have it (check "sudo service rabbitmq-server status") you can install (on ubuntu or similar) as follows:
sudo su -c "echo 'deb http://www.rabbitmq.com/debian/ testing main' >> /etc/apt/sources.list"
wget http://www.rabbitmq.com/rabbitmq-signing-key-public.asc
sudo apt-key add rabbitmq-signing-key-public.asc
sudo apt-get update
sudo apt-get install rabbitmq-server
rm rabbitmq-signing-key-public.asc
Then, get the server running with:
sudo service rabbitmq-server start
You also need to configure a RabbitMQ service for your Heroku deployment. Let's use CloudAMQP for this example. You can add its Free Plan to your Heroku app with:
heroku addons:create cloudamqp:lemur
That will create a new CLOUDAMQP_URL environment variable in your Heroku app.
Next, you're going to need a suitable RabbitMQ client for your node.js app.
There are a few of them out there, but for this example, let's use amqplib:
npm install amqplib --save
That should add something like the following line in your package.json dependencies:
"amqplib": "^0.4.1",
Next thing is to add a background "worker" dyno to your Heroku app.
I assume that currently you only have a single Web dyno in your Procfile.
So, you need to add another line for instantiating a worker, such as:
worker: node myworker.js
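Assuming your web process entry point is index.js (a hypothetical name; use your actual server file), the complete Procfile would then look like:

```
web: node index.js
worker: node myworker.js
```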
Finally, you need to write the code that will enable your Web dyno to interact with your worker dyno via RabbitMQ.
For the sake of this example, I will assume that your Web dyno will be "publishing" messages to a RabbitMQ message queue, and your worker dyno will be "consuming" these messages.
So, let's start with writing code for publishing to a message queue. This code needs to run somewhere in your Web dyno:
// Define amqp_url to point to CLOUDAMQP_URL on Heroku, or a local RabbitMQ server in dev
var amqp_url = process.env.CLOUDAMQP_URL || "amqp://localhost";
var amqp = require('amqplib');
var publisherChnl;

function createPublisherChannel() {
  // Create an AMQP "connection"
  return amqp.connect(amqp_url)
    .then(function(conn) {
      // You need at least one AMQP "channel" on your connection
      return conn.createChannel();
    })
    .then(function(ch) {
      publisherChnl = ch;
      // Now create the queue the worker dyno will consume from
      return publisherChnl.assertQueue('my-worker-q');
    });
}

function publishMsg() {
  // Send the worker a message
  publisherChnl.sendToQueue('my-worker-q', Buffer.from('Hello world from Web dyno'));
}
You will need to call createPublisherChannel() during the initialisation of your Web dyno. Then, call publishMsg() whenever you want to send a message to the queue.
Finally, let's write the code for consuming the above message in the worker dyno. So, for example, add something like the following in myworker.js:
// Just like in the Web dyno...
var amqp_url = process.env.CLOUDAMQP_URL || "amqp://localhost";
var open_amqp = require('amqplib').connect(amqp_url);
var consumerChnl;

// Creates an AMQP channel for consuming messages on 'my-worker-q'
function createConsumerChannel() {
  return open_amqp
    .then(function(conn) {
      return conn.createChannel();
    })
    .then(function(ch) {
      return ch.assertQueue('my-worker-q').then(function() {
        consumerChnl = ch;
      });
    });
}

function startConsuming() {
  consumerChnl.consume('my-worker-q', function(msg) {
    if (msg !== null) {
      console.log(msg.content.toString());
      // Tell the RabbitMQ server we have consumed the message
      consumerChnl.ack(msg);
    }
  });
}

createConsumerChannel().then(startConsuming);
Finally, test with "heroku local". You should see that you now have 2 processes running in your app, "web" and "worker". Whenever you call publishMsg() in your web dyno, you should see the worker dyno print the message contents to your console. To see what's happening in your RabbitMQ queues, you can use:
sudo rabbitmqctl list_queues
I found a very simple example (followed by deeper examples) here: https://www.rabbitmq.com/tutorials/tutorial-one-javascript.html

Ensuring that only a single instance of a nodejs application is running

Is there an elegant way to ensure that only one instance of a nodejs app is running?
I tried to use the pidlock npm package; however, it seems that it works only on *nix systems.
Is it possible by using mutex?
Thanks
I've just found the single-instance library, which is intended to work on all platforms. I can confirm that it works well on Windows.
You can install it with npm i single-instance, and you need to wrap your application code like this:
const SingleInstance = require('single-instance');
const locker = new SingleInstance('my-app-name');

locker.lock().then(() => {
  // Your application code goes here
}).catch(err => {
  // This block will be executed if the app is already running
  console.log(err); // it will print out 'An application is already running'
});
If I understand its source code correctly, it implements the lock using a socket: if it can connect to a socket, then the application is already running. If it can't connect, then it creates the socket.

Scheduled task only runs as expected if I run it once - never on its own (Azure mobile services)

I am running a simple script in azure mobile services scheduler:
function warmup() {
  warmUpSite("http://safenoteit.ca/");
}

function warmUpSite(url) {
  console.info("warming up: " + url);
  var req = require('request');
  req.get({ url: url }, function(error, response, body) {
    if (!error) {
      console.info("hot hot hot! " + url);
    } else {
      console.error('error warming up ' + url + ': ' + error);
    }
  });
}
This runs as expected when I run it manually (the "Run once" button). However, despite scheduling it to run every 15 minutes, I don't see any console log messages coming from the script. Additionally, the portal tells me that the scheduler is enabled and running.
Anyone else see this issue? The mobile service is running on basic tier and I have very little load on it. I don't see what could cause this issue, which makes the whole scheduler service useless.
UPDATE: Tried the same scheduled script on another mobile service, and everything works! Something's messed up with the mobile service itself. Talking to Microsoft support to resolve this.
It was an issue only Microsoft could fix. They had to redeploy the mobile service.

Azure Worker Role restarted after receiving "changing" + "changed" event on Node.js

I am running a simple Node.js app in Azure Worker Role, with azure-sdk-for-node package.
var azure = require('azure'),
    http = require('http'),
    winston = require('winston'),
    logger = new (winston.Logger)({
      transports: [new (winston.transports.File)({ filename: 'C:\\log.txt' })]
    });

http.createServer(function (req, res) {
  res.writeHead(200);
  res.end('Hello, World!');
}).listen(process.env.port || 1337);

azure.RoleEnvironment.on('changing', function (changes) {
  logger.info('changing', changes);
  // Got configuration changes here:
  // {
  //   "changes": [
  //     {
  //       "type": "ConfigurationSettingChange",
  //       "name": "MyApp.Settings1"
  //     },
  //     {
  //       "type": "ConfigurationSettingChange",
  //       "name": "Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration"
  //     }
  //   ]
  // }
});

azure.RoleEnvironment.on('changed', function () {
  // Also got this event
  logger.info('changed');
});

azure.RoleEnvironment.on('stopping', function () {
  // Never fired
  logger.info('stopping');
});
The app runs fine on the Worker Role without issues, until I modify the configuration through the Management Portal.
I updated the configuration through the Management Portal and clicked Save. Shortly after that, I got both changing and changed events in the app. But 6 minutes after receiving those events, the whole Worker Role was rebooted without any stopping event. I used the winston package to log to C:\ and the log persisted through the reboot.
The log shows something like this:
00:00 setup_worker.cmd
00:01 server.js with PID 1
00:06 "role changing"
00:06 "role changed"
00:12 setup_worker.cmd
00:13 server.js with PID 2
(Note: setup_worker.cmd is the startup script in CSDEF, server.js is my app)
Although there are no stopping events after a configuration change, I do get the stopping event if I manually reboot the instance through the Management Portal.
So there are a few questions:
Why is the role rebooted after a configuration change?
How can I prevent the role from being rebooted after a configuration change?
Why are there no stopping events when the role is rebooted by a configuration change?
Thanks!
Azure assumes that you want your servers rebooted after the configuration changes, so that the new settings can take effect properly. It does not know whether you keep reading your configuration settings at runtime or only at startup. It also assumes that you have 2+ servers deployed in your role, so rebooting them one at a time will not harm your website.
I'm not familiar with node.js; however, in .NET we can watch for the RoleEnvironment.Changing event, trap it, and ignore the reboot. Check this article out: http://msdn.microsoft.com/en-us/library/microsoft.windowsazure.serviceruntime.roleenvironment.changing.aspx Can you do something similar with your function delegate after you trap the changing event?
I believe stopping events only apply when you shut the role down or stop it: http://msdn.microsoft.com/en-us/library/microsoft.windowsazure.serviceruntime.roleenvironment.stopping.aspx
