Deploy only worker dyno to heroku (for firebase-queue) - node.js

I want to deploy a NodeJS server on a worker-only dyno on heroku. I've tried several approaches but I always get the error:
Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
My server does not need to serve files or an API. What is the correct way to deploy to Heroku? Specifically, what is the correct way to deploy only a firebase-queue implementation to Heroku?
My server is dedicated to processing work from a queue. It monitors a Firebase location and reacts to changes. Specifically, it's a firebase-queue implementation, almost an exact copy of my-queue-worker.js as given in the guide:
var Queue = require('firebase-queue');
var firebase = require('firebase');

firebase.initializeApp({
  serviceAccount: { projectId: 'xx', clientEmail: 'yy', privateKey: 'zz' },
  databaseURL: '<your-database-url>'
});

var ref = firebase.database().ref('queue');
var queue = new Queue(ref, function(data, progress, resolve, reject) {
  // Read and process task data
  console.log(data);
  // Do some work
  progress(50);
  // Finish the task asynchronously
  setTimeout(function() {
    resolve();
  }, 1000);
});

The first important part, as stated by Yoni, is to tell Heroku in your Procfile that you only need a background worker and not a web process:
worker: node <path_to_your_worker>
The second important part: Heroku launches a web dyno by default, and the application crashes if it does not bind to the port on which web traffic is received. To disable the web dyno and prevent the crash, run the following commands from the command line in your project directory:
$ heroku ps:scale web=0 worker=1
$ heroku ps:restart
This should fix the problem!

It looks like your Procfile contains a "web" process type.
Your Procfile should look something like this:
worker: node <path_to_your_worker>

Related

Nodejs on azure : server not starting before the first request

I have a weird problem: when the server starts listening, I run the following function:
app.listen(process.env.PORT || 3000, async function () {
  await db.init().catch(err => {
    console.error(err);
    console.error("Shutting down because there was an error setting up the database.");
    process.exit(1);
  });
  await recordsDb.init().catch(err => {
    console.error(err);
    console.error("Shutting down because there was an error setting up the records database.");
    process.exit(1);
  });
  // this db object will be used by controllers..
  app.set('db', db);
});
The problem is that the function doesn't run before the first request (and so the first request always fails). Am I missing something?
Thank you!
You will need to first scale up your Web App Service plan, then you can enable Always On. By default, apps are unloaded if they are idle for some period of time. This lets the system conserve resources. In Basic or Standard mode, you can enable Always On to keep the app loaded all the time. If your app runs continuous WebJobs or runs WebJobs triggered using a CRON expression, you should enable Always On, or the web jobs may not run reliably. Free Azure web apps do not support Always On.
https://learn.microsoft.com/en-us/azure/app-service/web-sites-configure

heroku node.js scheduler is not working

I am trying to schedule a task to run every 10 minutes in node.js on Heroku. I have created a file worker.js in the program's main directory. In worker.js I just call the controller function that I want to schedule, like this:
const reports = require('./app/controllers/reports');
reports.sendEmail();
The sendEmail function sends the emails. In the Heroku Scheduler I have added worker.js, but my scheduler is not working. What am I missing in my configuration?
Edit your Heroku Scheduler dashboard, and type node worker.js for the command to be executed.

Service Fabric node.js guest application express.js server EADDRINUSE

Not sure if this is a service fabric issue, or issue with node.js.
Basically this is my problem. I deploy the node.js application, and it works fine. When I redeploy the node application, it fails to work, with the server returning EADDRINUSE. When I run netstat -an, the port isn't in use. It's as if node is still running somewhere, somehow, but not appearing in tasklist etc.
Anyone got any ideas?
Not entirely sure, but I believe this is because the server I was using (express.js), or rather node, was not shutting down and closing existing connections, causing Windows to think the ports are still in use. At least, that's how it seems.
I cannot find it "officially" documented, but from the answer quoted below it reads as though Service Fabric sends a Ctrl-C (SIGINT) to the application to attempt to end it before killing it.
The following code appears to fix my issue:
var app = express();
var server = app.listen(17500);

if (process.platform === "win32") {
  var rl = require("readline").createInterface({
    input: process.stdin,
    output: process.stdout
  });
  rl.on("SIGINT", function () {
    process.emit("SIGINT");
  });
}

process.on("SIGINT", function () {
  server.close(function () {
    process.exit(0);
  });
});
For Linux nodes, I suppose you'd want to listen for "SIGTERM" as well.
I would like to know if there's any sort of remediation for this, though. In the previously mentioned scenario the VMSS was completely unusable: I could not deploy, nor run, a node web server. How does one restart the cluster without destroying and recreating it? I now realise you can't just restart VMSS instances willy-nilly, because Service Fabric breaks completely if you do, apparently irrevocably.
Rajeet Nair [RajeetN#MSFT]
Service Fabric also sends a Ctrl-C to service processes and waits for service to terminate. If the service doesn't terminate for 3 minutes, the process is killed.

Node JS message queue on Heroku

I need to move my Node JS server running on Heroku to a message queue architecture. Currently, the server receives an HTTP request, does some processing, and responds. The problem is that the processing takes some time, especially when there are lots of requests. This lengthy processing time causes the server to time out, overload, and crash! My reading tells me I need a background worker to do the processing.
I have zero experience with message queues and background workers and I'm looking for a very simple example to get started. Can anyone suggest a simple, understandable module or example to get started?
I found some examples but they are complex and I'm getting lost! I want a barebones example I can build from.
Let's see how to do this with RabbitMQ.
First, you will need a RabbitMQ server to work with in your development environment.
If you don't already have it (check "sudo service rabbitmq-server status") you can install (on ubuntu or similar) as follows:
sudo su -c "echo 'deb http://www.rabbitmq.com/debian/ testing main' >> /etc/apt/sources.list"
wget http://www.rabbitmq.com/rabbitmq-signing-key-public.asc
sudo apt-key add rabbitmq-signing-key-public.asc
sudo apt-get update
sudo apt-get install rabbitmq-server
rm rabbitmq-signing-key-public.asc
Then, get the server running with:
sudo service rabbitmq-server start
You also need to configure a RabbitMQ service for your Heroku deployment. Let's use CloudAMQP for this example. You can add its Free Plan to your Heroku app with:
heroku addons:create cloudamqp:lemur
That will create a new CLOUDAMQP_URL environment variable in your Heroku app.
Next, you're going to need a suitable RabbitMQ client for your node.js app.
There are a few of them out there, but for this example, let's use amqplib:
npm install amqplib --save
That should add something like the following line in your package.json dependencies:
"amqplib": "^0.4.1",
Next thing is to add a background "worker" dyno to your Heroku app.
I assume that currently you only have a single Web dyno in your Procfile.
So, you need to add another line for instantiating a worker, such as:
worker: node myworker.js
Finally, you need to write the code that will enable your Web dyno to interact with your worker dyno via RabbitMQ.
For the sake of this example, I will assume that your Web dyno will be "publishing" messages to a RabbitMQ message queue, and your worker dyno will be "consuming" these messages.
So, let's start with writing code for publishing to a message queue. This code needs to run somewhere in your Web dyno:
// Define amqp_url to point to CLOUDAMQP_URL on Heroku, or the local RabbitMQ server in your dev environment
var amqp_url = process.env.CLOUDAMQP_URL || "amqp://localhost";
var amqp_open = require('amqplib');
var publisherChnl;

function createPublisherChannel() {
  // Create an AMQP "connection", and return the promise so callers can wait on it
  return amqp_open.connect(amqp_url)
    .then(function(conn) {
      // You need to create at least one AMQP "channel" on your connection
      return conn.createChannel();
    })
    .then(function(ch) {
      publisherChnl = ch;
      // Now create a queue for the actual messages to be sent to the worker dyno
      return publisherChnl.assertQueue('my-worker-q');
    });
}

function publishMsg() {
  // Send the worker a message
  publisherChnl.sendToQueue('my-worker-q', Buffer.from('Hello world from Web dyno'));
}
You will need to call createPublisherChannel() during the initialisation of your Web dyno. Then, call publishMsg() whenever you want to send a message to the queue.
Finally, let's write the code for consuming the above message in the worker dyno. So, for example, add something like the following in myworker.js:
// Just like in the Web dyno...
var amqp_url = process.env.CLOUDAMQP_URL || "amqp://localhost";
var open_amqp = require('amqplib').connect(amqp_url);
var consumerChnl;

// Creates an AMQP channel for consuming messages on 'my-worker-q'
function createConsumerChannel() {
  return open_amqp
    .then(function(conn) {
      return conn.createChannel();
    })
    .then(function(ch) {
      return ch.assertQueue('my-worker-q').then(function() {
        consumerChnl = ch;
      });
    });
}

function startConsuming() {
  consumerChnl.consume('my-worker-q', function(msg) {
    if (msg !== null) {
      console.log(msg.content.toString());
      // Tell the RabbitMQ server we have consumed the message
      consumerChnl.ack(msg);
    }
  });
}

createConsumerChannel().then(startConsuming);
Finally, test with "heroku local". You should see that you now have two processes running in your app, "web" and "worker". Whenever you call publishMsg() in your Web dyno, you should see the worker dyno print the message contents to your console. To see what's happening in your RabbitMQ queues, you can use:
sudo rabbitmqctl list_queues
I found a very simple example (followed by deeper examples) here: https://www.rabbitmq.com/tutorials/tutorial-one-javascript.html

When cluster end, “State changed from up to crashed”. It should fork()

I have a master and slave app.
Locally on my computer, everything works: when a slave dies because of a process.exit(0), I fork and it goes on.
I catch it with,
cluster.on('exit', function(worker, code, signal) {
  if (code == 0) {
    cluster.fork();
  }
});
Sadly, on Heroku, as soon as the slave executes process.exit(0), I get "State changed from up to crashed".
Any idea how to bypass Heroku's safeguards?
Heroku dynos are a virtual application container built with required binaries from buildpacks and combined with a release of your application code. The only job of this app container is to run a process described by your Procfile.
If that process terminates unexpectedly, Heroku considers the dyno crashed and will attempt to restart the process. Also, if the app doesn't bind to the designated PORT within 60 seconds, the dyno will be crashed.
If your backend worker processes exit via process.exit(0) without ever binding to the designated port, the dyno never passes that check, and Heroku will consider it crashed once the boot timeout is hit.
My guess is that you are immediately exiting each child worker process, causing the master to fork N children, then re-fork each one, until the boot timeout is hit.
