Node JS message queue on Heroku - node.js

I need to move my Node JS server running on Heroku to a message queue architecture. Currently, the server receives an HTTP request, does some processing, and responds. The problem is that the processing takes some time, especially when there are lots of requests. This lengthy processing time causes the server to time out, overload, and crash! My reading tells me I need a background worker to do the processing.
I have zero experience with message queues and background workers and I'm looking for a very simple example to get started. Can anyone suggest a simple, understandable module or example to get started?
I found some examples but they are complex and I'm getting lost! I want a barebones example I can build from.

Let's see how to do this with RabbitMQ.
First, you will need a RabbitMQ server to work with in your development environment.
If you don't already have it (check "sudo service rabbitmq-server status") you can install (on ubuntu or similar) as follows:
sudo su -c "echo 'deb http://www.rabbitmq.com/debian/ testing main' >> /etc/apt/sources.list"
wget http://www.rabbitmq.com/rabbitmq-signing-key-public.asc
sudo apt-key add rabbitmq-signing-key-public.asc
sudo apt-get update
sudo apt-get install rabbitmq-server
rm rabbitmq-signing-key-public.asc
Then, get the server running with:
sudo service rabbitmq-server start
You also need to configure a RabbitMQ service for your Heroku deployment. Let's use CloudAMQP for this example. You can add its Free Plan to your Heroku app with:
heroku addons:create cloudamqp:lemur
That will create a new CLOUDAMQP_URL environment variable in your Heroku app.
Next, you're going to need a suitable RabbitMQ client for your node.js app.
There are a few of them out there, but for this example, let's use amqplib:
npm install amqplib --save
That should add something like the following line in your package.json dependencies:
"amqplib": "^0.4.1",
Next thing is to add a background "worker" dyno to your Heroku app.
I assume that currently you only have a single Web dyno in your Procfile.
So, you need to add another line for instantiating a worker, such as:
worker: node myworker.js
Finally, you need to write the code that will enable your Web dyno to interact with your worker dyno via RabbitMQ.
For the sake of this example, I will assume that your Web dyno will be "publishing" messages to a RabbitMQ message queue, and your worker dyno will be "consuming" these messages.
So, let's start with writing code for publishing to a message queue. This code needs to run somewhere in your Web dyno:
// Point amqp_url at CLOUDAMQP_URL on Heroku, or the local RabbitMQ server in dev environment
var amqp_url = process.env.CLOUDAMQP_URL || "amqp://localhost";
var amqp = require('amqplib');
var publisherChnl;

function createPublisherChannel() {
  // Create an AMQP "connection"
  return amqp.connect(amqp_url)
    .then(function(conn) {
      // You need to create at least one AMQP "channel" on your connection
      return conn.createChannel();
    })
    .then(function(ch) {
      publisherChnl = ch;
      // Now create a queue for the actual messages to be sent to the worker dyno
      return publisherChnl.assertQueue('my-worker-q');
    });
}

function publishMsg() {
  // Send the worker a message (Buffer.from replaces the deprecated new Buffer)
  publisherChnl.sendToQueue('my-worker-q', Buffer.from('Hello world from Web dyno'));
}
You will need to call createPublisherChannel() during the initialisation of your Web dyno. Then, call publishMsg() whenever you want to send a message to the queue.
Finally, let's write the code for consuming the above message in the worker dyno. So, for example, add something like the following in myworker.js:
// Just like in the Web dyno...
var amqp_url = process.env.CLOUDAMQP_URL || "amqp://localhost";
var open_amqp = require('amqplib').connect(amqp_url);
var consumerChnl;

// Creates an AMQP channel for consuming messages on 'my-worker-q'
function createConsumerChannel() {
  return open_amqp
    .then(function(conn) {
      return conn.createChannel();
    })
    .then(function(ch) {
      consumerChnl = ch;
      return ch.assertQueue('my-worker-q');
    });
}

function startConsuming() {
  consumerChnl.consume('my-worker-q', function(msg) {
    if (msg !== null) {
      console.log(msg.content.toString());
      // Tell the RabbitMQ server we have consumed the message
      consumerChnl.ack(msg);
    }
  });
}

createConsumerChannel().then(startConsuming);
Finally, test with "heroku local". You should see that you now have two processes running in your app, "web" and "worker". Whenever you call publishMsg() in your Web dyno, you should see the worker dyno print the message contents to your console. To see what's happening in your RabbitMQ queues, you can use:
sudo rabbitmqctl list_queues
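If the moving parts above feel like a lot, it can help to see the publish/consume pattern in isolation first. The following broker-free sketch mimics the same flow with a plain in-process array; every name in it is illustrative, and in production RabbitMQ replaces the array entirely:

```javascript
// Broker-free sketch: the "web" side pushes work onto a queue and returns
// immediately; the "worker" side drains it later. Purely illustrative.
var queue = [];
var results = [];

function publishMsg(payload) {
  // Analogous to publisherChnl.sendToQueue('my-worker-q', ...)
  queue.push(payload);
}

function consumeAll() {
  // Analogous to the worker's consume callback draining 'my-worker-q'
  while (queue.length > 0) {
    var msg = queue.shift();
    results.push('processed: ' + msg);
  }
}

publishMsg('Hello world from Web dyno');
consumeAll();
console.log(results[0]); // "processed: Hello world from Web dyno"
```

The key property to notice is that publishMsg() returns instantly, which is exactly what stops the slow processing from blocking (and timing out) the HTTP response.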

I found a very simple example (followed by deeper examples) here: https://www.rabbitmq.com/tutorials/tutorial-one-javascript.html

Related

Redis Error "max number of clients reached"

I am running a Node.js application using the forever npm module.
The application also connects to a Redis DB for cache checks. Quite often the API stops working with the following error in the forever log:
{ ReplyError: Ready check failed: ERR max number of clients reached
at parseError (/home/myapp/core/node_modules/redis/node_modules/redis-parser/lib/parser.js:193:12)
at parseType (/home/myapp/core/node_modules/redis/node_modules/redis-parser/lib/parser.js:303:14)
at JavascriptRedisParser.execute (/home/myapp/ecore/node_modules/redis/node_modules/redis-parser/lib/parser.js:563:20) command: 'INFO', code: 'ERR' }
When I execute the client list command on the Redis server, it shows too many open connections. I have also set timeout = 3600 in my Redis configuration.
I do not have any unclosed Redis connection object on my application code.
This happens once or twice in a week depending on the application load, as a stop gap solution I am restarting the node server( it works ).
What could be the permanent solution in this case?
I have figured out why. This has nothing to do with Redis. Increasing the OS file descriptor limit was just a temporary solution. I was using Redis in a web application and the connection was created for every new request.
When the server was restarted occasionally, all the held-up connections by the express server were released.
I solved this by creating a global connection object and re-using the same. The new connection is created only when necessary.
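In Node terms, that fix amounts to a lazily created, module-level singleton. Here is a minimal sketch of the pattern, where createClient is a stand-in for your real Redis client factory (e.g. require('redis').createClient(...)):

```javascript
// Lazy singleton: every caller shares one connection object instead of
// opening a new one per request. createClient is a placeholder factory.
var connection = null;

function createClient() {
  // Stand-in for a real Redis client; just an object we can inspect.
  return { id: Math.random(), connected: true };
}

function getConnection() {
  // Create a connection only when there is none, or the old one dropped.
  if (connection === null || !connection.connected) {
    connection = createClient();
  }
  return connection;
}
```

Every request handler then calls getConnection() instead of constructing its own client, so the server-side client count stays at one no matter how many requests arrive.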
You could do so by creating a global connection object, making the connection once, and checking that it is still connected each time before you use it. Check whether an already coded solution exists for your language and framework. In my case it was Perl with the Dancer framework, and I used a module called Dancer2::Plugin::Redis
redis_plugin
Returns a Dancer2::Plugin::Redis instance. You can use redis_plugin to
pass the plugin instance to 3rd party modules (backend api) so you can
access the existing Redis connection there. You will need to access
the actual methods of the plugin instance.
If you are not running a web server and are instead running a worker process or any background job, you could use this simple helper function to re-use the connection.
perl example
sub get_redis_connection {
  my $redis = Redis->new(server => "www.example.com:6372", debug => 0);
  $redis->auth('abcdefghijklmnop');
  return $redis;
}
...
## when required
unless ($redisclient->ping) {
  warn "creating new redis connection";
  $redisclient = get_redis_connection();
}
I was running into this issue in my chat app because I was creating a new Redis instance each time something connected rather than just creating it once.
// THE WRONG WAY
// THE WRONG WAY
export const getRedisPubSub = () => new RedisPubSub({
  subscriber: new Redis(REDIS_CONNECTION_CONFIG),
  publisher: new Redis(REDIS_CONNECTION_CONFIG),
});
and where I wanted to use the connection I was calling
// THE WRONG WAY
getRedisPubSub();
I fixed it by just creating the connection once when my app loaded.
export const redisPubSub = new RedisPubSub({
  subscriber: new Redis(REDIS_CONNECTION_CONFIG),
  publisher: new Redis(REDIS_CONNECTION_CONFIG),
});
and then I passed the one-time initialized redisPubSub object to my createServer function.
It was this article here that helped me see my error: https://docs.upstash.com/troubleshooting/max_concurrent_connections

Call nodejs kafka script inside Angular 8 application

How to call a Node.js Kafka consumer script from an Angular component?
I am able to run the script in a terminal using "node filename.js", but it throws an error when called from the Angular component.
An Angular component runs on the client; the Kafka script runs on the server. You'll have to create a proxy endpoint to call the script. Depending on which server-side Web API framework you're using, this would be done in different ways.
To help you understand this further, if you were using express.js, you'd need to have an endpoint to trigger the call (it should be a POST endpoint):
// POST method route
app.post('/triggerKafka', function (req, res) {
  const exec = require('child_process').exec;
  const testscript = exec('bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic yourTopic --from-beginning');
  testscript.stdout.on('data', function (data) {
    console.log(data);
    // sendBackInfo();
  });
  testscript.stderr.on('data', function (data) {
    console.log(data);
    // triggerErrorStuff();
  });
  // Respond immediately; the consumer keeps running in the background
  res.sendStatus(202);
});
Now, I should say that this is probably not the way you want to do this. For one thing, you don't want to give people outside your control the ability to start and stop Kafka listening.
You should try to set up your Kafka consumer on the startup of your web application (server side); and provide a mechanism for connecting to it in a similar fashion as above -- proxying the calls through an API endpoint that you set up.
To start Kafka when your web application starts; you need to put the correct information in your node.js web application server startup. In your package.json if you have a "scripts" section you can add the following:
"scripts" : {
"start" : "<whateveryouhavenow>.sh && npm filename.js"
}
The important part here is the && npmfile.js; that says to run your command as well as the command that preceded it. There are more options here, but I provided just one.
Depending on what server side framework you're running and what task-running tool you're using, it would be different (whether it's npm, gulp, grunt, or something else).

How should a Node.js microservice survive a Rabbitmq restart?

I've been working on an example of using Rabbitmq for communication between Node.js microservices and I'm trying to understand the best way for these microservices to survive a restart of the Rabbitmq server.
Full example is available on Github: https://github.com/ashleydavis/rabbit-messaging-example
You can start the system up by changing to the broadcast sub-directory and using docker-compose up --build.
With that running, I open another terminal and issue the following command to terminate the Rabbit server: docker-compose kill rabbit.
This causes a Node.js unhandled exception to kill my sender and receiver microservices that were connected to the Rabbitmq server.
Now I'd like to be able to restart the Rabbitmq server (using docker-compose up rabbit) and have the original microservices come back online.
This is intended to run under Docker-Compose for development and Kubernetes for production. I could just set this up so that the microservices restart when they are terminated by the disconnection from Rabbitmq, but I'd prefer it if the microservices could stay online (they might be doing other work that shouldn't be interrupted) and then reconnect to Rabbitmq automatically when it becomes available again.
Does anyone know how to achieve automatic reconnection to Rabbitmq using the amqp library?
Just picking the sender service as an example on how to deal with it.
The error that is causing Node to exit is that there is no 'error' handler on the stream the writer uses.
If you modify this part of the code:
https://github.com/ashleydavis/rabbit-messaging-example/blob/master/broadcast/sender/src/index.js#L13
Change the line in sender/src/index.js from
const messagingConnection = await retry(() => amqp.connect(messagingHost), 10, 5000);
to
const messagingConnection = await retry(() => amqp.connect(messagingHost), 10, 5000)
  .then(x => {
    return x.on('error', (err) => {
      console.log('connect stream on error', err);
    });
  });
Just having the error handler means that the Node process no longer exits with an unhandled exception. This does not make the sender code correct; it now needs to be modified to track whether it has a connection, only send data when it does, and retry connecting when it does not.
A similar fix for the receiver can be applied
This is a useful reference for when node requires setup to not exit.
https://medium.com/dailyjs/how-to-prevent-your-node-js-process-from-crashing-5d40247b8ab2
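The retry-to-connect behaviour described above can be sketched as a small promise-based helper. In this sketch, flakyConnect is a stub standing in for amqp.connect so the logic can be exercised without a broker:

```javascript
// Retry helper: attempts connect() up to maxTries times, waiting delayMs
// between failures, and rejects with the last error once retries run out.
function retry(connect, maxTries, delayMs) {
  return connect().catch(function (err) {
    if (maxTries <= 1) throw err;
    return new Promise(function (resolve) {
      setTimeout(resolve, delayMs);
    }).then(function () {
      return retry(connect, maxTries - 1, delayMs);
    });
  });
}

// Stub standing in for amqp.connect: fails twice, then "connects".
var attempts = 0;
function flakyConnect() {
  attempts++;
  return attempts < 3
    ? Promise.reject(new Error('broker down'))
    : Promise.resolve({ connected: true });
}

retry(flakyConnect, 10, 5).then(function (conn) {
  console.log('connected after', attempts, 'attempts');
});
```

The same helper can wrap the reconnect attempt inside the 'error' handler, so a dropped RabbitMQ connection triggers a bounded series of reconnects instead of an unhandled exception.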

Service Fabric node.js guest application express.js server EADDRINUSE

Not sure if this is a Service Fabric issue or a Node.js issue.
Basically, this is my problem. I deploy the Node.js application and it works fine. I redeploy the application and it fails to work, with the server returning EADDRINUSE. When I run netstat -an, the port isn't in use. It's as if node is still running somewhere, somehow, but not appearing in tasklist etc.
Anyone got any ideas?
Not entirely sure, but I believe this is because the server I was using (express.js), or rather node, was not shutting down and closing existing connections, causing Windows to think the ports are still in use. At least, that's how it seems.
I cannot find it "officially" documented, but from the quote below it appears SF sends a Ctrl-C (SIGINT) to the application to attempt to end it before killing it.
The following code appears to fix my issue:
var app = express();
var server = app.listen(17500);
if (process.platform === "win32") {
var rl = require("readline").createInterface({
input: process.stdin,
output: process.stdout
});
rl.on("SIGINT", function () {
process.emit("SIGINT");
}
}
process.on("SIGINT", function() {
server.close(function () {
process.exit(0);
});
});
For Linux nodes, I suppose you'd want to listen for "SIGTERM" as well.
I would like to know if there's any remediation for this, though: in the previously mentioned scenario the VMSS was completely unusable; I could not deploy or run a node web server. How does one restart the cluster without destroying and recreating it? I now realise you can't just restart VMSS instances at will, because Service Fabric breaks, apparently irrevocably, if you do.
Rajeet Nair [RajeetN#MSFT]
Service Fabric also sends a Ctrl-C to service processes and waits for service to terminate. If the service doesn't terminate for 3 minutes, the process is killed.

Deploy only worker dyno to heroku (for firebase-queue)

I want to deploy a NodeJS server on a worker-only dyno on heroku. I've tried several approaches but I always get the error:
Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
My server does not need to serve files or an API. What is the correct way to deploy to Heroku? Specifically, what is the correct way to deploy only a firebase-queue implementation to Heroku?
My server is dedicated to processing work from a queue. It monitors a Firebase location and reacts to changes. Specifically, it's a firebase-queue implementation, almost an exact copy of my-queue-worker.js as given in the guide:
var Queue = require('firebase-queue');
var firebase = require('firebase');
firebase.initializeApp({
  serviceAccount: { projectId: 'xx', clientEmail: 'yy', privateKey: 'zz' },
  databaseURL: '<your-database-url>'
});
var ref = firebase.database().ref('queue');
var queue = new Queue(ref, function(data, progress, resolve, reject) {
  // Read and process task data
  console.log(data);
  // Do some work
  progress(50);
  // Finish the task asynchronously
  setTimeout(function() {
    resolve();
  }, 1000);
});
The first important part, as stated by Yoni, is to tell Heroku that you only need a background worker and not a web worker:
worker: node <path_to_your_worker>
The second important part: Heroku launches a web dyno by default, and the application crashes if it does not bind to the port on which web traffic is expected. To disable the web dyno and prevent the crash, run the following commands from the command line in your project directory:
$ heroku ps:scale web=0 worker=1
$ heroku ps:restart
This should fix the problem!
It looks like your Procfile contains a "web" process type.
Your Procfile should look something like this:
worker: node <path_to_your_worker>
