Node.js connection with RabbitMQ

I have a Node.js application that uses amqplib to connect to RabbitMQ.
I am trying to reproduce a connectivity error with RabbitMQ, and I get two different errors when repeating the same flow.
What I am doing is:
1. Start a Docker container with RabbitMQ management.
2. Start a Node.js application (either in Docker or with npm) that connects to RabbitMQ.
3. Go into the RabbitMQ management container and execute stop_app with rabbitmqctl.
Each run of this flow produces one of the two exceptions below (I am not sure what determines which one):
OperationalError: connect ECONNREFUSED 172.24.0.3:5672
Error: Heartbeat timeout
Why does this happen? Also, what is the best approach to handle them?
This is the connect function in my connector; it does not seem to cover the heartbeat exception:
async connect(): Promise<Connection> {
    const conn = await amqp.connect({
        protocol: AMQP_PROTOCOL,
        hostname: RABBITMQ_HOST,
        port: Number(RABBITMQ_PORT),
        username: RABBITMQ_USER,
        password: RABBITMQ_PASS,
        vhost: RABBITMQ_VHOST
    });
    conn.on('error', this.onError);
    conn.on('close', this.onClose);
    logger.debug('Connected to amqp');
    this.conn = conn;
    this.emit('connect', conn);
    return conn;
}

ECONNREFUSED means that the application could not connect to RabbitMQ inside the Docker container.
The heartbeat error means that a connection was successfully established, but the client has stopped receiving heartbeats from the broker, indicating that the connection has been lost.
There is also another type of notification you might receive. If you have started consuming messages from your application, then when you stop the broker, amqplib will deliver a null message to the consumer. If you're not expecting this, it can often cause an error in your application.
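For example, a consumer can guard against that null message explicitly. This is a minimal sketch; the URL and queue name are placeholders:
const amqp = require('amqplib');

async function startConsumer() {
    const conn = await amqp.connect('amqp://localhost');
    const channel = await conn.createChannel();
    await channel.assertQueue('my-queue');
    await channel.consume('my-queue', (msg) => {
        if (msg === null) {
            // The broker cancelled the consumer (stop_app, queue deleted, ...).
            console.warn('Consumer cancelled by broker');
            return; // trigger reconnect/resubscribe logic here
        }
        console.log(msg.content.toString());
        channel.ack(msg);
    });
}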
Handling these different scenarios can be difficult. The simplest approach is to attach handlers to all connections and channels, gracefully stop your application, and allow whatever is managing it to restart it automatically using a suitable backoff algorithm.
If this isn't acceptable, then you need to reconnect and reconsume from the handlers. You may also want to internally queue published messages until the connection has been re-established. I wrote Rascal to do just this. There's also amqp-connection-manager.
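A bare-bones version of that reconnect logic might look like the following. This is a sketch only; the URL, delays, and cap are placeholder choices, and consumers would still need to be re-registered on each new connection:
const amqp = require('amqplib');

let conn = null;

async function connectWithRetry(attempt = 0) {
    try {
        conn = await amqp.connect('amqp://localhost');
        conn.on('error', (err) => console.error('amqp error:', err.message));
        conn.on('close', () => {
            // Fires after heartbeat timeouts too; schedule a reconnect.
            conn = null;
            setTimeout(() => connectWithRetry(), 1000);
        });
        console.log('amqp connected');
    } catch (err) {
        // ECONNREFUSED lands here; back off exponentially, capped at 30s.
        const delay = Math.min(1000 * 2 ** attempt, 30000);
        console.error(`amqp connect failed, retrying in ${delay} ms`);
        setTimeout(() => connectWithRetry(attempt + 1), delay);
    }
}

connectWithRetry();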
Other things you may consider using to test...
docker kill (rudely kill the connection)
docker pause (will cause a heartbeat timeout)
queue deletion (I believe this triggers a null message)

Related

Redis Error "max number of clients reached"

I am running a Node.js application using the forever npm module.
The Node application also connects to a Redis DB for cache checks. Quite often the API stops working, with the following error in the forever log:
{ ReplyError: Ready check failed: ERR max number of clients reached
at parseError (/home/myapp/core/node_modules/redis/node_modules/redis-parser/lib/parser.js:193:12)
at parseType (/home/myapp/core/node_modules/redis/node_modules/redis-parser/lib/parser.js:303:14)
at JavascriptRedisParser.execute (/home/myapp/ecore/node_modules/redis/node_modules/redis-parser/lib/parser.js:563:20) command: 'INFO', code: 'ERR' }
When I execute the client list command on the Redis server, it shows too many open connections. I have also set timeout = 3600 in my Redis configuration.
I do not have any unclosed Redis connection objects in my application code.
This happens once or twice a week depending on the application load; as a stop-gap solution I restart the node server (it works).
What could be a permanent solution in this case?
I have figured out why. This had nothing to do with Redis itself, and increasing the OS file descriptor limit was only a temporary solution. I was using Redis in a web application, and a connection was being created for every new request.
Whenever the server was restarted, all the connections held by the Express server were released.
I solved this by creating a global connection object and re-using it; a new connection is created only when necessary.
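In Node.js, the usual shape of that fix is a module that creates the client once at load time and exports it, so every require() shares the same connection. A sketch, using the node_redis API from that era, with placeholder host and port:
// redis-client.js -- created once at module load; Node's module cache
// guarantees every require('./redis-client') returns this same client.
const redis = require('redis');

const client = redis.createClient({ host: '127.0.0.1', port: 6379 });
client.on('error', function (err) {
    console.error('Redis error:', err);
});

module.exports = client;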
You could do this by creating a global connection object, making the connection once, and making sure it is connected each time before you use it. Check whether there is an existing solution for your programming language. In my case it was Perl with the Dancer framework, and I used a module called Dancer2::Plugin::Redis:
redis_plugin
Returns a Dancer2::Plugin::Redis instance. You can use redis_plugin to pass the plugin instance to 3rd party modules (backend api) so you can access the existing Redis connection there. You will need to access the actual methods of the plugin instance.
If you are not running a web server but rather a worker process or other background job, you can use a simple helper function to re-use the connection.
Perl example:
sub get_redis_connection {
    my $redis = Redis->new(server => "www.example.com:6372", debug => 0);
    $redis->auth('abcdefghijklmnop');
    return $redis;
}
...
## when required
unless ($redisclient->ping) {
    warn "creating new redis connection";
    $redisclient = get_redis_connection();
}
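A rough Node.js equivalent of the same ping-and-reuse pattern would look like this. It is a sketch: the connection details are placeholders, and node_redis 2.x callback style is assumed:
const redis = require('redis');

let client = null;

// Reuse the existing client if it still answers PING; otherwise recreate it.
function getRedisConnection(callback) {
    if (client === null) {
        client = redis.createClient({ host: 'www.example.com', port: 6372 });
        return callback(client);
    }
    client.ping(function (err) {
        if (err) {
            console.warn('creating new redis connection');
            client = redis.createClient({ host: 'www.example.com', port: 6372 });
        }
        callback(client);
    });
}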
I was running into this issue in my chat app because I was creating a new Redis instance each time something connected rather than just creating it once.
// THE WRONG WAY
export const getRedisPubSub = () => new RedisPubSub({
    subscriber: new Redis(REDIS_CONNECTION_CONFIG),
    publisher: new Redis(REDIS_CONNECTION_CONFIG),
});
and where I wanted to use the connection I was calling
// THE WRONG WAY
getRedisPubSub();
I fixed it by creating the connection just once when my app loaded:
export const redisPubSub = new RedisPubSub({
    subscriber: new Redis(REDIS_CONNECTION_CONFIG),
    publisher: new Redis(REDIS_CONNECTION_CONFIG),
});
and then I passed the one-time initialized redisPubSub object to my createServer function.
It was this article here that helped me see my error: https://docs.upstash.com/troubleshooting/max_concurrent_connections

Node.js and RabbitMQ - best way to init connection on server start

I am trying to add RabbitMQ to an existing codebase written in Koa.js, and I am facing the problem that I don't actually know the best way to do it. Most tutorials I found on the web left me with the idea of establishing a connection to the RabbitMQ server each time I want to send or receive a message. That makes sense when I am receiving messages in a worker, but how should I establish the connection on the producer side?
I have read that it is bad practice to establish a connection each time I create a channel or send a message. So the idea is that I need to create the connection when the server starts; at the moment I do it like this:
const server = app.listen(PORT, async () => {
    await rabbit.createConnection(`amqp://localhost:5672`);
    global.rabbit = rabbit;
    console.log(
        `\n Server listening on port: ${PORT} in ${process.env.NODE_ENV} mode \n`
    );
});
Is this a good approach or not?
Thanks for your advice!
P.S. I store the connection in my rabbit instance.
Start the RabbitMQ connection once and keep the connection alive. Only reconnect if the connection should die for some reason. Whether you do this in your index.js or when you start Koa depends on your app, but in general it doesn't really matter, as long as you are able to connect and shut down properly.
Making a new connection for each publish or consume is insane from a performance perspective.
To simplify reconnections, try amqp-connection-manager. It handles reconnects transparently.
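A minimal sketch of that approach, based on amqp-connection-manager's documented API; the URL and queue name are placeholders:
const amqp = require('amqp-connection-manager');

// connect() keeps retrying until the broker is reachable and
// re-establishes the connection whenever it drops.
const connection = amqp.connect(['amqp://localhost:5672']);
connection.on('connect', () => console.log('Connected'));
connection.on('disconnect', ({ err }) => console.log('Disconnected', err.message));

// The channel wrapper re-runs setup() after every reconnect and
// buffers publishes made while the connection is down.
const channelWrapper = connection.createChannel({
    json: true,
    setup: (channel) => channel.assertQueue('tasks', { durable: true }),
});

channelWrapper.sendToQueue('tasks', { hello: 'world' });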

Redis connection lost without any indication

I'm using a very simple redis pub-sub application, in which I have a redis server in AWS and a nodejs based redis client that is located inside office LAN that subscribes to some channel.
This worked great until the network changed and it seems that some device is now interfering with outgoing connections (I also started receiving socket hangups on outbound SSH connections which I mitigated with the ServerAliveInterval 60 setting in the SSH config).
After the network change, whenever the redis client application is executed, it creates a redis client, subscribes to some channel and acts upon published messages in that channel.
It works okay for several minutes, but then it stops receiving any messages.
I registered the redis client to all known connection events (including the "error" event), added a "retry_strategy" handler, and also modified the configuration to set "socket_keepalive" and "socket_initdelay" to 10 seconds (see code below).
Nevertheless, no event is triggered when the connection is interfered with.
When the application stops receiving the messages, I see that the connection on the redis port is still valid:
dev#server:~> sudo netstat -tlnpua | grep 6379
tcp 0 0 10.43.22.150:52052 <server_ip>:6379 ESTABLISHED 27014/node
I also captured a PCAP on port 6379, in which I don't see any resets or TCP errors; from the connection's perspective everything seems valid.
I tried running another nodejs application from within the LAN in which I create a client that connects to the AWS redis server, registers to all events and only publishes messages once in a while.
After several minutes (in which the connection breaks), I try publishing another command and the error event handler is indeed triggered:
> client.publish("channel", "ANOTHER TRY")
true
> Error: Redis connection to <server_hostname>:6379 failed - read ECONNRESET
Redis connection ended
Redis reconnecting
Redis connected
Redis connection is ready
So if I try publishing via the client after the connection was interfered, the connection event callbacks are indeed called and I can run some kind of reconnection logic.
But in the scenario in which I subscribe and wait for publishes to the channel, no connection event handler is called and the application is basically broken.
Application code:
const redis = require('redis');

const config = {
    "host": <hostname>,
    "port": 6379,
    "socket_keepalive": true,
    "socket_initdelay": 10
};

config.retry_strategy = function (options) {
    console.log("retry strategy. error code: " + (options.error ? options.error.code : "N/A"));
    console.log("options.attempt", options.attempt, "options.total_retry_time", options.total_retry_time);
    return 2000;
};

const client = redis.createClient(config);

client.on('message', function (channel, message) {
    console.log("Channel", channel, ", message", message);
});
client.on("error", function (err) {
    console.log("Error " + err);
});
client.on("end", function () {
    console.log("Redis connection ended");
});
client.on("connect", function () {
    console.log("Redis connected");
});
client.on("reconnecting", function () {
    console.log("Redis reconnecting");
});
client.on("ready", function () {
    console.log("Redis connection is ready");
});

const channel = "channel";
console.log("Subscribing to channel", channel);
client.subscribe(channel);
I'm using redis#2.8.0 and node v8.11.3.
The solution for this issue is quite sad.
First, there is indeed some network device between the redis client and server, which drops inactive connections after some timeout. It seems that this timeout is really low (several minutes).
Redis has a socket_keepalive configuration which is enabled by default, and its default value is Node.js's default socket keep-alive value (which is 2 hours, if I'm not mistaken).
As can be seen above, I used a socket_initdelay configuration parameter that should have changed this default value, but unfortunately the code that uses this parameter isn't in the redis npm package but rather in node-redis.
To summarize:
There is no configuration setting to change the keep-alive timeout value in redis#2.8.0 (the latest version at the time of writing).
You can either:
Use node-redis, which accepts the socket_initdelay setting.
Modify the timeout manually by running the following:
const client = redis.createClient();
client.on("connect", function () {
    client.stream.setKeepAlive(true, <timeout_value_in_milliseconds>);
});
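Applied to the subscriber code from the question, that workaround would look something like this; the 60-second value is just an example, and you should pick something below your network device's idle timeout:
client.on("connect", function () {
    // Send TCP keep-alive probes after 60s of idleness so the
    // intermediate device never sees the connection as inactive.
    client.stream.setKeepAlive(true, 60000);
});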

How should a Node.js microservice survive a Rabbitmq restart?

I've been working on an example of using RabbitMQ for communication between Node.js microservices, and I'm trying to understand the best way for these microservices to survive a restart of the RabbitMQ server.
The full example is available on GitHub: https://github.com/ashleydavis/rabbit-messaging-example
You can start the system up by changing to the broadcast sub-directory and using docker-compose up --build.
With that running, I open another terminal and issue the following command to terminate the Rabbit server: docker-compose kill rabbit.
This causes an unhandled Node.js exception to kill my sender and receiver microservices that were connected to the RabbitMQ server.
Now I'd like to be able to restart the RabbitMQ server (using docker-compose up rabbit) and have the original microservices come back online.
This is intended to run under Docker Compose for development and Kubernetes for production. I could set this up so that the microservices restart when they are terminated by the disconnection from RabbitMQ, but I'd prefer the microservices to stay online (they might be doing other work that shouldn't be interrupted) and then reconnect to RabbitMQ automatically when it becomes available again.
Does anyone know how to achieve automatic reconnection to RabbitMQ using the amqplib library?
Just picking the sender service as an example of how to deal with it.
The error that is causing Node to exit is that there is no 'error' handler on the stream the writer uses.
If you modify this part of the code:
https://github.com/ashleydavis/rabbit-messaging-example/blob/master/broadcast/sender/src/index.js#L13
Change the line in sender/src/index.js from
const messagingConnection = await retry(() => amqp.connect(messagingHost), 10, 5000);
to
const messagingConnection = await retry(() => amqp.connect(messagingHost), 10, 5000)
    .then(x => {
        return x.on('error', (err) => {
            console.log('connect stream on error', err);
        });
    });
Just having the error handler means that the Node process no longer exits with an unhandled exception. This does not make the sender code correct: it now needs to be modified to know whether it has a connection, only send data when it has a connection, and retry connecting when it does not.
A similar fix can be applied to the receiver.
This is a useful reference for when Node requires setup so that it does not exit:
https://medium.com/dailyjs/how-to-prevent-your-node-js-process-from-crashing-5d40247b8ab2
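A sketch of that "only send when connected, otherwise retry" shape for the sender. The names retry, amqp, and messagingHost come from the question's code; the rest is illustrative, not code from the linked repo:
let connection = null;

async function ensureConnection() {
    if (connection) return connection;
    connection = await retry(() => amqp.connect(messagingHost), 10, 5000);
    connection.on('error', (err) => console.log('connection error', err.message));
    connection.on('close', () => {
        connection = null; // forces a reconnect on the next send
    });
    return connection;
}

async function send(queueName, payload) {
    const conn = await ensureConnection();
    const channel = await conn.createChannel();
    await channel.assertQueue(queueName);
    channel.sendToQueue(queueName, Buffer.from(JSON.stringify(payload)));
    await channel.close();
}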

Sails V0.10-rc7 Get a record from the database using REST Blueprints via Socket.IO

Sails 0.10.0-rc7
Sails Socket.IO: the client is not receiving a response from the server.
Using Sails' built-in blueprints, I am attempting to get information from my server using this functionality. (I'm looking to use the default behaviour.)
Client
// Client on a different server (localhost:8000)
// Sails server
var socket = io.connect('http://localhost:1337');
socket.get('/event', function serverSays(err, events) {
    if (err)
        console.log(err);
    console.log(JSON.stringify(events));
});
Server
Event Model
module.exports = {
    schema: true,
    attributes: {
        name: {
            type: 'STRING',
            maxLength: 50,
            required: true
        }
    }
};
In the server terminal (logs):
verbose: client authorized
verbose: handshake authorized 4TGNw-ywabWYG9j-AHaC
verbose: setting request GET /socket.io/1/websocket/4TGNw-ywabWYG9j-AHaC?__sails_io_sdk_version=0.10.0&__sails_io_sdk_platform=browser&__sails_io_sdk_language=javascript
verbose: set heartbeat interval for client 4TGNw-ywabWYG9j-AHaC
verbose: client authorized for
verbose: websocket writing 1::
verbose: A socket.io client (4TGNw-ywabWYG9j-AHaC) connected successfully!
But the callback on my client is never called.
It seems as if the client connects to the server.
Any suggestions?
EDIT
I must stress that the client and the Sails server are running on different servers. The handshake performed by io.connect('http://localhost:1337') talks to the server correctly, based on the server logs.
It's the subsequent action, socket.get('/event'), that does not result in anything. Based on the server logs, I would say that it never reaches the server.
I thought I would just leave a note, as I have my implementation working now.
So as it turns out, I made a fairly embarrassing mistake/assumption.
Using Sails.js's browser SDK, I was connecting to a remote server using:
io.connect("serverurl")
and then happily went about my business attempting to perform various socket functions such as socket.get.
What I did not do: after
io.connect("url")
I still had to ensure that my app had indeed connected to the server by listening on the socket for the connect event:
socket.on("connect", function () { ... });
Once I had this little piece of the puzzle resolved, all went (and is going) swimmingly!
I must also state that I believe I ran into this issue because I was attempting to perform the initial connection and the subsequent requests in the Sails run (init) function, so my subsequent actions were more than likely executing before the app and the server had successfully established a connection.
Had the initial io.connect and the subsequent actions been executed in separate user flows, all would have been fine, as the connection would surely have been established already.
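In other words, the safe pattern is to issue requests only after the connect event has fired. A sketch, keeping the question's callback shape:
var socket = io.connect('http://localhost:1337');

// Wait for the handshake to complete before issuing any requests.
socket.on('connect', function () {
    socket.get('/event', function serverSays(err, events) {
        if (err) console.log(err);
        console.log(JSON.stringify(events));
    });
});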
