In amqp's assertQueue API Documentation, it states:
Assert a queue into existence. This operation is idempotent given identical arguments; however, it will bork the channel if the queue already exists but has different properties (values supplied in the arguments field may or may not count for borking purposes; check the borker's, I mean broker's, documentation).
http://www.squaremobius.net/amqp.node/channel_api.html#channel_assertQueue
I am asking what it means by bork(ing) the channel. I tried Google but can't find anything relevant.
Bork: in English it means to obstruct something.
As per the documentation in the question, it says
however, it will bork the channel if the queue already exists but has
different properties
this means that if you try to assert a queue with the same properties as a queue which already exists, nothing happens because the operation is idempotent (repeating the same action produces no different result; e.g. a REST API GET request which fetches data for, say, id 123 will return the same data every time unless it is updated), but if you try to assert a queue with the same name but different properties, the channel will be "borked", i.e. obstructed.
In the code below, we assert the same queue a second time with a different property:
var ok0 = ch.assertQueue(q, {durable: false});// creating the first time
var ok1 = ch.assertQueue(q, {durable: true});// creating the second time again with different durable property value
it throws an error
"PRECONDITION_FAILED - inequivalent arg 'durable' for queue 'hello' in
vhost '/': received 'true' but current is 'false'"
This means you are trying to assert the same queue with different properties, i.e. the durable property differs from that of the existing queue, and hence the channel has been borked.
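For contrast, a minimal sketch (not from the original answer, but using the same amqplib calls shown elsewhere in this thread) of the idempotent case: asserting the same queue twice with identical options resolves both times and nothing is borked.
var amqp = require('amqplib');

amqp.connect('amqp://localhost').then(function(conn) {
  return conn.createChannel().then(function(ch) {
    var q = 'hello';
    // Same name, same options: the second assert is a no-op and resolves normally.
    return ch.assertQueue(q, {durable: false})
      .then(function() { return ch.assertQueue(q, {durable: false}); })
      .then(function(ok) {
        console.log("queue '%s' asserted twice without error", ok.queue);
        return ch.close();
      });
  }).then(function() { return conn.close(); });
}).catch(console.warn);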
Answer by Luke Bakken:
The RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
Having said that, did you try calling assertQueue twice, with different properties the second time? You would have answered your own question very quickly.
I used this code to create this test program:
#!/usr/bin/env node
var amqp = require('amqplib');
amqp.connect('amqp://localhost').then(function(conn) {
  return conn.createChannel().then(function(ch) {
    var q = 'hello';
    var ok0 = ch.assertQueue(q, {durable: false});
    return ok0.then(function(_qok) {
      var ok1 = ch.assertQueue(q, {durable: true});
      return ok1.then(function(got) {
        console.log(" [x] got '%s'", got);
        return ch.close();
      });
    });
  }).finally(function() { conn.close(); });
}).catch(console.warn);
Then, start RabbitMQ and run your test code. You should see output like this:
$ node examples/tutorials/assert-borked.js
events.js:183
throw er; // Unhandled 'error' event
^
Error: Channel closed by server: 406 (PRECONDITION-FAILED) with message "PRECONDITION_FAILED - inequivalent arg 'durable' for queue 'hello' in vhost '/': received 'true' but current is 'false'"
at Channel.C.accept
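Not part of the original answer, but if you would rather handle this case than crash, one sketch is to attach a channel 'error' handler and catch the rejected assertQueue promise; since the broker has already closed the channel at that point, you continue on a fresh one:
var amqp = require('amqplib');

amqp.connect('amqp://localhost').then(function(conn) {
  return conn.createChannel().then(function(ch) {
    // The server closes the channel on a 406; amqplib reports that both as a
    // rejected promise and as an 'error' event on the channel, so handle both.
    ch.on('error', function(err) { console.warn('channel error:', err.message); });
    return ch.assertQueue('hello', {durable: true}).catch(function(err) {
      console.warn('assertQueue failed:', err.message);
      return conn.createChannel(); // the borked channel is unusable; continue on a new one
    });
  }).then(function() { return conn.close(); });
}).catch(console.warn);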
I have an application which checks for new entries in DB2 every 15 seconds on the iSeries using IBM's idb-connector. I have async functions which return the result of the query to socket.io which emits an event with the data included to the front end. I've narrowed down the memory leak to the async functions. I've read multiple articles on common memory leak causes and how to diagnose them.
MDN: memory management
Rising Stack: garbage collection explained
Marmelab: Finding And Fixing Node.js Memory Leaks: A Practical Guide
But I'm still not seeing where the problem is. Also, I'm unable to get permission to install node-gyp on the system, which means most memory management tools are off limits, since memwatch, heapdump and the like need node-gyp to install. Here's an example of the functions' basic structure.
const { dbconn, dbstmt } = require('idb-connector');// require idb-connector

async function queryDB() {
  const sSql = `SELECT * FROM LIBNAME.TABLE LIMIT 500`;
  // create new promise
  let promise = new Promise(function(resolve, reject) {
    // create new connection
    const connection = new dbconn();
    connection.conn("*LOCAL");
    const statement = new dbstmt(connection);
    statement.exec(sSql, (rows, err) => {
      if (err) {
        throw err;
      }
      let ticks = rows;
      statement.close();
      connection.disconn();
      connection.close();
      resolve(ticks.length);// resolve promise with varying data
    })
  });
  let result = await promise;// await promise
  return result;
};

async function getNewData() {
  const data = await queryDB();// get new data
  io.emit('newData', data)// push to front end
  setTimeout(getNewData, 2000);// check again in 2 seconds
};
Any ideas on where the leak is? Am I using async/await incorrectly? Or am I creating/destroying DB connections improperly? Any help in figuring out why this code is leaky would be much appreciated!!
Edit: Forgot to mention that I have limited control over the backend processes as they are handled by another team. I'm only retrieving the data they populate the DB with and adding it to a web page.
Edit 2: I think I've narrowed it down to the DB connections not being cleaned up properly. But, as far as I can tell, I've followed the instructions suggested on their GitHub repo.
I don't know the answer to your specific question, but instead of issuing a query every 15 seconds, I might go about this in a different way. Reason being that I don't generally like fishing expeditions when the environment can tell me an event occurred.
So in that vein, you might want to try a database trigger that loads the key of the row into a data queue on add, or even on change or delete if necessary. Then you can just put in an async call to wait for a record on the data queue. This is more real-time, and the event handler is only called when a record shows up. The handler can get the specific record from the database since you know its key. Data queues are much faster than database IO, and place little overhead on the trigger (a rough sketch of the receiving side follows the list below).
I see a couple of potential advantages with this method:
You aren't issuing dozens of queries that may or may not return data.
The event would fire the instant a record is added to the table, rather than 15 seconds later.
You don't have to code for the possibility of one or more new records, it will always be 1, the one mentioned in the data queue.
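For what it's worth, here is a rough sketch of the Node side of that idea. Everything here is hypothetical: receiveFromDataQueue is a stand-in for whatever IBM i data-queue API is available to you, LIBNAME/NEWROWS is a made-up queue name, and io is the socket.io server from the question. Treat it as the shape of the event-driven loop, not working code.
// Hypothetical stand-in for a blocking data-queue receive; swap in the real
// call from whatever IBM i data-queue library you have available.
async function receiveFromDataQueue(lib, queue) {
  return new Promise(resolve => setTimeout(() => resolve('123'), 1000)); // fake key, sketch only
}

async function watchForNewRows() {
  for (;;) {
    // Waits until the DB trigger drops a row key onto LIBNAME/NEWROWS.
    const key = await receiveFromDataQueue('LIBNAME', 'NEWROWS');
    io.emit('newData', key); // or look up the single new row by its key first, then emit it
  }
}

watchForNewRows();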
Yes, you have to close the connection.
Don't make const data; you don't need the promise, because statement.exec is async by default and hands the result back via return result;.
Keep the setTimeout(getNewData, 2000);// check again in 2 seconds
line outside getNewData, otherwise it becomes a recursive infinite loop.
Sample code
const {dbconn, dbstmt} = require('idb-connector');
const sql = 'SELECT * FROM QIWS.QCUSTCDT';
const connection = new dbconn(); // Create a connection object.
connection.conn('*LOCAL'); // Connect to a database.
const statement = new dbstmt(connection); // Create a statement object of the connection.
statement.exec(sql, (result, error) => {
  if (error) {
    throw error;
  }
  console.log(`Result Set: ${JSON.stringify(result)}`);
  statement.close(); // Clean up the statement object.
  connection.disconn(); // Disconnect from the database.
  connection.close(); // Clean up the connection object.
  return result;
});
async function getNewData() {
  const data = await queryDB();// get new data
  io.emit('newData', data)// push to front end
  setTimeout(getNewData, 2000);// check again in 2 seconds
};
change to
async function getNewData() {
  const data = await queryDB();// get new data
  io.emit('newData', data)// push to front end
};
setTimeout(getNewData, 2000);// check again in 2 seconds
The first thing to notice is a possibly open database connection in case of an error.
if (err) {
throw err;
}
Also, connection.disconn(); and connection.close(); return boolean values that tell whether the operation succeeded (according to the documentation).
Another possible scenario is connection objects piling up inside the 3rd-party library.
I would check those.
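A sketch of what that error path might look like, keeping the shape of the asker's queryDB but rejecting instead of throwing inside the callback and closing the statement and connection on both paths (one plausible fix; as noted below, the root cause turned out to be in the library itself):
const { dbconn, dbstmt } = require('idb-connector');

function queryDB() {
  const sSql = `SELECT * FROM LIBNAME.TABLE LIMIT 500`;
  return new Promise((resolve, reject) => {
    const connection = new dbconn();
    connection.conn("*LOCAL");
    const statement = new dbstmt(connection);
    statement.exec(sSql, (rows, err) => {
      // Clean up in both the success and the error path so no connection
      // object is ever left open.
      statement.close();
      connection.disconn();
      connection.close();
      if (err) {
        reject(err); // reject the promise instead of throwing inside the callback
        return;
      }
      resolve(rows.length);
    });
  });
}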
This was confirmed to be a memory leak in the idb-connector library that I was using. Link to the GitHub issue here. Basically there was a C++ array that never had its memory deallocated. A new version was released and the commit can be viewed here.
I have a problem. I'm new to Node-RED and I want to inject many payloads with different topics at once. I wanted to do it with a Function node, like in the first node. Its function looks like this:
msg.topic="ns=2;s=Target01.Nazwa.Nazwa[0];datatype=String"
msg.payload=global.get("nazwa")
return msg
msg.topic="ns=2;s=Target01.Nazwa.Nazwa[1];datatype=String"
msg.payload=global.get("nazwa2")
return msg
...
msg.topic="ns=2;s=Target01.Nazwa.Nazwa[9];datatype=String"
msg.payload=global.get("nazwa9")
return msg
However it doesn't work. The 2nd node is working, but in total I would have 150+ blocks connected to the OPC UA Client block. So my question is: does anyone know if there's a way to inject multiple payloads with different topics, preferably with a Function node, instead of doing it one by one with inject nodes?
The documentation explains how to send multiple messages from a Function node.
With the code you have currently, as soon as it reaches the first return statement, the Function node stops processing any further so only one message is sent.
To send multiple messages from a Function node you have two options.
return an array of message objects to send.
call node.send(msg); for each message you want to send (a sketch of this follows the example below).
For example:
return [
[
{ topic: "ns=2;s=Target01.Nazwa.Nazwa[0];datatype=String", payload: global.get("nazwa")},
{ topic: "ns=2;s=Target01.Nazwa.Nazwa[1];datatype=String", payload: global.get("nazwa2")},
{ topic: "ns=2;s=Target01.Nazwa.Nazwa[9];datatype=String", payload: global.get("nazwa9")}
]
]
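And a minimal sketch of the second option, calling node.send() once per message inside the Function node. The topic strings mirror the question; the global-context key pattern is a guess, so adjust the mapping to your actual keys:
// Build one message per OPC UA node id and send each one individually.
for (let i = 0; i < 10; i++) {
    const key = i === 0 ? "nazwa" : "nazwa" + (i + 1); // adjust to your real key names
    node.send({
        topic: "ns=2;s=Target01.Nazwa.Nazwa[" + i + "];datatype=String",
        payload: global.get(key)
    });
}
return null; // everything has already been sent via node.send()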
I'm using socket.io like this
Client:
socket.on('response', function(i){
    console.log(i);
});
socket.emit('request', whateverdata);
Server:
socket.on('request', function(whateverdata){
    for (i = 0; i < 10000; i++){
        console.log(i);
        socket.emit('response', i);
    }
    console.log("done!");
});
I need output like this when putting the two terminals side by side:
Server Client
0 0
1 1
. (etc) .
. .
9998 9998
9999 9999
done!
But instead I am getting this:
Server Client
0
1
. (etc)
.
9998
9999
done!
0
1
.
. (etc)
9998
9999
Why?
Shouldn't Socket.IO / Node emit the message immediately, not wait for the loop to complete before emitting any of them?
Notes:
The for loop is very long and computationally slow.
This question is referring to the socket.io library, not websockets in general.
Due to latency, waiting for confirmation from the client before sending each response is not possible
The order that the messages are received is not important, only that they are received as quickly as possible
The server emits them all in a loop and it takes a small bit of time for them to get to the client and get processed by the client in another process. This should not be surprising.
It is also possible that the single-threaded nature of Javascript in node.js prevents the emits from actually getting sent until your Javascript loop finishes. That would take detailed examination of socket.io code to know for sure if that is an issue. As I said before, if you want 1,1 then 2,2 then 3,3 instead of 1,2,3 sent and then 1,2,3 received, you have to write code to force that ordering.
If you want the client to receive the first before the server sends the 2nd, then you have to make the client send a response to the first and have the server not send the 2nd until it receives the response from the first. This is all async networking. You don't control the order of events in different processes unless you write specific code to force a particular sequence.
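The question's notes rule this approach out because of round-trip latency, but for completeness, here is a sketch of that acknowledge-then-send pattern using socket.io's built-in acknowledgement callbacks (event names mirror the question):
// Server: send the next response only after the client acknowledges the previous one.
socket.on('request', function (whateverdata) {
  let i = 0;
  function sendNext() {
    if (i >= 10000) return console.log("done!");
    socket.emit('response', i++, sendNext); // the extra function argument is the ack callback
  }
  sendNext();
});

// Client: log the value, then acknowledge so the server sends the next one.
socket.on('response', function (i, ack) {
  console.log(i);
  ack();
});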
Also, how do you have client and server in the same console anyway? Unless you are writing out precise timestamps, you wouldn't be able to tell exactly what event came before the other in two separate processes.
One thing you could try is to send 10, then do a setTimeout(fn, 1) to send the next 10 and so on. That would give JS a chance to breathe and perhaps process some other events that are waiting for you to finish to allow the packets to get sent.
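A sketch of that chunking idea, using the batch size of 10 and the setTimeout(fn, 1) mentioned above:
socket.on('request', function (whateverdata) {
  let i = 0;
  const total = 10000;
  function sendBatch() {
    // Send 10 messages, then yield to the event loop so the packets can flush.
    for (let n = 0; n < 10 && i < total; n++, i++) {
      socket.emit('response', i);
    }
    if (i < total) {
      setTimeout(sendBatch, 1);
    } else {
      console.log("done!");
    }
  }
  sendBatch();
});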
There's another networking issue too. By default TCP tries to batch up your sends (at the lowest TCP level). Each time you send, it sets a short timer and doesn't actually send until that timer fires. If more data arrives before the timer fires, it just adds that data to the "pending" packet and sets the timer again. This is referred to as Nagle's algorithm. You can disable this "feature" on a per-socket basis with socket.setNoDelay(). You have to call that on the actual TCP socket.
I am seeing some discussion that Nagle's algorithm may already be turned off for socket.io (by default). Not sure yet.
In stepping through the process of socket.io's .emit(), there are some cases where the socket is marked as not yet writable. In those cases, the packets are added to a buffer and will be processed "later" on some future tick of the event loop. I cannot see exactly what puts the socket temporarily in this state, but I've definitely seen it happen in the debugger. When it's that way, a tight loop of .emit() will just buffer and won't send until you let other events in the event loop process. This is why doing setTimeout(fn, 0) every so often to keep sending will then let the prior packets process. There's some other event that needs to get processed before socket.io makes the socket writable again.
The issue occurs in the flush() method in engine.io (the transport layer for socket.io). Here's the code for .flush():
Socket.prototype.flush = function () {
  if ('closed' !== this.readyState &&
    this.transport.writable &&
    this.writeBuffer.length) {
    debug('flushing buffer to transport');
    this.emit('flush', this.writeBuffer);
    this.server.emit('flush', this, this.writeBuffer);
    var wbuf = this.writeBuffer;
    this.writeBuffer = [];
    if (!this.transport.supportsFraming) {
      this.sentCallbackFn.push(this.packetsFn);
    } else {
      this.sentCallbackFn.push.apply(this.sentCallbackFn, this.packetsFn);
    }
    this.packetsFn = [];
    this.transport.send(wbuf);
    this.emit('drain');
    this.server.emit('drain', this);
  }
};
What happens sometimes is that this.transport.writable is false. And, when that happens, it does not send the data yet. It will be sent on some future tick of the event loop.
From what I can tell, it looks like the issue may be here in the WebSocket code:
WebSocket.prototype.send = function (packets) {
  var self = this;

  for (var i = 0; i < packets.length; i++) {
    var packet = packets[i];
    parser.encodePacket(packet, self.supportsBinary, send);
  }

  function send (data) {
    debug('writing "%s"', data);
    // always creates a new object since ws modifies it
    var opts = {};
    if (packet.options) {
      opts.compress = packet.options.compress;
    }

    if (self.perMessageDeflate) {
      var len = 'string' === typeof data ? Buffer.byteLength(data) : data.length;
      if (len < self.perMessageDeflate.threshold) {
        opts.compress = false;
      }
    }

    self.writable = false;
    self.socket.send(data, opts, onEnd);
  }

  function onEnd (err) {
    if (err) return self.onError('write error', err.stack);
    self.writable = true;
    self.emit('drain');
  }
};
Where you can see that the .writable property is set to false when some data is sent, until confirmation arrives that the data has been written. So, when rapidly sending data in a loop, the event that signals the data was successfully sent may never get a chance to come through. When you do a setTimeout() to let some things in the event loop get processed, that confirmation event comes through and the .writable property gets set to true again, so data can again be sent immediately.
To be honest, socket.io is built of so many abstract layers across dozens of modules that it's very difficult code to debug or analyze on GitHub so it's hard to be sure of the exact explanation. I did definitely see the .writable flag as false in the debugger which did cause a delay so this seems like a plausible explanation to me. I hope this helps.
I can't make the following code work:
"use strict";
let kafka = require('kafka-node');
var conf = require('./providers/Config');
let client = new kafka.Client(conf.kafka.connectionString, conf.kafka.clientName);
let consumer = new kafka.HighLevelConsumer(client, [ { topic: conf.kafka.readTopic } ], { groupId: conf.kafka.clientName, paused: true });
let threads = 0;
consumer.on('message', function(message) {
    threads++;
    if (threads > 10) consumer.pause();
    if (threads > 50) process.exit(1);
    console.log(threads + " >>> " + message.value);
});
consumer.resume();
I see 50 messages in the console and the process exits via the termination statement.
What I'm trying to understand is whether my code is broken or the package is broken. Or maybe I'm just doing something wrong? Has anyone been able to make a kafka consumer work with pause/resume? I tried several versions of kafka-node, but all of them behave the same way. Thanks!
You are already using pause and resume in your code, so obviously they work. ;)
It's because pause doesn't pause the consumption of messages. It pauses the fetching of messages. I'm guessing you already fetched the first 50 in one throw before you receive the first message and call pause.
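For illustration (a sketch, not from the original answer), a common way to use this is to pause fetching while each message is processed and resume afterwards, accepting that a few already-fetched messages may still be delivered:
var kafka = require('kafka-node');

var client = new kafka.Client('localhost:2181');
var consumer = new kafka.HighLevelConsumer(client, [{ topic: 'MyTest' }]);

consumer.on('message', function (message) {
  consumer.pause(); // stops further fetches, not delivery of what is already buffered
  handleMessage(message, function () {
    consumer.resume(); // fetch more once this message has been processed
  });
});

function handleMessage(message, done) {
  console.log(message.value);
  setTimeout(done, 100); // stand-in for real asynchronous work
}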
For kicks, I just tested pause() and resume() in the Node REPL and they work as expected:
var kafka = require('kafka-node');
var client = new kafka.Client('localhost:2181');
var consumer = new kafka.HighLevelConsumer(client, [{topic: 'MyTest'}]);
consumer.on('message', (msg) => { console.log(JSON.stringify(msg)) });
Then I go into another window and run:
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic MyTest
And type some stuff, and it shows up in the first window. Then in the first window, I type consumer.pause(); and type some more in the second window. Nothing appears in the first window. Then I run consumer.resume() in the first window, and the delayed messages appear.
BTW, you should be able to play with the Kafka config property fetch.message.max.bytes and control how many messages can be fetched at one time. For example, if you had fixed-width messages of 500 bytes, set fetch.message.max.bytes to something less than 1000 (but greater than 500!) to only receive a single message per fetch. But note that this might not fix the problem entirely -- I am fairly new to Node, but it is asynchronous, and I suspect a second fetch could get kicked off before you processed the first fetch completely (or at all).
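In kafka-node that broker setting appears to map to the fetchMaxBytes consumer option (an assumption worth verifying against the version you use); a sketch of capping the fetch size:
var kafka = require('kafka-node');

var client = new kafka.Client('localhost:2181');
var consumer = new kafka.HighLevelConsumer(
  client,
  [{ topic: 'MyTest' }],
  {
    // With fixed-width 500-byte messages, a cap just under 1000 bytes should
    // yield roughly one message per fetch, as described above.
    fetchMaxBytes: 999
  }
);

consumer.on('message', function (msg) {
  console.log(JSON.stringify(msg));
});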
I got this code for monitoring sockets in the zmq bindings for Node.js. So far it works, but my problem is that I don't know what events the monitoring socket has. The code I got only did that; I will keep looking for more code, but this is what I have so far:
var zmq = require('zmq');
var socket = zmq.socket('pub');
socket.connect('tcp://127.0.0.1:10001');
socket.monitor();
I tried adding a "message" event handler but it showed nothing, so I don't know what's up:
socket.on("message",function(msg){
console.log(msg);
});
I printed the object that I got back from the monitor() function, and from it I was able to get some monitor events. I think it is inelegant though. I found this link that tests the monitor function of the socket ( https://github.com/JustinTulloss/zeromq.node/blob/master/test/socket.monitor.js ), but some things are not working:
mon.monitor();
console.log(mon);
mon.on("message",function(msg){
console.log(msg);
});
mon.on('close',function(){console.log("Closed");});
mon._zmq.onMonitorEvent = function(evt){
if (evt == 1)
console.log("Should be 1 : "+ evt);
else
console.log(evt);
};
I haven't worked with the PUB/SUB handlers in 0mq. I have used some of the other socket types and am fairly familiar with them. Having not tested this code, my recommendation would be:
SCRIPT 1: Your existing PUB script needs to send a message:
socket.send(['TEST_MESSAGES', 'BLAH']);// multipart send: topic frame first, then the payload
SCRIPT 2: This needs to be added:
var zmq = require('zmq');
var sub_socket = zmq.socket('sub');
sub_socket.connect('tcp://127.0.0.1:10001');
sub_socket.subscribe('TEST_MESSAGES');
sub_socket.on("message", function(msg){
    console.log(msg);
});
The trick here is timing. 0mq doesn't give you retries or durable messages. You need to build those elements on your own. Still if you put your publish in a timer (for the sake of getting an example running) you should see the messages move.
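A sketch of that timer idea applied to SCRIPT 1 (the one-second interval is arbitrary; it just keeps publishing so a subscriber that connects and subscribes late still sees traffic):
var zmq = require('zmq');

var pub_socket = zmq.socket('pub');
pub_socket.connect('tcp://127.0.0.1:10001');

// PUB/SUB has no retries or durable messages, so anything published before the
// subscriber is connected and subscribed is simply dropped.
setInterval(function () {
  pub_socket.send(['TEST_MESSAGES', 'BLAH ' + new Date().toISOString()]);
}, 1000);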