Event emitter loop not working in node-red function - node.js

I am attempting to use a node-red function to read Azure IoT Hub messages using AMQP. I have imported the azure-iothub module.
The code below connects OK and getFeedbackReceiver returns an AmqpReceiver object (I can see the object output in the debug tab), which acts as an event emitter. However, the event handler (msgReceiver.on) never seems to fire. I don't even get a null output in the debug tab.
Any help appreciated.
var azureamqp = global.get('azureamqp');
var iothub = azureamqp.Client.fromConnectionString(cnct);

try
{
    iothub.open(function()
    {
        node.send({type: "connection", payload: util.inspect(iothub)});
        node.status({ fill: "green", shape: "ring", text: "listening" });

        iothub.getFeedbackReceiver(function(err, msgReceiver)
        {
            if (!err)
            {
                node.send({type: "receiver", payload: util.inspect(msgReceiver)});
                msgReceiver.on('message', function(message)
                {
                    node.send({payload: message});
                });
            }
            else
            {
                node.send({payload: err});
            }
        });

        node.send({type: "streamend"});
        node.status({ fill: "red", shape: "ring", text: "disconnected" });
    });
}
catch (err)
{}

OK, I think I see what's happening, thanks to the additional comments. First off, some reference docs about messaging with IoT Hub:
https://learn.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-messaging
There are 2 types of messages that are sent through IoT Hub, depending on the direction in which they go:
Cloud to Device (C2D, or commands): these messages are sent by a cloud app to one or more devices. They are stored in a device-specific queue, and delivered to the device as soon as the device connects and starts listening for messages. Once received by the device, the device can choose to send feedback about these to IoT Hub. This feedback is received using the azure-iothub.Client.getFeedbackReceiver() API. I've added more about this at the end of the answer.
Device-to-Cloud (D2C, or telemetry): these messages are sent by the devices to the cloud application. They are stored in Event Hubs partitions and can be read from those partitions using the Event Hubs SDK.
From the question and the comments it looks like you're trying to receive D2C messages (telemetry) using the feedback API - and it's not working, because it's not meant to work that way. It's good feedback for (bad?) API design and lack of docs though.
How to receive messages sent by devices:
This sample on GitHub shows how to set up an Event Hubs client and can be used to listen to messages sent by devices to IoT Hub. This command in iothub-explorer is also an easy reference.
Please find below a simple example:
'use strict';

var EventHubClient = require('azure-event-hubs').Client;
var Promise = require('bluebird');

var connectionString = '[IoT Hub Connection String]';

var client = EventHubClient.fromConnectionString(connectionString);
var receiveAfterTime = Date.now() - 5000;

var printError = function (err) {
    console.error(err.message);
};

var printEvent = function (ehEvent) {
    console.log('Event Received: ');
    console.log(JSON.stringify(ehEvent.body));
    console.log('');
};

client.open()
    .then(client.getPartitionIds.bind(client))
    .then(function (partitionIds) {
        return Promise.map(partitionIds, function (partitionId) {
            return client.createReceiver('$Default', partitionId, { 'startAfterTime': receiveAfterTime }).then(function (receiver) {
                receiver.on('errorReceived', printError);
                receiver.on('message', printEvent);
            });
        });
    })
    .catch(printError);
A little more about that code sample:
Basically, the Event Hubs client provides an AMQP connection that can be used to open receivers on each partition of the Event Hub. Partitions are used to store messages sent by devices. Each partition gets its own receiver, and each receiver has a message event, hence the need to open one receiver per partition so you never miss a message from any device. Here's a little bit more about Event Hubs and the nature of partitions: https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-what-is-event-hubs
More about C2D Feedback:
There are 3 types of feedback that a device can send about a C2D message:
accept (or complete) means the message is taken care of by the device and the IoT Hub will remove this message from the device queue
reject means that the device doesn't want this message (maybe because it's malformed, or irrelevant; that's up to you to decide) and IoT Hub will remove the message from the queue.
abandon means that the device cannot "take care" of this message right now and wants IoT Hub to resend it later. The message remains in the queue.
Both accept and reject can be observed using the getFeedbackReceiver API, as well as timeouts if the message is never received or is abandoned too many times.
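For completeness, here is roughly what listening for that feedback looks like with the azure-iothub client used in the question. This is a minimal sketch; the exact shape of the feedback body can vary between SDK versions, so treat the logging line as illustrative rather than definitive.

'use strict';
var Client = require('azure-iothub').Client;

var connectionString = '[IoT Hub service connection string]';
var serviceClient = Client.fromConnectionString(connectionString);

serviceClient.open(function (err) {
    if (err) return console.error('Could not connect: ' + err.message);

    // Only feedback about C2D messages arrives here - device telemetry (D2C) never will.
    serviceClient.getFeedbackReceiver(function (err, receiver) {
        if (err) return console.error(err.message);
        receiver.on('message', function (msg) {
            // The body is a batch of feedback records (accept/reject/expired)
            console.log('Feedback received: ' + msg.getData().toString());
        });
    });
});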

Related

Sends all messages with the same sessionId to the dead letter queue on Azure Service Bus Queue

I am working with an Azure Service Bus queue configured to be FIFO (first in, first out). I work on an order application with the following states: "Pending", "Received" and "Sent". Therefore I have grouped the messages by the "SessionId" Service Bus option, setting the orderId as the sessionId so that it processes the messages in order in case of horizontal scaling.
So far it works perfectly. The problem I have found is when a message in "Pending" or "Received" status fails due to a timeout and goes to the dead letter queue. The message in "Sent" status is processed correctly, and then when the support team re-sends the "Pending" or "Received" message to the queue, it is processed and marks the order with a previous status instead of "Sent".
I can think of several ways to control this, for example that the support team looks at the status of the order before reprocessing the message from the dead letter queue :) but I would like to know if Service Bus offers the possibility that, if there is a message in the dead letter queue, all the messages in the session queue that have the same sessionId go to the dead letter queue as well. Finally, my question is:
Is there a way to configure azure service bus so that if there are any messages in the dead letter queue it sends all messages with the same sessionId to the dead letter queue?
Thank you very much!!!
I would like to know if service bus offers the possibility that if there is a message in the dead letter queue all the messages in the session queue that have the same sessionId go to the dead letter queue.
No, there is no such offering by Service Bus by default.
Is there a way to configure azure service bus so that if there are any messages in the dead letter queue it sends all messages with the same sessionId to the dead letter queue?
Yes, you can do that. You can first peek the messages in your dead-letter queue to fetch all the session ids. Then you can receive the messages in your main queue whose session id is in the DLQ, and move those messages to the DLQ. Here's one way to implement that logic in .NET using the latest version of the Service Bus SDK.
var queueName = "<queue>";
var connectionString = "<connection-string>";

var client = new ServiceBusClient(connectionString);
var sessionIdInDLQList = new List<string>();

// Peek every message in the dead-letter queue and collect the distinct session ids
var receiver = client.CreateReceiver(queueName, new ServiceBusReceiverOptions { SubQueue = SubQueue.DeadLetter });
var message = await receiver.PeekMessageAsync();
while (message != null)
{
    if (!sessionIdInDLQList.Contains(message.SessionId))
        sessionIdInDLQList.Add(message.SessionId);
    message = await receiver.PeekMessageAsync();
}

// For each of those sessions, receive the messages still in the main queue and dead-letter them
foreach (var sessionId in sessionIdInDLQList)
{
    var session = await client.AcceptSessionAsync(queueName, sessionId);
    message = await session.ReceiveMessageAsync(TimeSpan.FromSeconds(20));
    while (message != null)
    {
        await session.DeadLetterMessageAsync(message, "Message with this session is to be dead-lettered!");
        message = await session.ReceiveMessageAsync(TimeSpan.FromSeconds(20));
    }
}
In your case, you need to do this before your consumers start reading the messages; you could put it in your consumer application or in a trigger application such as an Azure Function or worker role. That's up to your method of handling.
You can try this code to read dead-lettered messages from the queue:
public static async Task GetMessage()
{
    string queueName = "myqueue1";
    string connectionString = "Endpoint=sb://xxx.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=xxx";

    var serviceBusClient = new ServiceBusClient(connectionString);
    var receiverOptions = new ServiceBusReceiverOptions { SubQueue = SubQueue.DeadLetter };
    var receiver = serviceBusClient.CreateReceiver(queueName, receiverOptions);

    // Peek up to 10 messages from the dead-letter queue
    var messages = await receiver.PeekMessagesAsync(10);
}
After receiving messages from the dead-letter queue, you can re-send them to the main queue.
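The code above is .NET; if you happen to be doing this from Node.js instead, a rough sketch of the same receive-and-resubmit flow with the @azure/service-bus package could look like the following. The batch size, wait time, and the set of properties copied onto the new message are assumptions you may need to adjust.

const { ServiceBusClient } = require("@azure/service-bus");

async function resubmitDeadLetteredMessages(connectionString, queueName) {
    const client = new ServiceBusClient(connectionString);

    // Receiver pointed at the dead-letter sub-queue of the main queue
    const dlqReceiver = client.createReceiver(queueName, { subQueueType: "deadLetter" });
    const sender = client.createSender(queueName);

    const messages = await dlqReceiver.receiveMessages(10, { maxWaitTimeInMs: 5000 });
    for (const msg of messages) {
        // Re-send a copy to the main queue, then complete (remove) the DLQ copy
        await sender.sendMessages({
            body: msg.body,
            sessionId: msg.sessionId,
            applicationProperties: msg.applicationProperties,
        });
        await dlqReceiver.completeMessage(msg);
    }

    await dlqReceiver.close();
    await sender.close();
    await client.close();
}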
As per the official Microsoft documentation:
There's no automatic cleanup of the DLQ. Messages remain in the DLQ
until you explicitly retrieve them from the DLQ and call Complete() on
the dead-letter message.
The following documents may help you:
Thanks to Casually Coding for the post Read Message from the Dead Letter Queue.
Microsoft documentation: Using Dead-Letter Queues to Handle Message Transfer Failures, Receive Messages from the Dead-letter Queue.

How do I make sure to receive all of my messages with Azure Service Bus Queue?

I created a Service Bus Queue following the tutorial in Microsoft Documentation. I can send and receive messages, however, only half of my messages make it through. Literally half, only the even ones.
I tried changing the message frequency but it doesn't change anything. It doesn't matter if I send a message every 3 seconds or 3 messages per second, I only get half of them on the other end.
I have run the example code in all the possible languages and I have tried using the REST API and batch messaging but no dice.
I also tried using Azure Functions with the specific trigger for Service Bus Queues.
This is the receiving function code:
module.exports = async function(context, mySbMsg) {
    context.log('JavaScript ServiceBus queue trigger function processed message', mySbMsg);
    context.done();
};
And this is the send function code:
module.exports = async function (context, req) {
    context.log('JavaScript HTTP trigger function processed a request.');
    var azure = require('azure-sb');
    var idx = 0;

    function sendMessages(sbService, queueName) {
        var msg = 'Message # ' + (++idx);
        sbService.sendQueueMessage(queueName, msg, function (err) {
            if (err) {
                console.log('Failed Tx: ', err);
            } else {
                console.log('Sent ' + msg);
            }
        });
    }

    var connStr = 'Endpoint=sb://<sbnamespace>.servicebus.windows.net/;SharedAccessKeyName=<keyname>;SharedAccessKey=<key>';
    var queueName = 'MessageQueue';

    context.log('Connecting to ' + connStr + ' queue ' + queueName);

    var sbService = azure.createServiceBusService(connStr);
    sbService.createQueueIfNotExists(queueName, function (err) {
        if (err) {
            console.log('Failed to create queue: ', err);
        } else {
            setInterval(sendMessages.bind(null, sbService, queueName), 2000);
        }
    });
};
I expect to receive most of the sent messages (especially in these conditions of no load at all), but instead I only receive 50%.
My guess is that you are only listening to one of two subscriptions on the topic, and it is set up to split the messages between subscriptions. This functionality is used to split workload across multiple services. You can read about topics here: https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-messaging-overview
and
https://learn.microsoft.com/en-us/azure/service-bus-messaging/topic-filters
Here's a short description from the above links:
"Partitioning uses filters to distribute messages across several existing topic subscriptions in a predictable and mutually exclusive manner. The partitioning pattern is used when a system is scaled out to handle many different contexts in functionally identical compartments that each hold a subset of the overall data; for example, customer profile information. With partitioning, a publisher submits the message into a topic without requiring any knowledge of the partitioning model. The message then is moved to the correct subscription from which it can then be retrieved by the partition's message handler."
To check this, you can see whether your Service Bus has partitioning turned on or any other filters. Turning partitioning off should do the trick in your case, I think.
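If you want to check the subscriptions and their filters programmatically rather than through the portal, a rough sketch with the @azure/service-bus administration client (assuming a topic with subscriptions; the connection string and topic name are placeholders) could look like this:

const { ServiceBusAdministrationClient } = require("@azure/service-bus");

async function listSubscriptionRules(connectionString, topicName) {
    const adminClient = new ServiceBusAdministrationClient(connectionString);

    // Walk every subscription on the topic and print its rules (filters)
    for await (const subscription of adminClient.listSubscriptions(topicName)) {
        console.log("Subscription:", subscription.subscriptionName);
        for await (const rule of adminClient.listRules(topicName, subscription.subscriptionName)) {
            console.log("  Rule:", rule.name, JSON.stringify(rule.filter));
        }
    }
}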

Sending Messages from Leaf Device Downstream device not being handled by IoT Edge running at Transparent Gateway

I have followed all the instructions for setting up a "Downstream Device" to send messages through IoT Edge running as a Transparent Gateway. I believe my routing rules are correct, but my Function module is not receiving any of the messages through the message flow.
These are the instructions I've followed:
https://learn.microsoft.com/en-us/azure/iot-edge/how-to-create-transparent-gateway-linux
I am using 2 Linux VMs (Ubuntu 16.04.5).
The IoT Edge Transparent Gateway VM is configured with all the certs properly set up, configured and verified. I've been able to verify this using the openssl tool:
openssl s_client -connect {my-gateway-machine-name-dns-name}.centralus.cloudapp.azure.com:8883 -CAfile /certs/certs/azure-iot-test-only.root.ca.cert.pem -showcerts
Downstream device running on Linux VM with Certs installed and verified. My connection string is as follows:
HostName={IoTHubName}.azure-devices.net;DeviceId=TC51_EdgeDownStreamDevice01;SharedAccessKey={My-Shared-Access-Key}=GatewayHostName={my-gateway-machine-name-dns-name}.centralus.cloudapp.azure.com
a. I have verified I get a successful verification of the SSL cert using the openssl tool.
b. I'm using the following in my downstream device for my connection using the Node.js SDK:
var client = DeviceClient.fromConnectionString(connectionString, Mqtt);
c. I can see the messages showing up at the Azure IoT Hub in the Cloud, but I can't get my module running on the IoT Edge Transparent Gateway to be hit.
Here are my routing rules configured for the edgeHub as specified in "Routing messages from downstream devices" in the sample doc page.
This is what the example docs show:
{ "routes":{ "sensorToAIInsightsInput1":"FROM /messages/* WHERE NOT IS_DEFINED($connectionModuleId) INTO BrokeredEndpoint(\"/modules/ai_insights/inputs/input1\")", "AIInsightsToIoTHub":"FROM /messages/modules/ai_insights/outputs/output1 INTO $upstream" } }
This is what my routing configuration is set to:
"routes": {
"downstreamBatterySensorToBatteryDataFunctionInput1": "FROM /* WHERE NOT IS_DEFINED($connectionModuleId) INTO BrokeredEndpoint(\"/modules/BatteryDataFunctionModule/inputs/input1\")",
"BatteryDataFunctionModuleToIoTHub": "FROM /messages/modules/BatteryDataFunctionModule/outputs/* INTO $upstream"
}
** Note that I've tried both "FROM /* WHERE NOT IS_DEFINED" and "FROM /messages/* WHERE NOT IS_DEFINED"
My module on IoT Edge is set up as a Function. When I use the out-of-the-box example where the simulator device is another module running on IoT Edge, my function is hit correctly. It's only when I'm trying to use a "Downstream Device" that the module is not being triggered.
I have enabled "Debug Logging for the IoT Edge Service" running on my Transparent Gateway.
This is the basic Run method for the Function module:
#r "Microsoft.Azure.Devices.Client"
#r "Newtonsoft.Json"
using System.IO;
using Microsoft.Azure.Devices.Client;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;
// Filter messages based on the temperature value in the body of the message and the temperature threshold value.
public static async Task Run(Message messageReceived, IAsyncCollector<Message> output, TraceWriter log)
{
How can I figure out how to get my Module running in IoT Edge to be hit/triggered from a Downstream device?
So, you say you are seeing messages show up in IoT Hub, but not in Edge... A couple of things:
you posted this as your connection string in your node app:
HostName={IoTHubName}.azure-devices.net;DeviceId=TC51_EdgeDownStreamDevice01;SharedAccessKey={My-Shared-Access-Key}=GatewayHostName={my-gateway-machine-name-dns-name}.centralus.cloudapp.azure.com
Did you copy/paste this exactly? The reason I ask is that, between the shared access key and the word "GatewayHostName", you have an equals sign and not a semicolon.
it should be:
HostName={IoTHubName}.azure-devices.net;DeviceId=TC51_EdgeDownStreamDevice01;SharedAccessKey={My-Shared-Access-Key};GatewayHostName={my-gateway-machine-name-dns-name}.centralus.cloudapp.azure.com
(note the ';' before GatewayHostName… if you really did have an equals sign there instead of a semicolon, there's no telling what kind of chaos that would cause :-)
Secondly, in your route, you call your module BatteryDataFunctionModule. Just want to make sure that module name is exact, including case, since it's case-sensitive. You probably know that, but I don't want to assume.
Finally, if the two things above check out, can you add an additional debugging route that sends the 'incoming data' to IoT Hub as well:
"FROM /* WHERE NOT IS_DEFINED($connectionModuleId) INTO $upstream"
so we can make sure the messages are actually making it through IoT Edge.
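Put together, the routes section would then look something like the sketch below, based on the routes already posted in the question; the extra route name debugAllIncomingToIoTHub is just a placeholder.

"routes": {
    "downstreamBatterySensorToBatteryDataFunctionInput1": "FROM /* WHERE NOT IS_DEFINED($connectionModuleId) INTO BrokeredEndpoint(\"/modules/BatteryDataFunctionModule/inputs/input1\")",
    "BatteryDataFunctionModuleToIoTHub": "FROM /messages/modules/BatteryDataFunctionModule/outputs/* INTO $upstream",
    "debugAllIncomingToIoTHub": "FROM /* WHERE NOT IS_DEFINED($connectionModuleId) INTO $upstream"
}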
There were 2 problems that needed to be addressed to get the downstream device communicating:
Thanks to @Steve-Busby-Msft: I needed to have a semi-colon (;) at the end of the SharedAccessKey and before the GatewayHostName.
you posted this as your connection string in your node app: HostName={IoTHubName}.azure-devices.net;DeviceId=TC51_EdgeDownStreamDevice01;SharedAccessKey={My-Shared-Access-Key}=GatewayHostName={my-gateway-machine-name-dns-name}.centralus.cloudapp.azure.com
The Node.js downstream device application also has to load the cert correctly at the application level.
Notice the section of code for
var edge_ca_cert_path = '[Path to Edge CA certificate]';
Node JS Downstream Application
'use strict';

var fs = require('fs');
var Protocol = require('azure-iot-device-mqtt').Mqtt;
// Uncomment one of these transports and then change it in fromConnectionString to test other transports
// var Protocol = require('azure-iot-device-http').Http;
// var Protocol = require('azure-iot-device-amqp').Amqp;
var Client = require('azure-iot-device').Client;
var Message = require('azure-iot-device').Message;

// 1) Obtain the connection string for your downstream device and to it
//    append this string GatewayHostName=<edge device hostname>;
// 2) The edge device hostname is the hostname set in the config.yaml of the Edge device
//    to which this sample will connect to.
//
// The resulting string should look like the following
// "HostName=<iothub_host_name>;DeviceId=<device_id>;SharedAccessKey=<device_key>;GatewayHostName=<edge device hostname>"
var connectionString = '[Downstream device IoT Edge connection string]';

// Path to the Edge "owner" root CA certificate
var edge_ca_cert_path = '[Path to Edge CA certificate]';

// fromConnectionString must specify a transport constructor, coming from any transport package.
var client = Client.fromConnectionString(connectionString, Protocol);

// Helper used below to log the outcome of send/complete operations
function printResultFor(op) {
    return function printResult(err, res) {
        if (err) console.log(op + ' error: ' + err.toString());
        if (res) console.log(op + ' status: ' + res.constructor.name);
    };
}

var connectCallback = function (err) {
    if (err) {
        console.error('Could not connect: ' + err.message);
    } else {
        console.log('Client connected');

        client.on('message', function (msg) {
            console.log('Id: ' + msg.messageId + ' Body: ' + msg.data);
            // When using MQTT the following line is a no-op.
            client.complete(msg, printResultFor('completed'));
            // The AMQP and HTTP transports also have the notion of completing, rejecting or abandoning the message.
            // When completing a message, the service that sent the C2D message is notified that the message has been processed.
            // When rejecting a message, the service that sent the C2D message is notified that the message won't be processed by the device. The method to use is client.reject(msg, callback).
            // When abandoning the message, IoT Hub will immediately try to resend it. The method to use is client.abandon(msg, callback).
            // MQTT is simpler: it accepts the message by default, and doesn't support rejecting or abandoning a message.
        });

        // Create a message and send it to the IoT Hub every second
        var sendInterval = setInterval(function () {
            var windSpeed = 10 + (Math.random() * 4);    // range: [10, 14]
            var temperature = 20 + (Math.random() * 10); // range: [20, 30]
            var humidity = 60 + (Math.random() * 20);    // range: [60, 80]
            var data = JSON.stringify({ deviceId: 'myFirstDownstreamDevice', windSpeed: windSpeed, temperature: temperature, humidity: humidity });
            var message = new Message(data);
            message.properties.add('temperatureAlert', (temperature > 28) ? 'true' : 'false');
            console.log('Sending message: ' + message.getData());
            client.sendEvent(message, printResultFor('send'));
        }, 2000);

        client.on('error', function (err) {
            console.error(err.message);
        });

        client.on('disconnect', function () {
            clearInterval(sendInterval);
            client.removeAllListeners();
            client.open(connectCallback);
        });
    }
};

// Provide the Azure IoT device client via setOptions with the X509
// Edge root CA certificate that was used to setup the Edge runtime
var options = {
    ca: fs.readFileSync(edge_ca_cert_path, 'utf-8'),
};

client.setOptions(options, function (err) {
    if (err) {
        console.log('SetOptions Error: ' + err);
    } else {
        client.open(connectCallback);
    }
});

how can I make private chat rooms with sockjs?

I am trying to make a chat system where only two users are able to talk to each other at a time (much like Facebook's chat).
I've tried multiplexing, using MongoDB's _id as the name so every channel is unique.
The problem I'm facing is that I cannot direct a message to a single client connection.
this is the client side code that first sends the message
$scope.sendMessage = function() {
    specificChannel.send(message);
    $scope.messageText = '';
};
this is the server side receiving the message
specificChannel.on('connection', function (conn) {
    conn.on('data', function (message) {
        conn.write(message);
    });
});
When I send a message, to any channel, every channel still receives the message.
How can I make it so that each client only listens to the messages sent to a specific channel?
It appeared that SockJS doesn't support "private" channels. I used the following solution for a similar issue:
var channel_id = 'my-very-private-channel';
var connection = new SockJS('/pubsub', '');

connection.onopen = function () {
    connection.send({'method': 'set-channel', 'data': {'channel': channel_id}});
};
The backend solution is specific to every technology stack, so I can't give a universal solution here. The general idea is the following (a rough Node.js sketch follows the list):
1) Parse the message in "on_message" function to find the requested "method name"
2) If the method is "set-channel" -> set the "self.channel" to this value
3) Broadcast further messages to subscribers with the same channel (I'm using Redis for that, but it also depends on your platform)
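For illustration, here is a rough sketch of that idea with sockjs-node, keeping an in-memory map of channels to connections instead of Redis. It assumes the client sends JSON strings (e.g. wrapping the payload in JSON.stringify), and the message shape and '/pubsub' prefix are simply the ones from the client snippet above.

var http = require('http');
var sockjs = require('sockjs');

var channels = {}; // channel_id -> list of open connections

var sockServer = sockjs.createServer();
sockServer.on('connection', function (conn) {
    var myChannel = null;

    conn.on('data', function (raw) {
        var msg = JSON.parse(raw); // assumes the client sends JSON strings

        if (msg.method === 'set-channel') {
            // Register this connection under the requested channel
            myChannel = msg.data.channel;
            (channels[myChannel] = channels[myChannel] || []).push(conn);
            return;
        }

        // Broadcast any other message only to connections on the same channel
        (channels[myChannel] || []).forEach(function (other) {
            other.write(raw);
        });
    });

    conn.on('close', function () {
        if (myChannel) {
            channels[myChannel] = (channels[myChannel] || []).filter(function (c) { return c !== conn; });
        }
    });
});

var server = http.createServer();
sockServer.installHandlers(server, { prefix: '/pubsub' });
server.listen(9999, '0.0.0.0');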
Hope it helps!

Pusher binding to events regardless of channel

I am attempting to listen to a particular event type regardless of the channel it was triggered in. My understanding of the docs (http://pusher.com/docs/client_api_guide/client_events#bind-events/lang=js) was that I can do so by calling the bind method on the pusher instance rather than on a channel instance. Here is my code:
var pusher = new Pusher('MYSECRETAPPKEY', {'encrypted': true}); // Replace with your app key

var eventName = 'new-comment';
var callback = function(data) {
    // add comment into page
    console.log(data);
};

pusher.bind(eventName, callback);
I then used the Event Creator tool in my account portal to generate an event. I used a random channel name, set the Event to "new-comment" and just put in some random piece of text into the Event Data. But, I am getting nothing appearing in my Console.
I am using https://d3dy5gmtp8yhk7.cloudfront.net/2.1/pusher.min.js, and performing this test in the latest Chrome.
What am I missing?
Thanks!
Shaheeb R.
Pusher will only send events to the client if that client has subscribed to the channel. So, the first thing you need to do is subscribe to the channel. Binding to the event on the client:
pusher.bind('event_name', function(data) {
    // handle update
});
This is also known as "global event binding".
I've tested this using this code and it does work:
http://jsbin.com/AROvEDO/1/edit
For completeness, here's the code:
var pusher = new Pusher('APP_KEY');
var channel = pusher.subscribe('test_channel');

pusher.bind('my_event', function(data) {
    alert(data.message);
});
