Why can my Azure Web PubSub subscriber not receive from my publisher - azure

I am trying to use groups with Azure Web PubSub, but it appears that either my publisher and subscriber are not joining the same group, or my serverless functions are not handling the broadcast after the message is published. The service works if I publish without groups, but once I add groups I can see messages hitting the Live Trace tool on Azure, yet no messages are sent out afterwards. I suspect I am missing something in my Azure Functions, but I am not sure what that would be.
Publisher code:
const { WebPubSubServiceClient } = require('@azure/web-pubsub');

const hub = "simplechat";
let service = new WebPubSubServiceClient("Endpoint=endpointURL", hub);
// by default it uses `application/json`; specify contentType as `text/plain` if you want plain text
const group = service.group("myGroup");
group.sendToAll('Hello World', { contentType: "text/plain" });
Subscriber code:
const WebSocket = require('ws');
const { WebPubSubServiceClient } = require('@azure/web-pubsub');

async function main() {
  const hub = "simplechat";
  let service = new WebPubSubServiceClient("Endpoint=endpointURL", hub);
  const group = service.group("myGroup");
  let token = await service.getClientAccessToken();
  let ws = new WebSocket(token.url, 'json.webpubsub.azure.v1');
  ws.on('open', () => console.log('connected'));
  ws.on('message', data => {
    console.log('Message received: %s', data);
  });
}
main();

I think you missed the part where your subscriber joins the group.
The simplest way is to give the connection a user name and call addUser to add that user's connection to the group once the connection is open:
async function main() {
  const hub = "simplechat";
  let service = new WebPubSubServiceClient("Endpoint=endpointURL", hub);
  const group = service.group("myGroup");
  let token = await service.getClientAccessToken({ userId: "user1" });
  // with this approach the WebSocket does not actually need the 'json.webpubsub.azure.v1'
  // subprotocol; a simple WebSocket connection also works
  let ws = new WebSocket(token.url, 'json.webpubsub.azure.v1');
  ws.on('open', () => {
    console.log('connected');
    group.addUser("user1");
  });
  ws.on('message', data => {
    console.log('Message received: %s', data);
  });
}
Or you can wait until the Connected response is received, read the connectionId of the connection from it, and call addConnection to add the subscriber to the group.
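For example, with the json.webpubsub.azure.v1 subprotocol the service sends a system event once the connection is established; a minimal sketch of that approach, reusing the ws and group objects from above (the event shape follows the subprotocol docs):

ws.on('message', data => {
  const msg = JSON.parse(data);
  // the service sends { type: 'system', event: 'connected', connectionId: ... }
  // when the connection is established
  if (msg.type === 'system' && msg.event === 'connected') {
    // server-side call that adds this specific connection to the group
    group.addConnection(msg.connectionId);
  }
});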
Another way, since you are already using the json.webpubsub.azure.v1 subprotocol, is for your subscriber to send a joinGroup request itself:
async function main() {
  const hub = "simplechat";
  let service = new WebPubSubServiceClient("Endpoint=endpointURL", hub);
  // make sure you set the joinLeaveGroup role for the group
  let token = await service.getClientAccessToken({
    roles: ['webpubsub.joinLeaveGroup.myGroup']
  });
  let ws = new WebSocket(token.url, 'json.webpubsub.azure.v1');
  let ackId = 0;
  ws.on('open', () => {
    console.log('connected');
    ws.send(JSON.stringify({
      type: 'joinGroup',
      group: 'myGroup',
      ackId: ++ackId,
    }));
  });
  ws.on('message', data => {
    console.log('Message received: %s', data);
  });
}
When your subscriber receives the AckMessage for this joinGroup action, it has successfully joined the group.
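For reference, the ack the service sends back over the subprotocol looks roughly like this (a success case; on failure, success is false and an error object is included):

{
  "type": "ack",
  "ackId": 1,
  "success": true
}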

Related

How to listen to an Azure storage queue with Node.js?

An Azure storage queue does not push messages to us automatically when something is added to the queue; we have to write a custom listener in order to fetch messages from the queue.
import {
  QueueServiceClient,
} from "@azure/storage-queue";

// Create a QueueServiceClient object
const queueServiceClient = QueueServiceClient.fromConnectionString("AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;DefaultEndpointsProtocol=http;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;QueueEndpoint=http://127.0.0.1:10001/devstoreaccount1;TableEndpoint=http://127.0.0.1:10002/devstoreaccount1;");

// Create a new queue
const queueName = "test-local-queue";
const queueClient = queueServiceClient.getQueueClient(queueName);
queueClient.create().then(() => {
  console.log(`Queue "${queueName}" was created successfully`);
});

// Add a message to the queue
const message = "Hello, world!";
queueClient.sendMessage(message).then(() => {
  console.log(`Message "${message}" was added to the queue`);
});

// Set the number of messages to retrieve (up to 32)
const maxMessages = 10;
// Set the visibility timeout (in seconds) for the messages
const visibilityTimeout = 60;

// Receive messages from the queue (the SDK option is named numberOfMessages)
queueClient
  .receiveMessages({ numberOfMessages: maxMessages, visibilityTimeout })
  .then(response => {
    const messages = response.receivedMessageItems;
    console.log(`Received ${messages.length} messages from the queue`);
    // Process the messages
    messages.forEach(message => {
      console.log(message);
      // Do something with the message...
    });
  })
  .catch(error => {
    console.error(`Failed to receive messages from the queue: ${error}`);
  });
Does Azure Service Bus solve the above-stated problem?
Instead of the receiveMessages function, try peekMessages.
By default it peeks at a single message, but you can configure it the way you want; here I have configured it to peek 3 messages.
const peekedMessages = await queueClient.peekMessages({ numberOfMessages: 3 });
Now peekedMessages contains an array called peekedMessageItems, which holds the peeked messages.
Here I have simply iterated over that array and read the messages using the messageText property.
Complete Code:
const { QueueClient, QueueServiceClient } = require("@azure/storage-queue");

const queueServiceClient = QueueServiceClient.fromConnectionString(conn);
const queueClient = queueServiceClient.getQueueClient(qname);

const peekedMessages = await queueClient.peekMessages({ numberOfMessages: 3 });
for (let i = 0; i < peekedMessages.peekedMessageItems.length; i++) {
  // Display the peeked message
  console.log("Peeked message: ", peekedMessages.peekedMessageItems[i].messageText);
}
Also, as the name suggests, this will only peek at the messages, not dequeue them.
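If you need a continuous listener that actually consumes messages rather than peeking at them, a minimal polling sketch (assuming the queueClient from above; the handler name and interval are placeholders):

// poll every few seconds, process each message, then delete it
// so that it is not redelivered after the visibility timeout
async function pollQueue(queueClient, handleMessage, intervalMs = 5000) {
  setInterval(async () => {
    const response = await queueClient.receiveMessages({ numberOfMessages: 10 });
    for (const msg of response.receivedMessageItems) {
      await handleMessage(msg.messageText);
      // deletion requires the messageId and popReceipt returned by the receive call
      await queueClient.deleteMessage(msg.messageId, msg.popReceipt);
    }
  }, intervalMs);
}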

GCP-provided code snippets to both subscribe and publish MQTT in the same app don't work

In my Node.js app, I can successfully publish telemetry/state topics or subscribe to config/command topics, but I can't do both at once.
Both Node.js code snippets that appear below are from
https://cloud.google.com/iot/docs/how-tos/mqtt-bridge
The subscribe code is as follows -
// const deviceId = `myDevice`;
// const registryId = `myRegistry`;
// const region = `us-central1`;
// const algorithm = `RS256`;
// const privateKeyFile = `./rsa_private.pem`;
// const serverCertFile = `./roots.pem`;
// const mqttBridgeHostname = `mqtt.googleapis.com`;
// const mqttBridgePort = 8883;
// const messageType = `events`;
// const numMessages = 5;
// The mqttClientId is a unique string that identifies this device. For Google
// Cloud IoT Core, it must be in the format below.
const mqttClientId = `projects/${projectId}/locations/${region}/registries/${registryId}/devices/${deviceId}`;
// With Google Cloud IoT Core, the username field is ignored, however it must be
// non-empty. The password field is used to transmit a JWT to authorize the
// device. The "mqtts" protocol causes the library to connect using SSL, which
// is required for Cloud IoT Core.
const connectionArgs = {
  host: mqttBridgeHostname,
  port: mqttBridgePort,
  clientId: mqttClientId,
  username: 'unused',
  password: createJwt(projectId, privateKeyFile, algorithm),
  protocol: 'mqtts',
  secureProtocol: 'TLSv1_2_method',
  ca: [readFileSync(serverCertFile)],
};
// Create a client, and connect to the Google MQTT bridge.
const iatTime = parseInt(Date.now() / 1000);
const client = mqtt.connect(connectionArgs);
// Subscribe to the /devices/{device-id}/config topic to receive config updates.
// Config updates are recommended to use QoS 1 (at least once delivery)
client.subscribe(`/devices/${deviceId}/config`, {qos: 1});
// Subscribe to the /devices/{device-id}/commands/# topic to receive all
// commands or to the /devices/{device-id}/commands/<subfolder> to just receive
// messages published to a specific commands folder; we recommend you use
// QoS 0 (at most once delivery)
client.subscribe(`/devices/${deviceId}/commands/#`, {qos: 0});
// The MQTT topic that this device will publish data to. The MQTT topic name is
// required to be in the format below. The topic name must end in 'state' to
// publish state and 'events' to publish telemetry. Note that this is not the
// same as the device registry's Cloud Pub/Sub topic.
const mqttTopic = `/devices/${deviceId}/${messageType}`;
client.on('connect', success => {
  console.log('connect');
  if (!success) {
    console.log('Client not connected...');
  } else if (!publishChainInProgress) {
    publishAsync(mqttTopic, client, iatTime, 1, numMessages, connectionArgs);
  }
});
client.on('close', () => {
  console.log('close');
  shouldBackoff = true;
});
client.on('error', err => {
  console.log('error', err);
});
client.on('message', (topic, message) => {
  let messageStr = 'Message received: ';
  if (topic === `/devices/${deviceId}/config`) {
    messageStr = 'Config message received: ';
  } else if (topic.startsWith(`/devices/${deviceId}/commands`)) {
    messageStr = 'Command message received: ';
  }
  messageStr += Buffer.from(message, 'base64').toString('ascii');
  console.log(messageStr);
});
client.on('packetsend', () => {
  // Note: logging packet send is very verbose
});
// Once all of the messages have been published, the connection to Google Cloud
// IoT will be closed and the process will exit. See the publishAsync method.
and the publish code is -
const publishAsync = (
  mqttTopic,
  client,
  iatTime,
  messagesSent,
  numMessages,
  connectionArgs
) => {
  // If we have published enough messages or backed off too many times, stop.
  if (messagesSent > numMessages || backoffTime >= MAXIMUM_BACKOFF_TIME) {
    if (backoffTime >= MAXIMUM_BACKOFF_TIME) {
      console.log('Backoff time is too high. Giving up.');
    }
    console.log('Closing connection to MQTT. Goodbye!');
    client.end();
    publishChainInProgress = false;
    return;
  }
  // Publish and schedule the next publish.
  publishChainInProgress = true;
  let publishDelayMs = 0;
  if (shouldBackoff) {
    publishDelayMs = 1000 * (backoffTime + Math.random());
    backoffTime *= 2;
    console.log(`Backing off for ${publishDelayMs}ms before publishing.`);
  }
  setTimeout(() => {
    const payload = `${argv.registryId}/${argv.deviceId}-payload-${messagesSent}`;
    // Publish "payload" to the MQTT topic. qos=1 means at least once delivery.
    // Cloud IoT Core also supports qos=0 for at most once delivery.
    console.log('Publishing message:', payload);
    client.publish(mqttTopic, payload, {qos: 1}, err => {
      if (!err) {
        shouldBackoff = false;
        backoffTime = MINIMUM_BACKOFF_TIME;
      }
    });
    const schedulePublishDelayMs = argv.messageType === 'events' ? 1000 : 2000;
    setTimeout(() => {
      const secsFromIssue = parseInt(Date.now() / 1000) - iatTime;
      if (secsFromIssue > argv.tokenExpMins * 60) {
        iatTime = parseInt(Date.now() / 1000);
        console.log(`\tRefreshing token after ${secsFromIssue} seconds.`);
        client.end();
        connectionArgs.password = createJwt(
          argv.projectId,
          argv.privateKeyFile,
          argv.algorithm
        );
        connectionArgs.protocolId = 'MQTT';
        connectionArgs.protocolVersion = 4;
        connectionArgs.clean = true;
        client = mqtt.connect(connectionArgs);
        client.on('connect', success => {
          console.log('connect');
          if (!success) {
            console.log('Client not connected...');
          } else if (!publishChainInProgress) {
            publishAsync(
              mqttTopic,
              client,
              iatTime,
              messagesSent,
              numMessages,
              connectionArgs
            );
          }
        });
        client.on('close', () => {
          console.log('close');
          shouldBackoff = true;
        });
        client.on('error', err => {
          console.log('error', err);
        });
        client.on('message', (topic, message) => {
          console.log(
            'message received: ',
            Buffer.from(message, 'base64').toString('ascii')
          );
        });
        client.on('packetsend', () => {
          // Note: logging packet send is very verbose
        });
      }
      publishAsync(
        mqttTopic,
        client,
        iatTime,
        messagesSent + 1,
        numMessages,
        connectionArgs
      );
    }, schedulePublishDelayMs);
  }, publishDelayMs);
};
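Both snippets also call a createJwt helper that is not shown in the question; a minimal sketch of it, assuming the jsonwebtoken package as used in the Cloud IoT docs:

const jwt = require('jsonwebtoken');
const { readFileSync } = require('fs');

// sign a short-lived JWT with the device's private key;
// `aud` must be the GCP project id
const createJwt = (projectId, privateKeyFile, algorithm) => {
  const token = {
    iat: parseInt(Date.now() / 1000),
    exp: parseInt(Date.now() / 1000) + 20 * 60, // token lives 20 minutes
    aud: projectId,
  };
  const privateKey = readFileSync(privateKeyFile);
  return jwt.sign(token, privateKey, { algorithm });
};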
I am wondering if anyone has gotten their Node.js app to both successfully publish and subscribe with Google Cloud. If so, what might I be missing?

Google Pub/Sub pull method restarts the Express server every 1 min

I am using Pub/Sub to pull messages when someone buys a subscription from Google Play.
const { PubSub } = require('@google-cloud/pubsub');
const grpc = require('grpc');

// Instantiates a client
const pubSubClient = new PubSub({ grpc });

const pubsub = () => {
  const projectId = process.env.GOOGLE_PUB_SUB_PROJECT_ID; // Your Google Cloud Platform project ID
  const subscriptionName = process.env.GOOGLE_PUB_SUB_SUBSCRIBER_NAME; // Name of our subscription
  const timeout = 60;
  const maxInProgress = 10;
  const subscriberOptions = {
    flowControl: {
      maxMessages: maxInProgress,
    },
  };
  // Get our created subscription
  const subscriptionPub = pubSubClient.subscription(subscriptionName, subscriberOptions);
  console.log(`subscription ${subscriptionPub.name} found.`);

  // Create an event handler to handle messages
  let messageCount = 0;
  const messageHandler = message => {
    console.log(`Received message: ${message.id}`);
    console.log(`Data: ${message.data}`);
    console.log(`Attributes: ${JSON.stringify(message.attributes)}`);
    // todo: you can update your backend here using the purchase token
    messageCount += 1;
    // "Ack" (acknowledge receipt of) the message
    message.ack();
  };

  // Create an event handler to handle errors
  const errorHandler = function (error) {
    console.log(`GOOGLE PUB SUB ERROR: ${error}`);
    throw error;
  };

  // Listen for new messages/errors until the timeout is hit
  subscriptionPub.on('message', messageHandler);
  subscriptionPub.on('error', errorHandler);

  setTimeout(() => {
    subscriptionPub.removeListener('message', messageHandler);
    subscriptionPub.removeListener('error', errorHandler);
    console.log(`${messageCount} message(s) received.`);
  }, timeout * 1000);
};

module.exports = pubsub;
The above file is called from the main.js file, and every 1 min I receive the log subscription ${subscriptionPub.name} found.
Also, I have commented out the setTimeout code as of now, but I want to understand why removeListener is important and why the listener should be removed every minute.

Azure Service Bus topic: why do messages seem to be eaten by receivers?

I have a Service Bus topic that ingests telemetry messages coming from one of my servers ('telemetry-topic') at a rate of approximately 1 message every 10 seconds.
I have a web (site) front that subscribes to the 'telemetry-subscription' consuming the 'telemetry-topic'.
What I witness is that the more fronts I run, the lower the message frequency each one receives. It sounds like my topic acts as a queue…
What I should see instead is that every front, no matter how many of them are running, sustains the approximately 1 message every 10 seconds.
Note: my stack is Node.
The receiver:
async function serviceBusTelemetrySubscribe(props) {
  const module = '🚌 Service bus telemetry'
  const {enter, leave, log, warn, error} = SDK.logFunctions(module, {
    verboseLevel: 1,
    verboseEnterLeave: true,
  })
  try {
    enter()
    log('Subscribing with given processMessage', props.processMessage, 'processError', props.processError)
    const {processMessage, processError} = props
    // connection string to your Service Bus namespace
    const connectionString = 'XXXX'
    // == Topic & subscription names
    const topicName = 'telemetry-topic'
    const subscriptionName = 'telemetry'
    // == Create a Service Bus client using the connection string to the Service Bus namespace
    const sbClient = new ServiceBusClient(connectionString)
    log('Service bus client', sbClient)
    // == Create a receiver for the "all telemetry" subscription.
    const receiver = sbClient.createReceiver(topicName, subscriptionName)
    log('Service bus receiver for subscription name "', subscriptionName, '" is:', receiver)
    // == Function to handle messages; provide a default one
    const myMessageHandler = processMessage || (async (messageReceived) => {
      log(`received message: ${messageReceived.body}`)
    })
    // == Function to handle any errors; provide a default one
    // (the parameter must not shadow the `error` log function above)
    const myErrorHandler = processError || (async (err) => {
      error(err)
    })
    log('Subscribing with actual processMessage', myMessageHandler, 'processError', myErrorHandler)
    // == Subscribe and specify the message and error handlers
    const sbSubscription = receiver.subscribe({
      processMessage: myMessageHandler,
      processError: myErrorHandler
    })
    log('Service bus subscription', sbSubscription)
    // == Return cleanup method
    return async () => {
      log('Closing Service bus…')
      // Wait long enough before closing for in-flight messages to settle
      await delay(5000)
      log('Closing Service bus subscription…')
      await sbSubscription.close()
      log('Closing Service bus receiver…')
      await receiver.close()
      log('Closing Service bus client…')
      await sbClient.close()
    }
  }
  catch(err) {
    error(err)
  }
  finally {
    leave()
  }
}

Writing a unit test case in Node.js using Mocha to mock an Azure Service Bus queue to receive messages

I have written a unit test case, but it is giving an error.
Please find the code below.
index.js
const { ServiceBusClient, ReceiveMode } = require("@azure/service-bus");

module.exports = async function (context, myTimer) {
  // Define connection string and related Service Bus entity names here
  const connectionString = process.env['serviceBusConnectionString'];
  const queueName = process.env['serviceBusQueueName'];
  const sbClient = ServiceBusClient.createFromConnectionString(connectionString);
  const queueClient = sbClient.createQueueClient(queueName);
  //const receiver = queueClient.createReceiver(ReceiveMode.receiveAndDelete);
  const receiver = queueClient.createReceiver(ReceiveMode.peekLock);
  const messages = await receiver.receiveMessages(1);
  try {
    let payloads = [];
    messages.forEach((msg) => {
      payloads.push(msg.body);
    })
    await queueClient.close();
  } catch (err) {
    context.log('Queue message status settle: abandon');
    await messages[0].abandon();
    console.log('Error ', err);
  } finally {
    await sbClient.close();
    context.done();
  }
};
This is the unit test file, and I am getting an error. Please let me know why I am getting this error.
indexTest.js:
beforeEach(() => {
  const sbClientStub = {
    createQueueClient: sinon.stub().returnsThis(),
    createReceiver: sinon.stub().returnsThis(),
    receiveMessages: sinon.stub(),
    close: sinon.stub(),
  };
  sinon.stub(ServiceBusClient, 'createFromConnectionString').callsFake(() => sbClientStub);
  const ctx = {};
  // const actual = await pushToQueue(message, ctx);
  // sinon.assert.match(actual, 2);
  sinon.assert.calledWithExactly(ServiceBusClient.createFromConnectionString, undefined);
  sinon.assert.calledWithExactly(sbClientStub.createQueueClient, undefined);
  sinon.assert.calledOnce(sbClientStub.createReceiver, undefined);
  //sinon.assert.calledWithExactly(sbClientStub.send.firstCall, { body: 'a' });
  //sinon.assert.calledWithExactly(sbClientStub.send.secondCall, { body: 'b' });
  sinon.assert.calledTwice(sbClientStub.close);
});
You should replace every sinon.stub() with sinon.spy(). A stub prevents the original method implementation from being called, whereas a spy calls through to it; otherwise they have essentially the same API.
In order to call the original methods of @azure/service-bus, make sure the resources it needs are ready: the environment variables, service account, queue, and so on.
If you do this, however, the tests are no longer isolated. In fact, they are no longer unit tests but integration tests, or e2e tests.
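A minimal sketch of the spy approach (the ../index path and the test body are placeholders, and it assumes the Service Bus resources above actually exist):

const sinon = require('sinon');
const { ServiceBusClient } = require('@azure/service-bus');
const run = require('../index'); // hypothetical path to the function under test

it('receives messages from the queue', async () => {
  // a spy wraps the real method: the original still runs, but calls are recorded
  const createSpy = sinon.spy(ServiceBusClient, 'createFromConnectionString');
  const ctx = { log: console.log, done: () => {} };

  await run(ctx, {});

  sinon.assert.calledOnce(createSpy);
  sinon.assert.calledWithExactly(createSpy, process.env['serviceBusConnectionString']);
  createSpy.restore();
});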
