MQTT QoS automated testing with the Aedes broker on Node.js

I want to test the QoS of the MQTT protocol. I use aedes as my broker.
For testing QoS 2, one of the test cases I have to evaluate is a scenario where, once a message is published by the client, the broker forwards it to the subscriber, who receives the message, but the acknowledgement (TCP) and/or the PUBREC (MQTT) are not received by the broker. By running such a case, I check that the message is not received twice by the subscriber.
So my plan is: once a message is published by the client, run aedes.authorizePublish broker-side, sleep for a couple of seconds, and in the meantime block the IP of the subscriber on the port used (1883).
I have tried:
netsh advfirewall firewall add rule name="IP Block" dir=in
protocol=TCP interface=any action=block remoteip=172.30.10.120
localport=1883
So I expect that the broker will publish the message to 172.30.10.120 (the subscriber) and that no packets from the subscriber will be received by the broker until I revert the netsh command with:
netsh advfirewall firewall delete rule name="IP Block" remoteip=172.30.10.120
This is the code that runs the aedes broker:
mqttBroker.authorizePublish = function (client, packet, callback) {
  if (packet.topic === 'hello' && client.id === '01B4C') {
    const delayed = packet;
    setTimeout(function () {
      // re-publish the held packet after 10 s
      mqttBroker.publish(delayed, function () {});
    }, 10000);
  }
  // aedes expects the callback to be invoked to authorize the publish
  callback(null);
};
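On the subscriber side, the "not received twice" check can be reduced to counting deliveries per MQTT message id. This is a hypothetical helper (not part of aedes) that a test could feed from the subscriber's message handler:

```javascript
// Duplicate detector for the QoS 2 test: count deliveries per message id
// and report any id that was delivered more than once.
function makeDuplicateDetector() {
  const seen = new Map();
  return {
    // record one delivery; returns how many times this id has been seen
    record(messageId) {
      const n = (seen.get(messageId) || 0) + 1;
      seen.set(messageId, n);
      return n;
    },
    // ids delivered more than once (should be empty for a passing QoS 2 test)
    duplicates() {
      return [...seen.entries()].filter(([, n]) => n > 1).map(([id]) => id);
    }
  };
}
```

After the blocked window ends, the test would assert that `duplicates()` is empty.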
So I expected to get packets like this:
[Screenshot: packets captured when I remove the Ethernet cable from the subscribed client]
But the packets in the image above are the ones I get when I simply unplug the Ethernet cable from my subscribed client and plug it back in.
The packets I get from the procedure with the netsh commands are:
[Screenshot: packets captured when I block the subscriber's IP in the direction toward the broker only]
My questions are:
Is this a good way to test MQTT QoS?
How can I simulate this scenario in an automated way (no cables removed, of course), blocking one direction of the connection?
Thanks in advance
P.S. Beginner programmer..
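One way to automate the blocking step is to drive netsh from the Node.js test harness itself. A sketch, with the rule name, IP, and port taken from the question; the execSync calls are commented out because they require Windows and administrator rights:

```javascript
// Build the netsh commands used in the question so the test can toggle
// the one-way block without pulling cables.
// const { execSync } = require('child_process');

function blockRuleCmd(ip, port, name) {
  name = name || 'IP Block';
  return 'netsh advfirewall firewall add rule name="' + name + '" dir=in ' +
         'protocol=TCP action=block remoteip=' + ip + ' localport=' + port;
}

function unblockRuleCmd(ip, name) {
  name = name || 'IP Block';
  return 'netsh advfirewall firewall delete rule name="' + name + '" remoteip=' + ip;
}

// Usage inside the QoS 2 test (Windows only, needs admin rights):
// execSync(blockRuleCmd('172.30.10.120', 1883));
// ...wait out the broker's retry window...
// execSync(unblockRuleCmd('172.30.10.120'));
```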

Related

Apache Pulsar Client - Broker notification of Closed consumer - how to resume data feed?

TL;DR: using the Python client library to subscribe to a Pulsar topic. Logs show "Broker notification of Closed consumer" when something happens server-side. The subscription appears to be re-established according to the logs, but we find later that the backlog was growing on the cluster because no messages were being sent to our subscription to consume.
We are running into an issue where an Apache Pulsar cluster we use, which is opaque to us and has a namespace defined where we publish/consume topics, is losing its connection with our consumer.
We have a python client consuming from a topic (with one Pulsar Client subscription per thread).
We have run into an issue where, due to an issue on the pulsar cluster, we see the following entry in our client logs:
"Broker notification of Closed consumer"
followed by:
"Created connection for pulsar://houpulsar05.mycompany.com:6650"
....for every thread in our agent.
Then we see the usual periodic log entries like this:
{"log":"2022-09-01 04:23:30.269 INFO [139640375858944] ConsumerStatsImpl:63 | Consumer [persistent://tenant/namespace/topicname, subscription-name, 0] , ConsumerStatsImpl (numBytesRecieved_ = 0, totalNumBytesRecieved_ = 6545742, receivedMsgMap_ = {}, ackedMsgMap_ = {}, totalReceivedMsgMap_ = {[Key: Ok, Value: 3294], }, totalAckedMsgMap_ = {[Key: {Result: Ok, ackType: 0}, Value: 3294], })\n","stream":"stdout","time":"2022-09-01T04:23:30.270009746Z"}
This gives the appearance that some connection has been re-established to some other broker.
However, we do not get any messages being consumed. We have an alert on a Grafana dashboard which shows us the backlog on topics and subscriptions. Eventually it hits a count or rate threshold which alerts us that there is a problem. When we restart our agent, the subscription is re-established and the backlog can immediately be seen heading to 0.
Has anyone experienced such an issue?
Our code is typical:
consumer = client.subscribe(
    topic='my-topic',
    subscription_name='my-subscription',
    consumer_type=my_consumer_type,
    consumer_name=my_agent_name
)
while True:
    msg = consumer.receive()
    ex = msg.value()
I haven't yet found a readily available way (docker-compose or anything else) to run a multi-cluster Pulsar installation locally on Docker Desktop so I can try killing off a broker and see how the consumer reacts.
Currently the Python client only supports configuring one broker's address and doesn't support retry for lookups yet. Here are two related PRs to support it:
https://github.com/apache/pulsar/pull/17162
https://github.com/apache/pulsar/pull/17410
Therefore, setting up a multi-node cluster might behave no differently from a standalone.
If you only specified one broker in the service URL, you can simply test with a standalone. Run a consumer and a producer sending messages periodically, then restart the standalone. The "Broker notification of Closed consumer" appears when the broker actively closes the connection, e.g. when your consumer has sent a SEEK command (via a seek call); the broker then disconnects the consumer and the log line appears.
BTW, it's better to show your Python client version. And GitHub issues might be a better place to track the issue.

JmsOutboundGateway how can we set alter replyTo when sending out bound message so it is not same as replyDestination?

We have a MQ request/reply pattern implementation
Here we use a IBM MQ Cluster host. Here the request/reply queues on both sides are linked to each other by the MQ cluster environment, as the queue managers of different systems within the cluster talks to each other.
Our Requestor code uses Spring JMS Integration - JmsOutboundGateway to send and receive message
The service provider is a Mainframe application which we have no control.
public class JmsOutboundGatewayConfig {

    @Bean
    public MessageChannel outboundRequestChannel() {
        return new DirectChannel();
    }

    @Bean
    public QueueChannel outboundResponseChannel() {
        return new QueueChannel();
    }

    @Bean
    @ServiceActivator(inputChannel = "outboundRequestChannel")
    public JmsOutboundGateway jmsTestOutboundGateway(ConnectionFactory connectionFactory) {
        JmsOutboundGateway gateway = new JmsOutboundGateway();
        gateway.setConnectionFactory(connectionFactory);
        gateway.setRequestDestinationName("REQUEST.ALIAS.CLUSTER.QUEUE");
        gateway.setReplyDestinationName("REPLY.ALIAS.CLUSTER.QUEUE");
        gateway.setReplyChannel(outboundResponseChannel());
        gateway.setRequiresReply(true);
        return gateway;
    }
}
// Requestor - sendAndReceive code
outboundRequestChannel.send(new GenericMessage<>("payload"));
Message<?> response = outboundResponseChannel.receive(10000);
Issue:
The issue we are facing: when we send a message, the gateway also passes replyTo = queue://REPLY.ALIAS.CLUSTER.QUEUE.
Now the mainframe program that consumes this message is forced to reply to the replyTo queue. It fails on the mainframe side because the replyTo queue we send is not part of their MQ manager/environment.
I could not find a way to remove the replyTo when sending the message, as JmsOutboundGateway sets it from the "ReplyDestinationName" I configured.
Our requestor needs to set the "ReplyDestinationName", because we listen on this alias cluster reply queue for the reply.
I looked at the channel interceptor options; I could only intercept the message to alter it, but found no option to change the replyTo.
Is there a way to alter the replyTo, i.e. make replyTo and ReplyDestination different?
Is there any way to remove or not set the replyTo when sending a message to the request queue?
Just wondering how to get this working in such an MQ cluster environment, where the replyTo queue has to stay what the mainframe consumer service wants, which is different from the replyDestination queue we use.
Considering that the replyTo is used by the mainframe service to reply: if it is not passed, the mainframe service will use its own reply queue, which is linked to our reply cluster alias queue.
Any inputs appreciated.
Thanks
Saishm
Further clarification:
In the cluster MQ environment we have, our Spring JMS outbound gateway writes the request to "REQUEST.ALIAS.CLUSTER.QUEUE" and listens for the reply on "REPLY.ALIAS.CLUSTER.QUEUE".
So the JmsOutboundGateway sets replyTo=REPLY.ALIAS.CLUSTER.QUEUE.
Now the mainframe service on the other side reads the message from "REQUEST.LOCAL.QUEUE". In the cluster environment, "REQUEST.ALIAS.CLUSTER.QUEUE" and its queue manager are linked to "REQUEST.LOCAL.QUEUE" and its queue manager; this is all managed within the MQ cluster.
When the mainframe service consumes the request, it sees that the incoming message had a replyTo and tries to send the response there.
The issue is that the mainframe was supposed to reply to "REPLY.LOCAL.QUEUE", which is linked to "REPLY.ALIAS.CLUSTER.QUEUE".
If there were no replyTo, it would have sent the reply to "REPLY.LOCAL.QUEUE".
From the JmsOutboundGateway I don't have any option to remove the replyTo when sending the message, or to change it to "REPLY.LOCAL.QUEUE" while still listening for the reply on "REPLY.ALIAS.CLUSTER.QUEUE".
It doesn't look like a JmsOutboundGateway fits your IBM MQ cluster configuration and requirements: you just cannot use the replyTo feature, since we cannot bypass the JMS protocol here.
Consider using a pair of JmsSendingMessageHandler and JmsMessageDrivenEndpoint components instead. Your JmsSendingMessageHandler will just send a JMS message to REQUEST.ALIAS.CLUSTER.QUEUE and forget about it. All you need is to supply a JmsHeaders.CORRELATION_ID. The JmsHeaderMapper will populate the jmsMessage.setJMSCorrelationID() property, the same way JmsOutboundGateway does by default. In this case your mainframe service is free to use its own reply queue, and I guess it will carry your correlationId over correctly.
A JmsMessageDrivenEndpoint has to be subscribed to REPLY.ALIAS.CLUSTER.QUEUE, and you do the correlation between request and reply yourself, for example with a Map of correlationId to a Future that is fulfilled when the reply comes back.
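The correlation idea above is language-agnostic; here is a minimal sketch of it in Node.js (promises playing the role of the Java Futures; all names are illustrative, not part of Spring Integration):

```javascript
// Pending request registry: correlationId -> { resolve, timer }.
const pending = new Map();

// Called by the sending side right after publishing a request.
// Resolves when the matching reply arrives, rejects on timeout.
function awaitReply(correlationId, timeoutMs) {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => {
      pending.delete(correlationId);
      reject(new Error('reply timeout for ' + correlationId));
    }, timeoutMs);
    pending.set(correlationId, { resolve, timer });
  });
}

// Called by the reply-queue listener for each incoming message.
// Returns true if the reply matched a pending request.
function onReply(correlationId, payload) {
  const entry = pending.get(correlationId);
  if (!entry) return false; // unknown or late reply, drop it
  clearTimeout(entry.timer);
  pending.delete(correlationId);
  entry.resolve(payload);
  return true;
}
```

In the Java setup described above, the same structure would be a ConcurrentHashMap of correlationId to CompletableFuture, completed inside the JmsMessageDrivenEndpoint's listener.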

Send MQTT message from the android client to all the clients through MQTT broker

I need the MQTT broker to publish the MQTT message received from the Android client to all other clients, so I added a mosquitto_pub command in the body of the message.
publish(client,"mosquitto_pub -h 192.34.63.138 -t fromApp -m "Turn" -d ");
It gives the errors "Cannot resolve symbol 'Turn'" and "';' or ')' expected".
Update
I understood it correctly later. I actually needed an MQTT message from the Android client to be sent to all other clients, so I thought to include a publish command in the message body, which was quite wrong. MQTT itself delivers received messages to all clients, provided the clients have subscribed to that topic. Hopefully this will help other readers.
There are a number of problems with your approach.
First, the compile-time error is because you have nested " characters within a String (which is itself delimited by "). You need to escape them with \ as follows:
"mosquitto_pub -h 192.34.63.138 -t fromApp -m \"Turn\" -d "
The second problem is the more important one. MQTT doesn't work the way you seem to be expecting it to.
You do not send commands to the broker for it to execute; you publish a message from one client to a topic, and the broker then delivers that message to all clients that have subscribed to that topic. So in this case you would just publish a message with the payload Turn to the topic fromApp, which is going to look something like:
MqttMessage message = new MqttMessage("Turn".getBytes());
sampleClient.publish("fromApp", message);

nodejs rhea npm for amqp couldn't create subscription queue on address in activemq artemis

I have an address "pubsub.foo" already configured as multicast in broker.xml.
<address name="pubsub.foo">
<multicast/>
</address>
As per the Artemis documentation:
When clients connect to an address with the multicast element, a subscription queue for the client will be automatically created for the client.
I am creating a simple utility using rhea AMQP Node.js npm to publish messages to the address.
var connection = require('rhea').connect({ port: args.port, host: args.host, username: 'admin', password: 'xxxx' });
var sender = connection.open_sender('pubsub.foo');
sender.on('sendable', function (context) {
    var m = 'Hii test';
    console.log('sent ' + m);
    sender.send({ body: m });
    connection.close();
});
I enabled debug log and while running the client code I see the message like this.
2020-02-03 22:43:25,071 DEBUG [org.apache.activemq.artemis.core.postoffice.impl.PostOfficeImpl] Message org.apache.activemq.artemis.protocol.amqp.broker.AMQPMessage#68933e4b is not going anywhere as it didn't have a binding on address:pubsub.foo
I also tried different variations of the topic, for example client1.pubsub.foo and pubsub.foo::client1, but no luck from the client code. Please share your thoughts; I am new to ActiveMQ Artemis.
What you're observing actually is the expected behavior.
Unfortunately, the documentation you cited isn't as clear as it could be. When it says a subscription queue will be created in response to a client connecting it really means a subscriber not a producer. That's why it creates a subscription queue. The semantics for a multicast address (and publish/subscribe in general) dictate that a message sent when there are no subscribers will be dropped. Therefore, you need to create a subscriber and then send a message.
If you want different semantics then I recommend you use anycast.
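For comparison with the multicast snippet from the question, an anycast variant of the broker.xml address might look like this (queue name chosen to match the address, as the Artemis documentation's examples do):

```xml
<address name="pubsub.foo">
   <anycast>
      <queue name="pubsub.foo"/>
   </anycast>
</address>
```

With anycast, the queue exists independently of any subscriber, so a message sent before any consumer connects is stored rather than dropped.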

Do leaf/downstream devices connect directly to iot-hub even when edge is used as gateway?

I am trying to set up an iot-edge device as an edge gateway. We wouldn't want our leaf/sensor/downstream devices directly connecting to the internet/cloud, and thus I would expect the iot-edge gateway (as its name suggests) to bridge the connection between downstream devices and the cloud/iot-hub. However, I notice that the connection string for iot-hub/edge at the device level is simply
connection-string-for-iothub-with-gateway-hostname-appended
This makes me assume that downstream devices transmit messages to an endpoint (probably messages/*) in the cloud/iot-hub, and that it is from there that the gateway gets them (and perhaps works with that data) and forwards them back to $upstream, which defeats the whole point of a gateway.
Here, in the "Route messages from downstream devices" section of the IoT Edge transparent gateway documentation,
https://learn.microsoft.com/en-us/azure/iot-edge/how-to-create-transparent-gateway, the example
{
    "routes": {
        "sensorToAIInsightsInput1": "FROM /messages/* WHERE NOT IS_DEFINED($connectionModuleId) INTO BrokeredEndpoint(\"/modules/ai_insights/inputs/input1\")",
        "AIInsightsToIoTHub": "FROM /messages/modules/ai_insights/outputs/output1 INTO $upstream"
    }
}
makes it sound like the gateway routes messages landing on the built-in (default) endpoint to $upstream. I can't find any clearer documentation on this subject. I would really appreciate it if someone cleared this up. I was expecting the connection string for the edge gateway (the one I'd put on the device end) to be something along the lines of localhost:port, not the cloud address with the gateway hostname appended.
If your connection string contains a gateway hostname, and the SDK you are using on the device handles this properly, the device only connects to the gateway, not to the IoT Hub.
You can see the example from the .NET SDK here:
this.HostName = builder.GatewayHostName == null || builder.GatewayHostName == "" ? builder.HostName : builder.GatewayHostName;
https://github.com/Azure/azure-iot-sdk-csharp/blob/f86cb76470326f5af8426f3c2695279f51f6e0c8/iothub/device/src/IotHubConnectionString.cs#L30
If the gateway hostname is set, it actually overwrites the IoT Hub hostname for the connection.
