Subscribing and reading from Topic: ActiveMQ & Python - python-3.x

I am trying to subscribe to a topic in ActiveMQ running on localhost, using stompest to connect to the broker. Please refer to the code below:
import os
import json
from stompest.sync import Stomp
from stompest.config import StompConfig

CONFIG = StompConfig(uri=os.environ['MQ_URL'],
                     login=os.environ['MQ_UID'],
                     passcode=os.environ['MQ_DWP'],
                     version="1.2")
topic = '/topic/SAMPLE.TOPIC'
msg = {'refresh': True}

client = Stomp(CONFIG)
client.connect()
client.send(topic, json.dumps(msg).encode())
client.disconnect()

client = Stomp(CONFIG)
client.connect(heartBeats=(0, 10000))
token = client.subscribe(topic, {
    "ack": "client",
    "id": '0'
})
frame = client.receiveFrame()
if frame and frame.body:
    print(f"Frame received from MQ: {frame.info()}")
client.disconnect()
Although I see an active connection in the ActiveMQ web console, no message is received in the code. The flow of control seems to pause at frame = client.receiveFrame().
I didn't find any reliable resource or documentation regarding this.
Am I doing anything wrong here?

This is the expected behavior since you're using a topic (i.e. pub/sub semantics). When you send a message to a topic it will be delivered to all existing subscribers. If no subscribers are connected then the message is discarded.
You send your message before any subscriber is connected, which means the broker will discard the message. Once the subscriber connects there are no messages to receive, so receiveFrame() will simply block waiting for a frame, as the stompest documentation notes:
Keep in mind that this method will block forever if there are no frames incoming on the wire.
Try either sending a message to a queue and then receiving it or creating an asynchronous client first and then sending your message.
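For example, here is a minimal sketch of the queue-based variant (the queue name is just an illustration). Because a queue retains the message until a consumer reads it, it no longer matters that the send happens before the subscribe:
import json
import os

from stompest.config import StompConfig
from stompest.sync import Stomp

CONFIG = StompConfig(uri=os.environ['MQ_URL'],
                     login=os.environ['MQ_UID'],
                     passcode=os.environ['MQ_DWP'],
                     version='1.2')
queue = '/queue/SAMPLE.QUEUE'

client = Stomp(CONFIG)
client.connect()
client.send(queue, json.dumps({'refresh': True}).encode())

token = client.subscribe(queue, {'ack': 'client', 'id': '0'})
frame = client.receiveFrame()   # returns as soon as the queued message is delivered
print(f'Frame received from MQ: {frame.info()}')
client.ack(frame)               # needed because the ack mode is 'client'
client.disconnect()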

Related

Apache Pulsar Client - Broker notification of Closed consumer - how to resume data feed?

TL;DR: I'm using the Python client library to subscribe to a Pulsar topic. The logs show "Broker notification of Closed consumer" when something happens server-side. According to the logs the subscription appears to be re-established, but we later find that the backlog is growing on the cluster because no messages are being delivered to our subscription to consume.
We are running into an issue where an Apache Pulsar cluster we use, which is opaque to us and has a namespace defined where we publish/consume topics, is losing its connection with our consumer.
We have a python client consuming from a topic (with one Pulsar Client subscription per thread).
We have run into an issue where, due to an issue on the pulsar cluster, we see the following entry in our client logs:
"Broker notification of Closed consumer"
followed by:
"Created connection for pulsar://houpulsar05.mycompany.com:6650"
....for every thread in our agent.
Then we see the usual periodic log entries like this:
{"log":"2022-09-01 04:23:30.269 INFO [139640375858944] ConsumerStatsImpl:63 | Consumer [persistent://tenant/namespace/topicname, subscription-name, 0] , ConsumerStatsImpl (numBytesRecieved_ = 0, totalNumBytesRecieved_ = 6545742, receivedMsgMap_ = {}, ackedMsgMap_ = {}, totalReceivedMsgMap_ = {[Key: Ok, Value: 3294], }, totalAckedMsgMap_ = {[Key: {Result: Ok, ackType: 0}, Value: 3294], })\n","stream":"stdout","time":"2022-09-01T04:23:30.270009746Z"}
This gives the appearance that some connection has been re-established to some other broker.
However, we do not get any messages being consumed. We have an alert on a Grafana dashboard which shows us the backlog on topics and the subscription backlog. Eventually it hits either a count or a rate threshold that alerts us there is a problem. When we restart our agent, the subscription is re-established and the backlog can immediately be seen heading to 0.
Has anyone experienced such an issue?
Our code is typical:
consumer = client.subscribe(
    topic='my-topic',
    subscription_name='my-subscription',
    consumer_type=my_consumer_type,
    consumer_name=my_agent_name
)

while True:
    msg = consumer.receive()
    ex = msg.value()
I haven't yet found a readily available way (docker-compose or anything else) to run a multi-cluster Pulsar installation locally on Docker Desktop so I can try killing off a broker and see how the consumer reacts.
Currently the Python client only supports configuring a single broker address and doesn't support retrying lookups yet. Here are two related PRs to support it:
https://github.com/apache/pulsar/pull/17162
https://github.com/apache/pulsar/pull/17410
Therefore, setting up a multi-node cluster might behave no differently from a standalone.
If you only specified one broker in the service URL, you can simply test it with a standalone. Run a consumer and a producer sending messages periodically, then restart the standalone. The "Broker notification of Closed consumer" log appears when the broker actively closes the connection, e.g. when your consumer has sent a SEEK command (via a seek call), the broker will disconnect the consumer and the log line appears.
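A rough sketch of that standalone test using the Python pulsar-client (topic and subscription names are placeholders): run it against a standalone broker, restart the broker while it is running, and watch the client logs and the receive loop.
import time

import pulsar

client = pulsar.Client('pulsar://localhost:6650')
producer = client.create_producer('persistent://public/default/restart-test')
consumer = client.subscribe('persistent://public/default/restart-test',
                            subscription_name='restart-test-sub')

for i in range(300):
    producer.send(f'msg-{i}'.encode('utf-8'))
    try:
        msg = consumer.receive(timeout_millis=2000)
        print('received', msg.data())
        consumer.acknowledge(msg)
    except Exception:
        # receive() raises a timeout error while the broker is down or the
        # connection is being re-established.
        print('no message received (broker restarting?)')
    time.sleep(1)

client.close()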
BTW, it's better to show your Python client version. And GitHub issues might be a better place to track the issue.

Azure Service Bus - random deserialization issues

I've recently been having problems with my Service Bus queue. Random messages (one can pass and the next not) are placed on the dead-letter queue with the error message saying:
"DeadLetterReason": "Moved because of Unable to get Message content There was an error deserializing the object of type System.String. The input source is not correctly formatted."
"DeadLetterErrorDescription": "Des"
This happens even before my consumer has the chance to receive the message from the queue.
The weird part is that when I requeue the message through Service Bus Explorer it passes and is successfully received and handled by my consumer.
I am using the same version of the Service Bus SDK for both sending and receiving the messages:
Azure.Messaging.ServiceBus, version: 7.2.1
My message is being sent like this:
await using var client = new ServiceBusClient(connString);
var sender = client.CreateSender(endpointName);
var message = new ServiceBusMessage(serializedMessage);
await sender.SendMessageAsync(message).ConfigureAwait(true);
So the solution I have for now for the described issue is that I implemented a retry policy for the messages that land on the dead-letter queue. The message is cloned from the DLQ and added again to the Service Bus queue, and the second time there are no problems and the message completes successfully. I suppose this happens because of some weird performance issues I might have in the Azure infrastructure, but this approach bought me some time to investigate further.
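The question's code is C#, but as a rough sketch of that DLQ-retry workaround (shown here with the Python azure-servicebus SDK to stay consistent with the other examples; the connection string and queue name are placeholders, and the payload is assumed to be text), the mechanics are: receive from the dead-letter sub-queue, re-send to the main queue, then complete the dead-lettered copy.
from azure.servicebus import ServiceBusClient, ServiceBusMessage, ServiceBusSubQueue

CONN_STR = "<SERVICE BUS CONNECTION STRING>"
QUEUE = "<QUEUE NAME>"

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    dlq_receiver = client.get_queue_receiver(queue_name=QUEUE,
                                             sub_queue=ServiceBusSubQueue.DEAD_LETTER)
    sender = client.get_queue_sender(queue_name=QUEUE)
    with dlq_receiver, sender:
        for msg in dlq_receiver.receive_messages(max_message_count=10, max_wait_time=5):
            body = str(msg)                       # body decoded as text; adjust for binary payloads
            sender.send_messages(ServiceBusMessage(body))
            dlq_receiver.complete_message(msg)    # remove the copy from the DLQ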

nodejs rhea npm for amqp couldn't create subscription queue on address in activemq artemis

I have an address "pubsub.foo" already configured as multicast in broker.xml.
<address name="pubsub.foo">
   <multicast/>
</address>
As per the Artemis documentation:
When clients connect to an address with the multicast element, a subscription queue for the client will be automatically created for the client.
I am creating a simple utility using rhea AMQP Node.js npm to publish messages to the address.
var connection = require('rhea').connect({ port: args.port, host: args.host, username: 'admin', password: 'xxxx' });
var sender = connection.open_sender('pubsub.foo');

sender.on('sendable', function(context) {
    var m = 'Hii test';
    console.log('sent ' + m);
    sender.send({body: m});
    connection.close();
});
I enabled debug logging, and while running the client code I see a message like this:
2020-02-03 22:43:25,071 DEBUG [org.apache.activemq.artemis.core.postoffice.impl.PostOfficeImpl] Message org.apache.activemq.artemis.protocol.amqp.broker.AMQPMessage#68933e4b is not going anywhere as it didn't have a binding on address:pubsub.foo
I also tried different variations of the topic, for example client1.pubsub.foo and pubsub.foo::client1, but no luck from the client code. Please share your thoughts. I am new to ActiveMQ Artemis.
What you're observing actually is the expected behavior.
Unfortunately, the documentation you cited isn't as clear as it could be. When it says a subscription queue will be created in response to a client connecting, it really means a subscriber, not a producer. That's why it creates a subscription queue. The semantics for a multicast address (and publish/subscribe in general) dictate that a message sent when there are no subscribers will be dropped. Therefore, you need to create a subscriber first and then send a message.
If you want different semantics then I recommend you use anycast.
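For reference, the anycast variant of that address in broker.xml would look something like this (the queue name is just an example):
<address name="pubsub.foo">
   <anycast>
      <queue name="pubsub.foo"/>
   </anycast>
</address>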

RabbitMQ: keep requests after stopping the RabbitMQ process and queue

I made an app that connects to RabbitMQ. It works fine, but when I stop the RabbitMQ process all of my requests get lost. I want my requests to be saved even after the RabbitMQ service is killed, and to return to their places after the service restarts.
Here is my rabitmq.py:
import pika
import SimilarURLs

data = ''

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

def rabit_mq_start(Parameter):
    channel.queue_declare(queue='req')
    a = (take(datas=Parameter.decode()))
    channel.basic_publish(exchange='',
                          routing_key='req',
                          body=str(a))
    print(" [x] Sent {}".format(a))
    return a
    channel.start_consuming()

def take(datas):
    returns = SimilarURLs.start(data=datas)
    return returns
In addition, I'm sorry for writing mistakes in my question.
You need to enable publisher confirms (via the confirm_delivery method on your channel object). Then your application must keep track of what messages have been confirmed as published, and what messages have not. You will have to implement this yourself. When RabbitMQ is stopped and started again, your application can re-publish the messages that weren't confirmed.
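To illustrate the confirm mechanism itself, here is a minimal blocking sketch (the queue name 'req' matches the question). As noted below, the asynchronous publisher example is the better basis for real code; with a BlockingConnection each publish simply waits synchronously for its confirm.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='req')
channel.confirm_delivery()  # enable publisher confirms on this channel

unconfirmed = []  # bodies that were not confirmed and must be re-published later

def publish(body):
    try:
        # With confirms enabled, basic_publish blocks until the broker
        # acks or nacks the message.
        channel.basic_publish(exchange='', routing_key='req', body=body)
    except (pika.exceptions.UnroutableError, pika.exceptions.NackError):
        unconfirmed.append(body)

publish('some request')
connection.close()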
It would be best to use the asynchronous publisher example as a guide. If you use BlockingConnection you won't get the async notifications when a message is confirmed, defeating their purpose.
If you need further assistance after trying to implement this yourself I suggest following up on the pika-python mailing list.
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.

How to listen to a queue using azure service-bus with Node.js?

Background
I have several clients sending messages to an azure service bus queue. To match it, I need several machines reading from that queue and consuming the messages as they arrive, using Node.js.
Research
I have read the azure service bus queues tutorial and I am aware I can use receiveQueueMessage to read a message from the queue.
However, the tutorial does not mention how one can listen to a queue and read messages as soon as they arrive.
I know I can simply poll the queue for messages, but this spams the servers with requests for no real benefit.
After searching in SO, I found a discussion where someone had a similar issue:
Listen to Queue (Event Driven no polling) Service-Bus / Storage Queue
And I know they ended up using the C# async method ReceiveAsync, but it is not clear to me:
whether that method is available for Node.js
whether that method reads messages from the queue as soon as they arrive, like I need
Problem
The documentation for Node.js is close to non-existent, with that one tutorial being the only major document I found.
Question
How can my workers be notified of an incoming message in Azure Service Bus queues?
Answer
According to Azure support, it is not possible to be notified when a queue receives a message. This is valid for every language.
Workarounds
There are two main workarounds for this issue:
Use Azure topics and subscriptions. This way you can have all clients subscribed to an event new-message and have them check the queue once they receive the notification. This has several problems though: first, you have to pay for yet another Azure service, and second, you can have multiple clients trying to read the same message.
Continuous polling. Have the clients check the queue every X seconds. This solution is horrible, as you end up paying for the network traffic you generate and you spam the service with useless requests. To help minimize this there is a concept called long polling, which is so poorly documented it might as well not exist. I did find this NPM module though: https://www.npmjs.com/package/azure-awesome-queue
Alternatives
Honestly, at this point, you may be wondering why you should be using this service. I agree...
As an alternative there is RabbitMQ which is free, has a community, good documentation and a ton more features.
The downside here is that maintaining a RabbitMQ fault tolerant cluster is not exactly trivial.
Another alternative is Apache Kafka which is also very reliable.
You can receive messages from the Service Bus queue via the subscribe method, which listens to a stream of values. Example from the Azure documentation below:
const { delay, ServiceBusClient, ServiceBusMessage } = require("@azure/service-bus");

// connection string to your Service Bus namespace
const connectionString = "<CONNECTION STRING TO SERVICE BUS NAMESPACE>"

// name of the queue
const queueName = "<QUEUE NAME>"

async function main() {
    // create a Service Bus client using the connection string to the Service Bus namespace
    const sbClient = new ServiceBusClient(connectionString);

    // createReceiver() can also be used to create a receiver for a subscription.
    const receiver = sbClient.createReceiver(queueName);

    // function to handle messages
    const myMessageHandler = async (messageReceived) => {
        console.log(`Received message: ${messageReceived.body}`);
    };

    // function to handle any errors
    const myErrorHandler = async (error) => {
        console.log(error);
    };

    // subscribe and specify the message and error handlers
    receiver.subscribe({
        processMessage: myMessageHandler,
        processError: myErrorHandler
    });

    // Waiting long enough before closing the sender to send messages
    await delay(20000);

    await receiver.close();
    await sbClient.close();
}

// call the main function
main().catch((err) => {
    console.log("Error occurred: ", err);
    process.exit(1);
});
Source: https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-nodejs-how-to-use-queues
I asked myself the same question; here is what I found.
Use Google PubSub, it does exactly what you are looking for.
If you want to stay with Azure, the following is possible:
cloud functions can be triggered from SBS messages
trigger an event-hub event with that cloud function
receive the event and fetch the message from SBS
You can make use of serverless functions with a Service Bus queue trigger ("ServiceBusQueueTrigger"); they are invoked as soon as a message arrives in the queue. It's pretty straightforward to do in Node.js: you need bindings defined in function.json with the type set to
"type": "serviceBusTrigger",
This article would probably help with more detail: https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-service-bus#trigger---javascript-example
