RabbitMQ: keep requests after stopping the RabbitMQ process and queue - python-3.x

I have an app connected to RabbitMQ. It works fine, but when I stop the RabbitMQ process all of my requests get lost. I want my requests to be saved even after the RabbitMQ service is killed, and to return to their own places once the service is restarted.
Here is my rabitmq.py:
import pika
import SimilarURLs

data = ''
connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

def rabit_mq_start(Parameter):
    channel.queue_declare(queue='req')
    a = (take(datas=Parameter.decode()))
    channel.basic_publish(exchange='',
                          routing_key='req',
                          body=str(a))
    print(" [x] Sent {}".format(a))
    return a
    channel.start_consuming()

def take(datas):
    returns = SimilarURLs.start(data=datas)
    return returns
In addition, I'm sorry for writing mistakes in my question.

You need to enable publisher confirms (via the confirm_delivery method on your channel object). Then your application must keep track of what messages have been confirmed as published, and what messages have not. You will have to implement this yourself. When RabbitMQ is stopped and started again, your application can re-publish the messages that weren't confirmed.
It would be best to use the asynchronous publisher example as a guide. If you use BlockingConnection you won't get the async notifications when a message is confirmed, defeating their purpose.
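For illustration only, a rough sketch of publisher confirms on a BlockingConnection might look like the one below; the durable queue and persistent delivery mode are additional assumptions aimed at the broker-restart scenario, and the asynchronous publisher example remains the better guide for production:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='req', durable=True)  # assumed durable queue so it survives a broker restart
channel.confirm_delivery()                        # enable publisher confirms on this channel

def publish_with_confirm(payload):
    try:
        channel.basic_publish(exchange='',
                              routing_key='req',
                              body=str(payload),
                              properties=pika.BasicProperties(delivery_mode=2),  # persistent message
                              mandatory=True)
        return True   # the broker confirmed the publish
    except (pika.exceptions.UnroutableError, pika.exceptions.NackError):
        # not confirmed; keep the payload and re-publish once the broker is back
        return False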
If you need further assistance after trying to implement this yourself I suggest following up on the pika-python mailing list.
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.

Related

Apache Pulsar Client - Broker notification of Closed consumer - how to resume data feed?

TL;DR: we use the Python client library to subscribe to a Pulsar topic. The logs show 'broker notification of Closed consumer' when something happens server-side. According to the logs the subscription appears to be re-established, but we later find that the backlog is growing on the cluster because no messages are being delivered to our subscription to consume.
We are running into an issue where an Apache Pulsar cluster we use, which is opaque to us and has a namespace defined where we publish/consume topics, is losing its connection with our consumer.
We have a python client consuming from a topic (with one Pulsar Client subscription per thread).
We have run into an issue where, due to an issue on the pulsar cluster, we see the following entry in our client logs:
"Broker notification of Closed consumer"
followed by:
"Created connection for pulsar://houpulsar05.mycompany.com:6650"
....for every thread in our agent.
Then we see the usual periodic log entries like this:
{"log":"2022-09-01 04:23:30.269 INFO [139640375858944] ConsumerStatsImpl:63 | Consumer [persistent://tenant/namespace/topicname, subscription-name, 0] , ConsumerStatsImpl (numBytesRecieved_ = 0, totalNumBytesRecieved_ = 6545742, receivedMsgMap_ = {}, ackedMsgMap_ = {}, totalReceivedMsgMap_ = {[Key: Ok, Value: 3294], }, totalAckedMsgMap_ = {[Key: {Result: Ok, ackType: 0}, Value: 3294], })\n","stream":"stdout","time":"2022-09-01T04:23:30.270009746Z"}
This gives the appearance that some connection has been re-established to some other broker.
However, no messages get consumed. We have an alert on a Grafana dashboard which shows us the backlog on topics and the subscription backlog. Eventually it hits a count or rate threshold which alerts us that there is a problem. When we restart our agent, the subscription is re-established and the backlog can immediately be seen heading to 0.
Has anyone experienced such an issue?
Our code is typical:
consumer = client.subscribe(
    topic='my-topic',
    subscription_name='my-subscription',
    consumer_type=my_consumer_type,
    consumer_name=my_agent_name
)
while True:
    msg = consumer.receive()
    ex = msg.value()
I haven't yet found a readily available way (docker-compose or anything else) to run a multi-cluster Pulsar installation locally on Docker Desktop so I can try killing off a broker and see how the consumer reacts.
Currently the Python client only supports configuring one broker's address and doesn't support retrying lookups yet. Here are two related PRs to support it:
https://github.com/apache/pulsar/pull/17162
https://github.com/apache/pulsar/pull/17410
Therefore, setting up a multi-node cluster might be no different from a standalone.
If you only specified one broker in the service URL, you can simply test it with a standalone. Run a consumer and a producer sending messages periodically, then restart the standalone. The "Broker notification of Closed consumer" appears when the broker actively closes the connection, e.g. when your consumer has sent a SEEK command (via a seek call); the broker will then disconnect the consumer and the log appears.
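For reference, a minimal standalone test along those lines might look roughly like this (the topic name, subscription name, and localhost service URL are placeholders):

import pulsar

client = pulsar.Client('pulsar://localhost:6650')   # assumed local standalone broker
producer = client.create_producer('my-topic')
consumer = client.subscribe('my-topic', subscription_name='my-subscription')

producer.send(b'hello')        # publish a test message
# restart the standalone broker around this point to observe how the consumer reconnects
msg = consumer.receive()       # blocks until a message is delivered
print('received:', msg.data())
consumer.acknowledge(msg)

client.close()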
BTW, it's better to show your Python client version. And GitHub issues might be a better place to track the issue.

How to accept push notifications in a plotly/dash app?

I have a client with an open connection to a server which accepts push notifications from the server. I would like to display the data from the push notifications in a plotly/dash page in near real time.
I've been considering my options as discussed in the documentation page.
If I have multiple push-notification clients running, one in each potential plotly/dash worker process, then I'd have to manage de-duplicating events; doable, but bug-prone and quirky to code.
The ideal solution seems to be to run the push network client in only one process and push those notifications into a dcc.Store object. I assume I would do that by populating a queue in the push client's async callback, and on a dcc.Interval timer gathering any new data from that queue and placing it in the dcc.Store object. Then all other callbacks get triggered on the dcc.Store object, possibly in separate Python processes.
From the documentation I don't see how I would pin the callback that interacts with the push network client to the main process and ensure it doesn't run on any worker process. Is this possible? The dcc.Interval documentation doesn't mention this detail.
Is there a way to force the dcc.Interval onto one process, or is that the normal operation under Dash with multiple worker processes? Or is there another recommended approach to handling data from a push notification network client?
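For concreteness, a single-process sketch of that Interval-plus-Store pattern might look like this (the queue the push client fills is hypothetical, and nothing here pins the interval callback to one worker):

import queue

from dash import Dash, dcc, html
from dash.dependencies import Input, Output, State

incoming = queue.Queue()   # hypothetical queue filled by the push client's async callback

app = Dash(__name__)
app.layout = html.Div([
    dcc.Interval(id="poll", interval=1000),   # fires every second
    dcc.Store(id="store", data=[]),
    html.Div(id="view"),
])

@app.callback(Output("store", "data"),
              Input("poll", "n_intervals"),
              State("store", "data"))
def drain_queue(_, data):
    # move any queued push notifications into the Store
    while not incoming.empty():
        data.append(incoming.get_nowait())
    return data

@app.callback(Output("view", "children"), Input("store", "data"))
def render(data):
    return str(data[-5:])   # show the latest items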
An alternative to the Interval component pulling updates at regular intervals could be to use a Websocket component to enable push notifications. Simply add the component to the layout and add a clientside callback that performs the appropriate updates based on the received message,
app.clientside_callback("function(msg){return \"Response from websocket: \" + msg.data;}",
                        Output("msg", "children"), [Input("ws", "message")])
Here is a complete example using a SocketPool to set up endpoints for sending messages,
import dash_html_components as html
from dash import Dash
from dash.dependencies import Input, Output
from dash_extensions.websockets import SocketPool, run_server
from dash_extensions import WebSocket

# Create example app.
app = Dash(prevent_initial_callbacks=True)
socket_pool = SocketPool(app)
app.layout = html.Div([html.Div(id="msg"), WebSocket(id="ws")])

# Update div using websocket.
app.clientside_callback("function(msg){return \"Response from websocket: \" + msg.data;}",
                        Output("msg", "children"), [Input("ws", "message")])

# End point to send message to current session.
@app.server.route("/send/<message>")
def send_message(message):
    socket_pool.send(message)
    return f"Message [{message}] sent."

# End point to broadcast message to ALL sessions.
@app.server.route("/broadcast/<message>")
def broadcast_message(message):
    socket_pool.broadcast(message)
    return f"Message [{message}] broadcast."

if __name__ == '__main__':
    run_server(app)

Subscribing and reading from Topic: ActiveMQ & Python

I am trying to subscribe to a topic in ActiveMQ running in localhost using stompest for connecting to the broker. Please refer below code:
import os
import json
from stompest.sync import Stomp
from stompest.config import StompConfig

CONFIG = StompConfig(uri=os.environ['MQ_URL'],
                     login=os.environ['MQ_UID'],
                     passcode=os.environ['MQ_DWP'],
                     version="1.2")
topic = '/topic/SAMPLE.TOPIC'
msg = {'refresh': True}

client = Stomp(CONFIG)
client.connect()
client.send(topic, json.dumps(msg).encode())
client.disconnect()

client = Stomp(CONFIG)
client.connect(heartBeats=(0, 10000))
token = client.subscribe(topic, {
    "ack": "client",
    "id": '0'
})
frame = client.receiveFrame()
if frame and frame.body:
    print(f"Frame received from MQ: {frame.info()}")
client.disconnect()
Although I see an active connection in the ActiveMQ web console, no message is received in the code. The flow of control seems to pause at frame = client.receiveFrame().
I didn't find any reliable resource or documentation regarding this.
Am I doing anything wrong here?
This is the expected behavior since you're using a topic (i.e. pub/sub semantics). When you send a message to a topic it will be delivered to all existing subscribers. If no subscribers are connected then the message is discarded.
You send your message before any subscribers are connected, which means the broker will discard the message. Once the subscriber connects there are no messages to receive, so receiveFrame() will simply block waiting for a frame, as the stompest documentation notes:
Keep in mind that this method will block forever if there are no frames incoming on the wire.
Try either sending a message to a queue and then receiving it or creating an asynchronous client first and then sending your message.
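For example, a minimal sketch of the queue variant might look like this (the queue name is a placeholder; subscribing before sending also works for a topic):

import json
import os
from stompest.config import StompConfig
from stompest.sync import Stomp

CONFIG = StompConfig(uri=os.environ['MQ_URL'],
                     login=os.environ['MQ_UID'],
                     passcode=os.environ['MQ_DWP'],
                     version="1.2")
queue = '/queue/SAMPLE.QUEUE'   # a queue stores the message until a consumer reads it

client = Stomp(CONFIG)
client.connect(heartBeats=(0, 10000))
client.subscribe(queue, {"ack": "client", "id": '0'})

client.send(queue, json.dumps({'refresh': True}).encode())

frame = client.receiveFrame()   # blocks until a frame arrives
print(f"Frame received from MQ: {frame.info()}")
client.ack(frame)
client.disconnect()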

Server Sent Events with Pyramid - How to detect if the connection to the client has been lost

I have a pyramid application that sends SSE messages. It works basically like this:
import json
import random
import time

from pyramid.response import Response
from pyramid.view import view_config

def message_generator():
    for i in range(100):
        print("Sending message:" + str(i))
        yield "data: %s\n\n" % json.dumps({'message': str(i)})
        time.sleep(random.randint(1, 10))

@view_config(route_name='events')
def events(request):
    headers = [('Content-Type', 'text/event-stream'),
               ('Cache-Control', 'no-cache')]
    response = Response(headerlist=headers)
    response.app_iter = message_generator()
    return response
When I browse to /events I get the events. When I move to another page the events stop, and when I close the browser the events stop as well.
The problem happens for example if I am in /events and I switch off the computer. The server does not know that the client got lost and message_generator keeps sending messages to the void.
This page, A Look at Server-Sent Events, mentions this:
...the server should detect this (when the client stops) and stop sending further events as the client is no longer listening for them. If the server does not do this, then it will essentially be sending events out into a void.
Is there a way to detect this with Pyramid? I tried request.add_finished_callback(), but that callback already seems to fire when the view executes return response.
I use Gunicorn with gevent to start the server.
Any idea is highly appreciated
From PEP 3333:
Applications returning a generator or other custom iterator should not assume the entire iterator will be consumed, as it may be closed early by the server.
Basically a WSGI server "should" invoke the close() method on the app_iter when a client disconnects (all generators, such as in your example, support this automatically). However, a server is not required to do it, and it seems many WSGI servers do not. For example, you mentioned gunicorn (which I haven't independently verified), but I did verify that waitress also does not. I opened [1] on waitress as a result, and have been working on a fix. Streaming responses in WSGI environments is shaky at best and usually depends on the server. For example, on waitress, you need to set send_bytes=0 to avoid it buffering the response data.
[1] https://github.com/Pylons/waitress/issues/236
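On servers that do honour that part of PEP 3333, the disconnect can be observed inside the generator itself, because close() raises GeneratorExit at the paused yield. A minimal sketch of the generator above with such a handler (still dependent on the server actually calling close()):

import json
import random
import time

def message_generator():
    try:
        for i in range(100):
            yield "data: %s\n\n" % json.dumps({'message': str(i)})
            time.sleep(random.randint(1, 10))
    except GeneratorExit:
        # raised at the current yield when the server calls close() on the app_iter,
        # i.e. (on cooperating servers) when the client has gone away
        print("Client disconnected, stopping the event stream")
        raise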

How to listen to a queue using azure service-bus with Node.js?

Background
I have several clients sending messages to an azure service bus queue. To match it, I need several machines reading from that queue and consuming the messages as they arrive, using Node.js.
Research
I have read the azure service bus queues tutorial and I am aware I can use receiveQueueMessage to read a message from the queue.
However, the tutorial does not mention how one can listen to a queue and read messages as soon as they arrive.
I know I can simply poll the queue for messages, but this spams the servers with requests for no real benefit.
After searching in SO, I found a discussion where someone had a similar issue:
Listen to Queue (Event Driven no polling) Service-Bus / Storage Queue
And I know they ended up using the C# async method ReceiveAsync, but it is not clear to me if:
That method is available for Node.js
If that method reads messages from the queue as soon as they arrive, like I need.
Problem
The documentation for Node.js is close to non-existant, with that one tutorial being the only major document I found.
Question
How can my workers be notified of an incoming message in Azure Service Bus queues?
Answer
According to Azure support, it is not possible to be notified when a queue receives a message. This is valid for every language.
Work arounds
There are two main workarounds for this issue:
Use Azure topics and subscriptions. This way you can have all clients subscribed to a new-message event and have them check the queue once they receive the notification. This has several problems though: first, you have to pay for yet another Azure service, and second, you can have multiple clients trying to read the same message.
Continuous polling. Have the clients check the queue every X seconds. This solution is horrible, as you end up paying for the network traffic you generate and you spam the service with useless requests. To help minimize this there is a concept called long polling, which is so poorly documented it might as well not exist. I did find this NPM module though: https://www.npmjs.com/package/azure-awesome-queue
Alternatives
Honestly, at this point, you may be wondering why you should be using this service. I agree...
As an alternative there is RabbitMQ which is free, has a community, good documentation and a ton more features.
The downside here is that maintaining a RabbitMQ fault tolerant cluster is not exactly trivial.
Another alternative is Apache Kafka which is also very reliable.
You can receive messages from the Service Bus queue via the subscribe method, which listens to a stream of values. Example from the Azure documentation below:
const { delay, ServiceBusClient, ServiceBusMessage } = require("@azure/service-bus");

// connection string to your Service Bus namespace
const connectionString = "<CONNECTION STRING TO SERVICE BUS NAMESPACE>"

// name of the queue
const queueName = "<QUEUE NAME>"

async function main() {
    // create a Service Bus client using the connection string to the Service Bus namespace
    const sbClient = new ServiceBusClient(connectionString);

    // createReceiver() can also be used to create a receiver for a subscription.
    const receiver = sbClient.createReceiver(queueName);

    // function to handle messages
    const myMessageHandler = async (messageReceived) => {
        console.log(`Received message: ${messageReceived.body}`);
    };

    // function to handle any errors
    const myErrorHandler = async (error) => {
        console.log(error);
    };

    // subscribe and specify the message and error handlers
    receiver.subscribe({
        processMessage: myMessageHandler,
        processError: myErrorHandler
    });

    // Waiting long enough before closing the sender to send messages
    await delay(20000);

    await receiver.close();
    await sbClient.close();
}

// call the main function
main().catch((err) => {
    console.log("Error occurred: ", err);
    process.exit(1);
});
Source:
https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-nodejs-how-to-use-queues
I asked myself the same question; here is what I found.
Use Google PubSub, it does exactly what you are looking for.
If you want to stay with Azure, the following is possible:
cloud functions can be triggered from SBS messages
trigger an event-hub event with that cloud function
receive the event and fetch the message from SBS
You can make use of serverless functions with a "ServiceBusQueueTrigger"; they are invoked as soon as a message arrives in the queue. It's pretty straightforward to do in Node.js: you need bindings defined in function.json with the type set to
"type": "serviceBusTrigger",
This article (https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-service-bus#trigger---javascript-example) probably explains it in more detail.
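For reference, a minimal function.json binding of that type might look roughly like this (the binding name, queue name, and connection setting are placeholders; see the linked article for the authoritative format):

{
  "bindings": [
    {
      "name": "mySbMsg",
      "type": "serviceBusTrigger",
      "direction": "in",
      "queueName": "my-queue",
      "connection": "MyServiceBusConnection"
    }
  ]
}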
