How to get old rabbitmq events into new services? - python-3.x

For my backend microservice application I am using RabbitMQ as the message broker.
The existing services receive events fine. My question is: how do I pull all the old events into a new microservice that launches in the future?
Consider three services that currently exist:
Product
Notification
Order
If I create a new product, its information is broadcast to the Notification service as well as the Order service, so a record of the product ends up in both Notification and Order.
Now suppose that after a while (when 500+ products have been added) I create a new service called Analytics, and I want it to receive all of the product-created events when it initially comes up.
I am using Python, RabbitMQ and the Pika library.
This is my sample code.
Sample publisher code:
import pika
connection = pika.BlockingConnection(pika.URLParameters('<rabbitmq-link>'))
channel = connection.channel()
channel.exchange_declare(exchange='group', exchange_type='fanout')
message = "info: Hello World!"
channel.basic_publish(exchange='group', routing_key='', body=message)
print(" [x] Sent %r" % message)
connection.close()
Service one code
import pika
connection = pika.BlockingConnection(pika.URLParameters('<rabbitmq-link>'))
channel = connection.channel()
channel.exchange_declare(exchange='group', exchange_type='fanout')
result = channel.queue_declare(queue='group-1', exclusive=False)
queue_name = result.method.queue
channel.queue_bind(exchange='group', queue=queue_name)
print(' [*] Waiting for logs. To exit press CTRL+C')
def callback(ch, method, properties, body):
    print(" [x] %r" % body)
channel.basic_consume(queue=queue_name, on_message_callback=callback, auto_ack=True)
channel.start_consuming()
Service two code
import pika
connection = pika.BlockingConnection(pika.URLParameters('<rabbitmq-link>'))
channel = connection.channel()
channel.exchange_declare(exchange='group', exchange_type='fanout')
result = channel.queue_declare(queue='group-2', exclusive=False)
queue_name = result.method.queue
channel.queue_bind(exchange='group', queue=queue_name)
print(' [*] Waiting for logs. To exit press CTRL+C')
def callback(ch, method, properties, body):
    print(" [x] %r" % body)
channel.basic_consume(queue=queue_name, on_message_callback=callback, auto_ack=True)
channel.start_consuming()
So I want a way for the third service, when it goes live, to be triggered by the old events as well.
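A caveat worth stating up front: a fanout exchange does not store messages itself, so anything published before a service's queue was bound is simply gone, and no consumer-side code can recover it. One way to make history replayable is to bind a long-lived stream queue from day one (a RabbitMQ 3.9+ feature); a new service such as Analytics can then replay it from the first retained message. The sketch below assumes RabbitMQ 3.9+ and a hypothetical queue name product-events, and it only helps for events published after the stream was declared:

import pika

connection = pika.BlockingConnection(pika.URLParameters('<rabbitmq-link>'))
channel = connection.channel()
channel.exchange_declare(exchange='group', exchange_type='fanout')

# a stream queue retains its messages instead of deleting them after delivery
channel.queue_declare(queue='product-events', durable=True,
                      arguments={'x-queue-type': 'stream'})
channel.queue_bind(exchange='group', queue='product-events')

# streams require a prefetch limit and manual acknowledgements
channel.basic_qos(prefetch_count=100)

def callback(ch, method, properties, body):
    print(" [x] %r" % body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

# 'first' starts this consumer from the oldest retained message
channel.basic_consume(queue='product-events', on_message_callback=callback,
                      arguments={'x-stream-offset': 'first'})
channel.start_consuming()

An alternative with the same effect is to keep one source of truth (the Product service or a separate event store) and have each new service bulk-load the existing records over an API before switching to live events.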

Related

Can't publish to AWS MQTT broker over websockets

I am following the AWS API to connect to MQTT over websockets. Below is my code:
credentials_provider = AwsCredentialsProvider.new_static(
    access_key_id=auth_response_dictionary['user']['accessKeyId'],
    secret_access_key=auth_response_dictionary['user']['secretKey'],
    session_token=auth_response_dictionary['user']['sessionToken']
)
event_loop_group = io.EventLoopGroup(1)
host_resolver = io.DefaultHostResolver(event_loop_group)
client_bootstrap = io.ClientBootstrap(event_loop_group, host_resolver)
mqtt_connection = mqtt_connection_builder.websockets_with_default_aws_signing(
    endpoint=auth_response_dictionary['user']['iotEndpoint'],
    region=auth_response_dictionary['user']['region'],
    credentials_provider=credentials_provider,
    client_bootstrap=client_bootstrap,
    client_id=clientId
)
print("Connecting to aws")
# Make the connect() call
connect_future = mqtt_connection.connect()
# Future.result() waits until a result is available
print('connect_future ' + str(connect_future))
x = connect_future.result()
print('connect_future ' + str(x))
print("Connected!")
future, packet_id = mqtt_connection.publish(topic=TOPIC, payload=json.dumps(message), qos=mqtt.QoS.AT_LEAST_ONCE)
future, packet_id = mqtt_connection.publish(topic='test/po', payload=json.dumps(message), qos=mqtt.QoS.AT_LEAST_ONCE)
print('future ' + str(future))
print('future ' + str(packet_id))
print('Publish End')
I am not getting any error while connecting or publishing, but I am not receiving any messages on my AWS MQTT broker when I subscribe to that topic in the 'Test' section.
I think I have configured something wrong in either credentials_provider or client_bootstrap (or both), but I don't know what.
Here are the printed logs
Connecting to aws
connect_future <Future at 0x7f605f942af0 state=pending>
connect_future {'session_present': False}
Connected!
future <Future at 0x7f605f8e54f0 state=pending>
future 3
Publish End
Can somebody please help?
mqtt_connection.subscribe(...) is used to subscribe to an MQTT topic for AWS IoT messages, and I can't see it anywhere in your code.
mqtt_connection.subscribe is called like below, taking in the topic name, a Quality of Service level, and a callback:
received_count = 0
received_all_event = threading.Event()
...
topic = 'test/po'
print("Subscribing to topic '{}'...".format(topic))
subscribe_future, packet_id = mqtt_connection.subscribe(
    topic=topic,
    qos=mqtt.QoS.AT_LEAST_ONCE,
    callback=on_message_received)
subscribe_result = subscribe_future.result()
print("Subscribed with {}".format(str(subscribe_result['qos'])))
on_message_received can look like this:
def on_message_received(topic, payload, dup, qos, retain, **kwargs):
    print("Received message from topic '{}': {}".format(topic, payload))
    global received_count
    received_count += 1
    # Number of messages to wait for
    if received_count == 10:
        received_all_event.set()
Then in your main method, you can wait until you've received 10 messages:
# Wait for all messages to be received.
# This waits forever if count was set to 0.
if not received_all_event.is_set():
    print("Waiting for all messages to be received...")
received_all_event.wait()
print("{} message(s) received.".format(received_count))
There's really good sample code provided by AWS, which I'd recommend you check out.

on_subscribe not working - paho python with IBM iot platform

I tried my subscriber, which is written using the Paho Python client, with the HiveMQ broker and it worked just fine, but it is not working with IBM.
Based on Subscribing to application status messages, and this question, I implemented the subscriber client as follows (I got the "a:<ORG-ID>:<App-ID>" from the apps section of my IBM Watson platform):
import ssl
import paho.mqtt.client as paho

def on_connect(client, userdata, flags, rc):
    print("CONNACK received with code %d." % (rc))
    (result, mid) = client.subscribe("iot-2/app/MyAppID/sensordata", 2)
    print("result: ", result, ", mid: ", mid)
    if result == paho.MQTT_ERR_SUCCESS:
        print("success in subscribing.")

def on_subscribe(client, userdata, mid, granted_qos):
    print("Subscribed: " + str(mid) + " " + str(granted_qos))

client = paho.Client("a:<ORG-ID>:<App-ID>")
# adding callbacks to client
client.on_connect = on_connect
client.on_subscribe = on_subscribe
client.on_message = on_message
client.username_pw_set("a-<ORG-ID>-<App-ID>", "my authentication token")
client.tls_set(ca_certs=None, certfile=None, keyfile=None, cert_reqs=ssl.CERT_REQUIRED,
               tls_version=ssl.PROTOCOL_TLS, ciphers=None)
client.connect("<ORG-ID>.messaging.internetofthings.ibmcloud.com", 8883, 60)
client.loop_start()
When I run the project, I get rc with a value of 0, which means a successful connection.
These are the on_connect() callback prints:
CONNACK received with code 0.
result: 0 , mid: 2
success in subscribing.
And the on_subscribe() callback is never called. What am I doing wrong?
If you want to subscribe to application status messages, then:
An application can subscribe to monitor the status of one or more applications, for example:
Subscribe to topic iot-2/app/appId/mon
Note: To subscribe to updates for all applications, use the MQTT "any" wildcard character (+) for the appId component.
Based on the above, the line:
(result, mid) = client.subscribe("iot-2/app/MyAppID/sensordata", 2)
should be
(result, mid) = client.subscribe("iot-2/app/MyAppID/mon", 2)
or
(result, mid) = client.subscribe("iot-2/app/+/mon", 2)
If you want to receive sensor data, then use the line below:
Subscribe to topic iot-2/type/device_type/id/device_id/evt/event_id/fmt/format_string
You would need to replace device_type, device_id, event_id and format_string (which could be json, txt, etc.).
For every possible event:
(result, mid) = client.subscribe("iot-2/type/+/id/+/evt/+/fmt/+",2)

Python - Pass a function (callback) variable between functions running in separate threads

I am trying to develop a Python 3.6 script which uses the pika and threading modules.
I have a problem which I think is caused by A) my being very new to Python and coding in general, and B) my not understanding how to pass variables between functions when they run in separate threads and the receiving function is already being passed a parameter in parentheses at the end of its name.
The reason I think this is that when I do not use threading, I can pass a variable between functions simply by calling the receiving function and supplying the variable to be passed in parentheses; a basic example is shown below:
def send_variable():
    body = "this is a text string"
    receive_variable(body)

def receive_variable(body):
    print(body)
This, when run, prints:
this is a text string
A working version of the code I need to get working with threading is shown below. This uses straight functions (no threading); I am using pika to receive messages from a (RabbitMQ) queue via the pika callback function, and I then pass the body of the received message from the callback function to the processing function:
import pika

...mq connection variables set here...

# defines username and password credentials as variables set at the top of this script
credentials = pika.PlainCredentials(mq_user_name, mq_pass_word)
# defines mq server host, port and user credentials and creates a connection
connection = pika.BlockingConnection(pika.ConnectionParameters(host=mq_host, port=mq_port, credentials=credentials))
# creates a channel connection instance using the above settings
channel = connection.channel()
# defines the queue name to be used with the above channel connection instance
channel.queue_declare(queue=mq_queue)

def callback(ch, method, properties, body):
    # passes (body) to the processing function
    body_processing(body)

# sets channel consume type, also sets queue name/message acknowledge settings based on variables set at top of script
channel.basic_consume(callback, queue=mq_queue, no_ack=mq_no_ack)
# starts consuming; this call blocks, and pika invokes callback() itself for each message received
channel.start_consuming()

# above deals with the pika connection and the main callback function

def body_processing(body):
    ...code to send a pika message every time a 'body' message is received...
This works fine; however, I want to translate it to run within a script that uses threading. When I do this, I have to supply the parameter 'channel' to the function that runs in its own thread. When I then try to include the 'body' parameter as well, so that the processing function looks like the below:
def processing_function(channel, body):
I get an error saying:
[function_name] is missing 1 positional argument: 'body'
I know that when using threading there is more code needed, and I have included the actual code I use for threading below so that you can see what I am doing:
...imports and mq variables and pika connection details are set here...

def get_heartbeats(channel):
    channel.queue_declare(queue=queue1)
    #print(' [*] Waiting for messages. To exit press CTRL+C')
    def callback(ch, method, properties, body):
        process_body(body)
        #print(" Received %s" % (body))
    channel.basic_consume(callback, queue=queue1, no_ack=no_ack)
    channel.start_consuming()

def process_body(channel, body):
    channel.queue_declare(queue=queue2)
    #print(' [*] Waiting for Tick messages. To exit press CTRL+C')
    # sets the mq host which the pika client will use to send a message to
    connection = pika.BlockingConnection(pika.ConnectionParameters(host=mq_host))
    # create a channel connection instance
    channel = connection.channel()
    # declare a queue to be used by the channel connection instance
    channel.queue_declare(queue=order_send_queue)
    # send a message via the above channel connection settings
    channel.basic_publish(exchange='', routing_key=send_queue, body='Test Message')
    # close the channel connection instance
    connection.close()

def manager():
    # Channel 1 Connection Details =======================================================================================
    credentials = pika.PlainCredentials(mq_user_name, mq_password)
    connection1 = pika.BlockingConnection(pika.ConnectionParameters(host=mq_host, credentials=credentials))
    channel1 = connection1.channel()
    # Channel 1 thread =====================================================================================================
    t1 = threading.Thread(target=get_heartbeats, args=(channel1,))
    t1.daemon = True
    threads.append(t1)
    # as this is thread 1, the call to start it is made in the start threading section below
    # Channel 2 Connection Details =======================================================================================
    credentials = pika.PlainCredentials(mq_user_name, mq_password)
    connection2 = pika.BlockingConnection(pika.ConnectionParameters(host=mq_host, credentials=credentials))
    channel2 = connection2.channel()
    # Channel 2 thread ====================================================================================================
    t2 = threading.Thread(target=process_body, args=(channel2, body))
    t2.daemon = True
    threads.append(t2)
    t2.start()  # as this is thread 2 - we need to start the thread here
    # Start threading
    t1.start()  # start the first thread
    for t in threads:  # for all the threads defined
        t.join()  # join defined threads

manager()  # run the manager function which starts the threads that call each function
This, when run, produces the error:
process_body() missing 1 required positional argument: 'body'
and I do not understand why this is or how to fix it.
Thank you for taking the time to read this question and any help or advice you can supply is much appreciated.
Please keep in mind that I am new to Python and coding, so I may need things spelled out rather than being able to understand more cryptic replies.
Thanks!
On further looking into this and playing with the code, it seems that if I edit the line:
def process_body(channel, body):
to read:
def process_body(body):
and
t2 = threading.Thread(target=process_body, args=(channel2, body))
so that it reads:
t2 = threading.Thread(target=process_body)
then the code seems to work as needed. I also see multiple script processes in htop, so it appears that threading is working. I have left the script processing for 24+ hours and did not receive any errors...
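For what it's worth, the usual pattern for handing each message body from a consuming thread to a processing thread is not to pass body at thread-creation time (no message exists yet at that point) but to push bodies through a queue.Queue. A minimal sketch of that idea, assuming the one-argument process_body(body) from the edit above (the names work_queue and worker are illustrative):

import queue
import threading

work_queue = queue.Queue()

def callback(ch, method, properties, body):
    # runs on the consumer thread: just hand the message over
    work_queue.put(body)

def worker():
    # runs on its own thread: blocks until a body arrives
    while True:
        body = work_queue.get()
        process_body(body)
        work_queue.task_done()

t2 = threading.Thread(target=worker, daemon=True)
t2.start()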

Interrupt paho mqtt client to reload subscriptions

I have an MQTT client app that subscribes to topics based on a configuration file. Something like:
def connectMQTT():
    global Connection
    Connection = Client()
    Connection.on_message = handleQuery
    for clientid in clientids.allIDs():  # clientids.allIDs() reads files to get this
        topic = '{}/{}/Q/+'.format(Basename, clientid)
        print('subscription:', topic)
        Connection.subscribe(topic)
I have been using it with a simple invocation like:
def main():
    connectMQTT()
    Connection.loop_forever()
loop_forever will block forever, but I'd like to notice when the information read by clientids.allIDs() is out of date, at which point I should reconnect, forcing it to subscribe afresh.
I can detect a change in the files with pyinotify:
def filesChanged():
    # NOT SURE WHAT TO DO HERE
    pass

def watchForChanges():
    watchManager = pyinotify.WatchManager()
    notifier = pyinotify.ThreadedNotifier(watchManager, FileEventHandler(eventCallback))
    notifier.start()
    watchManager.add_watch('/etc/my/config/dir', pyinotify.IN_CLOSE_WRITE | pyinotify.IN_DELETE)
Basically, I need loop_forever (or some other paho mqtt mechanism) to run until some signal comes from the pyinotify machinery. I'm not sure how to weld those two together, though. In pseudo-code, I think I want something like:
def main():
    signal = setup_directory_change_signal()
    while True:
        connectMQTT()
        Connection.loop(until=signal)
        Connection.disconnect()
I'm not sure how to effect that though.
I finally circled around to the following solution, which seems to work. Whereas I had been trying to run the notifier in another thread and the MQTT loop in the main thread, the trick seemed to be to invert that setup:
def restartMQTT():
    if Connection:
        Connection.loop_stop()
    connectMQTT()
    Connection.loop_start()

class FileEventHandler(pyinotify.ProcessEvent):
    def process_IN_CREATE(self, fileEvent):
        restartMQTT()
    def process_IN_DELETE(self, fileEvent):
        restartMQTT()

def main():
    restartMQTT()
    watchManager = pyinotify.WatchManager()
    notifier = pyinotify.Notifier(watchManager, FileEventHandler())
    watchManager.add_watch('/etc/my/config_directory', pyinotify.IN_CREATE | pyinotify.IN_DELETE)
    notifier.loop()
Where connectMQTT stores a newly connected and configured MQTT client in the Connection global. This works because loop_start() runs the MQTT network loop on a background thread, leaving the main thread free for notifier.loop(), while loop_stop() cleanly shuts that background thread down before the client is replaced.

Why is a simple publish/subscribe not working with ZeroMQ?

I want to establish publish/subscribe communication between two machines.
The two machines that I have are ryu-primary and ryu-secondary.
The steps I follow on each of the machines are as follows.
In the initializer for ryu-primary (IP address 192.168.241.131):
self.context = zmq.Context()
self.sub_socket = self.context.socket(zmq.SUB)
self.pub_socket = self.context.socket(zmq.PUB)
self.pub_port = 5566
self.sub_port = 5566

def establish_zmq_connection(self):  # Socket to talk to server
    print("Connection to ryu-secondary...")
    self.sub_socket.connect("tcp://192.168.241.132:%s" % self.sub_port)

def listen_zmq_connection(self):
    print('Listen to zmq connection')
    self.pub_socket.bind("tcp://*:%s" % self.pub_port)

def recieve_messages(self):
    while True:
        try:
            string = self.sub_socket.recv(flags=zmq.NOBLOCK)
            print('flow mod messages recieved {}'.format(string))
            return string
        except zmq.ZMQError:
            break

def push_messages(self, msg):
    self.pub_socket.send("%s" % (msg))
From ryu-secondary (IP address 192.168.241.132), in the initializer:
self.context = zmq.Context()
self.sub_socket = self.context.socket(zmq.SUB)
self.pub_socket = self.context.socket(zmq.PUB)
self.pub_port = 5566
self.sub_port = 5566

def establish_zmq_connection(self):  # Socket to talk to server
    print("Connection to ryu-secondary...")
    self.sub_socket.connect("tcp://192.168.241.131:%s" % self.sub_port)

def listen_zmq_connection(self):
    print('Listen to zmq connection')
    self.pub_socket.bind("tcp://*:%s" % self.pub_port)

def recieve_messages(self):
    while True:
        try:
            string = self.sub_socket.recv(flags=zmq.NOBLOCK)
            print('flow mod messages recieved {}'.format(string))
            return string
        except zmq.ZMQError:
            break

def push_messages(self, msg):
    print('pushing message to publish socket')
    self.pub_socket.send("%s" % (msg))
These are the functions that I have.
On ryu-secondary I am calling:
establish_zmq_connections()
push_messages()
But I am not receiving those messages on ryu-primary when I call:
listen_zmq_connection()
recieve_messages()
Can someone point out what I am doing wrong?
Repair the PUB/SUB messaging pattern setup
There are several important steps in making the PUB/SUB pattern work.
All this is well described in the ZeroMQ documentation.
You need not repeat both the pub and sub parts of the code on both sides. Worse, doing so masks, as a side-effect, the case where you mix up the pub and sub socket addresses/ports/calls in the "opposite" node's code, so you never notice such a principal collision.
your code defines the initial form of the PUB-archetype, which is expected to .push_messages()
your code defines the initial form of the SUB-archetype, which is expected to .recieve_messages()
your code does not show how you control who goes first in the connection setup -- whether .bind() or .connect() happens at random, or before/after the other
your code does not show any subscription setup after the SUB-archetype was instantiated. The default value upon socket instantiation needs to be modified via a .setsockopt( zmq.SUBSCRIBE, '' ) call; otherwise a prohibitive filter blocks any (yet unsubscribed) message from passing through and getting output ("received") on the SUB-side. See the sketch after the quoted documentation below.
The default SUB-side subscription filter must be modified; it is prohibitive
You may have noticed from the ZeroMQ documentation that, until set up otherwise, the SUB-side filters out all incoming messages.
http://api.zeromq.org/2-1:zmq-setsockopt
"The ZMQ_SUBSCRIBE option shall establish a new message filter on a ZMQ_SUB socket. Newly created ZMQ_SUB sockets shall filter out all incoming messages, therefore you should call this option to establish an initial message filter.
An empty option_value of length zero shall subscribe to all incoming messages. A non-empty option_value shall subscribe to all messages beginning with the specified prefix. Multiple filters may be attached to a single ZMQ_SUB socket, in which case a message shall be accepted if it matches at least one filter."
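Applied to the code above, a minimal sketch of the SUB-side fix (only the .setsockopt() line is new; an empty prefix subscribes to all messages):

self.sub_socket = self.context.socket(zmq.SUB)
# without this, the default filter silently drops every incoming message
self.sub_socket.setsockopt(zmq.SUBSCRIBE, b'')
self.sub_socket.connect("tcp://192.168.241.132:%s" % self.sub_port)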
Class-method pre-configuration of a Context instance is possible
There is another possibility for Python code using pyzmq 13.0+. There you may also set this up via the Context class-method .setsockopt( zmq.SUBSCRIBE, '' ) et al., but such a call has to precede the instantiation of a new socket from the Context instance pre-configured this way.
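A minimal sketch of that variant (assuming pyzmq 13.0+; options set on the Context become defaults for sockets created from it afterwards):

import zmq

context = zmq.Context()
context.setsockopt(zmq.SUBSCRIBE, b'')  # default option for sockets created after this call
sub_socket = context.socket(zmq.SUB)    # inherits the subscribe-to-everything filter
sub_socket.connect("tcp://192.168.241.132:5566")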
