I am following the AWS API documentation to connect to MQTT over WebSockets. Below is my code:
credentials_provider = AwsCredentialsProvider.new_static(
    access_key_id=auth_response_dictionary['user']['accessKeyId'],
    secret_access_key=auth_response_dictionary['user']['secretKey'],
    session_token=auth_response_dictionary['user']['sessionToken']
)
event_loop_group = io.EventLoopGroup(1)
host_resolver = io.DefaultHostResolver(event_loop_group)
client_bootstrap = io.ClientBootstrap(event_loop_group, host_resolver)
mqtt_connection = mqtt_connection_builder.websockets_with_default_aws_signing(
    endpoint=auth_response_dictionary['user']['iotEndpoint'],
    region=auth_response_dictionary['user']['region'],
    credentials_provider=credentials_provider,
    client_bootstrap=client_bootstrap,
    client_id=clientId
)
print("Connecting to aws")
# Make the connect() call
connect_future = mqtt_connection.connect()
# Future.result() waits until a result is available
print('connect_future ' + str(connect_future))
x = connect_future.result()
print('connect_future ' + str(x))
print("Connected!")
future, packet_id = mqtt_connection.publish(topic=TOPIC, payload=json.dumps(message), qos=mqtt.QoS.AT_LEAST_ONCE)
future, packet_id = mqtt_connection.publish(topic='test/po', payload=json.dumps(message), qos=mqtt.QoS.AT_LEAST_ONCE)
print('future ' + str(future))
print('future ' + str(packet_id))
print('Publish End')
I am not getting any errors while connecting or publishing, but I am not receiving any messages on my AWS MQTT broker when I subscribe to that topic in the 'Test' section of the console.
I think I have configured something wrong in either credentials_provider or client_bootstrap (or both), but I don't know what.
Here are the printed logs:
Connecting to aws
connect_future <Future at 0x7f605f942af0 state=pending>
connect_future {'session_present': False}
Connected!
future <Future at 0x7f605f8e54f0 state=pending>
future 3
Publish End
Can somebody please help?
mqtt_connection.subscribe(...) is what subscribes to an MQTT topic for AWS IoT messages, and I can't see it anywhere in your code.
mqtt_connection.subscribe is called like below, taking in the topic name, a Quality of Service level, and a callback.
received_count = 0
received_all_event = threading.Event()
...
topic = 'test/po'
print("Subscribing to topic '{}'...".format(topic))
subscribe_future, packet_id = mqtt_connection.subscribe(
    topic=topic,
    qos=mqtt.QoS.AT_LEAST_ONCE,
    callback=on_message_received)
subscribe_result = subscribe_future.result()
print("Subscribed with {}".format(str(subscribe_result['qos'])))
on_message_received can look like this:
def on_message_received(topic, payload, dup, qos, retain, **kwargs):
    print("Received message from topic '{}': {}".format(topic, payload))
    global received_count
    received_count += 1
    # Number of messages to wait for
    if received_count == 10:
        received_all_event.set()
Then in your main method, you can wait until you've received 10 messages:
# Wait for all messages to be received.
# This waits forever if count was set to 0.
if not received_all_event.is_set():
    print("Waiting for all messages to be received...")
received_all_event.wait()
print("{} message(s) received.".format(received_count))
There's really good sample code provided by AWS, which I'd recommend you check out.
Related
For my backend microservice application I am using RabbitMQ as the message broker.
The existing services receive events fine. My question is how to pull all the old events into a new microservice that will launch in the future.
Just consider that there are currently three services:
Product
Notification
Order
If I create a new product, its information is broadcast to the Notification service as well as the Order service, so a record of the product ends up in both Notification and Order.
So, after a while (when around 500+ products have been added), suppose I create a new service called Analytics and want all the product-created events to be replayed to it when it initially comes up.
I am using Python, RabbitMQ, and the Pika library.
This is my sample code:
Sample Publisher code
import pika
import sys, random
connection = pika.BlockingConnection(pika.URLParameters('<rabbitmq-link>'))
channel = connection.channel()
channel.exchange_declare(exchange='group', exchange_type='fanout')
message = "info: Hello World!"
channel.basic_publish(exchange='group', routing_key='', body=message)
print(" [x] Sent %r" % message)
connection.close()
Service one code
import pika
connection = pika.BlockingConnection(pika.URLParameters('<rabbitmq-link>'))
channel = connection.channel()
channel.exchange_declare(exchange='group', exchange_type='fanout')
result = channel.queue_declare(queue='group-1', exclusive=False)
queue_name = result.method.queue
channel.queue_bind(exchange='group', queue=queue_name)
print(' [*] Waiting for logs. To exit press CTRL+C')
def callback(ch, method, properties, body):
    print(" [x] %r" % body)
channel.basic_consume(queue=queue_name, on_message_callback=callback, auto_ack=True)
channel.start_consuming()
Service two code
import pika
connection = pika.BlockingConnection(pika.URLParameters('<rabbitmq-link>'))
channel = connection.channel()
channel.exchange_declare(exchange='group', exchange_type='fanout')
result = channel.queue_declare(queue='group-2', exclusive=False)
queue_name = result.method.queue
channel.queue_bind(exchange='group', queue=queue_name)
print(' [*] Waiting for logs. To exit press CTRL+C')
def callback(ch, method, properties, body):
    print(" [x] %r" % body)
channel.basic_consume(queue=queue_name, on_message_callback=callback, auto_ack=True)
channel.start_consuming()
So, I want a way for the third service, when it goes live, to be triggered by the old events.
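A fanout exchange does not retain messages, so a queue only ever sees events published after it was declared and bound. One possible direction (a sketch, not from the original post, assuming RabbitMQ 3.9+ where stream queues are available): declare a durable stream queue up front; streams keep their messages, so a consumer added later can replay from the beginning.
import pika

connection = pika.BlockingConnection(pika.URLParameters('<rabbitmq-link>'))
channel = connection.channel()
channel.exchange_declare(exchange='group', exchange_type='fanout')

# A stream queue retains every message it receives, unlike a classic queue
channel.queue_declare(
    queue='product-events',
    durable=True,
    arguments={'x-queue-type': 'stream'})
channel.queue_bind(exchange='group', queue='product-events')

def callback(ch, method, properties, body):
    print(" [x] %r" % body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

# Streams require a consumer prefetch limit and manual acknowledgements
channel.basic_qos(prefetch_count=100)
channel.basic_consume(
    queue='product-events',
    on_message_callback=callback,
    # Replay from the first retained message when this service starts
    arguments={'x-stream-offset': 'first'})
channel.start_consuming()
The caveat is that even a stream only retains events from the moment it is declared, so the queue has to exist before the products are created; replaying events published before any such queue existed generally requires an event store on the producer side (or republishing from the Product service's database).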
I am trying to build a publisher/subscriber system. In simple words, a publisher can publish messages to all subscribers via a broker. Subscribers support two functionalities: 1) receive a message from the broker, and 2) send a message to the broker read from stdin. But there is a problem: subscribers block while waiting for input from stdin, and they only receive the publisher's messages after input is given, even though no input is needed.
I would like to solve the problem in this direction: while waiting for input from stdin, a subscriber can still receive a message from the publisher. I tried "curses" but failed.
Here is part of my code:
subscriber.py
while True:
    try:
        command = input("Enter a command: ")
        #command = sys.stdin.readline()
        if command == "quit":
            break
        command = command.strip()  # strip() returns a new string; it does not modify in place
        command_to_broker = process_command(command, sub_id)
        if command_to_broker == error_message:
            print("Command with wrong format!")
            continue
        sock.sendall(bytes(command_to_broker, ENCODING))
        received = str(sock.recv(BUFFER_SIZE), ENCODING)
        print("Received from BROKER: " + received)
    except:
        print("Error/Disconnect")
broker.py (starts a publisher thread)
def publisher_thread(connection, topics_and_subscribers, subscribers_and_ports, subscribers):
    while True:
        data = connection.recv(BUFFER_SIZE)
        print("Command from PUBLISHER {}".format(data.decode(ENCODING)))
        response = 'OK'
        command = data.decode(ENCODING).split(" ", 3)
        pub_id = command[0]
        topic = command[2]
        message = command[3]
        if topic in topics_and_subscribers:
            for s in subscribers:
                print(s.getpeername())
                sent = s.send(bytes(message, ENCODING))  # renamed from 'sum' to avoid shadowing the builtin
                print(sent)
        if not data:
            break
        connection.sendall(bytes(response, ENCODING))
    connection.close()
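One possible direction (a sketch, not from the original post): on Unix-like systems the standard selectors module can watch both stdin and the broker socket, so the subscriber only reads stdin when a line is actually available and never blocks out incoming messages. It reuses sock, ENCODING, BUFFER_SIZE, process_command and sub_id from the snippet above:
import selectors
import sys

sel = selectors.DefaultSelector()
# Watch both stdin and the broker socket for readability
sel.register(sys.stdin, selectors.EVENT_READ, data="stdin")
sel.register(sock, selectors.EVENT_READ, data="socket")

while True:
    for key, _ in sel.select():
        if key.data == "stdin":
            command = sys.stdin.readline().strip()
            if command == "quit":
                sys.exit(0)
            command_to_broker = process_command(command, sub_id)
            sock.sendall(bytes(command_to_broker, ENCODING))
        else:
            # A publisher message arrived while we were "waiting" for input
            received = str(sock.recv(BUFFER_SIZE), ENCODING)
            print("Received from BROKER: " + received)
Note that registering sys.stdin with a selector does not work on Windows, where select() only supports sockets.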
I tried my subscriber, which is written using the Paho Python client, with the HiveMQ broker and it worked just fine, but it is not working with IBM.
Following "Subscribing to application status messages" and this question, I implemented the subscriber client as follows (I got the "a:<ORG-ID>:<App-ID>" from the apps section of my IBM Watson platform):
def on_connect(client, userdata, flags, rc):
    print("CONNACK received with code %d." % (rc))
    (result, mid) = client.subscribe("iot-2/app/MyAppID/sensordata", 2)
    print("result: ", result, ", mid: ", mid)
    if result == paho.MQTT_ERR_SUCCESS:
        print("success in subscribing.")
def on_subscribe(client, userdata, mid, granted_qos):
    print("Subscribed: "+str(mid)+" "+str(granted_qos))
client = paho.Client("a:<ORG-ID>:<App-ID>")
# adding callbacks to client
client.on_connect = on_connect
client.on_subscribe = on_subscribe
client.on_message = on_message
client.username_pw_set("a-<ORG-ID>-<App-ID>", "my authentication token")
client.tls_set(ca_certs=None, certfile=None, keyfile=None, cert_reqs=ssl.CERT_REQUIRED,
               tls_version=ssl.PROTOCOL_TLS, ciphers=None)
client.connect("<ORG-ID>.messaging.internetofthings.ibmcloud.com", 8883, 60)
client.loop_start()
When I run the project, I get rc with a value of 0, which means a successful connection.
And this is what the on_connect() callback prints:
CONNACK received with code 0.
result: 0 , mid: 2
success in subscribing.
But the on_subscribe() callback is never called. What am I doing wrong?
If you want to subscribe to application status messages, then:
An application can subscribe to monitor the status of one or more applications, for example:
Subscribe to topic iot-2/app/appId/mon
Note: To subscribe to updates for all applications, use the MQTT "any" wildcard character (+) for the appId component.
Based on the above, the line:
(result, mid) = client.subscribe("iot-2/app/MyAppID/sensordata", 2)
should be
(result, mid) = client.subscribe("iot-2/app/MyAppID/mon", 2)
or
(result, mid) = client.subscribe("iot-2/app/+/mon", 2)
If you want to receive sensor data instead, then subscribe to this topic:
iot-2/type/device_type/id/device_id/evt/event_id/fmt/format_string
You would need to replace device_type, device_id, event_id, and format_string (which could be json, txt, etc.).
For every possible event:
(result, mid) = client.subscribe("iot-2/type/+/id/+/evt/+/fmt/+",2)
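As an aside, the question assigns client.on_message but never shows its definition; a minimal version with the standard Paho callback signature could look like this (just a sketch, not from the original post):
def on_message(client, userdata, msg):
    # msg.topic is the topic string; msg.payload is a bytes object
    print("Received message on '{}': {}".format(msg.topic, msg.payload.decode()))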
I'm trying to simulate processing in threads by using asyncio.Queue. However, I'm struggling to turn the threaded processing simulation part into an asynchronous loop.
So, in brief, what my script does is: 1) receive processing requests over a websocket, 2) assign each request to the requested queue (which simulates a thread), 3) run the processing queues, which put responses into one shared response queue, and then 4) the websocket takes the responses out of the shared queue one by one and sends them out to the server.
Simplified version of my code:
# Initialize empty processing queues for the number of threads I want to simulate
processing_queues = [asyncio.Queue() for i in range(n_queues)]
# Initialize shared response queue
response_q = asyncio.Queue()
# Set up a websocket context manager
async with websockets.connect(f"ws://{host}:{port}") as websocket:
    while True:
        # Read incoming requests
        message = await websocket.recv()
        # Parse message -> get request data and on which thread / queue to process it
        request_data, queue_no = parse_message(message)
        # Put the request data to the requested queue (imitating thread)
        await processing_queues[queue_no].put(request_data)
        # THIS IS WHERE I THINK ASYNCHRONY BREAKS (AND I NEED HELP)
        # Do processing in each imitated processing thread
        for proc_q in processing_queues:
            if not proc_q.empty():
                request_data = await proc_q.get()
                # do the processing
                response = process_data(request_data)
                # Add the response to the response queue
                await response_q.put(response)
        # Send responses back to the server
        if not response_q.empty():
            response_data = await response_q.get()
            await websocket.send(response_data)
From the output of the script, I deduced that (1) I seem to receive requests and send out responses asynchronously, and (2) processing in the queues does not happen asynchronously. Correct me if I'm wrong.
I was reading about create_task() in asyncio. Maybe that could be a way to solve my problem?
I'm open to any solution (even hacky).
P.S. I would just use threads from the threading library, but I need asyncio for the websockets library.
P.P.S. Threaded version of my idea:
class ProcessingImitationThread(threading.Thread):
    def __init__(self, thread_id, request_q, response_q):
        threading.Thread.__init__(self)
        self.thread_id = thread_id
        self.request_q = request_q
        self.response_q = response_q

    def run(self):
        while True:
            try:
                (x, request_id) = self.request_q.get()
            except Empty:
                time.sleep(0.2)
            else:
                if x == -1:
                    # EXIT CONDITION
                    break
                else:
                    sleep_time_for_x = count_imitation(x, state)
                    time.sleep(sleep_time_for_x)
                    self.response_q.put(request_id)
                    print(f"request {request_id} executed")
# Set up
processing_qs = [queue.Queue() for i in range(n_processes_simulated)]
response_q = queue.Queue()
processing_thread_handlers = []
for i in range(n_processes_simulated):
    # create and start thread
    t = ProcessingImitationThread(i, processing_qs[i], response_q)
    processing_thread_handlers.append(t)
    t.start()
# Main loop
while True:
    # receive requests and assign to requested queue (so that thread picks up)
    if new_request:
        requested_process, x, request_id = parse(new_request)
        processing_qs[requested_process].put((x, request_id))
    ...
    # if there are any new responses, send them out to the server
    if response_q.qsize() > 0:
        request_id = response_q.get()
        # Networking: send to server
        ...
# Close down
...
EDIT: fixes small typos.
Your intuition that you need create_task is correct, as create_task is the closest async equivalent of Thread.start: it creates a task that runs in parallel (in an async sense) to whatever you are doing now.
You need separate coroutines that drain the respective queues running in parallel; something like this:
async def main():
    processing_qs = [asyncio.Queue() for i in range(n_queues)]
    response_q = asyncio.Queue()
    async with websockets.connect(f"ws://{host}:{port}") as websocket:
        processing_tasks = [
            asyncio.create_task(processing(processing_q, response_q))
            for processing_q in processing_qs
        ]
        response_task = asyncio.create_task(
            send_responses(websocket, response_q))
        while True:
            message = await websocket.recv()
            requested_process, x, request_id = parse(message)
            await processing_qs[requested_process].put((x, request_id))

async def processing(processing_q, response_q):
    while True:
        x, request_id = await processing_q.get()
        ... create response ...
        await response_q.put(response)

async def send_responses(websocket, response_q):
    while True:
        msg = await response_q.get()
        await websocket.send(msg)
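To actually run main(), a top-level call like asyncio.run(main()) (available since Python 3.7) starts the event loop; the created tasks then run concurrently for as long as main() keeps awaiting.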
The following queue is somehow not working properly. Is there any obvious mistake I have made? Basically, every incoming SMS message is put onto the queue; the code tries to send it and, if successful, deletes it from the queue. If unsuccessful, it sleeps for 2 seconds and tries sending it again.
# initialize queue
queue = queue.Queue()

def messagePump():
    while True:
        item = queue.get()
        if item is not None:
            status = sendText(item)
            if status == 'SUCCEEDED':
                queue.task_done()
            else:
                time.sleep(2)

def sendText(item):
    response = getClient().send_message(item)
    response = response['messages'][0]
    if response['status'] == '0':
        return 'SUCCEEDED'
    else:
        return 'FAILED'
@app.route('/webhooks/inbound-sms', methods=['POST'])
def delivery_receipt():
    data = dict(request.form) or dict(request.args)
    senderNumber = data['msisdn'][0]
    incomingMessage = data['text'][0]
    # came from customer service operator
    if (senderNumber == customerServiceNumber):
        try:
            split = incomingMessage.split(';')
            # get recipient phone number
            recipient = split[0]
            # get message content
            message = split[1]
            # check if target number is 10 digits long and there is a message
            if (len(message) > 0):
                # for confirmation send beginning string only
                successText = 'Message successfully sent to: '+recipient+' with text: '+message[:7]
                queue.put({'from': virtualNumber, 'to': recipient, 'text': message})
The above is running on a Flask server. So invoking messagePump:
thread = threading.Thread(target=messagePump)
thread.start()
What commonly happens in such cases is that the thread finishes execution before items start appearing in the queue; try setting thread.daemon = True before calling thread.start().
Another thing that may be happening here is that the thread was terminated due to an exception. Make sure messagePump handles all possible exceptions.
This topic on tracing exceptions in threads may be useful to you:
Catch a thread's exception in the caller thread in Python
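For illustration, here is a minimal sketch combining both suggestions; the inner retry loop is my reading of the intended "try again" behaviour, since the posted messagePump actually drops a failed item and fetches the next one:
import logging
import threading
import time

def messagePump():
    while True:
        item = queue.get()
        try:
            # Keep retrying the same item until the send succeeds
            while sendText(item) != 'SUCCEEDED':
                time.sleep(2)
            queue.task_done()
        except Exception:
            # Log instead of letting an uncaught exception silently kill the thread
            logging.exception("messagePump failed on item %r", item)

# daemon=True so this worker thread does not block interpreter shutdown
thread = threading.Thread(target=messagePump, daemon=True)
thread.start()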