paho mqtt: Incorrect output when compiling with Python 3 - python-3.x

I want to set up an mqtt connection. And I want to do it in Python3.
This is a part of the client's code:
def on_message(client, userdata, message):
    print("Received message '" + str(message.payload) + "' on topic '"
          + message.topic + "' with QoS " + str(message.qos))
    if message.payload == "Hello":
        print("Received message #1. Do something.")
        # do something
    if message.payload == "World":
        print("Received message #2. Do something else.")
        # do something
I'm basically publishing both messages, "Hello" and "World", under their given topics to this client, but I receive different results depending on the Python version I use.
This is the output when I run it with Python 2 (it's the output I want):
pi@raspberrypi:~/Desktop $ python Client.py
Connection returned result: 0
Received message 'Hello' on topic 'Wulff/test' with QoS 0
Received message #1. Do something.
Received message 'World' on topic 'Wulff/topic' with QoS 0
Received message #2. Do something else.
This is the output when I run it with Python 3:
pi@raspberrypi:~/Desktop $ python3 Client.py
Connection returned result: 0
Received message 'b'Hello'' on topic 'Wulff/test' with QoS 0
Received message 'b'World'' on topic 'Wulff/topic' with QoS 0
Received message 'b'Hello'' on topic 'Wulff/test' with QoS 0
Received message 'b'World'' on topic 'Wulff/topic' with QoS 0
I do not understand why the program won't recognise the message payload here.
What do I have to pay attention to when running a program with different versions of python? I already installed the necessary modules on both Python2 and Python3.

This is because with Python 3 the message payload is delivered as a bytes object (this is actually how it should always have been treated).
It can be converted to a str with the following:
def on_message(client, userdata, message):
    payload = message.payload.decode()  # decode the bytes payload to str (UTF-8 by default)
    print("Received message '" + payload + "' on topic '"
          + message.topic + "' with QoS " + str(message.qos))
    if payload == "Hello":
        print("Received message #1. Do something.")
        # do something
    if payload == "World":
        print("Received message #2. Do something else.")
        # do something
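For completeness, here is how that callback might be wired into a runnable Python 3 client. This is a sketch, not the asker's actual Client.py: the broker address is a placeholder, the subscription wildcard is inferred from the topics in the output above, and it assumes the paho-mqtt 1.x callback signatures.

import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    print("Connection returned result: " + str(rc))
    client.subscribe("Wulff/#")  # matches 'Wulff/test' and 'Wulff/topic'

def on_message(client, userdata, message):
    payload = message.payload.decode()  # bytes -> str before comparing
    print("Received message '" + payload + "' on topic '"
          + message.topic + "' with QoS " + str(message.qos))

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 1883)  # placeholder broker address
client.loop_forever()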

Related

Input blocks the subscriber/client from receiving messages in a Pub/Sub system

I am trying to build a publisher/subscriber system. In simple words, a publisher can publish messages to all subscribers via a broker. Subscribers can support two functionalities: 1) receive a message from the broker, 2) send a message to the broker with the input method. But there is a problem: subscribers block while waiting for input from stdin and only receive the publisher's messages after input, even though no input is needed.
I would like to solve the problem in this direction: while waiting for input from stdin, a subscriber can receive a message from the publisher. I tried curses but failed.
Here is part of my code:
subscriber.py
while True:
    try:
        command = input("Enter a command: ")
        #command = sys.stdin.readline()
        if command == "quit":
            break
        command.strip()
        command_to_broker = process_command(command, sub_id)
        if command_to_broker == error_message:
            print("Command with wrong format!")
            continue
        sock.sendall(bytes(command_to_broker, ENCODING))
        received = str(sock.recv(BUFFER_SIZE), ENCODING)
        print("Received from BROKER: " + received)
    except:
        print("Error/Disconnect")
broker.py (starts a publisher thread)
def publisher_thread(connection, topics_and_subscribers, subscribers_and_ports, subscribers):
    while True:
        data = connection.recv(BUFFER_SIZE)
        print("Command from PUBLISHER {}".format(data.decode(ENCODING)))
        response = 'OK'
        command = data.decode(ENCODING).split(" ", 3)
        pub_id = command[0]
        topic = command[2]
        message = command[3]
        if topic in topics_and_subscribers:
            for s in subscribers:
                print(s.getpeername())
                sum = s.send(bytes(message, ENCODING))
                print(sum)
        if not data:
            break
        connection.sendall(bytes(response, ENCODING))
    connection.close()
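The direction described above (receive from the broker while waiting on stdin) can be sketched with select, which blocks on whichever source becomes readable first. A minimal sketch, assuming POSIX (on Windows, select() only accepts sockets); sock, process_command, sub_id, ENCODING and BUFFER_SIZE are the names from the question's code:

import select
import sys

print("Enter a command: ", end="", flush=True)
while True:
    # block until stdin or the broker socket is readable
    readable, _, _ = select.select([sys.stdin, sock], [], [])
    for source in readable:
        if source is sys.stdin:
            command = sys.stdin.readline().strip()
            if command == "quit":
                sys.exit(0)
            sock.sendall(bytes(process_command(command, sub_id), ENCODING))
            print("Enter a command: ", end="", flush=True)
        else:
            received = str(sock.recv(BUFFER_SIZE), ENCODING)
            print("Received from BROKER: " + received)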

Azure python sdk service bus receive message

I am a bit confused about the azure python servicebus.
I have a Service Bus TOPIC and SUBSCRIPTION which listen for specific messages, and I have the code to receive those messages, which will then be processed by AWS Comprehend.
Following the Microsoft documentation, the basic code to receive the message works and I am able to print it, but when I integrate the same logic with Comprehend it fails.
Here is the example; this is the bit of code from the Microsoft documentation:
with servicebus_client:
    # get the Queue Receiver object for the queue
    receiver = servicebus_client.get_queue_receiver(queue_name=QUEUE_NAME, max_wait_time=5)
    with receiver:
        for msg in receiver:
            print("Received: " + str(msg))
            # complete the message so that the message is removed from the queue
            receiver.complete_message(msg)
and the output is this
{"ModuleId":"123458", "Text":"This is amazing."}
Receive is done.
My first thought was that the received message was a JSON object, so I started writing code to read data from the JSON output, as follows:
servicebus_client = ServiceBusClient.from_connection_string(conn_str=CONNECTION_STR)
with servicebus_client:
    receiver = servicebus_client.get_subscription_receiver(
        topic_name=TOPIC_NAME,
        subscription_name=SUBSCRIPTION_NAME
    )
    with receiver:
        received_msgs = receiver.receive_messages(max_message_count=10, max_wait_time=5)
        for msg in received_msgs:
            # print(str(msg))
            message = json.dumps(msg)
            text = message['Text']
            # passing the text to comprehend
            result_json = json.dumps(comprehend.detect_sentiment(Text=text, LanguageCode='en'), sort_keys=True, indent=4)
            result = json.loads(result_json)  # converting json to python dictionary
            # extracting the sentiment value
            sentiment = result["Sentiment"]
            # extracting the sentiment score
            if sentiment == "POSITIVE":
                value = round(result["SentimentScore"]["Positive"] * 100, 2)
            elif sentiment == "NEGATIVE":
                value = round(result["SentimentScore"]["Negative"] * 100, 2)
            elif sentiment == "NEUTRAL":
                value = round(result["SentimentScore"]["Neutral"] * 100, 2)
            elif sentiment == "MIXED":
                value = round(result["SentimentScore"]["Mixed"] * 100, 2)
            # store the text, sentiment and value in a dictionary and convert it to JSON
            output = {'Text': text, 'Sentiment': sentiment, 'Value': value}
            output_json = json.dumps(output)
            print('Text: ', text, '\nSentiment: ', sentiment, '\nValue: ', value)
            print('In JSON format\n', output_json)
            receiver.complete_message(msg)
print("Receive is done.")
But when I run this I get the following error:
TypeError: Object of type ServiceBusReceivedMessage is not JSON serializable
Has this ever happened to anybody who can help me understand what type the Service Bus message coming back from the receive is?
Thank you so much everyone
Has this ever happened to anybody who can help me understand what type the Service Bus message coming back from the receive is?
The type of the received message is ServiceBusReceivedMessage, which is derived from ServiceBusMessage. The contents of the message can be fetched from its body property, or as text via str(msg). The TypeError comes from json.dumps, which serializes Python objects to JSON; to go the other way and get a dictionary you can index with 'Text', deserialize with json.loads.
Can you please try something like:
message = json.loads(str(msg))
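A sketch of the receive loop with that fix applied (assumptions: the body is UTF-8 JSON with a 'Text' field as in the sample output, and receiver and comprehend are set up as in the question):

import json

with receiver:
    received_msgs = receiver.receive_messages(max_message_count=10, max_wait_time=5)
    for msg in received_msgs:
        data = json.loads(str(msg))  # deserialize the JSON body into a dict
        text = data['Text']
        result = comprehend.detect_sentiment(Text=text, LanguageCode='en')
        print('Text:', text, '\nSentiment:', result['Sentiment'])
        receiver.complete_message(msg)  # remove the message from the subscription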

Can't publish to AWS MQTT broker over websockets

I am following the AWS API to connect to MQTT over websockets. Below is my code:
credentials_provider = AwsCredentialsProvider.new_static(
    access_key_id=auth_response_dictionary['user']['accessKeyId'],
    secret_access_key=auth_response_dictionary['user']['secretKey'],
    session_token=auth_response_dictionary['user']['sessionToken']
)
event_loop_group = io.EventLoopGroup(1)
host_resolver = io.DefaultHostResolver(event_loop_group)
client_bootstrap = io.ClientBootstrap(event_loop_group, host_resolver)
mqtt_connection = mqtt_connection_builder.websockets_with_default_aws_signing(
    endpoint=auth_response_dictionary['user']['iotEndpoint'],
    region=auth_response_dictionary['user']['region'],
    credentials_provider=credentials_provider,
    client_bootstrap=client_bootstrap,
    client_id=clientId
)
print("Connecting to aws")
# Make the connect() call
connect_future = mqtt_connection.connect()
# Future.result() waits until a result is available
print('connect_future ' + str(connect_future))
x = connect_future.result()
print('connect_future ' + str(x))
print("Connected!")
future, packet_id = mqtt_connection.publish(topic=TOPIC, payload=json.dumps(message), qos=mqtt.QoS.AT_LEAST_ONCE)
future, packet_id = mqtt_connection.publish(topic='test/po', payload=json.dumps(message), qos=mqtt.QoS.AT_LEAST_ONCE)
print('future ' + str(future))
print('future ' + str(packet_id))
print('Publish End')
I am not getting any errors while connecting or publishing, but I am not receiving any messages on my AWS MQTT broker when I subscribe to that topic in the 'Test' section there.
I think I have configured something wrong in either credentials_provider or client_bootstrap, or both, but I don't know what.
Here are the printed logs
Connecting to aws
connect_future <Future at 0x7f605f942af0 state=pending>
connect_future {'session_present': False}
Connected!
future <Future at 0x7f605f8e54f0 state=pending>
future 3
Publish End
Can somebody please help?
mqtt_connection.subscribe(...) is used to subscribe to an MQTT topic for AWS IoT messages, which I can't see anywhere in your code.
mqtt_connection.subscribe is called like below, taking in the topic name, a Quality of Service level and a callback.
received_count = 0
received_all_event = threading.Event()
...
topic = 'test/po'
print("Subscribing to topic '{}'...".format(topic))
subscribe_future, packet_id = mqtt_connection.subscribe(
    topic=topic,
    qos=mqtt.QoS.AT_LEAST_ONCE,
    callback=on_message_received)
subscribe_result = subscribe_future.result()
print("Subscribed with {}".format(str(subscribe_result['qos'])))
on_message_received can look like this:
def on_message_received(topic, payload, dup, qos, retain, **kwargs):
    print("Received message from topic '{}': {}".format(topic, payload))
    global received_count
    received_count += 1
    # Number of messages to wait for
    if received_count == 10:
        received_all_event.set()
Then in your main method, you can wait until you've received 10 messages:
# Wait for all messages to be received.
# This waits forever if count was set to 0.
if not received_all_event.is_set():
    print("Waiting for all messages to be received...")
received_all_event.wait()
print("{} message(s) received.".format(received_count))
There's really good sample code provided by AWS, which I'd recommend you check out.
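As a quick end-to-end check you can combine the two halves: subscribe first, then publish to the same topic and wait for the echo. A sketch built on the snippets above (mqtt_connection comes from the question's code; the topic and payload are illustrative):

import json
import threading
from awscrt import mqtt

received = threading.Event()

def on_message_received(topic, payload, dup, qos, retain, **kwargs):
    print("Received message from topic '{}': {}".format(topic, payload))
    received.set()

# subscribe before publishing so the broker has somewhere to route the echo
subscribe_future, _ = mqtt_connection.subscribe(
    topic='test/po',
    qos=mqtt.QoS.AT_LEAST_ONCE,
    callback=on_message_received)
subscribe_future.result()  # block until the broker confirms the subscription

publish_future, _ = mqtt_connection.publish(
    topic='test/po',
    payload=json.dumps({'ping': 1}),
    qos=mqtt.QoS.AT_LEAST_ONCE)
publish_future.result()  # block until the broker acknowledges the publish

if not received.wait(timeout=10):
    print("No echo received - check that the IoT policy allows Subscribe/Receive on this topic")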

Message sequence acting unexpectedly in RabbitMQ

I'd like to implement a RabbitMQ topology similar to Option 3 here, except for some differences:
The new topology should handle a few thousand messages per day, and it shall have two exchanges: one to deal with the main queues (about 30), the other to deal with retry and error queues (about 60). I've been following this tutorial and the usual RMQ tutorials, plus many SO posts. The RMQ server is fired up in a Docker container.
The problem I am facing is that not all messages are being picked up by the consumer, and the sequence in which the messages are received is unexpected. I'm also seeing the same message being rejected twice. Here's my code:
exchanges.py
def callback(self, channel, method, properties, body):
    print("delivery_tag: {0}".format(method.delivery_tag))
    data = json.loads(body)
    routingKey = data.get('routing-key')
    routingKey_dl_error = queues_dict[routingKey]['error']
    print(" [X] Got {0}".format(body))
    print(" [X] Received {0} (try: {1})".format(data.get('keyword'), int(properties.priority)+1))
    # redirect faulty messages to *.error queues
    if data.get('keyword') == 'FAIL':
        channel.basic_publish(exchange='exchange.retry',
                              routing_key=routingKey_dl_error,
                              body=json.dumps(data),
                              properties=pika.BasicProperties(
                                  delivery_mode=2,
                                  priority=int(properties.priority),
                                  timestamp=int(time.time()),
                                  headers=properties.headers))
        print(" [*] Sent to error queue: {0}".format(routingKey_dl_error))
        time.sleep(5)
        channel.basic_ack(delivery_tag=method.delivery_tag)  # leaving this in creates 1000s of iterations(?!)
    # check number of sent counts
    else:
        # redirect messages that exceed MAX_RETRIES to *.error queues
        if properties.priority >= MAX_RETRIES - 1:
            print(" [!] {0} Rejected after {1} retries".format(data.get('keyword'), int(properties.priority) + 1))
            channel.basic_publish(exchange='exchange.retry',
                                  routing_key=routingKey_dl_error,
                                  body=json.dumps(data),
                                  properties=pika.BasicProperties(
                                      delivery_mode=2,
                                      priority=int(properties.priority),
                                      timestamp=int(time.time()),
                                      headers=properties.headers))
            print(" [*] Sent to error queue: {0}".format(routingKey_dl_error))
            #channel.basic_ack(delivery_tag=method.delivery_tag)
        else:
            timestamp = time.time()
            now = datetime.datetime.now()
            expire = 1000 * int((now.replace(hour=23, minute=59, second=59, microsecond=999999) - now).total_seconds())
            # to reject a job we create a new one with a different priority and expiration
            channel.basic_publish(exchange='exchange_main',
                                  routing_key=routingKey,
                                  body=json.dumps(data),
                                  properties=pika.BasicProperties(
                                      delivery_mode=2,
                                      priority=int(properties.priority) + 1,
                                      timestamp=int(timestamp),
                                      expiration=str(expire),
                                      headers=properties.headers))
            # send back an acknowledgement for the job
            channel.basic_ack(delivery_tag=method.delivery_tag)  # nack or reject???
            print("[!] Rejected. Going to sleep for a while...")
            time.sleep(5)
def exchange(self):
    # 1 - connect and channel setup
    parameters = "..."
    connection = pika.BlockingConnection(parameters)
    channel = connection.channel()
    # 2 - declare exchanges
    # declares the main exchange to be used by all producers to send messages. External facing
    channel.exchange_declare(exchange='exchange_main',
                             exchange_type='direct',
                             durable=True,
                             auto_delete=False)
    # declares the dead letter exchange. Routes messages to *error and *retry queues. Internal use only
    channel.exchange_declare(exchange='exchange.retry',
                             exchange_type='direct',
                             durable=True,
                             auto_delete=False)
    # 3 - bind the external facing exchange to the internal exchange
    #channel.exchange_bind(destination='exchange.retry', source='exchange_main')
    # 4 - declare queues
    # Create durable queues bound to the exchange_main exchange
    for queue_name in self.queueName_list:
        queueArgs = {
            "x-message-ttl": 5000,
            "x-dead-letter-exchange": 'exchange.retry',
            #"x-dead-letter-routing-key": queue_name + '.retry'
        }
        channel.queue_declare(queue=queue_name, durable=True, arguments=queueArgs)
    # Create durable queues bound to the exchange.retry exchange
    '''
    for queue_dl_name in self.queueName_dl_list:
        if queue_dl_name[-5:] == 'retry':
            queueArgs_retry = {
                "x-message-ttl": 5000,
                "x-dead-letter-exchange": 'exchange_main',
                "x-dead-letter-routing-key": queue_dl_name[:-6]
            }
            channel.queue_declare(queue=queue_dl_name, durable=True, arguments=queueArgs_retry)
        else:
            channel.queue_declare(queue=queue_dl_name, durable=True)
    '''
    for queue_dl_name in self.queueName_dl_list:
        channel.queue_declare(queue=queue_dl_name, durable=True)
    # 5 - bind retry and main queues to exchanges
    # bind queues to exchanges. Allows for messages to be saved when no consumer is present
    for queue_name in self.queueName_list:
        channel.queue_bind(queue=queue_name, exchange='exchange_main')
    for queue_dl_name in self.queueName_dl_list:
        channel.queue_bind(queue=queue_dl_name, exchange='exchange.retry')
    # 6 - don't dispatch a new message to a worker until it has processed and acknowledged the previous one; dispatch to the next worker instead
    channel.basic_qos(prefetch_count=1)
    # 7 - consume the messages
    all_queues = self.queueName_list + self.queueName_dl_list
    for queue in all_queues:
        channel.basic_consume(queue=queue,
                              on_message_callback=self.callback,
                              auto_ack=False)
    print('[*] Waiting for data for:')
    for queue in all_queues:
        print('    ' + queue)
    print('[*] To exit press CTRL+C')
    try:
        channel.start_consuming()
    except KeyboardInterrupt:
        channel.stop_consuming()
        channel.close()
        connection.close()
producer.py
# 1 - connect and channel setup
parameters = "..."
try:
    connection = pika.BlockingConnection(parameters)
except pika.exceptions.AMQPConnectionError as err:
    print("AMQP connection failure. Ensure RMQ server is running.")
    raise err
channel = connection.channel()  # create a channel in the TCP connection
# 2 - Turn on delivery confirmations (either a basic.ack or basic.nack)
channel.confirm_delivery()
# 3 - send message to rmq
print(" [*] Sending message to create a queue")
# set header parameters
count = 3
for i in range(1, count + 1):
    if self.keyword is None:
        message = "data {0}".format(i)
    else:
        message = self.keyword
    timestamp = time.time()
    now = datetime.datetime.now()
    expire = 1000 * int((now.replace(hour=23, minute=59, second=59, microsecond=999999) - now).total_seconds())
    headers = dict()
    headers['RMQ_Header_Key'] = self.queueName
    headers['x-retry-count'] = 0
    headers['x-death'] = None
    data = {
        'routing-key': self.queueName,
        'keyword': message,
        'domain': message,
        'created': int(timestamp),
        'expire': expire
    }
    # properties are often used for bits of data that your code needs to have, but that aren't part of the actual message body
    channel.basic_publish(
        exchange='exchange_main',
        routing_key=self.queueName,
        body=json.dumps(data),
        properties=pika.BasicProperties(
            delivery_mode=2,            # makes the job persistent
            priority=0,                 # default priority
            timestamp=int(timestamp),   # timestamp of job creation
            expiration=str(expire),     # job expiration
            headers=headers
        ))
    print(" [*] Sent message: {0} via routing key: {1}".format(message, self.queueName))
# 4 - close channel and connection
channel.close()
connection.close()
After firing up exchange.py, I then run from my command line in another terminal window: python3 producer.py queue1
And then get:
delivery_tag: 1
[X] Got b'{"routing-key": "queue1", "keyword": "data 1", "domain": "data 1", "created": 1567068725, "expire": 47274000}'
[X] Received data 1 (try: 1)
[!] Rejected. Going to sleep for a while...
delivery_tag: 2
[X] Got b'{"routing-key": "queue1", "keyword": "data 1", "domain": "data 1", "created": 1567068725, "expire": 47274000}'
[X] Received data 1 (try: 2)
[!] Rejected. Going to sleep for a while...
delivery_tag: 3
[X] Got b'{"routing-key": "queue1", "keyword": "data 3", "domain": "data 3", "created": 1567068725, "expire": 47274000}'
[X] Received data 3 (try: 1)
[!] Rejected. Going to sleep for a while...
delivery_tag: 4
[X] Got b'{"routing-key": "queue1", "keyword": "data 3", "domain": "data 3", "created": 1567068725, "expire": 47274000}'
[X] Received data 3 (try: 2)
[!] Rejected. Going to sleep for a while...
delivery_tag: 5
[X] Got b'{"routing-key": "queue1", "keyword": "data 3", "domain": "data 3", "created": 1567068725, "expire": 47274000}'
[X] Received data 3 (try: 3)
[!] data 3 Rejected after 3 retries
[*] Sent to error queue: queue1.error
delivery_tag: 6
[X] Got b'{"routing-key": "queue1", "keyword": "data 3", "domain": "data 3", "created": 1567068725, "expire": 47274000}'
[X] Received data 3 (try: 3)
[!] data 3 Rejected after 3 retries
[*] Sent to error queue: queue1.error
Questions:
Does my current implementation correspond to my desired topology?
Direct vs topic: is a direct exchange the most optimal/efficient solution in this case?
One exchange vs two: is it advised to stick to 2 exchanges, or can this be simplified to just one exchange for everything?
How do I test for messages which are not normal, i.e. ones sent to the retry-loop section of my callback function? The callback currently doesn't handle "normal" messages (i.e. messages with no retries triggered, or simply failed messages).
Is binding the two exchanges necessary? Commenting out this code made no difference.
Do I need to implement arguments (channel.queue_declare) to both dead letter and non-dead letter queues? I know that the non-dead letter queues are to have the arguments declared, whereby the x-dead-letter-exchange is set, but I'm not sure whether the x-dead-letter-routing-key should also be set too.
Do I need to ack/nack every time a message is published? I notice differing behaviour depending on whether it is implemented (i.e. without the ack the FAIL message is sent only twice, not 3 times; with the ack it is sent more than 3 times). See the sketch after this list.
In the output above, "data 1" is only consumed twice, "data 2" doesn't appear at all, and "data 3" reaches the MAX_RETRY limit of 3 times but then gets sent to the *.error queue twice (not once), which I find strange. What is RMQ doing here?
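On the ack/nack question above: since the main queues already declare x-dead-letter-exchange, an alternative to manually re-publishing and then acking is to let the broker do the dead-lettering itself. A hedged sketch of that variant inside the callback:

# Reject without requeueing: RabbitMQ dead-letters the delivery to the
# queue's configured x-dead-letter-exchange ('exchange.retry' here),
# so no manual basic_publish + basic_ack pair is needed.
channel.basic_nack(delivery_tag=method.delivery_tag, requeue=False)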

Python3 socket doesn't work - graphite

I created a function to send data to a graphite server. It sends the metricname, value and the timestamp to the graphite server at execution:
def collect_metric(metricname, value, timestamp):
    sock = socket.socket()
    sock.connect(("localhost", 2003))
    sock.send("%s %s %s\n" % (metricname, value, timestamp))
    sock.close()
The function above worked fine in Python 2. I had to rewrite it for Python 3. Now no data is sent to Graphite, and there are no entries in the graphite/carbon logs or anywhere else:
def collect_metric(metricname, value, timestamp):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect(("localhost", 2003))
    metricname = metricname.encode()
    if type(value) == "str":
        value = value.encode()
    timestamp = timestamp.encode()
    message = bytearray()
    message = bytes(metricname + b" " + value + b" " + timestamp)
    sock.sendall(message)
    print(message.decode())
    sock.close()
I receive no errors. Also, in the terminal I get the right format/output (see print(message.decode())).
Does anybody have any ideas why it doesn't work?
Thanks.
The bytearray is built without any encoding. Try this:
message = (metricname + " " + value + " " + timestamp).encode("UTF-8")
sock.send(message)
It seems like you're missing '\n' at the end of the message you're sending
message = bytes(metricname+b" "+value+b" "+timestamp)
should be:
message = bytes(metricname + b" " + value + b" " + timestamp + b"\n")
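Putting both answers together: build the record as text, terminate it with a newline (carbon's plaintext protocol treats the newline as the record separator), and encode once before sending. A sketch, assuming metricname, value and timestamp arrive as str or int:

import socket

def collect_metric(metricname, value, timestamp):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect(("localhost", 2003))
    # carbon plaintext protocol: "<metric> <value> <timestamp>\n"
    message = "%s %s %s\n" % (metricname, value, timestamp)
    sock.sendall(message.encode("utf-8"))
    sock.close()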
