I am using Ubuntu MATE on the Pine A64+ (1 GB). I have installed the paho-mqtt library for Python 3 and tested it against a local Mosquitto server, where it works fine. Now I need to connect to an external broker that requires a username and password. I tried the following code, but it didn't work for me; I am not even able to connect to the broker.
import paho.mqtt.client as mqtt
import time

broker_address = "121.242.232.175.xip.io"
port = 1883

def on_connect(client, userdata, flags, rc):
    if rc == 0:
        client.connected_flag = True
        print("connected OK Returned code=", rc)
    else:
        print("Bad connection Returned code=", rc)

mqtt.Client.connected_flag = False
client = mqtt.Client("SWAHVACAHU00000600")
client.username_pw_set(username="#####", password="#####")
client.on_connect = on_connect
client.loop_start()
client.connect(broker_address, port)
#while not client.connected_flag:
#    print("in the while")
#    time.sleep(1)
client.loop_stop()
client.publish("pine", "Hello from Pine A64", 0)
client.disconnect()
I am checking with the HiveMQ websocket client, subscribed to the same topic.
Check again what loop_start() does:
These functions implement a threaded interface to the network loop. Calling loop_start() once, before or after connect*(), runs a thread in the background to call loop() automatically. This frees up the main thread for other work that may be blocking. This call also handles reconnecting to the broker. Call loop_stop() to stop the background thread
paho-mqtt
That means you start a thread which continuously handles all your networking actions (including sending your connection attempt). In your code you immediately stop this thread again by calling loop_stop(), so there is a good chance that your connection attempt was never even sent out.
In addition, your main program terminates right after client.disconnect() without any delay, so the networking thread (if it is running) has absolutely no time to do anything at all.
I recommend restructuring your code so that your actions are properly timed and the connection is closed after all the work is done:
def on_connect(client, userdata, flags, rc):
    if rc == 0:
        print("Connected.")
        client.publish("mytopic/example", "")
    else:
        print("Connection refused, rc=", rc)

def on_disconnect(client, userdata, rc):
    print("Disconnected")
    if rc != 0:
        # otherwise loop_forever won't return
        client.disconnect()

def on_publish(client, userdata, mid):
    print("Message delivered - closing down connection")
    client.disconnect()

print("Program started")
client = mqtt.Client("MyClient")
client.username_pw_set(username=user, password=pw)
client.on_connect = on_connect
client.on_disconnect = on_disconnect
client.on_publish = on_publish
client.connect(broker_address, port)
client.loop_forever()
print("Program finished")
The blocking loop loop_forever() returns automatically once disconnect() is called. When using loop_start() / loop_stop() you need a loop of your own to keep the program from terminating, and you also have to decide when to break that loop and when to stop the networking thread.
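For comparison, here is a minimal sketch of the loop_start() / loop_stop() variant with such a wait loop, reusing the connected_flag idea from your question (broker_address, port, user and pw as defined above):
import time
import paho.mqtt.client as mqtt

mqtt.Client.connected_flag = False        # custom flag, not part of the paho API

def on_connect(client, userdata, flags, rc):
    if rc == 0:
        client.connected_flag = True

client = mqtt.Client("MyClient")
client.username_pw_set(username=user, password=pw)
client.on_connect = on_connect
client.loop_start()                       # start the networking thread first
client.connect(broker_address, port)

while not client.connected_flag:          # our own loop: wait for the CONNACK
    time.sleep(1)

info = client.publish("mytopic/example", "payload")
info.wait_for_publish()                   # block until the message has been sent
client.loop_stop()                        # only now stop the networking thread
client.disconnect()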
Also consider putting client.connect() within a try...except block
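For example, a minimal sketch of that (socket-level failures such as an unknown hostname or a refused TCP connection raise an exception here, while wrong credentials are only reported later through rc in on_connect):
try:
    client.connect(broker_address, port)
except Exception as e:
    # DNS errors, refused connections, timeouts etc. end up here
    print("Connection failed:", e)
    raise SystemExit(1)

client.loop_forever()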
With an MQTT subscribe client I am subscribing to a lot of topics (over 6000), but I am not getting the results that change on the fly; I'm lagging. Does MQTT make it possible to subscribe to this many topics in parallel in the background? Would loop_start be enough for that?
What should I pay attention to when subscribing to this many topics?
import logging
import paho.mqtt.client as mqtt
import requests
import zmq
import pandas as pd

PORT = 1351

def set_publisher():
    context = zmq.Context()
    socket_server = context.socket(zmq.PUB)
    socket_server.bind(f"tcp://*:{PORT}")
    return socket_server

# The callback for when the client receives a CONNACK response from the server.
def on_connect(client, userdata, flags, rc):
    # logging.warning(f"Connected with result code :: code : {rc}")
    print(f"Connected with result code :: code : {rc}")
    # Subscribing in on_connect() means that if we lose the connection and
    # reconnect then subscriptions will be renewed.
    client.subscribe(topics)

# The callback for when a PUBLISH message is received from the server.
def on_message(client, userdata, msg):
    msg = msg.payload
    # logging.info(f"message:: {msg}\n")
    print(f"message:: {msg}\n")
    if msg:
        publisher.send(f"{msg}")

def on_disconnect(client, userdata, rc):
    if rc != 0:
        # logging.warning(f"Unexpected disconnection :: code: {rc}")
        print(f"Unexpected disconnection :: code: {rc}")
        # todo: if rc indicates a changed hostname, raise an exception

publisher = set_publisher()  # ZMQ publisher used in on_message

client = mqtt.Client(protocol=mqtt.MQTTv31, transport="tcp")
client.username_pw_set(******, password=******)

topics = [(f"topic{i}", 0) for i in range(6000)]

client.on_connect = on_connect
client.on_message = on_message
client.on_disconnect = on_disconnect

if client.connect(hostname= *****, port= **** , keepalive=300) != 0:
    # logging.info("Could not connect to MQTT Broker !")
    print("Could not connect to MQTT Broker !")

client.loop_forever(timeout=3000)
You are describing a situation where the compute power (at the client, at the broker, or in between) is not sufficient to handle your scenario. This is a common occurrence, and that is what performance testing is for: does your setup handle your scenario for your requirements? Capacity planning then extends that question with: ... in the future.
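As a starting point for such a test, here is a minimal sketch (assuming the subscriber from the question) that counts how many messages per second on_message actually processes, so you can compare that against the rate at which the broker publishes:
import time

received = 0
window_start = time.monotonic()

def on_message(client, userdata, msg):
    # count incoming messages and print the rate roughly once per second
    global received, window_start
    received += 1
    now = time.monotonic()
    if now - window_start >= 1.0:
        print(f"{received} messages/s")
        received = 0
        window_start = now
If the measured rate stays well below the publish rate, the bottleneck is on the client side (all 6000 subscriptions are served by one network thread and one on_message callback); if the client keeps up but the data still lags, look at the broker and the network in between.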
I have a basic MQTTListener class in Python which listens for messages on certain topics and should start or stop an async process imported from another script. This process runs forever, unless it is manually stopped. Let's assume the Listener looks like this:
import logging
import json

import paho.mqtt.client as mqtt

from python_file import async_forever_function


class MqttListener:
    def __init__(self, host, port, client_id):
        self.host = host
        self.port = port
        self.client_id = client_id
        self.client = mqtt.Client(client_id=self.client_id)
        self.client.connect(host=self.host, port=self.port)

    def on_connect(self, client, userdata, flags, rc):
        self.client.subscribe(topic=[("start", 1), ])
        self.client.subscribe(topic=[("stop", 1), ])
        logging.info(msg="MQTT - connected!")

    def on_disconnect(self, client, userdata, rc):
        logging.info(msg="MQTT - disconnected!")

    def on_message(self, client, userdata, message):
        print('PROCESSING MESSAGE', message.topic, message.payload.decode('utf-8'))
        if message.topic == 'start':
            async_forever_function(param='start')
            print('process started')
        else:
            async_forever_function(param='stop')
            print('process removed')

    def start(self):
        self.client.on_connect = lambda client, userdata, flags, rc: self.on_connect(client, userdata, flags, rc)
        self.client.on_message = lambda client, userdata, message: self.on_message(client, userdata, message)
        self.client.on_disconnect = lambda client, userdata, rc: self.on_disconnect(client, userdata, rc)
        self.client.loop_start()

    def stop(self):
        self.client.loop_stop()
Now, this works for starting a new async process. That is, async_forever_function is correctly triggered when a message is posted on the start MQTT topic. However, once this async process is started, the listener is no longer able to receive/process messages from the stop MQTT topic, and the async process will continue to run forever when in fact it should have been stopped.
My question: how can I adapt the code of this class so that it can also process messages while an active async process is running in the background?
You cannot do blocking tasks in the on_message() callback.
This callback runs on the MQTT client thread (the one started by the loop_start() function). That thread handles all network traffic and message handling; if you block it, it can't do any of that.
If you want to call long-running tasks from the on_message() callback, you need to start a new thread for the long-running task so that it doesn't block the MQTT client loop.
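A minimal sketch of that idea applied to the on_message() from the question, assuming async_forever_function is an ordinary blocking call as it is used above (how the running function is signalled to stop is left out):
import threading

def on_message(self, client, userdata, message):
    print('PROCESSING MESSAGE', message.topic, message.payload.decode('utf-8'))
    if message.topic == 'start':
        # hand the long-running work to its own thread so the
        # MQTT network loop stays free to process further messages
        worker = threading.Thread(
            target=async_forever_function,
            kwargs={'param': 'start'},
            daemon=True,
        )
        worker.start()
        print('process started')
    else:
        # the 'stop' message is now received even while the worker runs;
        # actually stopping the worker is up to async_forever_function
        async_forever_function(param='stop')
        print('process removed')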
I'm creating a Python 3 Tornado web server that listens to an MQTT broker and, whenever it receives a new message from it, broadcasts it to the connected browsers through websockets. However, it seems that Tornado doesn't like calls to its API from a thread different from the IOLoop.current() one, and I can't figure out another solution...
I've already tried to write some code. I've put the whole MQTT client (in this case called PMCU client) on a separate thread which loops and listens for MQTT notifications.
import threading

import tornado.web
import tornado.websocket
from tornado.ioloop import IOLoop

websocket_clients = []


def on_pmcu_data(data):
    for websocket_client in websocket_clients:
        print("Sending websocket message")
        websocket_client.write_message(data)  # Here it gets stuck!
        print("Sent")


class WebSocketHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        websocket_clients.append(self)

    def on_close(self):
        websocket_clients.remove(self)


def make_app():
    return tornado.web.Application([
        (r'/ws', WebSocketHandler)
    ])


if __name__ == "__main__":
    main_loop = IOLoop().current()
    pmcu_client = PMCUClient(on_pmcu_data)  # PMCUClient: my MQTT client wrapper (not shown)
    threading.Thread(target=lambda: pmcu_client.listen("5.4.3.2")).start()
    app = make_app()
    app.listen(8080)
    main_loop.start()
However, as I said, it seems that calls to the Tornado API from outside IOLoop.current() block: the code above only prints "Sending websocket message".
My intent is to run websocket_client.write_message(data) on the IOLoop.current() event loop. But it seems that IOLoop.current().spawn_callback(lambda: websocket_client.write_message(data)) does not work after IOLoop.current() has started. How could I achieve that?
I know that I have a huge misunderstanding of IOLoop, asyncio (on which it depends), and Python 3 async.
on_pmcu_data is being called in a separate thread but the websocket is controlled by Tornado's event loop. You can't write to a websocket from a thread unless you have access to the event loop.
You'll need to ask the IOLoop to write the data to websockets.
Solution 1:
For simple cases, if you don't want to change much in the code, you can do this:
if __name__ == "__main__":
    main_loop = IOLoop().current()
    on_pmcu_data_callback = lambda data: main_loop.add_callback(on_pmcu_data, data)
    pmcu_client = PMCUClient(on_pmcu_data_callback)
    ...
This should solve your problem.
Solution 2:
For more elaborate cases, you can pass the main_loop to the PMCUClient class and then use add_callback (or spawn_callback) to run on_pmcu_data.
Example:
if __name__ == "__main__":
    main_loop = IOLoop().current()
    pmcu_client = PMCUClient(on_pmcu_data, main_loop)  # also pass the main loop
    ...
Then in the PMCUClient class:
class PMCUClient:
    def __init__(self, on_pmcu_data, main_loop):
        ...
        self.main_loop = main_loop

    def listen(...):
        ...
        self.main_loop.add_callback(self.on_pmcu_data, data)
I'm trying to write an asyncio-based server. The problem is that it stops responding after the first request.
My code is built upon this template for an echo server and this method of passing parameters to coroutines.
import asyncio


class MsgHandler:
    def __init__(self, mem):
        # here (mem: dict) I store received metrics
        self.mem = mem

    async def handle(self, reader, writer):
        # this coroutine handles requests
        data = await reader.read(1024)
        print('request:', data.decode('utf-8'))
        # read_msg returns an answer based on the request received.
        # My server closes the connection on every second request.
        # For the first one, everything works as intended,
        # so I don't think the problem is in read_msg()
        response = read_msg(data.decode('utf-8'), self.mem)
        print('response:', response)
        writer.write(response.encode('utf-8'))
        await writer.drain()
        writer.close()


def run_server(host, port):
    mem = {}
    msg_handler = MsgHandler(mem)
    loop = asyncio.get_event_loop()
    coro = asyncio.start_server(msg_handler.handle, host, port, loop=loop)
    server = loop.run_until_complete(coro)
    try:
        loop.run_forever()
    except KeyboardInterrupt:
        pass
    server.close()
    loop.run_until_complete(server.wait_closed())
    loop.close()
On the client side I either get an empty response or a ConnectionResetError (104, 'Connection reset by peer').
You are closing the writer with writer.close() in the handler, which closes the socket.
From the 3.9 docs on StreamWriter:
Also, if you don't close the stream writer, then you would still have to store it somewhere in order to keep receiving messages over that same connection.
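A minimal sketch of a handle() that keeps the connection open and serves several requests per client (read_msg and self.mem as in the question; the loop ends when the client closes its side and read() returns an empty bytes object):
    async def handle(self, reader, writer):
        # serve requests on this connection until the client disconnects
        while True:
            data = await reader.read(1024)
            if not data:          # empty read: the peer closed the connection
                break
            response = read_msg(data.decode('utf-8'), self.mem)
            writer.write(response.encode('utf-8'))
            await writer.drain()
        writer.close()            # close only after the client is done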
I have been working with the MQTT protocol for quite some time. It will be used by the organisation for sending acknowledgment messages to each client separately.
In the use case, there is only one 'publisher' client, which publishes acknowledgment messages to 'subscriber' clients subscribed to their unique topics.
To make sure that the broker is largely bug-free and can scale easily in the future, I have been trying to test the EMQX and VerneMQ open-source brokers by connecting at least 50,000 clients to them.
However, I am not able to create that many connections. My Ubuntu 18.04 instance (8-core CPU, 15 GB RAM) on Google Cloud fails to make any further successful connections after around 300-400.
I have tried making the following changes:
ulimit -n 64500 (to allow this many file descriptors, since every socket connection requires a file descriptor)
Please help me make over 50,000 connections. Should I run n threads and have each thread loop over total_clients/total_threads clients (one possible structure for this is sketched after the code below),
or should I create one thread for every client connection?
What should I do?
The following message appears on the "$SYS/#" topic once the clients start getting disconnected, even though no disconnect packet is sent by the client side.
$SYS/brokers/emqx#127.0.0.1/clients/112/disconnected {"clientid":"112","username":"undefined","reason":"closed","ts":1536587647}
import paho.mqtt.client as mqtt
from threading import Lock
import time

print_lock = Lock()

def s_print(str):
    print_lock.acquire()
    print(str)
    print_lock.release()

def on_connect(client, userdata, flags, rc):
    client_id = userdata["client_id"]
    if (rc == 0):
        s_print("connected " + client_id)
        client.subscribe(client_id, 2)

def on_disconnect(client, userdata, rc):
    client_id = userdata["client_id"]
    s_print("disconnected: " + client_id + " reason " + str(rc))

def on_message(client, userdata, message):
    topic = message.topic
    payload = str(message.payload)
    s_print("Received " + payload + " on " + topic)

if __name__ == '__main__':
    n_clients = int(input("Enter no. of clients: "))

    for i in range(n_clients):
        client_id = str(i)
        s_print(client_id)
        userdata = {
            "client_id": client_id
        }
        client = mqtt.Client(client_id=client_id, clean_session=True, userdata=userdata)
        client.on_connect = on_connect
        client.on_disconnect = on_disconnect
        client.on_message = on_message
        client.connect("35.228.57.228", 1883, 60)
        client.loop_start()
        time.sleep(0.5)

    while(1):
        time.sleep(5)
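To illustrate the batching approach asked about above (only a sketch of the threading structure, with made-up names such as run_batch, N_THREADS and the counts; not a verified fix): each worker thread owns total_clients/total_threads clients and services their sockets itself with client.loop(), instead of starting one loop_start() thread per client.
import threading
import time

import paho.mqtt.client as mqtt

BROKER = "35.228.57.228"          # broker address from the question
N_CLIENTS = 50000                 # total connections to attempt
N_THREADS = 50                    # clients are split evenly across these threads

def run_batch(first_id, count):
    # connect this thread's share of clients, then service them round-robin
    clients = []
    for i in range(first_id, first_id + count):
        c = mqtt.Client(client_id=str(i), clean_session=True)
        c.connect(BROKER, 1883, 60)
        clients.append(c)
    while True:
        for c in clients:
            c.loop(timeout=0.01)   # handle this client's network traffic

batch = N_CLIENTS // N_THREADS
for t in range(N_THREADS):
    threading.Thread(target=run_batch, args=(t * batch, batch), daemon=True).start()

while True:
    time.sleep(5)
Whether this actually reaches 50,000 connections still depends on broker limits, the ulimit of the Python process itself, and ephemeral port limits; the sketch only shows how to avoid one thread per client.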