How to get a response in a Python client from a Node.js server

I'm trying to build a simple chat app using a Node.js server with Socket.IO and a client written in Python 2.7 using the socketIO-client package.
The JS server runs locally and is very simple:
io.on('connection', function(socket){
    socket.on("chat_message", function(msg){
        io.emit("chat_message", msg);
    });
});
This chat app works for several pages opened in my browser.
(source: http://socket.io/get-started/chat/)
I wanted to connect to this server from a Python client, and I successfully emit from Python to the JS server (text entered in the Python client appears in the browser).
The problem is the following: when I type some text into the browser, Python doesn't print it to the shell.
Here is the code I use on the Python side:
def communicate(self, msg):
    logging.debug('processing event : %s', msg)
    self.socketIO.emit("chat_message", msg, self.on_chat_message)

def on_chat_message(self, *args):
    output = ''
    index = 0
    for a in args:
        output += ' | ' + str(index) + ' : ' + str(a)
        index += 1
    logging.debug('processing server output : ' + output)
    return
As the server emits to all connected clients, Python should normally handle the message in the 'on_chat_message' callback, but it doesn't work.
I also tried adding a self.socketIO.wait_for_callbacks() call on the Python side, without success.
If someone has an idea about what I'm doing wrong, it would be great =D!
Thanks.

I finally managed to get a response from the server in my Python client.
My mistake was not to 'wait' on the socketIO object, so I never caught any feedback.
To properly handle server responses, I use a BaseNamespace subclass which is bound to the socket and handles basic events by overriding the on_event function:
def on_event(self, event, *args):
    # Catch your event and do stuff
When an event is handled by the class, a flag is raised and then checked by the client.
def communicate(self, msg):
    self.socketIO.emit('chat_message', msg)

def wait_for_server_output(self):
    while True:
        self.socketIO.wait(seconds=1)
        ns = self.socketIO.get_namespace()
        if ns.flag.isSet():
            data = ns.data_queue.get()
            ns.flag.clear()
These two functions are managed by 2 threads inside the client code.
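For reference, a minimal sketch of what such a namespace class could look like, assuming a threading.Event named flag and a Queue named data_queue (those names come from the snippet above; the rest is an assumption, not code from the original answer):

from Queue import Queue          # Python 2.7, as used in the question
from threading import Event
from socketIO_client import BaseNamespace

class ChatNamespace(BaseNamespace):
    # shared by the namespace instance that socketIO-client creates
    flag = Event()
    data_queue = Queue()

    def on_event(self, event, *args):
        # catch every incoming event, store its payload and raise the flag
        self.data_queue.put(args)
        self.flag.set()

The namespace would then be bound to the connection, e.g. with socketIO = SocketIO('localhost', 3000, ChatNamespace), so that get_namespace() in wait_for_server_output returns an instance of it.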
Hoping this will help =)

Related

How to close a websocket connection when the user goes to another URL

So, I have a Django app with traffic.py sending data to the client (traffic.html):
class ClientTraffic(WebsocketConsumer):
    def connect(self):
        self.accept()
        # LOOP here to send traffic while the current url is x
        output = item.generateTraffic()
        u = float(output['tx-bits-per-second']) / 1000000
        d = float(output['rx-bits-per-second']) / 1000000
        self.send(json.dumps({
            'upload': '{:.2f}'.format(u),
            'download': '{:.2f}'.format(d),
        }))
        sleep(1)
My question is: how can I keep sending messages to the client in a loop while the client stays on that URL (e.g. /traffic/), so that when the client leaves the page (closes it or goes to another path) the websocket connection is closed?
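One possible approach (a sketch only, not from the original thread; item.generateTraffic() is taken from the question's snippet) is to run the send loop in a background thread started from connect() and stop it from disconnect(), which Channels calls when the browser leaves the page or closes the socket:

import json
import threading
from time import sleep
from channels.generic.websocket import WebsocketConsumer

class ClientTraffic(WebsocketConsumer):
    def connect(self):
        self.accept()
        self._stop = threading.Event()
        # push traffic data from a background thread so connect() can return
        threading.Thread(target=self._push_traffic, daemon=True).start()

    def disconnect(self, close_code):
        # fired when the client closes the tab or navigates away from /traffic/
        self._stop.set()

    def _push_traffic(self):
        while not self._stop.is_set():
            output = item.generateTraffic()
            self.send(json.dumps({
                'upload': '{:.2f}'.format(float(output['tx-bits-per-second']) / 1000000),
                'download': '{:.2f}'.format(float(output['rx-bits-per-second']) / 1000000),
            }))
            sleep(1)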

Use a websocket client in a GraphQL resolver for data communication (Node.js)

I want to connect to a websocket server when my GraphQL server starts, and inside a resolver I want to use the send and recv functions of the connected websocket for data communication.
Briefly, about my backend:
I have a Python REST client that also has a websocket server; I can fetch single product details and a product list via the websocket server.
(In the GraphQL resolver I want to collect my product data as well as inventory data and merge them for the UI. As Node is async, all the examples connect to the server and then communicate inside a then block. I don't want to do that; I want to use the connection object in the resolver, and the connection should be made once.)
import { WebSocketClient } from './base';
const productWebSocket = new WebSocketClient("ws://192.168.0.109:8000/abi-connection");
productWebSocket.connect({reconnect: true});
export { productWebSocket };
Now I will import this productWebSocket and use it in the resolvers.
Combining websockets and GraphQL this way isn't that popular, but designing it this way gives me a performance boost, as I use utility functions for both my REST APIs and the websocket server in core-apis. I call this the maksuDII+ way of programming.
I couldn't get this to work with Node.js and found no help, so I implemented the GraphQL server in Python and got much better control over the websocket client.
I tried ws, websocket and some other Node.js packages; none worked the way I needed:
websocket.on('connect', ws => {
    websocket.on('message', data => {
        // 'data' only exists inside this callback; how am I supposed to get
        // this value into an Express API endpoint?
        // I searched around and found nothing, so I had to move to Python.
    })
})
Python version:
from graphql import GraphQLError
from ..service_base import query
from app.websocket.product_websocket import product_ws_connection
from app.websocket.inventory_websocket import inventory_ws_connection
import json
from app.utils.super_threads import SuperThreads

def get_websocket_data(socket_connection, socket_send_data):
    socket_connection.send(json.dumps(socket_send_data))
    raw_data = socket_connection.recv()
    jsonified_data = json.loads(raw_data)
    return jsonified_data

@query.field("productDetails")
def product_details(*_, baseCode: str):
    # connected clients, with proper connections to my websocket servers
    product_ws = product_ws_connection.get_connection()
    inventory_ws = inventory_ws_connection.get_connection()
    if not product_ws:
        raise GraphQLError("Product Api Down")
    if not inventory_ws:
        raise GraphQLError("Inventory Api Down")
    product_ws_data = {
        "operation": "PRODUCT_FETCH",
        "baseCode": baseCode
    }
    inventory_ws_data = {
        "operation": "STOCK_FETCH",
        "baseCode": baseCode
    }
    # SuperThreads is a different topic; it is a wrapper around standard Python threads
    ws_product_thread = SuperThreads(target=get_websocket_data, args=(product_ws, product_ws_data))
    ws_inventory_thread = SuperThreads(target=get_websocket_data, args=(inventory_ws, inventory_ws_data))
    ws_product_thread.start()  # ask one of my websocket servers for data
    ws_inventory_thread.start()  # same with this thread
    product_data_payload = ws_product_thread.join()  # the websocket's response
    inventory_data_payload = ws_inventory_thread.join()  # this is the result handling I could not get in Node.js
    if "Fetched" in product_data_payload["abi_response_info"]["message"] and \
            "Fetched" in inventory_data_payload["abi_response_info"]["message"]:
        # data filtering here
        return product_data
    else:
        raise GraphQLError("Invalid Product Code")
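The SuperThreads wrapper is not shown in the post. A minimal sketch of how such a wrapper could look, assuming (as the resolver's use of join() suggests) it is a Thread subclass whose join() returns the target's return value:

import threading

class SuperThreads(threading.Thread):
    """A Thread whose join() returns the target's return value (sketch)."""

    def __init__(self, target, args=()):
        super().__init__()
        self._target_fn = target
        self._target_args = args
        self._result = None

    def run(self):
        # store the result so join() can hand it back to the caller
        self._result = self._target_fn(*self._target_args)

    def join(self, timeout=None):
        super().join(timeout)
        return self._result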

Run actions on Tornado main loop, after it starts

I'm creating a Python 3 Tornado web server that listens to an MQTT broker and, whenever it receives a new message from it, broadcasts it to the connected browsers through websockets. However, it seems that Tornado doesn't like calls to its API from a thread other than the one running IOLoop.current(), and I can't figure out another solution...
I've already tried to write some code: I've put the whole MQTT client (in this case called the PMCU client) on a separate thread, which loops and listens for MQTT notifications.
def on_pmcu_data(data):
    for websocket_client in websocket_clients:
        print("Sending websocket message")
        websocket_client.write_message(data)  # Here it gets stuck!
        print("Sent")

class WebSocketHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        websocket_clients.append(self)

    def on_close(self):
        websocket_clients.remove(self)

def make_app():
    return tornado.web.Application([
        (r'/ws', WebSocketHandler)
    ])

if __name__ == "__main__":
    main_loop = IOLoop().current()
    pmcu_client = PMCUClient(on_pmcu_data)
    threading.Thread(target=lambda: pmcu_client.listen("5.4.3.2")).start()
    app = make_app()
    app.listen(8080)
    main_loop.start()
However, as I said, calls to the Tornado API from outside IOLoop.current() seem to block: the code above only prints "Sending websocket message".
My intent is to run websocket_client.write_message(data) on the IOLoop.current() event loop, but IOLoop.current().spawn_callback(lambda: websocket_client.write_message(data)) doesn't seem to work after IOLoop.current() has started. How could I achieve that?
I know that I have a huge misunderstanding of IOLoop, asyncio (on which it depends), and Python 3 async.
on_pmcu_data is called in a separate thread, but the websocket is controlled by Tornado's event loop. You can't write to a websocket from another thread unless you go through the event loop.
You'll need to ask the IOLoop to write the data to the websockets; IOLoop.add_callback is the only IOLoop method documented as safe to call from other threads.
Solution 1:
For simple cases, if you don't want to change much in the code, you can do this:
if __name__ == "__main__":
    main_loop = IOLoop().current()
    on_pmcu_data_callback = lambda data: main_loop.add_callback(on_pmcu_data, data)
    pmcu_client = PMCUClient(on_pmcu_data_callback)
    ...
This should solve your problem.
Solution 2:
For more elaborate cases, you can pass the main_loop to the PMCUClient class and then use add_callback (or spawn_callback) to run on_pmcu_data.
Example:
if __name__ == "__main__":
    main_loop = IOLoop().current()
    pmcu_client = PMCUClient(on_pmcu_data, main_loop)  # also pass the main loop
    ...
Then in the PMCUClient class:
class PMCUClient:
    def __init__(self, on_pmcu_data, main_loop):
        ...
        self.main_loop = main_loop

    def listen(self, ...):
        ...
        self.main_loop.add_callback(self.on_pmcu_data, data)

Sending a response to a particular Django websocket client from a REST API or a server

consumer.py
class ChatConsumer(WebsocketConsumer):
    # accept websocket connection
    def connect(self):
        self.accept()

    # receive message from WebSocket
    def receive(self, text_data):
        text_data_json = json.loads(text_data)
        command = text_data_json['command']
        job_id = text_data_json['job_id']
        if command == 'subscribe':
            self.subscribe(job_id)
        elif command == 'unsubscribe':
            self.unsubscribe(job_id)
        else:
            self.send({
                'error': 'unknown command'
            })

    # subscribe the client to a particular 'job_id'
    def subscribe(self, job_id):
        self.channel_layer.group_add(
            'job_{0}'.format(job_id),
            self.channel_name
        )

    # call this method from the REST API to get the status of a job
    def send_job_notification(self, message, job_id):
        channel_layer = get_channel_layer()
        group_name = 'job_{0}'.format(job_id)
        channel_layer.group_send(
            group_name,
            {
                "type": "send.notification",
                "message": message,
            }
        )

    # receive message from the room group
    def send_notification(self, event):
        message = event['message']
        # send message to the WebSocket
        self.send(text_data=json.dumps(message))
In the above code, what I am trying to do is connect clients to the socket and subscribe each client to a particular job_id by creating a group such as "job_1" in the subscribe method and adding it to the channel layer. Groups are created dynamically.
I am using the "Simple WebSocket Client" browser extension to connect to the above websocket. I am able to make a connection with the websocket and send requests to it.
Now that the client is connected and subscribed to a particular job_id, I am using Postman to send a notification to that connected client (the Simple WebSocket Client extension) by passing the job_id in the request.
When I POST to the REST API, I call the send_job_notification(self, message, job_id) method of consumer.py with the job_id set to '1'.
After doing all this, I don't see any message sent to the connected client subscribed to that job_id from the REST API call.
Any help would be highly appreciated, as this has been dragging on for a long time.
Edit:
Thank you for the suggestion, Ken; it is worth making the method a @staticmethod. But how do I make the API send job status updates to the connected clients? My long-running jobs run in some process and send update messages back to the backend via the REST API, and those updates then need to be sent to the correct client (via websockets).
My API call to the socket consumer is as below:
from websocket_consumer import consumers

class websocket_connect(APIView):
    def post(self, request, id):
        consumers.ChatConsumer.send_job_notification("hello", id)
My socket consumer code is as below:
Edit
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("localhost", 6379)],
        },
    },
}
As you can see, the Redis service is also running.
Edit-1
You cannot call the method in the consumer directly from external code, because you need to reach the particular consumer instance connected to your client. This is the job of the channel layer, achieved by using a message passing system or broker such as Redis.
From what I can see, you're already going in the right direction, except that send_job_notification is an instance method, which would require instantiating the consumer. Make it a static method instead, so you can call it directly without a consumer instance:
@staticmethod
def send_job_notification(message, job_id):
    channel_layer = get_channel_layer()
    group_name = 'job_{0}'.format(job_id)
    channel_layer.group_send(
        group_name,
        {
            "type": "send.notification",
            "message": message,
        }
    )
And in your API view, you can simply call it as:
ChatConsumer.send_job_notification(message, job_id)
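One detail worth adding (an addition beyond the original answer, but standard for Channels with channels_redis): group_send and group_add are coroutines, so calling them from synchronous code only creates a coroutine object that never runs. In a synchronous consumer they are usually wrapped with asgiref's async_to_sync, roughly like this:

from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer

class ChatConsumer(WebsocketConsumer):
    ...

    @staticmethod
    def send_job_notification(message, job_id):
        channel_layer = get_channel_layer()
        group_name = 'job_{0}'.format(job_id)
        # group_send is a coroutine; async_to_sync runs it to completion here
        async_to_sync(channel_layer.group_send)(
            group_name,
            {
                "type": "send.notification",
                "message": message,
            }
        )

The same wrapping would apply to the group_add call in subscribe, i.e. async_to_sync(self.channel_layer.group_add)('job_{0}'.format(job_id), self.channel_name).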

Messenger sending multiple POST requests per user input

I am using Facebook Messenger and its Send API, with ngrok as the server that handles the traffic. My chatbot worked fine a few days ago, but now, after a bit of debugging, it seems that Messenger sends multiple POST requests in quick succession for every input from the user. I wondered if it might be a latency issue, as my chatbot takes a while to process each request; it does eventually answer all the requests with a 200 response. If so, how do I make Messenger stop sending multiple POST requests and flooding my ngrok webhook? If it's something else, how do I deal with the issue?
This is the code that listens to the requests:
#app.route("/webhook", methods=['GET','POST'])
def listen():
"""This is the main function flask uses to
listen at the `/webhook` endpoint"""
if request.method == 'GET':
return verify_webhook(request)
if request.method == 'POST':
payload = request.get_json()
print(payload)
event = payload['entry'][0]['messaging']
for x in event:
if is_user_message(x):
text = x['message']['text']
sender_id = x['sender']['id']
respond(sender_id, text)
return "ok", 200
You can read about webhooks here. It does not look like you can change the timeout period.
Webhook Performance Requirements
Your webhook should meet the following minimum performance standards:
Respond to all webhook events with a 200 OK.
Respond to all webhook events in 20 seconds or less.
If you want to hand the processing off to a separate thread (or process), it could look like this:
from threading import Thread as Sub

@app.route("/webhook", methods=['GET', 'POST'])
def listen():
    """This is the main function flask uses to
    listen at the `/webhook` endpoint"""
    if request.method == 'GET':
        return verify_webhook(request)
    if request.method == 'POST':
        payload = request.get_json()
        print(payload)
        event = payload['entry'][0]['messaging']
        for x in event:
            if is_user_message(x):
                text = x['message']['text']
                sender_id = x['sender']['id']
                sub = Sub(target=respond, args=[sender_id, text])
                sub.start()
        return "ok", 200
If this gives you issues with shared scope or with Flask itself, consider using multiprocessing instead. To do so, simply switch to:
from multiprocessing import Process as Sub
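If duplicate deliveries still show up, it usually means Messenger is re-sending events that were not acknowledged within the 20-second window. A common mitigation (an addition, not part of the original answer) is to de-duplicate on the message ID before calling respond:

processed_message_ids = set()  # in production, use a store with expiry (e.g. Redis)

def is_duplicate(message_event):
    # Messenger message events carry a unique 'mid' field
    mid = message_event.get('message', {}).get('mid')
    if mid is None:
        return False
    if mid in processed_message_ids:
        return True
    processed_message_ids.add(mid)
    return False

Inside the POST branch you would then skip any event where is_duplicate(x) is True.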
