Use a WebSocket client in a GraphQL resolver for data communication (Node.js)

I want to connect to a WebSocket server when my GraphQL server starts, and inside a resolver I want to use the send and recv functions of the connected WebSocket for data communication.
A brief note about my backend:
I have a Python REST client that also runs a WebSocket server; I can fetch single product details and the product list via that WebSocket server.
(In the GraphQL resolver I want to collect my product data plus inventory data and merge them for the UI. Node is asynchronous, and all the examples connect to the server and then communicate inside a .then block. I don't want to do that; I want to use the connection object in the resolver, and the connection should be established only once.)
import { WebSocketClient } from './base';
const productWebSocket = new WebSocketClient("ws://192.168.0.109:8000/abi-connection");
productWebSocket.connect({reconnect: true});
export { productWebSocket };
Now I will import this productWebSocket and use it in my resolvers.
This WebSocket-plus-GraphQL combination isn't that popular, but designing it this way gives me a performance boost, as I use shared utility functions for both my REST APIs and the WebSocket server in core-apis. I call this the maksuDII+ way of programming.

I couldn't do this with Node.js and got no help, so I implemented the GraphQL layer in Python instead and got much better control over the WebSocket client.
I searched for ws, websocket, and some other Node.js packages; none of them worked for this.
websocket.on('connect', ws => {
    websocket.on('message', data => {
        // the data only exists as this callback argument; how am I
        // supposed to get this value into an Express API endpoint?
        // Searched five pages of Google and got nothing, so I moved to Python.
    })
})
Python version:
from graphql import GraphQLError
from ..service_base import query
from app.websocket.product_websocket import product_ws_connection
from app.websocket.inventory_websocket import inventory_ws_connection
import json
from app.utils.super_threads import SuperThreads

def get_websocket_data(socket_connection, socket_send_data):
    socket_connection.send(json.dumps(socket_send_data))
    raw_data = socket_connection.recv()
    jsonified_data = json.loads(raw_data)
    return jsonified_data

@query.field("productDetails")
def product_details(*_, baseCode: str):
    product_ws = product_ws_connection.get_connection()  # connected client, with a proper connection to my WebSocket server
    inventory_ws = inventory_ws_connection.get_connection()
    if not product_ws:
        raise GraphQLError("Product Api Down")
    if not inventory_ws:
        raise GraphQLError("Inventory Api Down")
    product_ws_data = {
        "operation": "PRODUCT_FETCH",
        "baseCode": baseCode
    }
    inventory_ws_data = {
        "operation": "STOCK_FETCH",
        "baseCode": baseCode
    }
    # SuperThreads is a different topic; it is a wrapper around standard Python threads.
    ws_product_thread = SuperThreads(target=get_websocket_data, args=(product_ws, product_ws_data))
    ws_inventory_thread = SuperThreads(target=get_websocket_data, args=(inventory_ws, inventory_ws_data))
    ws_product_thread.start()  # ask one of my WebSocket servers for data
    ws_inventory_thread.start()  # same with this thread
    product_data_payload = ws_product_thread.join()  # the WebSocket server's response
    inventory_data_payload = ws_inventory_thread.join()  # this blocking request/response pattern is what I couldn't get in Node.js
    if "Fetched" in product_data_payload["abi_response_info"]["message"] and \
       "Fetched" in inventory_data_payload["abi_response_info"]["message"]:
        # data filtering here
        return product_data
    else:
        raise GraphQLError("Invalid Product Code")

Related

Azure Functions fails in load testing in combination with Azure Cosmos DB or SQL database

I got a large dataset of all the addresses in my country (3.8 GB). I am creating an API which will query the database for one specific address and respond with basic JSON data (300 bytes). The API is running in Python on Azure Functions.
So far everything works great. When I do a single request I get a response time of around 100-150 ms. Great! But if I load test the API with, say, 200 requests in 1 minute, the average response time is around 4-6 seconds.
This is what I tried so far:
Connect the API to a SQL database
Connect the API to a Cosmos DB database
Smaller tables (less columns)
Is there some limit on the number of connections per database? The SQL Database or Cosmos DB doesn't seem to be the issue (% of CPU/Mem are good).
I created a simple /status endpoint without the DB connection on the API which can handle 200 requests in 1 minute easily. Hopefully someone can push me in the right direction.
import azure.functions as func
import os
from azure.functions import AsgiMiddleware
from fastapi import Query
from typing import Optional
from api_app import app
import azure.cosmos.documents as documents
import azure.cosmos.cosmos_client as cosmos_client
import azure.cosmos.exceptions as exceptions
from azure.cosmos.partition_key import PartitionKey

@app.get("/status")
def get_status():
    return ({"status": 200})

@app.get("/postcode_cosmosdb/{postcode}/{huisnummer}")
async def postcode_cosmosdb(postcode: str, huisnummer: int):
    settings = {
        'host': os.environ.get('ACCOUNT_HOST', 'XXXXXX'),
        'master_key': os.environ.get('ACCOUNT_KEY', 'XXXXXX'),
        'database_id': os.environ.get('COSMOS_DATABASE', 'WoningAdressen'),
        'container_id': os.environ.get('COSMOS_CONTAINER', 'AdressenLight'),
    }
    HOST = settings['host']
    MASTER_KEY = settings['master_key']
    DATABASE_ID = settings['database_id']
    CONTAINER_ID = settings['container_id']
    client = cosmos_client.CosmosClient(HOST, {'masterKey': MASTER_KEY}, user_agent="CosmosDBPythonQuickstart", user_agent_overwrite=True)
    db = client.get_database_client(DATABASE_ID)
    container = db.get_container_client(CONTAINER_ID)
    items = list(container.query_items(
        query="SELECT l.postcode, l.huisnummer, l.huisletter, l.nummeraanduiding_id as bagid, l.gemeente FROM AdressenLight as l WHERE l.postcode=@postcode AND l.huisnummer=@huisnummer",
        parameters=[
            { "name": "@postcode", "value": postcode },
            { "name": "@huisnummer", "value": huisnummer }
        ]
    ))
    return items

def main(req: func.HttpRequest, context: func.Context) -> func.HttpResponse:
    return AsgiMiddleware(app).handle(req, context)
It is the official recommendation to reuse clients across function invocations; that keeps the number of connections small and the calls more efficient.
Please refer to the docs:
https://learn.microsoft.com/en-us/azure/azure-functions/manage-connections
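A minimal sketch of that recommendation applied to the code above, assuming the same environment variables: build the CosmosClient once at module scope so every invocation reuses it (and its connection pool), leaving only the query in the request path. The enable_cross_partition_query flag is my addition, needed when querying without a partition key.
import os
import azure.cosmos.cosmos_client as cosmos_client
from api_app import app

# Created once at import time and reused across all function invocations.
_client = cosmos_client.CosmosClient(
    os.environ['ACCOUNT_HOST'],
    {'masterKey': os.environ['ACCOUNT_KEY']},
)
_container = _client.get_database_client(
    os.environ.get('COSMOS_DATABASE', 'WoningAdressen')
).get_container_client(os.environ.get('COSMOS_CONTAINER', 'AdressenLight'))

@app.get("/postcode_cosmosdb/{postcode}/{huisnummer}")
async def postcode_cosmosdb(postcode: str, huisnummer: int):
    # The per-request body now only runs the query; no client construction.
    return list(_container.query_items(
        query="SELECT l.postcode, l.huisnummer FROM AdressenLight as l "
              "WHERE l.postcode=@postcode AND l.huisnummer=@huisnummer",
        parameters=[
            {"name": "@postcode", "value": postcode},
            {"name": "@huisnummer", "value": huisnummer},
        ],
        enable_cross_partition_query=True,
    ))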

How to send a GraphQL query to AppSync from python?

How do we post a GraphQL request through AWS AppSync using boto?
Ultimately I'm trying to mimic a mobile app accessing our stackless/cloudformation stack on AWS, but with python. Not javascript or amplify.
The primary pain point is authentication; I've tried a dozen different ways already. This is the current one, which generates a 401 response with "UnauthorizedException" and "Permission denied" -- which is actually pretty good considering some of the other messages I've had. I'm now using the aws_requests_auth library to do the signing part. I assume it authenticates me using the stored ~/.aws/credentials from my local environment, or does it?
I'm a little confused as to where and how Cognito identities and pools come into it, e.g. say I wanted to mimic the sign-up sequence?
Anyway, the code looks pretty straightforward; I just don't grok the authentication.
import requests
from aws_requests_auth.boto_utils import BotoAWSRequestsAuth

APPSYNC_API_KEY = 'inAppsyncSettings'
APPSYNC_API_ENDPOINT_URL = 'https://aaaaaaaaaaaavzbke.appsync-api.ap-southeast-2.amazonaws.com/graphql'
headers = {
    'Content-Type': "application/graphql",
    'x-api-key': APPSYNC_API_KEY,
    'cache-control': "no-cache",
}
query = """{
    GetUserSettingsByEmail(email: "john@washere"){
        items {name, identity_id, invite_code}
    }
}"""

def test_stuff():
    # Use the library to generate auth headers.
    auth = BotoAWSRequestsAuth(
        aws_host='aaaaaaaaaaaavzbke.appsync-api.ap-southeast-2.amazonaws.com',
        aws_region='ap-southeast-2',
        aws_service='appsync')
    # Create an http graphql request.
    response = requests.post(
        APPSYNC_API_ENDPOINT_URL,
        json={'query': query},
        auth=auth,
        headers=headers)
    print(response)
    # this didn't work:
    # response = requests.post(APPSYNC_API_ENDPOINT_URL, data=json.dumps({'query': query}), auth=auth, headers=headers)
Yields:
{
    "errors" : [ {
        "errorType" : "UnauthorizedException",
        "message" : "Permission denied"
    } ]
}
It's quite simple--once you know. There are some things I didn't appreciate:
I've assumed IAM authentication (OpenID appended way below)
There are a number of ways for appsync to handle authentication. We're using IAM so that's what I need to deal with, yours might be different.
Boto doesn't come into it.
We want to issue a request like any regular punter; they don't use boto, and neither do we. Trawling the AWS boto docs was a waste of time.
Use the AWS4Auth library
We are going to send a regular HTTP request to AWS, so whilst we can use Python requests, the requests need to be authenticated -- by attaching headers. And, of course, AWS auth headers are special and different from all others.
You can try to work out how to do it yourself, or you can go looking for someone else who has already done it: aws_requests_auth, the one I started with, probably works just fine, but I have ended up with AWS4Auth. There are many others of dubious value; none endorsed or provided by Amazon (that I could find).
Specify appsync as the "service"
What service are we calling? I didn't find any examples of anyone doing this anywhere; all the examples are trivial S3 or EC2 or even EB, which left uncertainty. Should we be talking to the api-gateway service? What's more, you feed this detail into the AWS4Auth routine as part of the authentication data. Obviously, in hindsight, the request is hitting AppSync, so it will be authenticated by AppSync: specify "appsync" as the service when putting together the auth headers.
It comes together as:
import requests
from requests_aws4auth import AWS4Auth

# Use AWS4Auth to sign a requests session
session = requests.Session()
session.auth = AWS4Auth(
    # An AWS 'ACCESS KEY' associated with an IAM user.
    'AKxxxxxxxxxxxxxxx2A',
    # The 'secret' that goes with the above access key.
    'kwWxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxgEm',
    # The region you want to access.
    'ap-southeast-2',
    # The service you want to access.
    'appsync'
)
# As found in AWS Appsync under Settings for your endpoint.
APPSYNC_API_ENDPOINT_URL = ('https://nqxxxxxxxxxxxxxxxxxxxke'
                            '.appsync-api.ap-southeast-2.amazonaws.com/graphql')
# Use JSON format string for the query. It does not need reformatting.
query = """
    query foo {
        GetUserSettings (
            identity_id: "ap-southeast-2:8xxxxxxb-7xx4-4xx4-8xx0-exxxxxxx2"
        ){
            user_name, email, whatever
        }
    }"""
# Now we can simply post the request...
response = session.request(
    url=APPSYNC_API_ENDPOINT_URL,
    method='POST',
    json={'query': query}
)
print(response.text)
Which yields
# Your answer comes as a JSON formatted string in the text attribute, under data.
{"data":{"GetUserSettings":{"user_name":"0xxxxxxx3-9102-42f0-9874-1xxxxx7dxxx5"}}}
Getting credentials
To get rid of the hardcoded key/secret you can consume the local AWS ~/.aws/config and ~/.aws/credentials, and it is done this way...
# Use AWS4Auth to sign a requests session
import boto3
session = requests.Session()
credentials = boto3.session.Session().get_credentials()
session.auth = AWS4Auth(
    credentials.access_key,
    credentials.secret_key,
    boto3.session.Session().region_name,
    'appsync',
    session_token=credentials.token
)
...<as above>
This does seem to respect the environment variable AWS_PROFILE for assuming different roles.
Note that STS.get_session_token is not the way to do it, as it may try to assume a role from a role, depending on where it keyword-matched the AWS_PROFILE value. Labels in the credentials file work because the keys are right there, but names found in the config file do not, as those assume a role already.
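As a concrete (hedged) variant of the above, you can pin a specific profile yourself instead of relying on AWS_PROFILE; the profile name 'dev' here is hypothetical:
import boto3
import requests
from requests_aws4auth import AWS4Auth

# 'dev' is a hypothetical profile label from ~/.aws/credentials.
boto_session = boto3.session.Session(profile_name='dev')
credentials = boto_session.get_credentials()

session = requests.Session()
session.auth = AWS4Auth(
    credentials.access_key,
    credentials.secret_key,
    boto_session.region_name,
    'appsync',
    session_token=credentials.token,
)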
OpenID
In this scenario, all the complexity is transferred to the conversation with the openid connect provider. The hard stuff is all the auth hoops you jump through to get an access token, and thence using the refresh token to keep it alive. That is where all the real work lies.
Once you finally have an access token, assuming you have configured the "OpenID Connect" Authorization Mode in appsync, then you can, very simply, drop the access token into the header:
response = requests.post(
    url="https://nc3xxxxxxxxxx123456zwjka.appsync-api.ap-southeast-2.amazonaws.com/graphql",
    headers={"Authorization": ACCESS_TOKEN},
    json={'query': "query foo{GetStuff{cat, dog, tree}}"}
)
You can set up an API key on the AppSync end and use the code below. This works for my case.
import requests

# establish a requests session
session = requests.Session()
# As found in AWS Appsync under Settings for your endpoint.
APPSYNC_API_ENDPOINT_URL = 'https://vxxxxxxxxxxxxxxxxxxy.appsync-api.ap-southeast-2.amazonaws.com/graphql'
# set up the query string (optional)
query = """query listItemsQuery {listItemsQuery {items {correlation_id, id, etc}}}"""
# Now we can simply post the request...
response = session.request(
    url=APPSYNC_API_ENDPOINT_URL,
    method='POST',
    headers={'x-api-key': '<APIKEYFOUNDINAPPSYNCSETTINGS>'},
    json={'query': query}
)
print(response.json()['data'])
Building off Joseph Warda's answer you can use the class below to send AppSync commands.
# fileName: AppSyncLibrary
import requests

class AppSync():
    def __init__(self, data):
        endpoint = data["endpoint"]
        self.APPSYNC_API_ENDPOINT_URL = endpoint
        self.api_key = data["api_key"]
        self.session = requests.Session()

    def graphql_operation(self, query, input_params):
        response = self.session.request(
            url=self.APPSYNC_API_ENDPOINT_URL,
            method='POST',
            headers={'x-api-key': self.api_key},
            json={'query': query, 'variables': {"input": input_params}}
        )
        return response.json()
For example, in another file within the same directory:
from AppSyncLibrary import AppSync

APPSYNC_API_ENDPOINT_URL = {YOUR_APPSYNC_API_ENDPOINT}
APPSYNC_API_KEY = {YOUR_API_KEY}
init_params = {"endpoint": APPSYNC_API_ENDPOINT_URL, "api_key": APPSYNC_API_KEY}
app_sync = AppSync(init_params)
mutation = """mutation CreatePost($input: CreatePostInput!) {
    createPost(input: $input) {
        id
        content
    }
}
"""
input_params = {
    "content": "My first post"
}
response = app_sync.graphql_operation(mutation, input_params)
print(response)
Note: This requires you to activate API access for your AppSync API. Check this AWS post for more details.
graphql-python/gql supports AWS AppSync since version 3.0.0rc0.
It supports queries, mutations, and even subscriptions on the realtime endpoint.
The documentation is available here.
Here is an example of a mutation using API key authentication:
import asyncio
import os
import sys
from urllib.parse import urlparse

from gql import Client, gql
from gql.transport.aiohttp import AIOHTTPTransport
from gql.transport.appsync_auth import AppSyncApiKeyAuthentication

# Uncomment the following lines to enable debug output
# import logging
# logging.basicConfig(level=logging.DEBUG)

async def main():
    # Should look like:
    # https://XXXXXXXXXXXXXXXXXXXXXXXXXX.appsync-api.REGION.amazonaws.com/graphql
    url = os.environ.get("AWS_GRAPHQL_API_ENDPOINT")
    api_key = os.environ.get("AWS_GRAPHQL_API_KEY")

    if url is None or api_key is None:
        print("Missing environment variables")
        sys.exit()

    # Extract host from url
    host = str(urlparse(url).netloc)

    auth = AppSyncApiKeyAuthentication(host=host, api_key=api_key)
    transport = AIOHTTPTransport(url=url, auth=auth)

    async with Client(
        transport=transport, fetch_schema_from_transport=False,
    ) as session:
        query = gql(
            """
            mutation createMessage($message: String!) {
                createMessage(input: {message: $message}) {
                    id
                    message
                    createdAt
                }
            }"""
        )
        variable_values = {"message": "Hello world!"}
        result = await session.execute(query, variable_values=variable_values)
        print(result)

asyncio.run(main())
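Since the answer mentions subscriptions on the realtime endpoint, here is a hedged sketch reusing the url, auth, Client, and gql names from the example above. The AppSyncWebsocketsTransport class comes from the gql documentation; the onCreateMessage subscription field is illustrative and depends on your schema:
from gql.transport.appsync_websockets import AppSyncWebsocketsTransport

async def subscribe_messages(url, auth):
    # The websockets transport derives the realtime endpoint from the
    # regular GraphQL URL and authenticates the same way as above.
    transport = AppSyncWebsocketsTransport(url=url, auth=auth)
    async with Client(transport=transport) as session:
        subscription = gql(
            """
            subscription onCreateMessage {
                onCreateMessage {
                    id
                    message
                }
            }"""
        )
        async for result in session.subscribe(subscription):
            print(result)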
I am unable to add a comment due to low rep, but I just want to add that I tried the accepted answer and it didn't work; I was getting an error saying my session_token is invalid, probably because I was using AWS Lambda.
I got it to work almost exactly as written, but by adding the session token parameter to the AWS4Auth object. Here's the full piece:
import requests
import os
from requests_aws4auth import AWS4Auth
def AppsyncHandler(event, context):
    # These are env vars that are always present in an AWS Lambda function.
    # If not using AWS Lambda, you'll need to add them manually to your env.
    access_id = os.environ.get("AWS_ACCESS_KEY_ID")
    secret_key = os.environ.get("AWS_SECRET_ACCESS_KEY")
    session_token = os.environ.get("AWS_SESSION_TOKEN")
    region = os.environ.get("AWS_REGION")

    # Your AppSync Endpoint
    api_endpoint = os.environ.get("AppsyncConnectionString")
    resource = "appsync"

    session = requests.Session()
    session.auth = AWS4Auth(access_id,
                            secret_key,
                            region,
                            resource,
                            session_token=session_token)
The rest is the same.
Hope this helps everyone.
import requests
import json
import os
from dotenv import load_dotenv

load_dotenv(".env")

class AppSync(object):
    def __init__(self, data):
        endpoint = data["endpoint"]
        self.APPSYNC_API_ENDPOINT_URL = endpoint
        self.api_key = data["api_key"]
        self.session = requests.Session()

    def graphql_operation(self, query, input_params):
        response = self.session.request(
            url=self.APPSYNC_API_ENDPOINT_URL,
            method='POST',
            headers={'x-api-key': self.api_key},
            json={'query': query, 'variables': {"input": input_params}}
        )
        return response.json()

def main():
    APPSYNC_API_ENDPOINT_URL = os.getenv("APPSYNC_API_ENDPOINT_URL")
    APPSYNC_API_KEY = os.getenv("APPSYNC_API_KEY")
    init_params = {"endpoint": APPSYNC_API_ENDPOINT_URL, "api_key": APPSYNC_API_KEY}
    app_sync = AppSync(init_params)
    # Note: despite the variable name, this is a query, not a mutation.
    mutation = """
    query MyQuery {
        getAccountId(id: "5ca4bbc7a2dd94ee58162393") {
            _id
            account_id
            limit
            products
        }
    }
    """
    input_params = {}
    response = app_sync.graphql_operation(mutation, input_params)
    print(json.dumps(response, indent=3))

main()

Sending a response to a particular Django WebSocket client from a REST API or a server

consumer.py
import json

from channels.generic.websocket import WebsocketConsumer
from channels.layers import get_channel_layer

class ChatConsumer(WebsocketConsumer):
    # accept websocket connection
    def connect(self):
        self.accept()

    # Receive message from WebSocket
    def receive(self, text_data):
        text_data_json = json.loads(text_data)
        command = text_data_json['command']
        job_id = text_data_json['job_id']
        if command == 'subscribe':
            self.subscribe(job_id)
        elif command == 'unsubscribe':
            self.unsubscribe(job_id)
        else:
            self.send({
                'error': 'unknown command'
            })

    # Subscribe the client to a particular 'job_id'
    def subscribe(self, job_id):
        self.channel_layer.group_add(
            'job_{0}'.format(job_id),
            self.channel_name
        )

    # call this method from rest api to get the status of a job
    def send_job_notification(self, message, job_id):
        channel_layer = get_channel_layer()
        group_name = 'job_{0}'.format(job_id)
        channel_layer.group_send(
            group_name,
            {
                "type": "send.notification",
                "message": message,
            }
        )

    # Receive message from room group
    def send_notification(self, event):
        message = event['message']
        # Send message to WebSocket
        self.send(text_data=json.dumps(message))
In the above code, what I am trying to do is connect clients to the socket and subscribe each client to a particular job_id, by creating a group such as "job_1" in the subscribe method and adding it to the channel layer. Group creation is dynamic.
I am using the "Simple WebSocket Client" extension from Google to connect to the above websocket. I am able to make a connection and send requests to it, as shown in the picture below.
Now that the client is connected and subscribed to a particular job_id, I am using Postman to send a notification to that connected client (the Simple WebSocket Client extension) by passing the job_id in the request, as highlighted in yellow below.
When I POST to the REST API, I call the send_job_notification(self, message, job_id) method of consumer.py with the job_id of '1', shown in the picture below in yellow.
After doing all this, I don't see any message sent to the connected client subscribed to the job_id from the REST API call.
Any help would be highly appreciated, as this has been dragging on for a long time.
Edit:
Thank you for the suggestion, Ken; it's worth making the method a @staticmethod. But Ken, how do I make the API send job status updates to the connected clients? My long-running jobs run in some process and send update messages back to the backend via the REST API, and those updates then need to be sent to the correct client (via websockets).
My API call to the socket consumer is as below:
from websocket_consumer import consumers

class websocket_connect(APIView):
    def post(self, request, id):
        consumers.ChatConsumer.send_job_notification("hello", id)
My socket consumer code is as below:
Edit
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("localhost", 6379)],
        },
    },
}
As you can see, the Redis service is also running.
Edit-1
You cannot call the method in the consumer directly from external code, because you need the particular consumer instance connected to your client. That is the job of the channel layer, achieved by using a message-passing system or broker such as Redis.
From what I can see, you're already going in the right direction, except that send_job_notification is an instance method, which requires instantiating the consumer. Make it a static method instead, so you can call it directly without a consumer instance:
@staticmethod
def send_job_notification(message, job_id):
    channel_layer = get_channel_layer()
    group_name = 'job_{0}'.format(job_id)
    channel_layer.group_send(
        group_name,
        {
            "type": "send.notification",
            "message": message,
        }
    )
And in your API view, you can simply call it as:
ChatConsumer.send_job_notification(message, job_id)
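One caveat worth adding (based on Channels 2.x, not stated in the answer above): group_send and group_add are coroutines, so calling them from synchronous code silently does nothing unless they are wrapped. A minimal sketch of the same static method with the wrapper applied:
from asgiref.sync import async_to_sync
from channels.generic.websocket import WebsocketConsumer
from channels.layers import get_channel_layer

class ChatConsumer(WebsocketConsumer):
    @staticmethod
    def send_job_notification(message, job_id):
        channel_layer = get_channel_layer()
        group_name = 'job_{0}'.format(job_id)
        # group_send is a coroutine; async_to_sync runs it to completion
        # from this synchronous call site (e.g. a DRF view).
        async_to_sync(channel_layer.group_send)(
            group_name,
            {
                "type": "send.notification",
                "message": message,
            }
        )
The same wrapping applies to self.channel_layer.group_add inside the consumer's subscribe method.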

Jupyter for WebSocket communication

I'm working on an app which needs to have a WebSockets API, and will also integrate Jupyter (formerly IPython) notebooks as a relatively minor feature. Since Jupyter already uses WebSockets for communication, how difficult would it be to integrate it as a general library for serving other WebSockets APIs apart from its own? Or am I better off using another library such as aiohttp? I'm looking for any advice and hints about best practices for this. Thanks!
You can proxy WebSockets from your main application to Jupyter.
It really doesn't matter what technology you use to serve WebSockets; the proxy loop will be very similar (wait for message, push message forward). However, it will be web-server dependent, as Python does not have a standard for WebSockets akin to WSGI.
I did one in the pyramid_notebook project. Running Jupyter in its own process is a must because, at least at the time the code was written, embedding Jupyter directly in your application was not feasible. I am not sure whether the latest versions have changed this. Jupyter itself was using Tornado.
"""UWSGI websocket proxy."""
from urllib.parse import urlparse, urlunparse
import logging
import time
import uwsgi
from pyramid import httpexceptions
from ws4py import WS_VERSION
from ws4py.client import WebSocketBaseClient
#: HTTP headers we need to proxy to upstream websocket server when the Connect: upgrade is performed
CAPTURE_CONNECT_HEADERS = ["sec-websocket-extensions", "sec-websocket-key", "origin"]
logger = logging.getLogger(__name__)
class ProxyClient(WebSocketBaseClient):
"""Proxy between upstream WebSocket server and downstream UWSGI."""
#property
def handshake_headers(self):
"""
List of headers appropriate for the upgrade
handshake.
"""
headers = [
('Host', self.host),
('Connection', 'Upgrade'),
('Upgrade', 'WebSocket'),
('Sec-WebSocket-Key', self.key.decode('utf-8')),
# Origin is proxyed from the downstream server, don't set it twice
# ('Origin', self.url),
('Sec-WebSocket-Version', str(max(WS_VERSION)))
]
if self.protocols:
headers.append(('Sec-WebSocket-Protocol', ','.join(self.protocols)))
if self.extra_headers:
headers.extend(self.extra_headers)
logger.info("Handshake headers: %s", headers)
return headers
def received_message(self, m):
"""Push upstream messages to downstream."""
# TODO: No support for binary messages
m = str(m)
logger.debug("Incoming upstream WS: %s", m)
uwsgi.websocket_send(m)
logger.debug("Send ok")
def handshake_ok(self):
"""
Called when the upgrade handshake has completed
successfully.
Starts the client's thread.
"""
self.run()
def terminate(self):
super(ProxyClient, self).terminate()
def run(self):
"""Combine async uwsgi message loop with ws4py message loop.
TODO: This could do some serious optimizations and behave asynchronously correct instead of just sleep().
"""
self.sock.setblocking(False)
try:
while not self.terminated:
logger.debug("Doing nothing")
time.sleep(0.050)
logger.debug("Asking for downstream msg")
msg = uwsgi.websocket_recv_nb()
if msg:
logger.debug("Incoming downstream WS: %s", msg)
self.send(msg)
s = self.stream
self.opened()
logger.debug("Asking for upstream msg")
try:
bytes = self.sock.recv(self.reading_buffer_size)
if bytes:
self.process(bytes)
except BlockingIOError:
pass
except Exception as e:
logger.exception(e)
finally:
logger.info("Terminating WS proxy loop")
self.terminate()
def serve_websocket(request, port):
"""Start UWSGI websocket loop and proxy."""
env = request.environ
# Send HTTP response 101 Switch Protocol downstream
uwsgi.websocket_handshake(env['HTTP_SEC_WEBSOCKET_KEY'], env.get('HTTP_ORIGIN', ''))
# Map the websocket URL to the upstream localhost:4000x Notebook instance
parts = urlparse(request.url)
parts = parts._replace(scheme="ws", netloc="localhost:{}".format(port))
url = urlunparse(parts)
# Proxy initial connection headers
headers = [(header, value) for header, value in request.headers.items() if header.lower() in CAPTURE_CONNECT_HEADERS]
logger.info("Connecting to upstream websockets: %s, headers: %s", url, headers)
ws = ProxyClient(url, headers=headers)
ws.connect()
# TODO: Will complain loudly about already send headers - how to abort?
return httpexceptions.HTTPOk()
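For completeness, a hedged sketch of how serve_websocket might be wired into a Pyramid view; the route name and port are illustrative, not taken from pyramid_notebook:
from pyramid.view import view_config

@view_config(route_name="notebook_websocket")
def notebook_websocket(request):
    # Assume the upstream Jupyter instance was launched earlier on this port.
    upstream_port = 40001
    return serve_websocket(request, upstream_port)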

How to get a response in a Python client from a NodeJS server

I'm trying to build a simple chat app using a NodeJS server with socket.IO and a client written in Python 2.7 using the socketIO-client package.
The JS server is local and very simple:
io.on('connection', function(socket){
    socket.on("chat_message", function(msg){
        io.emit("chat_message", msg);
    });
});
This chat app works for several pages opened in my browser.
(The source comes from http://socket.io/get-started/chat/ .)
I wanted to connect to this server from a Python client, and I successfully emit from Python to the JS server (text entered in the Python client appears in the browser).
The problem is the following: when I type some text into the browser, Python doesn't print it in the shell.
Here is the code I use on the Python side:
def communicate(self, msg):
    logging.debug('processing event : %s', msg)
    self.socketIO.emit("chat_message", msg, self.on_chat_message)

def on_chat_message(self, *args):
    output = ''
    index = 0
    for a in args:
        output += ' | ' + str(index) + ' : ' + str(a)
        index += 1
    logging.debug('processing server output : ' + output)
    return
As the server emits to all connected clients, Python should normally handle it in the on_chat_message callback, but it doesn't work.
I also tried adding a self.socketIO.wait_for_callbacks() call on the Python side, without success.
If someone has an idea about what I'm doing wrong, that would be great =D !
Thanks.
I finally managed to get a response from the server in my Python client.
My mistake was not to 'wait' on the socketIO object, so I never caught any feedback.
To properly handle server responses, I use a BaseNamespace class which is bound to the socket and handles basic events by overriding the on_event function:
def on_event(self, event, *args):
    # Catch your event and do stuff
    pass
If an event is handled by the class, a flag is raised and checked by the client.
def communicate(self, msg):
    self.socketIO.emit('chat_message', msg)

def wait_for_server_output(self):
    while True:
        self.socketIO.wait(seconds=1)
        ns = self.socketIO.get_namespace()
        if ns.flag.isSet():
            data = ns.data_queue.get()
            ns.flag.clear()
These two functions are managed by two threads inside the client code.
Hoping this will help =)
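To make the flag/queue pattern concrete, here is a hedged sketch of such a namespace class for socketIO-client. The names ChatNamespace, flag, and data_queue are mine, not from the library, and initialize/on_event are the library's namespace hooks as I remember them:
from Queue import Queue   # Python 2.7 ('queue' on Python 3)
from threading import Event

from socketIO_client import SocketIO, BaseNamespace

class ChatNamespace(BaseNamespace):
    def initialize(self):
        # Hook called by socketIO-client when the namespace is created.
        self.flag = Event()
        self.data_queue = Queue()

    def on_event(self, event, *args):
        # Catch-all handler: store the payload and raise the flag so a
        # thread blocked in wait_for_server_output() can pick it up.
        self.data_queue.put((event, args))
        self.flag.set()

# Usage sketch: bind the namespace when constructing the client.
# socketIO = SocketIO('localhost', 3000, ChatNamespace)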
