So, I have a Socket.IO connection with a React.js frontend that sends a blob of WAV audio data every 100 ms to my Flask backend. On my backend, I am maintaining a thread-safe queue:
import queue

audio_buffer = queue.Queue()

@socketio.on('stream')
def stream(data):
    # data is a properly formatted wav file, in bytes
    audio_buffer.put(data)
Now, my goal is that while my backend is taking in the streamed data from the frontend, it also sends that data to Google ASR at the same time. I am basing my code on the documentation here: https://cloud.google.com/speech-to-text/docs/streaming-recognize
Basically, I am going to create a generator with a reference to the queue and hand that to the Google API.
audio_generator = stream.generator()  # Here, I wrote a new method for creating a generator from audio_buffer

requests = (
    speech.StreamingRecognizeRequest(audio_content=content)
    for content in audio_generator
)

responses = client.streaming_recognize(streaming_config, requests)

# Now, put the transcription responses to use.
listen_print_loop(responses)
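For reference, the generator method itself is not shown above; it follows the queue-draining pattern from Google's streaming sample. A simplified sketch (not the exact code) over the audio_buffer queue:

def generator():
    while True:
        chunk = audio_buffer.get()          # block until the frontend sends a chunk
        if chunk is None:                   # a None sentinel signals the end of the stream
            return
        data = [chunk]
        # Drain anything else that has already been buffered.
        while True:
            try:
                chunk = audio_buffer.get(block=False)
                if chunk is None:
                    return
                data.append(chunk)
            except queue.Empty:
                break
        yield b"".join(data)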
Now the issue I have is that as soon as I start sending data to the Google API, my backend stops listening to the frontend: it just blocks waiting for the queue to be populated and ignores Socket.IO events from the client. For example, I had the client send a start command so that the backend would create the generator and begin streaming to Google while the client kept sending data to populate the queue through the stream handler. What happened was that the backend got stuck streaming to Google and never got around to actually receiving the audio blobs.
My question is: should I use threading or Celery to run the Google API code concurrently? Basically, I am thinking of spinning up a thread or worker to do the streaming recognize call, while the main Flask thread keeps listening to the client and populating the queue. Is this a sound approach? Is there a library that implements this for me?
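For example, something along these lines, either with a plain thread or with Flask-SocketIO's start_background_task helper (a sketch only, untested; recognize stands for the function that builds the request generator and calls streaming_recognize):

@socketio.on('start')
def start(data):
    # Kick off the Google streaming call in a background task so the
    # Socket.IO handlers stay free to keep filling audio_buffer.
    socketio.start_background_task(recognize, audio_buffer)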
Update 5/16:
So I am attempting to run the Google API call on a standard thread. My code is pretty simple. On start, I spin up a thread with
yourThread = threading.Thread(target=recognize, args=[stream])
and my recognize function takes the thread-safe queue and does send the audio to the Google API properly. However, my issue now is that the emits I put in that code never actually make it to the client. My code is based on the Google API sample:
for response in responses:
    # ... other handling from the tutorial: take the top result and its transcript ...
    transcript = response.results[0].alternatives[0].transcript
    socketio.emit("transcription", transcript)
That last emit call never actually reaches the frontend at all. Looking around, I found the same issue reported here: https://github.com/miguelgrinberg/Flask-SocketIO/issues/1239, which suggests:
import httplib2shim
httplib2shim.patch()
but when I tried that fix and switched async_mode to eventlet, the Google API calls still worked, yet none of the emits reached the client.
Okay, so I solved my own issue. The problem is that flask_socketio plus the Python Google Speech API simply doesn't work, due to an incompatibility between eventlet/gevent and grpc.
My solution was to switch to plain WebSockets. By using a regular WebSocket on the frontend and flask_sockets on the backend, I managed to get the Google Cloud Speech Python API to work without issue. I've placed some sample code for this on my GitHub: https://github.com/dragon18456/flask-sockets-google-speech-api
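A stripped-down sketch of the flask_sockets side (illustrative only: the route name and details are placeholders, recognize stands for the Google streaming consumer described above, and the full working version is in the repo linked above):

import queue
import threading

from flask import Flask
from flask_sockets import Sockets

app = Flask(__name__)
sockets = Sockets(app)
audio_buffer = queue.Queue()

@sockets.route('/audio')
def audio_socket(ws):
    # Run the Google streaming_recognize consumer in its own thread.
    threading.Thread(target=recognize, args=[audio_buffer], daemon=True).start()
    while not ws.closed:
        message = ws.receive()      # one WAV chunk from the frontend
        if message is not None:
            audio_buffer.put(message)
    audio_buffer.put(None)          # sentinel: tell the generator to stop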
Related
A few questions about how to handle this dataflow. Note that I'm new to Node and Next.js, but I'm getting through the basics and trying to think ahead, because I have read that WebSockets and Node are good for handling live data.
We have an external API from a game, and multiple moderators/clients who log in to our system so they can use that API. Right now each of those clients hits the game API multiple times per second, which isn't a nice dataflow.
The main GET we use from that API is chat, so ideally that would be live data/push, or otherwise a 1-second refresh. There is also the server info with the players, which could do with a 5-second refresh. Or is it just as easy to fetch that data on the same 1-second/live-data cycle as the chat, in the "own GET" flow I describe next? That's the first question, sneaked in.
Now, the own GET flow.
I was thinking of building our own API that makes the requests to the game API. The game API doesn't support WebSockets, so the idea is to have our own API poll the game API once a second and then let the clients listen to our API over a WebSocket.
What I think the WebSocket can do is send a push notification to the client whenever there is new data; on that notification the client sends a GET request to our API, instead of every client pinging our API with a GET once a second.
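Roughly the shape I have in mind, sketched in Python with Flask-SocketIO purely for illustration (our stack would be Node with socket.io; the URL and event names are placeholders, and the sketch pushes the new data directly rather than a ping followed by a GET, but the flow is the same):

import requests
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)

def poll_game_api():
    last_chat = None
    while True:
        resp = requests.get("https://game.example.com/api/chat")  # placeholder endpoint
        chat = resp.json()
        if chat != last_chat:
            # Broadcast only when something changed, instead of every client polling.
            socketio.emit("chat_update", chat)
            last_chat = chat
        socketio.sleep(1)  # cooperative sleep so the server keeps serving clients

if __name__ == "__main__":
    socketio.start_background_task(poll_game_api)
    socketio.run(app)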
Just a quick check: is this even possible? And is this the right way of thinking, or could it be optimized even further? Thanks in advance!
I am using these guides to create my Next.js + MongoDB app without a Node.js server:
https://www.mongodb.com/developer/how-to/nextjs-with-mongodb/
https://www.section.io/engineering-education/build-nextjs-with-mongodb-and-deploy-on-vercel/#setting-up-the-api-route
I couldn't find the answer to this, but is it possible to stream data? The closest I've come to making the app look real-time is either refreshing the page or getting the published data back on the response (which I can then append to the state); this, however, is very limited.
I have used libraries like Pusher in the past with an Express app, but I can't use that in my serverless app.
It sounds to me like you want to watch collection changes from your MongoDB and consume them on the frontend. If so, I'd try Ably for the real-time communication.
You can use Ably in your model to broadcast the data you want to render in real time. You could also use Pusher to accomplish the same result.
https://mongoosejs.com/docs/change-streams.html
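That link covers change streams via Mongoose; to make the idea concrete, here is a minimal change-stream watcher sketched with pymongo, purely for illustration (connection string, database, and collection names are placeholders; note that change streams require a replica set):

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder connection string
posts = client["mydb"]["posts"]                    # placeholder db/collection

with posts.watch() as stream:
    for change in stream:
        # change["operationType"] is "insert", "update", "delete", ...
        # This is where you would publish the change to Ably/Pusher so the
        # frontend can update without a refresh.
        print(change["operationType"], change.get("fullDocument"))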
I'm working on a React/Node app. At some point in the app, I want to send a request to the backend using Axios, and then, while the backend function is processing that request, receive step-by-step updates so I can show progress notifications on the frontend.
I could do this by sending multiple requests and waiting for each response, but the problem is that the first part of the processing is identical for every step, so repeating it would create performance issues.
My question is:
Is there any way to send a single request to the API and have the backend return the response in multiple steps while it is processing, with the frontend picking up those updates and refreshing the notifications as they arrive?
Thank you very much
Sorry bro, I'm afraid you can't do this with plain HTTP alone, since the connection ends with a single response to a single request. You would need to make multiple HTTP calls with Axios.
Otherwise, you could use WebSockets. socket.io is a nice module with plenty of examples and documentation; check it out:
https://www.npmjs.com/package/socket.io
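To make the event flow concrete, here is a minimal sketch of step-by-step progress events, written in Python with Flask-SocketIO purely for illustration (event names and steps are placeholders; socket.io on Node follows the same emit/listen pattern):

from flask import Flask
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app)

@socketio.on("start_job")
def start_job(payload):
    # The shared first step runs exactly once per job.
    emit("progress", {"step": 1, "status": "preparing"})
    # ... shared preprocessing ...
    emit("progress", {"step": 2, "status": "processing"})
    # ... long-running work ...
    emit("result", {"done": True})

if __name__ == "__main__":
    socketio.run(app)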
Right now I have a Discord bot that is in approximately 575 servers, and the website I made for it lists the bot's current server count. My current method is to log the bot in every 5 minutes from the Express app for the webpage and save the current server count to be served to the client. This causes memory-usage spikes whenever I have to log in, though, and using a whole discord.js application for one function seems inefficient.
I tried using the Discord API endpoint, but that was extremely laggy because there is only an endpoint for listing all the servers, not just the count. The endpoint can also only send info on 100 servers at a time, so I'd have to make a lot of separate requests.
I'm hoping there's a way to do this that would use less memory but still be fast. I tried looking into discord.js's source code to see if I could isolate just the functionality I needed, but I wasn't able to even find where in the code the data is requested from Discord. If anyone is able to figure out how I could do this, it would be greatly appreciated.
You can try using a free online database as a way to "communicate" data between your bot and your Express app.
For example, you can use Cloud Firestore. Every 15 minutes (or whatever frequency you want), have your bot save the server count (and the update time too, if you want) into Cloud Firestore. Every time a client loads your webpage, it retrieves the data from Cloud Firestore and can display the server count and last-updated time. (Alternatively, your Express app could retrieve that data every 15 minutes and cache it to send to the client.)
You can use this method to share other data from your bot with your Express app too.
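A minimal sketch of the two halves of that pattern, shown with the Python Firestore client purely for illustration (the bot itself would use the Node Firestore SDK, which offers equivalent operations; collection, document, and field names are placeholders):

from google.cloud import firestore

db = firestore.Client()

def save_server_count(count):
    # Bot side: run on a timer, e.g. every 15 minutes.
    db.collection("stats").document("bot").set({
        "server_count": count,
        "updated": firestore.SERVER_TIMESTAMP,
    })

def read_server_count():
    # Web side: read on page load, or cache it in the Express app.
    doc = db.collection("stats").document("bot").get()
    return doc.to_dict() if doc.exists else None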
The solution I ended up needing was a Discord WebSocket (gateway) connection. That keeps everything updated live without having to deal with the memory and caching issues that come with discord.js. I've had a few other questions after this one on that topic; check those out if you want to see more on Discord WebSocket connections.
I am new to socket.io. I have a backend server implemented in Node.js which makes multiple async calls to REST APIs and sends the responses back to a client running Angular 4. I currently follow the simple HTTP request/response approach. The issue, however, is that it takes a while for the server to collect and parse the data from all of the APIs before it can send anything back to the client. I therefore want to implement streaming, so that data is returned to the client as soon as it arrives from any of the sources. I would like to know whether this can be done efficiently using WebSockets and socket.io, or whether there is another way to do it.
I know we can make the async calls from the client, but I want to implement the logic on the server rather than the client.
Note: I do not have real-time data.