I am currently working on a project that uses Flask-SocketIO to send data over the internet, and I came across this question.
Question:
Is there any way to send images in Flask-SocketIO? I did some googling but no luck for me.
Socket.IO is a data agnostic protocol, so you can send any kind of information. Both text and binary payloads are supported.
If you want to send an image from the server, you can do something like this:
with open('my_image_file.jpg', 'rb') as f:
    image_data = f.read()

emit('my-image-event', {'image_data': image_data})
The client will have to be aware that you are sending JPEG data; there is nothing in the Socket.IO protocol that makes sending images different from sending text or other data formats.
If you are using a JavaScript client, you will get the data as a byte array. Other clients may choose the most appropriate binary representation for this data.
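As an illustration of that last point, here is a minimal Python client sketch using the python-socketio package (the localhost URL and the event name are assumptions that mirror the server example above; they are not part of the original answer). The binary payload arrives as a Python bytes object:

import socketio

sio = socketio.Client()

@sio.on('my-image-event')
def on_image(data):
    # the binary payload is delivered as Python bytes
    with open('received.jpg', 'wb') as f:
        f.write(data['image_data'])

sio.connect('http://localhost:5000')
sio.wait()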
Adding to the accepted answer: I had a problem converting the ArrayBuffer in JavaScript to a JPEG image. I used code inspired by https://gist.github.com/candycode/f18ae1767b2b0aba568e:
function receive_data(data) {
    var arrayBufferView = new Uint8Array(data['image']);
    var blob = new Blob([arrayBufferView], { type: "image/jpeg" });
    var img_url = URL.createObjectURL(blob);
    document.getElementById("fig_image").src = img_url;
}
Additionally, I had to increase the max_http_buffer_size of the server like this:
from flask import Flask
from flask_socketio import SocketIO
app = Flask(__name__)
MAX_BUFFER_SIZE = 50 * 1000 * 1000 # 50 MB
socketio = SocketIO(app, max_http_buffer_size=MAX_BUFFER_SIZE)
The default is 1 MB.
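Putting the server-side pieces of both answers together, a minimal sketch might look like the following (the 'request-image' event name is made up here for illustration; it does not appear in either answer):

from flask import Flask
from flask_socketio import SocketIO, emit

app = Flask(__name__)
MAX_BUFFER_SIZE = 50 * 1000 * 1000  # 50 MB
socketio = SocketIO(app, max_http_buffer_size=MAX_BUFFER_SIZE)

@socketio.on('request-image')
def handle_request_image():
    # read the JPEG from disk and emit it as a binary payload
    with open('my_image_file.jpg', 'rb') as f:
        emit('my-image-event', {'image_data': f.read()})

if __name__ == '__main__':
    socketio.run(app)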
I want to connect to a WebSocket server when my GraphQL server starts, and inside a resolver I want to use the send and recv functions of that connected WebSocket for data communication.
A brief overview of my backend:
I have a Python REST service that also runs a WebSocket server; I can fetch single product details and a product list via that WebSocket server.
(In a GraphQL resolver I want to collect my product data plus inventory data and merge them for the UI. Because Node is asynchronous, all the examples connect to the server and then communicate inside a .then block. I don't want to do that; I want to use the connection object inside the resolver, and the connection should be established only once.)
import { WebSocketClient } from './base';
const productWebSocket = new WebSocketClient("ws://192.168.0.109:8000/abi-connection");
productWebSocket.connect({reconnect: true});
export { productWebSocket };
Now I import this productWebSocket and want to use it in my resolvers.
This WebSocket-plus-GraphQL combination isn't that popular, but designing it this way gives me a performance boost, because I reuse utility functions for both my REST APIs and the WebSocket server in my core APIs. I call this the maksuDII+ way of programming.
I couldn't get this working in Node.js and found no help, so I implemented the GraphQL layer in Python and got much better control over the WebSocket client.
I tried ws, websocket, and a few other Node.js packages, but none of them worked for me:
websocket.on('connect', ws => {
    websocket.on('message', data => {
        // the data only exists as this callback argument; how am I supposed
        // to get the value out into an Express API endpoint?
        // I searched about five pages of Google results and found nothing,
        // so I had to switch to Python.
    })
})
Python version
from graphql import GraphQLError
from ..service_base import query
from app.websocket.product_websocket import product_ws_connection
from app.websocket.inventory_websocket import inventory_ws_connection
import json
from app.utils.super_threads import SuperThreads
def get_websocket_data(socket_connection, socket_send_data):
    socket_connection.send(json.dumps(socket_send_data))
    raw_data = socket_connection.recv()
    jsonified_data = json.loads(raw_data)
    return jsonified_data
@query.field("productDetails")
def product_details(*_, baseCode: str):
    product_ws = product_ws_connection.get_connection()  # connected client, with a proper connection to my websocket server
    inventory_ws = inventory_ws_connection.get_connection()
    if not product_ws:
        raise GraphQLError("Product Api Down")
    if not inventory_ws:
        raise GraphQLError("Inventory Api Down")
    product_ws_data = {
        "operation": "PRODUCT_FETCH",
        "baseCode": baseCode
    }
    inventory_ws_data = {
        "operation": "STOCK_FETCH",
        "baseCode": baseCode
    }
    # SuperThreads is a different topic; it is a wrapper around standard Python threads.
    ws_product_thread = SuperThreads(target=get_websocket_data, args=(product_ws, product_ws_data))
    ws_inventory_thread = SuperThreads(target=get_websocket_data, args=(inventory_ws, inventory_ws_data))
    ws_product_thread.start()  # ask one of my websocket servers for data
    ws_inventory_thread.start()  # same with this thread
    product_data_payload = ws_product_thread.join()  # join() gives me whatever the websocket sent back as a response
    inventory_data_payload = ws_inventory_thread.join()  # this is the behaviour I needed in Node.js but could not get
    if "Fetched" in product_data_payload["abi_response_info"]["message"] and \
            "Fetched" in inventory_data_payload["abi_response_info"]["message"]:
        # data filtering here
        return product_data
    else:
        raise GraphQLError("Invalid Product Code")
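SuperThreads itself is not shown in the question. Purely as a guess at what such a wrapper could look like, here is a hypothetical sketch that assumes it is just a threading.Thread subclass whose join() hands back the target's return value:

import threading

class SuperThreads(threading.Thread):
    """Hypothetical wrapper: join() returns the target's return value."""

    def __init__(self, target, args=()):
        super().__init__()
        self._target_fn = target
        self._target_args = args
        self._result = None

    def run(self):
        # store the target's return value so join() can hand it back
        self._result = self._target_fn(*self._target_args)

    def join(self, timeout=None):
        super().join(timeout)
        return self._result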
I have a base64 image being sent from the browser to my flask server. When the server gets the base64 string, it converts it to a format for OpenCV:
import base64
import cv2
import numpy as np

image = '{}'.format(image64).split(',')[1]  # strip the "data:image/...;base64," prefix
im_bytes = base64.b64decode(image)
im_arr = np.frombuffer(im_bytes, dtype=np.uint8)  # im_arr is a one-dimensional NumPy array
frame = cv2.imdecode(im_arr, flags=cv2.IMREAD_COLOR)
Some processing happens on the frame, and then the result is sent back to the browser with Flask-SocketIO:
encode_param = [int(cv2.IMWRITE_JPEG_QUALITY), 90]
result, encimg = cv2.imencode('.jpg', updated_frame, encode_param)
socketio.emit('response_back', {'image_data': encimg}, namespace='/test')
In the browser, I have some JS to display the image that it receives:
socket.on('response_back', function(image){
....
});
I have two issues.
Calling socketio.emit('response_back', {'image_data': encimg}, namespace='/test') results in an exception on the server: TypeError: Object of type ndarray is not JSON serializable.
How can I fix this? What is wrong here? It's strange because I am sure I am sending ('emitting') the data correctly, as shown here and here. I've also tried setting up SocketIO to use binary data: socketio = SocketIO(app, binary=True). Nothing works. Always the same error.
How do I deal with the image data back on the client/browser (i.e. in the socket.on('response_back', function(image) {...}) handler)?
After nearly an hour, I've found the solution.
First, Flask-SocketIO can send byte data, but NOT a NumPy ndarray. The solution is to convert the array to bytes with .tobytes(). For example, on the server:
encode_param = [int(cv2.IMWRITE_JPEG_QUALITY), 90]
result, encimg = cv2.imencode('.jpg', updated_frame, encode_param)
socketio.emit('response_back', {'frame': encimg.tobytes()}, namespace='/test')
For the second problem, we need to convert the ArrayBuffer data inside the socket.on callback (here the callback argument is named result and the emitted key is frame):
var arrayBufferView = new Uint8Array( result.frame );
var blob = new Blob( [ arrayBufferView ], { type: "image/jpeg" } );
var urlCreator = window.URL || window.webkitURL;
var imageUrl = urlCreator.createObjectURL( blob );
var image_id = document.getElementById('imagebox');
image_id.src = imageUrl;
I want my client to download (not render) a dynamically generated PDF file via pyramid. Right now I do it like this:
def get_pdf(request):
    pdfFile = open('/tmp/example.pdf', "wb")
    pdfFile.write(generator.GeneratePDF())
    pdfFile.close()  # make sure the bytes are flushed before FileResponse reads the file
    response = FileResponse('/tmp/example.pdf')
    response.headers['Content-Disposition'] = ('attachment; filename=example.pdf')
    return response
From the client's point of view it's exactly what I need. However:
It leaves behind an orphaned file
It isn't thread-safe (though I could use random filenames)
The docs say:
class FileResponse
A Response object that can be used to serve a static file from disk simply.
So FileResponse is probably not what I should be using. How would you replace it with something more dynamic but indistinguishable for the client?
Just use a normal response with the same header:
from pyramid.response import Response

def get_pdf(request):
    response = Response(body=generator.GeneratePDF())
    response.headers['Content-Disposition'] = ('attachment;filename=example.pdf')
    return response
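As a small optional refinement (a sketch, not something the answer calls for), you can also set the MIME type on the same Response so the browser knows it is receiving a PDF:

from pyramid.response import Response

def get_pdf(request):
    # content_type is optional; Content-Disposition already forces a download,
    # but it lets the client see the correct MIME type.
    response = Response(body=generator.GeneratePDF(),
                        content_type='application/pdf')
    response.headers['Content-Disposition'] = 'attachment;filename=example.pdf'
    return response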
I am attempting to use gTTS to generate an audio file from text I pass in as a variable (eventually I will be scraping the text to be read, but not in this script; that is why I am using a variable), and I want to text myself the .mp3 file I am generating. It is not working, though. Here is my code. Any idea how to text-message an .mp3 file with Twilio?
import twilio
from gtts import gTTS
from twilio.rest import Client
accountSID = '********'
authToken = '****************'
twilioCli = Client(accountSID, authToken)
myTwilioNumber = '*******'
myCellPhone = '*****'
v = 'test'
#add voice
tts = gTTS(v)
y = tts.save('hello.mp3')
message = twilioCli.messages.create(body = y, from_=myTwilioNumber, to=myCellPhone)
This is the error I get, but the link it directs me to does not address texting mp3 audio files:
raise self.exception(method, uri, response, 'Unable to create record')
twilio.base.exceptions.TwilioRestException:
HTTP Error  Your request was:
POST /Accounts/********/Messages.json
Twilio returned the following information:
Unable to create record: The requested resource /2010-04-01/Accounts/********/Messages.json was not found
More information may be available here:
https://www.twilio.com/docs/errors/20404
Twilio developer evangelist here.
You can't send an mp3 file as the body of a text message. If you do send a body, it should be a string.
You can deliver mp3 files as media messages in the US and Canada. In this case you need to make the mp3 file available at a URL. Then you set that URL as the media_url for the message, like this:
message = twilioCli.messages.create(
    from_=myTwilioNumber,
    to=myCellPhone,
    media_url="http://example.com/hello.mp3"
)
I recommend reading through the documentation on sending media via MMS and what happens to MIME types like mp3 in MMS.
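The answer does not show how to host the file, so purely as an illustration, here is one hypothetical way to make the gTTS output reachable at a URL that Twilio can fetch (the route, the port, and the need for a publicly reachable host such as an ngrok tunnel are assumptions, not part of the answer):

from flask import Flask, send_file
from gtts import gTTS

app = Flask(__name__)

# generate the audio once at startup
gTTS('test').save('hello.mp3')

@app.route('/hello.mp3')
def hello_mp3():
    # Twilio will fetch this URL when it is passed as media_url
    return send_file('hello.mp3', mimetype='audio/mpeg')

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8000)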
I have used this bit of code below to successfully parse a .wav file which contains speech, to text, using Google Speech.
But I want to access a different .wav file, which I have placed on Google Cloud Storage (publicly), instead of on my local hard drive. Why doesn't simply changing
speech_file = 'my/local/system/sample.wav'
to
speech_file = 'https://console.cloud.google.com/storage/browser/speech_proj_files/sample.wav'
work acceptably?
Here is my code:
import base64

import httplib2
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials

speech_file = 'https://console.cloud.google.com/storage/browser/speech_proj_files/sample.wav'

DISCOVERY_URL = ('https://{api}.googleapis.com/$discovery/rest?'
                 'version={apiVersion}')


def get_speech_service():
    credentials = GoogleCredentials.get_application_default().create_scoped(
        ['https://www.googleapis.com/auth/cloud-platform'])
    http = httplib2.Http()
    credentials.authorize(http)
    return discovery.build(
        'speech', 'v1beta1', http=http, discoveryServiceUrl=DISCOVERY_URL)


def main(speech_file):
    """Transcribe the given audio file.

    Args:
        speech_file: the name of the audio file.
    """
    with open(speech_file, 'rb') as speech:
        speech_content = base64.b64encode(speech.read())
    service = get_speech_service()
    service_request = service.speech().syncrecognize(
        body={
            'config': {
                'encoding': 'LINEAR16',   # raw 16-bit signed LE samples
                'sampleRate': 44100,      # 44.1 kHz sample rate
                'languageCode': 'en-US',  # a BCP-47 language tag
            },
            'audio': {
                'content': speech_content.decode('UTF-8')
            }
        })
    response = service_request.execute()
    return response
I'm not sure why your approach isn't working, but I want to offer a quick suggestion.
Google Cloud Speech API natively supports Google Cloud Storage objects. Instead of downloading the whole object only to upload it back to the Cloud Speech API, just specify the object by swapping out this line:
'audio': {
    # Remove this: 'content': speech_content.decode('UTF-8')
    'uri': 'gs://speech_proj_files/sample.wav'  # Do this!
}
One other suggestion. You may find the google-cloud Python library easier to use. Try this:
from google.cloud import speech

speech_client = speech.Client()
audio_sample = speech_client.sample(
    content=None,
    source_uri='gs://speech_proj_files/sample.wav',
    encoding='LINEAR16',
    sample_rate_hertz=44100)
results_list = audio_sample.sync_recognize(language_code='en-US')
There are some great examples here: https://github.com/GoogleCloudPlatform/python-docs-samples/tree/master/speech/cloud-client