I have used the code below to successfully transcribe a .wav file containing speech to text, using Google Speech.
But I want to access a different .wav file, which I have placed on Google Cloud Storage (publicly), instead of on my local hard drive. Why doesn't simply changing
speech_file = 'my/local/system/sample.wav'
to
speech_file = 'https://console.cloud.google.com/storage/browser/speech_proj_files/sample.wav'
work acceptably?
Here is my code:
speech_file = 'https://console.cloud.google.com/storage/browser/speech_proj_files/sample.wav'

DISCOVERY_URL = ('https://{api}.googleapis.com/$discovery/rest?'
                 'version={apiVersion}')

def get_speech_service():
    credentials = GoogleCredentials.get_application_default().create_scoped(
        ['https://www.googleapis.com/auth/cloud-platform'])
    http = httplib2.Http()
    credentials.authorize(http)
    return discovery.build(
        'speech', 'v1beta1', http=http, discoveryServiceUrl=DISCOVERY_URL)

def main(speech_file):
    """Transcribe the given audio file.

    Args:
        speech_file: the name of the audio file.
    """
    with open(speech_file, 'rb') as speech:
        speech_content = base64.b64encode(speech.read())
    service = get_speech_service()
    service_request = service.speech().syncrecognize(
        body={
            'config': {
                'encoding': 'LINEAR16',   # raw 16-bit signed LE samples
                'sampleRate': 44100,      # 44.1 kHz
                'languageCode': 'en-US',  # a BCP-47 language tag
            },
            'audio': {
                'content': speech_content.decode('UTF-8')
            }
        })
    response = service_request.execute()
    return response
I'm not sure why your approach isn't working, but I want to offer a quick suggestion.
Google Cloud Speech API natively supports Google Cloud Storage objects. Instead of downloading the whole object only to upload it back to the Cloud Speech API, just specify the object by swapping out this line:
'audio': {
    # Remove this: 'content': speech_content.decode('UTF-8')
    'uri': 'gs://speech_proj_files/sample.wav'  # Do this!
}
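Putting that together, a minimal sketch of the full request body with the `uri` field (bucket path and sample rate taken from the question above):

```python
# The same syncrecognize request body as in the question, but pointing at
# the object in Cloud Storage via a gs:// URI instead of inlined base64 content.
request_body = {
    'config': {
        'encoding': 'LINEAR16',
        'sampleRate': 44100,
        'languageCode': 'en-US',
    },
    'audio': {
        'uri': 'gs://speech_proj_files/sample.wav',
    },
}
```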
One other suggestion. You may find the google-cloud Python library easier to use. Try this:
from google.cloud import speech

speech_client = speech.Client()
audio_sample = speech_client.sample(
    content=None,
    source_uri='gs://speech_proj_files/sample.wav',
    encoding='LINEAR16',
    sample_rate_hertz=44100)
results_list = audio_sample.sync_recognize(language_code='en-US')
There are some great examples here: https://github.com/GoogleCloudPlatform/python-docs-samples/tree/master/speech/cloud-client
I have videos saved in Azure Blob Storage and I want to upload them to Facebook. Facebook video upload is a multipart/form-data POST request. The ordinary way of doing this is to download the Azure blob as bytes using the readall() method in the Azure Python SDK and set it in the requests POST data as follows.
# download video from azure blob
video = BlobClient.from_connection_string(AZURE_STORAGE_CONNECTION_STRING,
                                          AZURE_CONTAINER_NAME,
                                          f"{folder_id}/{file_name}")
video = video.download_blob().readall()

# upload video to facebook
url = f"{API_VIDEO_URL}/{page_id}/videos"
params = {
    "upload_phase": "transfer",
    "upload_session_id": session_id,
    "start_offset": start_offset,
    "access_token": access_token
}
response = requests.post(url, params=params, files={"video_file_chunk": video})
The bytes of the file are loaded into memory, which is not good for larger files. There is a method in the Azure SDK, readinto(stream), that downloads the file into a stream. Is there a way to connect requests' streaming upload with the readinto() method? Or is there another way to upload the file directly from blob storage?
Regarding how to upload the video in chunks with a stream, please refer to the following code:
from azure.storage.blob import BlobClient
import io
from requests_toolbelt import MultipartEncoder
import requests

# blob = BlobClient.from_connection_string(...)  # created as in the question
blob_properties = blob.get_blob_properties()
blob_size = blob_properties.size  # the blob size
access_token = ''
session_id = '675711696358783'
chunk_size = 1024 * 1024  # the chunk size
bytesRemaining = blob_size
params = {
    "upload_phase": "transfer",
    "upload_session_id": session_id,
    "start_offset": 0,
    "access_token": access_token
}
url = "https://graph-video.facebook.com/v7.0/101073631699517/videos"
bytesToFetch = 0
start = 0  # where to start downloading

while bytesRemaining > 0:
    with io.BytesIO() as f:
        if bytesRemaining < chunk_size:
            bytesToFetch = bytesRemaining
        else:
            bytesToFetch = chunk_size
        print(bytesToFetch)
        print(start)
        downloader = blob.download_blob(start, bytesToFetch)
        b = downloader.readinto(f)
        print(b)
        f.seek(0)  # rewind so the encoder reads the chunk from the start
        m = MultipartEncoder(
            fields={'video_file_chunk': ('file', f)}
        )
        r = requests.post(url, params=params,
                          headers={'Content-Type': m.content_type}, data=m)
        s = r.json()
        print(s)
        start = int(s['start_offset'])
        bytesRemaining = blob_size - start
        params['start_offset'] = start
        print(params)

# end upload
params['upload_phase'] = 'finish'
r = requests.post(url, params=params)
print(r)
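To answer the streaming part of the question directly: requests accepts a generator as the request body, so one hedged alternative is to yield the blob chunk by chunk (assuming the downloader exposes chunks(), as azure-storage-blob's StorageStreamDownloader does). Note Facebook's endpoint expects multipart form data, so this sketch illustrates the streaming mechanism rather than a drop-in replacement:

```python
# Sketch: stream a blob without ever buffering the whole file in memory.
# blob_client is any object whose download_blob() returns a downloader
# with a chunks() iterator (duck-typed, matching azure-storage-blob).
def blob_chunks(blob_client):
    downloader = blob_client.download_blob()
    for chunk in downloader.chunks():
        yield chunk

# requests.post(url, params=params, data=blob_chunks(blob))
```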
I would like to implement the Speaker Recognition API from Microsoft's Cognitive Services for a Speaker Verification project. I already have a Speaker Recognition API key. I got the sample Python code directly from the documentation (at the bottom of the page):
https://westus.dev.cognitive.microsoft.com/docs/services/563309b6778daf02acc0a508/operations/563309b7778daf06340c9652
########### Python 3.2 #############
import http.client, urllib.request, urllib.parse, urllib.error, base64

headers = {
    # Request headers
    'Content-Type': 'application/json',
    'Ocp-Apim-Subscription-Key': '{subscription key}',
}

params = urllib.parse.urlencode({
})

try:
    conn = http.client.HTTPSConnection('westus.api.cognitive.microsoft.com')
    conn.request("POST", "/spid/v1.0/verificationProfiles?%s" % params, "{body}", headers)
    response = conn.getresponse()
    data = response.read()
    print(data)
    conn.close()
except Exception as e:
    print("[Errno {0}] {1}".format(e.errno, e.strerror))
####################################
This is the code sample for the first step, create and save a voice profile.
To conduct Speaker Verification, we need to do 3 steps:
1) Create Profile
2) Create Enrollment
3) Verification
I'm stuck already at the first step. This is my first time working with APIs in general, so I'm not really sure which parts of the Python code I have to change. I know that I need to insert my API key in 'Ocp-Apim-Subscription-Key', but other than that, what else? For example, if I add my API key in that specific field and let the code run, I receive this error message:
b'{"error":{"code":"BadRequest","message":"locale is not specified"}}'
Where do I need to insert the locale ("en-us"), for example? It is not really clear to me from the documentation what I need to edit. If you can guide me on what I need to insert/add in my API calls, I would be very thankful.
Thanks so much in advance!
When you create a Speaker Recognition profile, it has to be linked with a locale, and you specify this locale in the request body. The body should be a JSON object like the following one:
{
    "locale": "en-us"
}
For the sample to work, you need to replace "{body}" with the actual body value like this:
conn.request("POST", "/spid/v1.0/verificationProfiles?%s" % params, "{\"locale\":\"en-US\"}", headers)
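Rather than hand-escaping the JSON string, it may be easier to build the body with json.dumps (the helper name here is just an illustration, not part of the API):

```python
import json

# Build the request body for profile creation; "locale" is the key the
# BadRequest error above complained about.
def make_profile_body(locale="en-US"):
    return json.dumps({"locale": locale})

# conn.request("POST", "/spid/v1.0/verificationProfiles?%s" % params,
#              make_profile_body(), headers)
```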
I am currently working on a project that uses Flask-SocketIO to send things over the internet, but I came across this question.
Question:
Is there any way to send images in Flask-SocketIO? I did some googling but no luck for me.
Socket.IO is a data agnostic protocol, so you can send any kind of information. Both text and binary payloads are supported.
If you want to send an image from the server, you can do something like this:
with open('my_image_file.jpg', 'rb') as f:
    image_data = f.read()

emit('my-image-event', {'image_data': image_data})
The client will have to be aware that you are sending jpeg data, there is nothing in the Socket.IO protocol that makes sending images different than sending text or other data formats.
If you are using a JavaScript client, you will get the data as a byte array. Other clients may choose the most appropriate binary representation for this data.
Adding to the accepted answer, I had a problem converting the ArrayBuffer in JavaScript to a jpeg image. I used code inspired by https://gist.github.com/candycode/f18ae1767b2b0aba568e:
function receive_data(data) {
    var arrayBufferView = new Uint8Array(data['image']);
    var blob = new Blob([arrayBufferView], { type: "image/jpeg" });
    var img_url = URL.createObjectURL(blob);
    document.getElementById("fig_image").src = img_url;
}
Additionally, I had to increase the max_http_buffer_size of the server like this:
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)

MAX_BUFFER_SIZE = 50 * 1000 * 1000  # 50 MB
socketio = SocketIO(app, max_http_buffer_size=MAX_BUFFER_SIZE)
The default is 1 MB.
I am attempting to use gTTS to generate an audio file of text I am passing in as a variable (eventually I will be scraping the text to be read, but not in this script, which is why I am using a variable), and I want to text myself the .mp3 file I am generating. It is not working, though. Here is my code. Any idea how to text-message an .mp3 file with Twilio?
import twilio
from gtts import gTTS
from twilio.rest import Client
accountSID = '********'
authToken = '****************'
twilioCli = Client(accountSID, authToken)
myTwilioNumber = '*******'
myCellPhone = '*****'
v = 'test'
#add voice
tts = gTTS(v)
y = tts.save('hello.mp3')
message = twilioCli.messages.create(body = y, from_=myTwilioNumber, to=myCellPhone)
This is the error I get, but the link it directs me to does not speak to texting mp3 audio files:
raise self.exception(method, uri, response, 'Unable to create record')
twilio.base.exceptions.TwilioRestException:
HTTP Error  Your request was:
POST /Accounts/********/Messages.json
Twilio returned the following information:
Unable to create record: The requested resource /2010-04-01/Accounts/********/Messages.json was not found
More information may be available here:
https://www.twilio.com/docs/errors/20404
Twilio developer evangelist here.
You can't send an mp3 file as the body of a text message. If you do send a body, it should be a string.
You can deliver mp3 files as media messages in the US and Canada. In this case you need to make the mp3 file available at a URL. Then you set that URL as the media_url for the message, like this:
message = twilioCli.messages.create(
    from_=myTwilioNumber,
    to=myCellPhone,
    media_url="http://example.com/hello.mp3"
)
I recommend reading through the documentation on sending media via MMS and what happens to MIME types like mp3 in MMS.
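For completeness: Twilio fetches the media over HTTP, so the file gTTS saved has to be reachable at a public URL. One hedged way to expose it is a tiny Flask route (the route and filename are just examples, and in practice the server must also be publicly reachable, e.g. via a tunnel):

```python
import os

from flask import Flask, send_file

app = Flask(__name__)

@app.route('/hello.mp3')
def serve_mp3():
    # Serve the file gTTS wrote; an absolute path avoids any ambiguity
    # about which directory Flask resolves relative paths against.
    return send_file(os.path.abspath('hello.mp3'), mimetype='audio/mpeg')
```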
I cannot find a way to write a data set from my local machine into Google Cloud Storage using Python. I have researched a lot but didn't find any clue regarding this. Need help, thanks.
Quick example, using the google-cloud Python library:
from google.cloud import storage
def upload_blob(bucket_name, source_file_name, destination_blob_name):
    """Uploads a file to the bucket."""
    storage_client = storage.Client()
    bucket = storage_client.get_bucket(bucket_name)
    blob = bucket.blob(destination_blob_name)
    blob.upload_from_filename(source_file_name)

    print('File {} uploaded to {}.'.format(
        source_file_name,
        destination_blob_name))
More examples are in this GitHub repo: https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/storage/cloud-client
When we want to write a string to a GCS bucket blob, the only change necessary is using blob.upload_from_string(your_string) rather than blob.upload_from_filename(source_file_name):
from google.cloud import storage
def write_to_cloud(your_string):
    client = storage.Client()
    bucket = client.get_bucket('bucket123456789')
    blob = bucket.blob('PIM.txt')
    blob.upload_from_string(your_string)
In the earlier answers, I still miss the easiest way: using the open() method.
You can use the blob.open() as follows:
from google.cloud import storage
def write_file(lines):
    client = storage.Client()
    bucket = client.get_bucket('bucket-name')
    blob = bucket.blob('path/to/new-blob-name.txt')
    ## Use bucket.get_blob('path/to/existing-blob-name.txt') to write to existing blobs
    with blob.open(mode='w') as f:
        for line in lines:
            f.write(line)
You can find more examples and snippets here:
https://github.com/googleapis/python-storage/tree/main/samples/snippets
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
credentials = GoogleCredentials.get_application_default()
service = discovery.build('storage', 'v1', credentials=credentials)
filename = 'file.csv'
bucket = 'Your bucket name here'
body = {'name': 'file.csv'}
req = service.objects().insert(bucket=bucket, body=body, media_body=filename)
resp = req.execute()
from google.cloud import storage
def write_to_cloud(buffer):
client = storage.Client()
bucket = client.get_bucket('bucket123456789')
blob = bucket.blob('PIM.txt')
blob.upload_from_file(buffer)
While Brandon's answer indeed gets the file to Google Cloud, it does so by uploading the file, as opposed to writing it. This means the file needs to exist on your disk before you upload it to the cloud.
My proposed solution uses an "in-memory" payload (the buffer parameter) which is then written to cloud. To write the content you need to use upload_from_file instead of upload_from_filename, everything else being the same.
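One detail worth noting with the buffer approach: upload_from_file reads from the buffer's current position, so rewind it after writing. A small local sketch (independent of any actual GCS call):

```python
import io

# Build an in-memory payload that never touches the disk.
buffer = io.BytesIO()
buffer.write(b'generated content, never written to disk')
buffer.seek(0)  # rewind so upload_from_file reads from the beginning

# write_to_cloud(buffer)  # would upload the bytes above
```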