Why is the output from Google Video Intelligence not in JSON format? - python-3.x

I have been trying to use the Google Video Intelligence API from https://cloud.google.com/video-intelligence/docs/libraries and I ran the exact sample code. The response was supposed to be in JSON format, but what I got back was a google.cloud.videointelligence_v1.types.AnnotateVideoResponse (or something similar to that).
I have tried code from several resources, most recently from https://cloud.google.com/video-intelligence/docs/libraries, but still no JSON output. Checking the type of the result gives:
type(result)
google.cloud.videointelligence_v1.types.AnnotateVideoResponse
So, how do I get a JSON response from this?

If you specify an outputUri, the results will be stored in your GCS bucket in JSON format: https://cloud.google.com/video-intelligence/docs/reference/rest/v1/videos/annotate
It seems you aren't storing the result in GCS. Instead you are getting the result via the GetOperation call, which returns the result as an AnnotateVideoResponse.
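For reference, a minimal request body for the REST videos:annotate endpoint might look like the sketch below. The bucket and object names are made-up placeholders, not real resources:

```python
# Hypothetical request body for the videos:annotate REST endpoint.
# Bucket and object names are placeholders, not real resources.
request_body = {
    "inputUri": "gs://my-bucket/video.mp4",
    # With outputUri set, the API writes the annotation results
    # to this GCS location as a JSON file.
    "outputUri": "gs://my-bucket/annotations/result.json",
    "features": ["OBJECT_TRACKING"],
}
print(request_body["outputUri"])
```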

I have found a solution for this. What I had to do was import this
from google.protobuf.json_format import MessageToJson
import json
and run
job = client.annotate_video(
    input_uri='gs://xxxx.mp4',
    features=['OBJECT_TRACKING'])
result = job.result()
serialized = MessageToJson(result)
a = json.loads(serialized)
type(a)
What this does is turn the result into a regular Python dictionary.
For more info, see this link: google forums thread
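To illustrate the last step: MessageToJson returns a JSON string, and json.loads turns that string into a plain dict you can index. A small sketch, where the string is a made-up stand-in for the real serialized response:

```python
import json

# Made-up stand-in for the string MessageToJson(result) would return
serialized = '{"annotationResults": [{"inputUri": "/xxxx.mp4"}]}'

a = json.loads(serialized)  # now a regular Python dict
print(type(a))              # <class 'dict'>
print(a["annotationResults"][0]["inputUri"])
```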

Related

How can I correctly output a dataframe with the values(?) of a JSON as columns?

First time posting. I am learning how to use Python and decided to do so using the Riot Games API.
Anyway, I'm trying to output a Legends of Runeterra leaderboard into a DataFrame, however my DataFrame is not mapping 'correctly'. I've done a lot of Googling and have finally given up and thought I'd just ask.
I'm betting it's something obvious!
This is my current query - nice and simple... (this took me 2 hours :P)
import requests
import pandas
response = requests.get("https://europe.api.riotgames.com/lor/ranked/v1/leaderboards?api_key=RGAPI-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx")
response_file = response.json()
data = pandas.DataFrame.from_dict(response_file,orient='columns')
print(data)
It outputs a DataFrame with a single 'players' column. I don't want the 'players' key to be the column; I want Name, Rank and LP to be the columns. I believe these are called values? But I cannot seem to figure out how to do this.
Any help, or links to posts that I have missed that help resolve this would be amazing.
Thank you
EDIT 13:20 13/02/2021
Attached JSON file as requested
https://pastebin.com/ks4AaXQp
I couldn't figure out how to attach a file here, so I threw it in PasteBin.
Try this:
data = pandas.read_json(response.text)
If this does not work, post the response_file as a .json file and I'll try to assist you further.
I have managed to resolve this myself. I didn't specify the 'players' key when creating the DataFrame.
The corrected code is:
import requests
import pandas
response = requests.get("https://europe.api.riotgames.com/lor/ranked/v1/leaderboards?api_key=RGAPI-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx")
response_file = response.json()
data = pandas.DataFrame(response_file['players'])
print(data)
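For context, the fix works because the API response (as described in the question) is a dict whose 'players' key holds a list of row dicts; passing that list to DataFrame makes each dict key a column. A stdlib sketch with made-up data shows the shape:

```python
# Made-up payload with the shape described in the question:
# a top-level "players" key holding a list of row dicts.
response_file = {
    "players": [
        {"name": "Alice", "rank": 1, "lp": 912},
        {"name": "Bob", "rank": 2, "lp": 875},
    ]
}

rows = response_file["players"]    # list of dicts -> one dict per row
columns = sorted(rows[0].keys())   # the keys become DataFrame columns
print(columns)
```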

How do I access JSON elements with urllib and Python3

I'm trying to access an API via urllib in Python3. The API is here:
https://api.zxinfo.dk/doc/#!/zxinfo/getGameById
I used a script I found here, which prints the JSON dump OK, but when I try to access individual values I get the error 'KeyError: controls':
Access nested JSON values using Python
How do I access the individual elements?
I have spent today looking at various other questions but I can't get to the bottom of it.
Thanks in advance.
import json
import urllib.request

data = urllib.request.urlopen("https://api.zxinfo.dk/api/zxinfo/games/0001551?mode=compact").read().decode('utf8')
output = json.loads(data)
print(json.dumps(output, indent=2))
for item in output['controls']:
    title = item['control']
    print(title)
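One way to avoid the KeyError is to use dict.get with a default before iterating: if the response omits 'controls', you get an empty list instead of a crash. A sketch with a made-up payload standing in for the real API response:

```python
import json

# Made-up stand-in for the API response; the real one may or may not
# include a "controls" key depending on the mode requested.
data = '{"title": "Some Game", "machinetype": "ZX-Spectrum 48K"}'
output = json.loads(data)

# .get with a default avoids KeyError when the key is missing
for item in output.get("controls", []):
    print(item["control"])

print(output["title"])  # Some Game
```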

Can AWS Lambda write CSV to response?

Like the question says, I would like to know if it is possible to return the response of a Lambda function in CSV format. I already know that it is possible to write JSON objects as such, but for my current project, CSV format is necessary. I have only seen discussion of writing CSV files to S3, but that is not what we need for this project.
This is an example of what I would like to have displayed in a response:
year,month,day,hour
2017,10,11,00
2017,10,11,01
2017,10,11,02
2017,10,11,03
2017,10,11,04
2017,10,11,05
2017,10,11,06
2017,10,11,07
2017,10,11,08
2017,10,11,09
Thanks!
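Yes: a Lambda behind an API Gateway proxy integration can return CSV by setting the Content-Type header and putting the CSV text in the body. A minimal sketch under that assumption (the handler builds the example data above; the response dict follows the standard proxy-integration shape):

```python
import csv
import io

def lambda_handler(event, context):
    # Build the CSV in memory rather than writing a file
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["year", "month", "day", "hour"])
    for hour in range(10):
        writer.writerow([2017, 10, 11, f"{hour:02d}"])

    # Standard API Gateway proxy-integration response shape
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "text/csv"},
        "body": buf.getvalue(),
    }
```

The caller (e.g. a browser hitting the API Gateway endpoint) then receives plain CSV text instead of JSON.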

audio file isn't being parsed with Google Speech

This question is a followup to a previous question.
The snippet of code below almost works...it runs without error yet gives back a None value for results_list. This means it is accessing the file (I think) but just can't extract anything from it.
I have a file, sample.wav, living publicly here: https://storage.googleapis.com/speech_proj_files/sample.wav
I am trying to access it by specifying source_uri='gs://speech_proj_files/sample.wav'.
I don't understand why this isn't working. I don't think it's a permissions problem. My session is instantiated fine. The code chugs for a second, yet always comes up with no result. How can I debug this?? Any advice is much appreciated.
from google.cloud import speech

speech_client = speech.Client()
audio_sample = speech_client.sample(
    content=None,
    source_uri='gs://speech_proj_files/sample.wav',
    encoding='LINEAR16',
    sample_rate_hertz=44100)
results_list = audio_sample.async_recognize(language_code='en-US')
Ah, that's my fault from the last question. That's the async_recognize command, not the sync_recognize command.
That library has three recognize commands. sync_recognize reads the whole file and returns the results. That's probably the one you want. Remove the letter "a" and try again.
Here's an example Python program that does this: https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/speech/cloud-client/transcribe.py
FYI, here's a summary of the other types:
async_recognize starts a long-running, server-side operation to translate the whole file. You can make further calls to the server to see whether it's finished with the operation.poll() method and, when complete, can get the results via operation.results.
The third type is streaming_recognize, which sends you results continually as they are processed. This can be useful for long files where you want some results immediately, or if you're continuously uploading live audio.
I finally got something to work:
import time
from google.cloud import speech

speech_client = speech.Client()
sample = speech_client.sample(
    content=None,
    source_uri='gs://speech_proj_files/sample.wav',
    encoding='LINEAR16',
    sample_rate_hertz=44100)

retry_count = 100
operation = sample.async_recognize(language_code='en-US')
while retry_count > 0 and not operation.complete:
    retry_count -= 1
    time.sleep(10)
    operation.poll()  # API call
print(operation.complete)
print(operation.results[0].transcript)
print(operation.results[0].confidence)
for op in operation.results:
    print(op.transcript)
Then something like
for op in operation.results:
    print(op.transcript)

Issue with Cloudinary in reading and uploading image from binary or base64 string

I am trying to upload an image to Cloudinary by passing either the binary data of an image or a base64 string. When I pass a base64 string as a data URI, I get an error response with status code 502, but it works fine with small base64 strings.
This works fine
res = cloudinary.uploader.upload("data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAUAAAAFCAYAAACNbyblAAAAHElEQVQI12P4//8/w38GIAXDIBKE0DHxgljNBAAO9TXL0Y4OHwAAAABJRU5ErkJggg==")
whereas when I pass a lengthy string it fails with a 502 status code.
Might it be because the URI cannot handle a large value? Is there another right way to pass lengthy strings?
Or how can I pass binary data as input to Cloudinary?
I'm Itay from Cloudinary.
In order to understand this issue better we'll need to take a deeper look at your account. We'll need some more information, like your cloud name, timestamps of the failed uploads and more.
If you prefer this to be handled more privately, you're more than welcome to open a support ticket (http://support.cloudinary.com/tickets/new) and we'll be happy to assist.
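For context on what the working upload call above receives, here is a stdlib sketch of building a data URI from raw bytes. The PNG bytes are a made-up placeholder, and the commented-out upload calls assume the cloudinary library is configured; note that cloudinary.uploader.upload also accepts a file path or a file-like object, which is usually a better fit for large images than a very long base64 URI:

```python
import base64

# Placeholder bytes standing in for real PNG image data
image_bytes = b"\x89PNG\r\n\x1a\n...fake-image-data..."

# Encode to base64 and wrap in a data URI, as in the working example above
encoded = base64.b64encode(image_bytes).decode("ascii")
data_uri = f"data:image/png;base64,{encoded}"

# res = cloudinary.uploader.upload(data_uri)                 # base64 data URI
# res = cloudinary.uploader.upload(open("img.png", "rb"))    # file-like alternative
```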
