How can I use a Python dictionary? - python-3.x

I already made a JSON secret file like this:
json_data = {
    'creon': {
        'token': ["abcd"]
    }
}
So I want to use it exactly like this:
token = app_info['creon']['token']
print(token)
> "abcd"
But the result is like this:
print(token)
> abcd
How can I get the result I wanted?
Last attempt:
import os
import json
app_info = os.getenv('App_info')
with open(app_info, 'r') as f:
    app_info = json.load(f)
token = '"' + app_info['creon']['token'] + '"'
print(token)
TypeError: expected str, bytes or os.PathLike object, not NoneType

So I see a couple of problems. First of all, app_info['creon']['token'] is not a string but a list (['abcd']), so concatenating quotes onto it fails. You need to take the first element, app_info['creon']['token'][0], and then you can print it properly. (In Python 3, json.load already gives you str values, so no encode/decode round trip is needed.)
I modified the code to look like this:
import os
import json
app_info = os.getenv('App_info')
with open(app_info, 'r') as f:
    app_info = json.load(f)
t = app_info['creon']['token'][0]  # index the list; no encode/decode needed in Python 3
token = f'"{t}"'
print(token)
The second problem, TypeError: expected str, bytes or os.PathLike object, not NoneType, happens because you haven't set the environment variable to the path of your JSON file, so os.getenv('App_info') returns None. I set it in my terminal with export App_info=example.json (the name has to match what you pass to os.getenv, including case) and it worked properly when I ran python3 example.py with the above Python code in the same terminal session.
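If it helps to see the whole round trip in one place, here is a minimal, self-contained sketch; example.json is just an assumed filename, and setting os.environ inside the script stands in for the shell export:
import json
import os

# Write the secret file once (contents taken from the question).
with open('example.json', 'w') as f:
    json.dump({'creon': {'token': ['abcd']}}, f)

# Stand-in for `export App_info=example.json`; only affects this process.
os.environ['App_info'] = 'example.json'

with open(os.getenv('App_info'), 'r') as f:
    app_info = json.load(f)

print(app_info['creon']['token'][0])  # abcd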

If your question is how to print out the quotation marks along with the value, you can do:
print('"' + token + '"')

Related

Passing base64 .docx to docx.Document results in BadZipFile exception

I'm writing an Azure function in Python 3.9 that needs to accept a base64 string created from a known .docx file which will serve as a template. My code will decode the base64, pass it to a BytesIO instance, and pass that to docx.Document(). However, I'm receiving an exception BadZipFile: File is not a zip file.
Below is a slimmed down version of my code. It fails on document = Document(bytesIODoc). I'm beginning to think it's an encoding/decoding issue, but I don't know nearly enough about it to get to the solution.
from docx import Document
from io import BytesIO
import base64
class ParseBody():
    def __init__(self, body):
        self.template = str(body['template'])
        self.contents = body['data']

    def _decode_template(self):
        b64Doc = base64.b64decode(self.template)
        bytesIODoc = BytesIO(b64Doc)
        document = Document(bytesIODoc)  # raises BadZipFile here

    def run(self):
        self.document = self._decode_template()

var = {
    'template': 'Some_base64_from_docx_file',
    'data': {'some': 'data'}
}

run_stuff = ParseBody(body=var)
output = run_stuff.run()
I've also tried the following change to _decode_template and am getting the same exception. This is running base64.decodebytes() on the b64Doc object and passing that to BytesIO instead of directly passing b64Doc.
def _decode_template(self):
    b64Doc = base64.b64decode(self.template)
    bytesDoc = base64.decodebytes(b64Doc)
    bytesIODoc = BytesIO(bytesDoc)
I have successfully tried the following on the same exact .docx file to be sure that this is possible. I can open the document in Python, base64 encode it, decode into bytes, pass that to a BytesIO instance, and pass that to docx.Document successfully.
file = r'WordTemplate.docx'
doc = open(file, 'rb').read()
b64Doc = base64.b64encode(doc)
bytesDoc = base64.decodebytes(b64Doc)
bytesIODoc = BytesIO(bytesDoc)
newDoc = Document(bytesIODoc)
I've tried countless other solutions to no avail that have led me further away from a resolution. This is the closest I've gotten. Any help is greatly appreciated!
The answer to the question linked below actually helped me resolve my own issue. How to generate a DOCX in Python and save it in memory?
All I had to do was change document = Document(bytesIODoc) to the following:
document = Document()
document.save(bytesIODoc)
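Applied to the method from the question, that change looks roughly like this sketch; the explicit return is my addition so that run() actually gets a value back, and it is not in the original code:
def _decode_template(self):
    b64Doc = base64.b64decode(self.template)
    bytesIODoc = BytesIO(b64Doc)
    # Per the linked answer: build the document and save it into the buffer,
    # instead of constructing Document() from the buffer.
    document = Document()
    document.save(bytesIODoc)
    return document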

Output from requests.get(url) looks like dict but says it's a string

I am new to this so bear with me. This is a bioinformatics-related question, but I don't think that matters.
Here is my code (python):
import requests
URL = 'https://iupred2a.elte.hu/iupred2a/long/P03255.json'
response = requests.get(URL)
out = response.content
out_str = str(out, encoding='UTF-8')
The output for this starts and ends with {} and looks like a dictionary; however, it says it is of type str.
It is a JSON-encoded object that looks quite similar to a Python dictionary, but it's not one (it is, as you mentioned, a string).
To get the dictionary, use .json() instead of .content:
out = response.json()
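To make the difference concrete, a small sketch (nothing here beyond the calls already mentioned):
import json
import requests

URL = 'https://iupred2a.elte.hu/iupred2a/long/P03255.json'
response = requests.get(URL)

as_text = response.text          # str: the raw JSON body
as_dict = response.json()        # dict parsed from that body
same_dict = json.loads(as_text)  # equivalent manual parse

print(type(as_text), type(as_dict))  # <class 'str'> <class 'dict'>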
References:
https://requests.readthedocs.io/en/master/user/quickstart/#response-content
https://requests.readthedocs.io/en/master/user/quickstart/#json-response-content

python3 get nested dictionary/property from yaml file

I'm trying to figure out how to get nested data as a dictionary/property from a YAML file.
The code below works if I provide the function with only one level.
For example:
result = parse_yaml_file(config_yaml_file, 'section')
but fails if I try something like :
result = parse_yaml_file(yaml_file, 'section.sub-section')
or
result = parse_yaml_file(yaml_file, '[\'section\'][\'sub-section\']')
Python 3 code:
def parse_yaml_file(yml_file, section):
    print('section : ' + section)
    data_dict = {}
    try:
        with open(yml_file) as f:
            data_dict = yaml.load(f)
    except (FileNotFoundError, IOError):
        exit_with_error('Issue finding/opening ' + yml_file)
    if not section:
        return data_dict
    else:
        return data_dict.get(section)

result = parse_yaml_file(yaml_file, 'section.sub-section.property')
print(json.dumps(result, indent=4))
Is it possible to parse only one part/section of the YAML file?
Or just retrieve one sub-section/property from the parsed result?
I know I can get it from the dictionary like :
data_dict['section']['sub-section']['property']
but I want it to be flexible, and not hardcoded since the data to grab is provided as argument to the function.
Thanks a lot for your help.
You could try using a library to help search the parsed YAML file, e.g. dpath:
https://pypi.org/project/dpath/
import yaml
import dpath.util

def parse_yaml(yml_file, section):
    with open(yml_file, 'r') as f:
        data_dict = yaml.safe_load(f)  # safe_load avoids needing an explicit Loader in newer PyYAML
    return dpath.util.search(data_dict, section)

parse_yaml('file.yml', 'section/sub-section')
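If you'd rather not add a dependency, an alternative sketch (mine, not part of the answer above) is to split the dotted path yourself and walk the nested dictionaries, assuming the keys never contain dots:
from functools import reduce
import yaml

def get_nested(yml_file, dotted_path):
    # Returns the value at e.g. 'section.sub-section.property', or None if any key is missing.
    with open(yml_file) as f:
        data = yaml.safe_load(f)
    if not dotted_path:
        return data
    return reduce(lambda d, key: d.get(key) if isinstance(d, dict) else None,
                  dotted_path.split('.'),
                  data)

result = get_nested('file.yml', 'section.sub-section.property')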

What does TypeError: 'OAuth2FlowNoRedirectResult' object is not iterable mean

I am using the Dropbox Python package and I get this error message:
TypeError: 'OAuth2FlowNoRedirectResult' object is not iterable
This is the code so far:
import dropbox
flow = dropbox.DropboxOAuth2FlowNoRedirect(app_key, app_secret)
# Have the user sign in and authorize this token
authorize_url = flow.start()
print ('1. Go to: ' + authorize_url)
print ('2. Click "Allow" (you might have to log in first)')
print ('3. Copy the authorization code.')
code = input("Enter the authorization code here: ").strip()
# This will fail if the user enters an invalid authorization code
access_token, user_id = flow.finish(code)
f = open('Top Secret.jpg', 'rb')
response = client.put_file('Top Secret.jpg', f)
print ("uploaded:", response)
f.close()
f, metadata = client.get_file_and_metadata('/Top Secret.jpg')
out = open('Test.jpg', 'wb')
out.write(f.read())
out.close()
print (metadata)
I excluded the app key and the app secret for obvious reasons.
The issue appears to be in this line:
access_token, user_id = flow.finish(code)
The DropboxOAuth2FlowNoRedirect.finish method there returns a single OAuth2FlowNoRedirectResult object. You're trying to unpack it into a tuple (access_token, user_id), which requires iterating over it, and that fails because the result object isn't iterable.
There's some sample code showing how to use the DropboxOAuth2FlowNoRedirect.finish in the documentation for DropboxOAuth2FlowNoRedirect.
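For illustration, the finishing step would look roughly like this; the attribute names on the result object (access_token, account_id) are my reading of the SDK docs, so verify them against the linked documentation:
oauth_result = flow.finish(code)          # a single OAuth2FlowNoRedirectResult
access_token = oauth_result.access_token  # assumed attribute; see the SDK docs
dbx = dropbox.Dropbox(oauth2_access_token=access_token)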
Modify the following line of code:
access_token, user_id = flow.finish(code)
to
access_token = flow.finish(code)
The flow.finish method does not return a tuple; it returns a single object.

Tweepy Search API Writing to File Error

Noob Python user here:
I've created a file that extracts 10 tweets based on api.search (not the streaming API). I get results on screen, but cannot figure out how to parse the output and save it to CSV. My error is TypeError: expected a character buffer object.
I have tried using .join(str(x)) and get other errors.
My code is:
import tweepy
import time
from tweepy import OAuthHandler
from tweepy import Cursor
#Consumer keys and access tokens, used for Twitter OAuth
consumer_key = ''
consumer_secret = ''
atoken = ''
asecret = ''
# The OAuth process that uses keys and tokens
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(atoken, asecret)
# Creates instance to execute requests to Twitter API
api = tweepy.API(auth)
MarSec = tweepy.Cursor(api.search, q='maritime security').items(10)
for tweet in MarSec:
    print " "
    print tweet.created_at, tweet.text, tweet.lang
    saveFile = open('MarSec.csv', 'a')
    saveFile.write(tweet)
    saveFile.write('\n')
    saveFile.close()
Any help would be appreciated. I've gotten my Streaming API to work, but am having difficulty with this one.
Thanks.
tweet is not a string or a character buffer. It's an object. Replace your line with saveFile.write(tweet.text) and you'll be good to go.
saveFile = open('MarSec.csv', 'a')
for tweet in MarSec:
    print " "
    print tweet.created_at, tweet.text, tweet.lang
    saveFile.write("%s %s %s\n" % (tweet.created_at, tweet.lang, tweet.text))
saveFile.close()
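If you want proper CSV quoting rather than space-separated fields, a small sketch using the standard csv module could look like this (Python 3 syntax, unlike the snippets above):
import csv

with open('MarSec.csv', 'a', newline='', encoding='utf-8') as f:
    writer = csv.writer(f)
    for tweet in MarSec:
        writer.writerow([tweet.created_at, tweet.lang, tweet.text])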
I just thought I'd put up another version for those who might want to save all
the attributes of a tweepy.models.Status object, if you're not yet sure which attributes of each tweet you want to save to file.
import json

search_results = []
for status in tweepy.Cursor(api.search, q=search_text).items(5000):
    search_results.append(status._json)

with open('search_results.json', 'w') as f:
    json.dump(search_results, f)
The first block will store the search results into a list of dictionaries, and the second block will output all the tweets into a json file.
Please beware, this might use up a lot of memory if the size of your search results is very big.
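One way around that (my suggestion, not part of the answer above) is to write one JSON object per line as you go, instead of keeping everything in a list:
import json

with open('search_results.jsonl', 'w') as f:
    for status in tweepy.Cursor(api.search, q=search_text).items(5000):
        f.write(json.dumps(status._json) + '\n')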
This is Twitter's classic error code when something is wrong with an image you are uploading.
Try to find the images you are trying to upload and check their format.
The only thing I did was delete the images that my Windows media player can't read, and that's all! The script ran perfectly.
