OAuth request_token for Etsy problem with URL construction - python-3.x

I am trying to create an app to access the Etsy API using Python 3. I am testing my very basic code in IDLE 3 and need to get an OAuth token. I have looked at the Etsy documentation here, but everything is described for PHP.
Below is my code from IDLE 3 (I have changed my keys):
>>> payload = {'api_key': 'pvhkg9y4e7', 'shared_secret': 'ib5msimmo', 'scope': 'transactions_r,listings_w,billing_r,treasury_r'}
>>> url = "https://openapi.etsy.com/v2/oauth/request_token"
>>> r = requests.get(url, params=payload)
>>> print(r.url)
https://openapi.etsy.com/v2/oauth/request_token?api_key=pvhkg9y4e7&scope=transactions_r%2Clistings_w%2Cbilling_r%2Ctreasury_r&shared_secret=ib5msimmo
>>> r.text
'oauth_problem=parameter_absent&oauth_parameters_absent=oauth_consumer_key%26oauth_signature%26oauth_signature_method%26oauth_nonce%26oauth_timestamp'
I need help constructing the correct request. I think I need to change my payload keys to oauth_consumer_key and oauth_signature, but I do not understand how to include oauth_signature_method (I am using requests.get) or oauth_timestamp, and I don't know what oauth_nonce is.
I intend to incorporate the whole thing into a Flask app, so I have looked at flask_oauth here, but I am not sure whether it will give me the timestamp and nonce.
All advice greatly appreciated. I am following the Flask tutorial by Miguel Grinberg; I need one like that for my Etsy app! Any suggestions?
I also tried requests_oauthlib but got this:
>>> from requests_oauthlib import OAuth1
Traceback (most recent call last):
  File "<pyshell#15>", line 1, in <module>
    from requests_oauthlib import OAuth1
ImportError: No module named 'requests_oauthlib'
Regards
Paul

I wrote to the Etsy developers, who came back with some PHP code. I know very little Python but no PHP at all, so I went back to searching Google, came back here, and used the following code:
import requests
from requests_oauthlib import OAuth1
request_token_url = 'https://openapi.etsy.com/v2/oauth/request_token?scope=transactions_r&listings_w&billing_r&treasury_r'
consumer_key = 'api_key'
consumer_secret = 'secret_key'
oauth = OAuth1(consumer_key, client_secret=consumer_secret)
r = requests.post(url=request_token_url, auth=oauth)
r.content
login_url=https%6%3fthe%26address%26you%2fwant%34goodluck
and it worked!!! I am so happy!
If you get any other noobs like me, perhaps this code can help them.
In the terminal I created a virtualenv and pip installed requests and requests_oauthlib, then in the Python shell executed the above script.
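In case it helps, here is a rough sketch of how the rest of the flow might continue from the script above (the access_token URL and the response field names are my assumptions from the standard OAuth 1 dance in requests_oauthlib, not something I have verified against Etsy):
from urllib.parse import parse_qs
import requests
from requests_oauthlib import OAuth1
# Pull the temporary credentials out of the request_token response.
creds = parse_qs(r.content.decode('utf-8'))
login_url = creds['login_url'][0]                       # send the user here to approve the app
resource_owner_key = creds['oauth_token'][0]            # assumed field name
resource_owner_secret = creds['oauth_token_secret'][0]  # assumed field name
# After the user approves, Etsy supplies an oauth_verifier; exchange
# everything for the permanent access token.
verifier = 'PASTE_VERIFIER_HERE'
access_token_url = 'https://openapi.etsy.com/v2/oauth/access_token'  # assumed URL
oauth = OAuth1(consumer_key,
               client_secret=consumer_secret,
               resource_owner_key=resource_owner_key,
               resource_owner_secret=resource_owner_secret,
               verifier=verifier)
r2 = requests.post(url=access_token_url, auth=oauth)
tokens = parse_qs(r2.content.decode('utf-8'))
access_token = tokens['oauth_token'][0]
access_token_secret = tokens['oauth_token_secret'][0]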
regards paul

Related

Integrating python-decouple with PRAW?

I've been trying to see if I can use python-decouple to keep my bot credentials in a separate .env file.
The auth method is basically straight from the PRAW docs:
reddit = praw.Reddit(
    client_id=config('CLIENT_ID'),
    client_secret=config('CLIENT_SECRET'),
    password=config('PASSWORD'),
    user_agent=config('USER_AGENT'),
    username=config('USERNAME')
)
However, whenever I try it, it returns a 403 auth error. I worked my way back, replacing the decouple config calls with the actual credential strings, but it still doesn't go through, and the errors seem to vary depending on what I take out and when.
Is this a problem with how decouple functions?
Thanks.
Why not use a praw.ini file? This is documented here in the PRAW documentation. It's a format for storing Reddit credentials in a file separate from your code. For example, a praw.ini file may look like:
[bot1]
client_id=Y4PJOclpDQy3xZ
client_secret=UkGLTe6oqsMk5nHCJTHLrwgvHpr
password=pni9ubeht4wd50gk
username=fakebot1
[bot2]
client_id=6abrJJdcIqbclb
client_secret=Kcn6Bj8CClyu4FjVO77MYlTynfj
password=mi1ky2qzpiq8s59j
username=fakebot2
You then use specific credentials in your code like so:
import praw
reddit = praw.Reddit('bot2', user_agent='myBot v0.1')
print('Logged in as', reddit.user.me())
I think this is the best solution for working with PRAW credentials.
However, if you really want to do it with python-decouple, here's a working example:
Contents of file .env:
username=k8IA
password=REDACTED
client_id=REDACTED
client_secret=REDACTED
Contents of file connect.py:
import praw
from decouple import config
reddit = praw.Reddit(username=config('username'),
                     password=config('password'),
                     client_id=config('client_id'),
                     client_secret=config('client_secret'),
                     user_agent='myBot v0.1')
print('Logged in as', reddit.user.me())
Output when running python3 connect.py:
Logged in as k8IA
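One more python-decouple detail that often matters here: config() reads from os.environ first and then the .env/settings.ini file, and it supports defaults and casting. A small sketch (the variable names DEBUG and TIMEOUT are just hypothetical examples):
from decouple import config
# Values come from os.environ first, then the .env file; cast turns the
# raw string into the type you actually want.
debug = config('DEBUG', default=False, cast=bool)
timeout = config('TIMEOUT', default=10, cast=int)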

Google Sheets API Won't Authorize - Python

Hi StackOverflow Community,
Thanks in advance for your help.
I have set up a Python script to make some requests to the Google Sheets API. The code for my script is below:
from googleapiclient import discovery
credentials = 'https://www.googleapis.com/auth/spreadsheets'  # Should give read/write scope to my spreadsheets
service = discovery.build('sheets', 'v4', credentials=credentials)
spreadsheet_id = '1z2QzPf9Kc02roOwTJUYab4k2dwYu1n-nIbJ5yzWF3YE'  # COPY
ranges = ['A1:C10']
include_grid_data = False
request = service.spreadsheets().get(spreadsheetId=spreadsheet_id,
                                     ranges=ranges, includeGridData=include_grid_data)
response = request.execute()
The problem is that when I run this I get the following error:
File "C:\Users\evank\AppData\Local\Programs\Python\Python36-32\Lib\site-packages\googleapiclient\_auth.py", line 92, in authorized_http
return credentials.authorize(build_http())
builtins.AttributeError: 'str' object has no attribute 'authorize'
The full code of this file listed in the error is located here: https://github.com/google/google-api-python-client/blob/master/googleapiclient/_auth.py
I'm working on this for an assignment and can't figure out why this error is occurring. Please help!
Thanks,
evank28
I had problems with this.
This link should help: https://medium.com/@rqaiserr/how-to-connect-to-google-sheets-with-python-83d0b96eeea6
Make sure to also read this: https://www.twilio.com/blog/2017/02/an-easy-way-to-read-and-write-to-a-google-spreadsheet-in-python.html
Just note that you need to add this line of code to your project after following the instructions above:
from oauth2client.service_account import ServiceAccountCredentials
and also run this in terminal:
pip install oauth2client
The code will look something like this:
from oauth2client.service_account import ServiceAccountCredentials
import gspread
scope = ['https://spreadsheets.google.com/feeds']
# Authorise with the service-account key file downloaded from the Google API console
creds = ServiceAccountCredentials.from_json_keyfile_name('client_secret.json', scope)
client = gspread.authorize(creds)
# Open the spreadsheet by its URL and grab the first worksheet
sheet = client.open_by_url("spreadsheetlink.com")
worksheet = sheet.get_worksheet(0)
Also make sure that you share the client_email with editing access (you will see what I mean after reading the articles).
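From there, reading and writing is simple; continuing from the worksheet object above, a rough example based on the gspread docs (the cell positions and values are just placeholders):
# Every row as a dict keyed by the header row.
records = worksheet.get_all_records()
print(records)
# Read and update a single cell (row 1, column 1).
print(worksheet.cell(1, 1).value)
worksheet.update_cell(1, 1, 'Hello from Python')
# Append a new row at the bottom of the sheet.
worksheet.append_row(['2018-01-01', 42, 'done'])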
If you have any questions, feel free to reply to me.

Python: Attribute Error: 'module' object has no attribute 'request'

I am extremely new to Python and am practicing loading a dataset from a URL.
When running the following code:
In [1]: myUrl = "http://aima.cs.berkeley.edu/data/iris.csv"
In [2]: urlRequest = urllib.request.Request(myUrl)
I get this error:
File "", line 1, in
urlRequest = urllib.request.Request(myUrl)
AttributeError: 'module' object has no attribute 'request'
1) I tried researching this error and tried import urllib3 instead; it imported fine, but when attempting the request I get the same error.
2) I tried to get help with help("urllib3") in Python 3.6.0 and got:
No Python documentation found for 'urllib3'. Use help() to get the interactive help utility. Use help(str) for help on the str class.
3) I searched Stack Overflow, saw a similar question, and tried the suggestions, but was not able to move past that line of code.
Am I doing something wrong here?
Thanks in advance for your time
From what I see, the problem is that import urllib does not automatically import the urllib.request submodule, so urllib.request is not available unless you import it explicitly.
Try:
from urllib.request import Request
myUrl = "http://aima.cs.berkeley.edu/data/iris.csv"
urlRequest = Request(myUrl)
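To actually fetch the data you still need urlopen, which accepts either the URL string or the Request object; a minimal sketch:
from urllib.request import Request, urlopen

myUrl = "http://aima.cs.berkeley.edu/data/iris.csv"
urlRequest = Request(myUrl)

# urlopen returns a file-like object; read() gives bytes, so decode to text.
with urlopen(urlRequest) as response:
    data = response.read().decode('utf-8')

print(data.splitlines()[:5])  # first few rows of the CSV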

Twython basic, help please, can it get easier than this?

This script is giving me a 500 error, any ideas?
I am taking the script from a Python samples page and using the interpreter path given to me by my hosting company (I know the path works because I have another script that does work).
The file has 755 permissions, as does its directory:
#!/home3/master/bin/python
import sys
sys.path.insert(1,'/home3/master/lib/python2.6/site-packages')
from twython import Twython
twitter = Twython()
trends = twitter.getCurrentTrends()
print trends
There are two problems with this code. The first is that you have not included any OAuth data, so the Twitter API will reject whatever you send. The second is that there is no getCurrentTrends() attribute in Twython. Did you mean get_available_trends or get_closest_trends?
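A rough sketch of what the authenticated version might look like (the four keys come from your app's page on apps.twitter.com and are placeholders here; I am assuming get_available_trends is the call you were after):
from twython import Twython

# Keys and tokens from your Twitter app settings (placeholders).
APP_KEY = 'YOUR_APP_KEY'
APP_SECRET = 'YOUR_APP_SECRET'
OAUTH_TOKEN = 'YOUR_ACCESS_TOKEN'
OAUTH_TOKEN_SECRET = 'YOUR_ACCESS_TOKEN_SECRET'

twitter = Twython(APP_KEY, APP_SECRET, OAUTH_TOKEN, OAUTH_TOKEN_SECRET)

# Locations that currently have trend data available.
trends = twitter.get_available_trends()
print(trends)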

Robobrowser and local files

I am a beginner using Python 3.6.4 and RoboBrowser 0.5.3.
I have saved an HTML web page and am trying to pick up the information in the page.
Most likely incorrectly, I took inspiration from a similar question about BeautifulSoup. The BeautifulSoup solution works for me (BeautifulSoup 4.6.0).
In contrast, the following, based on RoboBrowser, does not seem to work:
from robobrowser import RoboBrowser
br = RoboBrowser(parser='html.parser')
br.open(open("my_file.html"))
with error:
MissingSchema: Invalid URL "<_io.TextIOWrapper name='my_file.html' mode='r' encoding='UTF-8'>": No schema supplied.
Perhaps you meant http://<_io.TextIOWrapper name='my_file.html' mode='r' encoding='UTF-8'>?
I understand that the code expected an "http"-based URL. I tried prepending "file://" to the absolute path of my file, to no avail.
Is there any way to tell the library that it is a local file, or is such functionality perhaps not part of RoboBrowser?
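For reference, the BeautifulSoup approach mentioned above, which does work on a saved page, looks roughly like this:
from bs4 import BeautifulSoup

# Parse the saved page straight from disk.
with open("my_file.html", encoding="utf-8") as f:
    soup = BeautifulSoup(f, "html.parser")

print(soup.title)              # the page's <title> tag
print(soup.find_all("a")[:5])  # first few links in the page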
