Create Google Shortened URLs, Update My CSV File - python-3.x

I've got a CSV of ~3,000 URLs that I'm trying to create Google shortened links for. The idea is that the CSV has a list of links and I want my code to output the shortened links in the column next to the original URLs.
I've been trying to modify the code found on this site here, but I'm not skilled enough to get it to work.
Here's my code (I would not normally post an API key, but the original person who asked this already posted it publicly on this site):
import json
import httplib2
import pandas as pd

df = pd.read_csv('Links_Test.csv')

def shorternUrl(my_URL):
    API_KEY = "AIzaSyCvhcU63u5OTnUsdYaCFtDkcutNm6lIEpw"
    apiUrl = 'https://www.googleapis.com/urlshortener/v1/url'
    longUrl = my_URL
    headers = {"Content-type": "application/json"}
    data = {"longUrl": longUrl}
    h = httplib2.Http('.cache')
    headers, response = h.request(apiUrl, "POST", json.dumps(data), headers)
    return response

for url in df['URL']:
    x = shorternUrl(url)
    # Then I want it to write x into the column next to the original URL
But I only get errors at this point, before I even started figuring out how to write the new URLs to the CSV file.
Here's some sample data:
URL
www.apple.com
www.google.com
www.microsoft.com
www.linux.org
Thank you for your help,
Me

I think the issue is that you did not include the API key in the request. By the way, the certifi package allows you to secure the connection. You can get it with pip install certifi or pip install urllib3[secure].
Here I created my own API key, so you will want to replace it with yours.
from urllib3 import PoolManager
import json
import certifi

sampleURL = 'http://www.apple.com'
APIkey = 'AIzaSyD8F41CL3nJBpEf0avqdQELKO2n962VXpA'
APIurl = 'https://www.googleapis.com/urlshortener/v1/url?key=' + APIkey
http = PoolManager(cert_reqs='CERT_REQUIRED', ca_certs=certifi.where())

def shortenURL(url):
    data = {'key': APIkey, 'longUrl': url}
    response = http.request("POST", APIurl, body=json.dumps(data),
                            headers={'Content-Type': 'application/json'}).data.decode('utf-8')
    r = json.loads(response)
    return r['id']
The decoding part converts the response into a string so that we can parse it as JSON and retrieve the data.
From there on, you can store the data into another column and so on.
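For example, a minimal sketch of that last step (the 'URL' column name comes from the sample data above; 'Short_URL' and the output filename are my own placeholders):
import pandas as pd

df = pd.read_csv('Links_Test.csv')
# Apply the shortenURL() function defined above to every row and keep the result in a new column
df['Short_URL'] = df['URL'].apply(shortenURL)
df.to_csv('Links_Test_short.csv', index=False)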
For the sampleURL, I got back https://goo.gl/nujb from the function.

I found a solution here:
https://pypi.python.org/pypi/pyshorteners
Example adapted from the linked page (print updated for Python 3):
from pyshorteners import Shortener

url = 'http://www.google.com'
api_key = 'YOUR_API_KEY'
shortener = Shortener('Google', api_key=api_key)
print("My short url is {}".format(shortener.short(url)))

Related

Coinbase Advanced trade API connectivity using python3 is not working

I tried the below code based on the Coinbase documentation (coinbase doc).
The documentation is given for Python 2, but I have modified it for Python 3 because I am trying to connect to the Advanced Trade API in Coinbase (Coinbase Advanced trade doc).
import datetime
import time
import hmac
import hashlib
import http.client

secret_key = '***'  # hidden
api_key = '***'  # hidden
date_time = datetime.datetime.utcnow()
timestamp = int(time.mktime(date_time.timetuple()))  # timestamp should be from UTC time and no decimal allowed
method = "GET"  # method can be GET or POST; only capitals are allowed
request_path = 'api/v3/brokerage/accounts'
body = ''
message = str(timestamp) + method + request_path + body
signature = hmac.new(secret_key.encode('utf-8'), message.encode('utf-8'), hashlib.sha256).hexdigest()
headers = {
    'accept': 'application/json',
    'CB-ACCESS-KEY': api_key,
    'CB-ACCESS-TIMESTAMP': timestamp,
    'CB-ACCESS-SIGN': signature
}
conn = http.client.HTTPSConnection("api.coinbase.com")
payload = ''
conn.request("GET", "/api/v3/brokerage/accounts", payload, headers)
res = conn.getresponse()
data = res.read()
print(data.decode("utf-8"))
When executing this code I was expecting account details, but I am getting an unauthorized error with code 401 back from the API.
I was able to connect to the Coinbase Pro API earlier and everything was fine until the merger of Coinbase and Coinbase Pro. Now I am unable to figure out how to connect to the Advanced Trade feature in Coinbase.
OK, so I made two updates to get this running.
Insert a / before api in request_path = '/api/v3/brokerage/accounts' :)
Change the way you generate your timestamp to timestamp = str(int(time.time())).
I thought updating your timestamp to a string would fix it, but it didn't, so I reverted to the way I was generating it. Maybe someone can tell us why one works and one doesn't. I'll certainly update this post if I figure it out.
Here's the full working code. I kept everything else the same but replaced your comments with mine.
import datetime
import time
import hmac
import hashlib
import http.client

secret_key = '***'
api_key = '***'
# This is the first part where you were getting time
#date_time = datetime.datetime.utcnow()
# And this is the second part where you format it as an integer
#timestamp = int(time.mktime(date_time.timetuple()))
# I cast your timestamp as a string, but it still doesn't work, and I'm having a hard time figuring out why.
#timestamp = str(int(time.mktime(date_time.timetuple())))
# So I reverted to the way that I'm getting the timestamp, and it works
timestamp = str(int(time.time()))
method = "GET"
request_path = '/api/v3/brokerage/accounts'  # Added a forward slash before 'api'
body = ''
message = str(timestamp) + method + request_path + body
signature = hmac.new(secret_key.encode('utf-8'), message.encode('utf-8'), hashlib.sha256).hexdigest()
headers = {
    'accept': 'application/json',
    'CB-ACCESS-KEY': api_key,
    'CB-ACCESS-TIMESTAMP': timestamp,
    'CB-ACCESS-SIGN': signature
}
conn = http.client.HTTPSConnection("api.coinbase.com")
payload = ''
conn.request("GET", "/api/v3/brokerage/accounts", payload, headers)
# You were probably troubleshooting, but the above line is redundant and can be simplified to:
# conn.request("GET", request_path, body, headers)
res = conn.getresponse()
data = res.read()
print(data.decode("utf-8"))
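As for why the two timestamps behave differently, here is my best guess (an assumption on my part, not something confirmed above): time.mktime() interprets its struct_time argument as local time, so feeding it datetime.utcnow().timetuple() shifts the result by your UTC offset, which can push CB-ACCESS-TIMESTAMP outside the server's accepted window unless your machine's clock already runs on UTC. A quick comparison:
import datetime
import time

utc_tuple_based = int(time.mktime(datetime.datetime.utcnow().timetuple()))  # UTC wall-clock time treated as local time
epoch_based = int(time.time())  # true Unix epoch seconds
print(epoch_based - utc_tuple_based)  # roughly your UTC offset in seconds; 0 only if the machine runs on UTC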
I see someone open-sourced a python library for the Advanced Trade endpoints: https://github.com/KmiQ/coinbase-advanced-python

Urllib3 POST request with attachment in python

I'm trying to make a POST request that adds an attachment using urllib3 in Python, without success. I have confirmed the API itself works in Postman, but I can't work out how to translate the request to Python. I appreciate I'm mixing object types; I just don't know how to avoid it.
Python code:
import urllib3
import json

api_key = "secret_key"
header = {"X-API-KEY": api_key, "ACCEPT": "application/json", "content-type": "multipart/form-data"}
url = "https://secret_url.com/api/"
http = urllib3.PoolManager()

with open("invoice.html", 'rb') as f:
    file_data = f.read()

payload = {
    "attchment": {
        "file": file_data
    }
}
payload = json.dumps(payload)
r = http.request('post', url, headers=header, fields=payload)
print(r.status)
print(r.data)
Postman, which works and properly sends the file name through as well (I'm guessing it splits the bytes and the filename up?).
Edit: I've also tried the requests library, as I'm more familiar with it (but I can't use it since the script will run in AWS Lambda). Removing the attachment element from the dict allows it to run, but the API endpoint gives a 401, presumably because it's missing the "attachment" part of the data structure as per Postman below; when I put it back in, I get runtime errors.
r = requests.post(url, headers=header, files={"file": open("invoice.html", 'rb')})
For anyone who stumbles upon this from Dr. Google, a few points:
I was completely misinterpreting the structure of the element. It's actually a string key, "attachment[file]", not a dict-like object.
Postman can output Python code in urllib/requests syntax, albeit not 100% what I was after. Note: the Chrome version (deprecated) outputs gibberish code that only half works, so the client version should be used. A short bit of work below shows it working as expected:
import urllib3

# url and header are the same as defined in the question above
http = urllib3.PoolManager()
with open("invoice.html", "rb") as f:
    file = f.read()

payload = {
    'attachment[file]': ('invoice.html', file, 'text/html')
}
r = http.request('post', url, headers=header, fields=payload)
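For context (this is my reading of urllib3's multipart handling, not something stated above): the value tuple follows urllib3's (filename, file_data, mime_type) convention, and the fields dict is run through encode_multipart_formdata to build the request body. You can call that encoder directly if you want to inspect what gets sent:
from urllib3.filepost import encode_multipart_formdata

# Build the same multipart body by hand to see the boundary and part headers
body, content_type = encode_multipart_formdata({'attachment[file]': ('invoice.html', file, 'text/html')})
print(content_type)  # multipart/form-data; boundary=...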

MissingSchema - Invalid URL Error with Requests module

I'm trying to get data from the eBay API using GetItem, but requests isn't reading or getting the URL properly. Any ideas? I'm getting this error:
requests.exceptions.MissingSchema: Invalid URL '<urllib.request.Request object at 0x00E86210>': No schema supplied. Perhaps you meant http://<urllib.request.Request object at 0x00E86210>?
I swear I had this code working before but now it's not, so I'm not sure why.
My code:
from urllib import request as urllib
import requests

url = 'https://api.ebay.com/ws/api.dll'
data = '''<?xml version="1.0" encoding="utf-8"?>
<GetItemRequest xmlns="urn:ebay:apis:eBLBaseComponents">
<RequesterCredentials>
<eBayAuthToken>my-auth-token</eBayAuthToken>
</RequesterCredentials>
<ItemID>any-item-id</ItemID>
</GetItemRequest>'''
headers = {
    'Content-Type': 'text/xml',
    'X-EBAY-API-COMPATIBILITY-LEVEL': 719,
    'X-EBAY-API-DEV-NAME': 'dev-name',
    'X-EBAY-API-APP-NAME': 'app-name',
    'X-EBAY-API-CERT-NAME': 'cert-name',
    'X-EBAY-API-SITEID': 3,
    'X-EBAY-API-CALL-NAME': 'GetItem',
}
req = urllib.Request(url, data, headers)
resp = requests.get(req)
content = resp.read()
print(content)
Thank you in advance. Any good reading material for urllib would be great, too.
You are mixing the urllib and the requests libraries. They are different libraries that can both make HTTP requests in Python. I'd suggest you use only the requests library.
Remove the line req = urllib.Request(url, data, headers) and replace the resp = ... line with:
r = requests.post(url, data=data, headers=headers)
Print the response body like this:
print(r.text)
Check out the Requests Quickstart here for more examples: https://2.python-requests.org//en/master/user/quickstart/
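Putting the pieces together, the request could look roughly like this (a sketch based on the answer above; the credential placeholders come from the question, and I've also cast the numeric header values to strings since requests expects string header values):
import requests

url = 'https://api.ebay.com/ws/api.dll'
# XML payload from the question, with placeholder credentials
data = '''<?xml version="1.0" encoding="utf-8"?>
<GetItemRequest xmlns="urn:ebay:apis:eBLBaseComponents">
<RequesterCredentials>
<eBayAuthToken>my-auth-token</eBayAuthToken>
</RequesterCredentials>
<ItemID>any-item-id</ItemID>
</GetItemRequest>'''
headers = {
    'Content-Type': 'text/xml',
    'X-EBAY-API-COMPATIBILITY-LEVEL': '719',
    'X-EBAY-API-DEV-NAME': 'dev-name',
    'X-EBAY-API-APP-NAME': 'app-name',
    'X-EBAY-API-CERT-NAME': 'cert-name',
    'X-EBAY-API-SITEID': '3',
    'X-EBAY-API-CALL-NAME': 'GetItem',
}
r = requests.post(url, data=data, headers=headers)
print(r.text)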

Bad Request using python_HTTP:400

I'm using Python 3 to scrape the Facebook page of "nytimes".
I first created my own app so I could get my app_id and app_secret.
When I tried to ping the NYT's Facebook page to verify that the access token works and the page_id is valid, I got this error:
HTTPError: HTTP Error 400: Bad Request
My code is below.
First, connect to my API via the id and secret code:
import urllib
import datetime
import json
import time
import urllib.request

id = "111111111111"
secret = "123123123123123113"
token = id + "|" + secret
Second, to ping the page I did:
page_id = 'nytimes'
Then
def testFacebookPageData(page_id, token):
    # construct the URL string
    base = "https://graph.facebook.com/v2.4"
    node = "/" + page_id
    parameters = "/?token=%s" % token
    url = base + node + parameters
    # retrieve data
    req = urllib.request.Request(url)
    response = urllib.request.urlopen(req)
    data = json.loads(response.read())
    print(json.dumps(data, indent=4, sort_keys=True))

testFacebookPageData(page_id, token)
I hope that I'm clear. I need your help.
Thank you.
According to Facebook's docs for the Graph API, the parameter should be named access_token, not token.
parameters = "/?access_token=%s" % token
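As an aside (my own addition, not part of the original answer), inside testFacebookPageData you can also let urllib build the query string for you, which keeps the token properly URL-encoded:
import urllib.parse

# base and node are the same variables built in the function above
parameters = "/?" + urllib.parse.urlencode({'access_token': token})
url = base + node + parameters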

Trying to use Yelp API for something other than business_id

I'm a student and only a few weeks into Python, so bear with me. I found a good answer in this link for initially working with Yelp's v3 API to at least get it to successfully make a request by business_id: How to use Yelp's new API
However, I can't seem to figure out how to search by anything other than reviews using the code someone provided there (copy-pasted here, as the code below does work, just not for what I need it for):
import requests
import yelp
from pprint import pprint
from config import api_key

API_KEY = api_key
API_HOST = 'https://api.yelp.com'
BUSINESS_PATH = '/v3/businesses/'

def get_business(business_id):
    business_path = BUSINESS_PATH + business_id
    url = API_HOST + business_path + '/reviews'
    headers = {'Authorization': f"Bearer {API_KEY}"}
    response = requests.get(url, headers=headers)
    return response.json()

results = get_business('the-white-horse-pub-kansas-city')
pprint(results)
Again, the code does work if you're only looking up one place by name. But when I try something other than "/reviews" in the URL, such as "search" or "term" or anything else from the Yelp Fusion API documentation (https://yelp.com/developers/documentation/v3/business_search), I can't get anything to pull. My intent is to pull a bunch of breweries in the local area and eventually put them in a dataframe, but I can't figure out what parameters or code to use, other than 'reviews'.
I believe I found an answer where you can at least look things up by category, if you add the first line to the constants above and the rest as a separate function:
BUSINESS_PATH_CAT = '/v3/categories/'

def get_all_categories(alias):
    url = API_HOST + BUSINESS_PATH_CAT + alias
    headers = {'Authorization': f"Bearer {API_KEY}"}
    response = requests.get(url, headers=headers)
    return response.json()

results = get_all_categories('brewpubs')
pprint(results)
Aside from this, the authentication guide for Yelp Fusion discusses using Postman. If you're a fellow noob, setting it up can be a lifesaver, as it let me actually see the HTTP wording and how search terms are split up and added, compared to the API documentation.
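For the original goal of pulling nearby breweries into a dataframe, a sketch against the business search endpoint from the linked Fusion docs might look like this (API_HOST and API_KEY come from the code above; the location string, the 'breweries' category alias, and the DataFrame columns are just example values):
import pandas as pd
import requests

SEARCH_PATH = '/v3/businesses/search'

def search_businesses(term, location, categories=None, limit=20):
    # The search endpoint takes query parameters instead of a business-id path segment
    url = API_HOST + SEARCH_PATH
    headers = {'Authorization': f"Bearer {API_KEY}"}
    params = {'term': term, 'location': location, 'limit': limit}
    if categories:
        params['categories'] = categories
    response = requests.get(url, headers=headers, params=params)
    return response.json()

results = search_businesses('brewery', 'Kansas City, MO', categories='breweries')
df = pd.DataFrame(results.get('businesses', []))
print(df[['name', 'rating', 'review_count']].head())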
