flickrAPI upload photos - python-3.x

I'm trying to use the Python Flickr API to upload photos to my Flickr account. I already have the API key and secret and have used them to get information about my albums and photos, but I get errors when trying to upload new photos. Here is my code:
import flickrapi
api_key = u'xxxxxxxxxxxxxxxxxxxxxxxx'
api_secret = u'xxxxxxxxxxxxxxxxxxxx'
flickr = flickrapi.FlickrAPI(api_key, api_secret)
filename = 'd:/downloads/_D4_6263-Enhanced.png'
title = 'Fach Halcones'
description = 'Posting image using API'
tags = 'fidae'+','+'aviation'+','+'extra'+','+'air shows'
flickr.upload(filename, title, description, tags)
When I run the script, I get the following error:
File "uploadPhotos.py", line 15, in <module>
    flickr.upload(filename, title, description, tags)
TypeError: upload() takes from 2 to 4 positional arguments but 5 were given
Looking at the Flickr API documentation, it seems to accept up to five arguments (filename, fileobj, title, description, tags), and I'm passing only four, since fileobj is optional.
I have googled for examples, but I was unable to find anything that does the trick, so any help would be awesome.
Regards,
Marcio

I found the solution, and I'm sharing it here. There were two issues with my code. First, the arguments must be passed as keyword arguments (kwargs); second, the tags must be separated by spaces, not commas.
Here is the final version:
import flickrapi
api_key = u'xxxxxxxxxxxxxxxxxxxxxxxx'
api_secret = u'xxxxxxxxxxxxxxxxxxxx'
flickr = flickrapi.FlickrAPI(api_key, api_secret)
params = {}
params['filename'] = 'd:/downloads/_D4_6263-Enhanced.png'
params['title'] = 'Fach Halcones'
params['description'] = 'Posting image using API'
params['tags'] = '''fidae aviation extra "air shows" '''
flickr.upload(**params)
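Equivalently, the keyword arguments can be passed inline instead of being unpacked from a dict:

flickr.upload(filename='d:/downloads/_D4_6263-Enhanced.png',
              title='Fach Halcones',
              description='Posting image using API',
              tags='fidae aviation extra "air shows"')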
That's it...

Related

Youtube Data API: getting total comments, likes, dislikes

I have a list of YouTube channels that I poll daily from the API to gather stats: total comments, likes, and dislikes (all time, across all videos).
I have implemented the code below. It works, but it loops through every single video one at a time, hitting the API on each iteration.
Is there a way to make one API call with several video IDs? Or is there a better way to get these stats?
# find stats for all channel videos - how will this scale?
def video_stats(row):
    videoid = row['video_id']
    query = yt.get_video_metadata(videoid)
    vids = pd.DataFrame(query, index=[0])
    df['views'] = vids['video_view_count'].sum()
    df['comments'] = vids['video_comment_count'].sum()
    df['likes'] = vids['video_like_count'].sum()
    df['dislikes'] = vids['video_dislike_count'].sum()
    return 'no'

df['stats'] = df.apply(video_stats, axis=1)
channel['views'] = df['views'].sum()
channel['comments'] = df['comments'].sum()
channel['likes'] = df['likes'].sum()
channel['dislikes'] = df['dislikes'].sum()
According to the docs, you may batch the IDs of several different videos into a single Videos.list API endpoint call:
id: string
The id parameter specifies a comma-separated list of the YouTube video ID(s) for the resource(s) that are being retrieved. In a video resource, the id property specifies the video's ID.
However, the code you have shown is too terse to figure out a way of adapting it to that kind of batch endpoint call.
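For illustration, here is a minimal sketch of such a batched call using the official google-api-python-client (YOUR_API_KEY and the video IDs are placeholders; the asker's yt wrapper is a different library and may expose this differently):

from googleapiclient.discovery import build

youtube = build('youtube', 'v3', developerKey='YOUR_API_KEY')

video_ids = ['VIDEO_ID_1', 'VIDEO_ID_2', 'VIDEO_ID_3']  # up to 50 IDs per call
response = youtube.videos().list(
    part='statistics',
    id=','.join(video_ids)  # comma-separated list, as the docs describe
).execute()

for item in response['items']:
    stats = item['statistics']
    print(item['id'], stats.get('viewCount'), stats.get('likeCount'),
          stats.get('commentCount'))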

Premium search API, use of retweets_of and from function

I am using TwitterAPI in Python 3 for a premium search to find archived tweets that were retweeted by user1 from user2 with specific keywords. Following some suggestions, I used https://developer.twitter.com/en/docs/tweets/rules-and-filtering/overview/operators-by-product and https://github.com/geduldig/TwitterAPI to write this code, but when I run it I get no output and no error message.
The code works fine when I am not using the retweets_of and from operators, but these are the rules I need to get my data.
I know my code shows a premium Sandbox search; I will upgrade to premium Full Archive search once I have the right code.
from TwitterAPI import TwitterAPI

# Keys and tokens from Twitter Developer
consumer_key = "xxxxxxxxxxxxx"
consumer_secret = "xxxxxxxxxxxxxxxxxxx"
access_token = "xxxxxxxxxxxxxxxxxxx"
access_token_secret = "xxxxxxxxxxxxxxxxx"

PRODUCT = '30day'
LABEL = 'MyLABELname'

api = TwitterAPI(consumer_key, consumer_secret, access_token, access_token_secret)
r = api.request('tweets/search/%s/:%s' % (PRODUCT, LABEL),
                {'query': 'retweets_of:user.Tesla from:user.elonmusk Supercharger battery'})
for item in r:
    print(item['text'] if 'text' in item else item)
Does someone know what the problem is with my code, or is there another way to use the retweets_of and from operators in a premium search? Is it also possible to add a count operator to my code so it returns numbers as output rather than the full text of every tweet?
You should omit "user." in your query.
Also, by specifying "Supercharger battery" (which is perfectly fine) you require both words in the search results. If you require only either word to be present, use "Supercharger OR battery" instead.
Finally, to get a larger number of results, use the maxResults parameter (10 to 100).
Here is your example with all of the above:
r = api.request('tweets/search/%s/:%s' % (PRODUCT, LABEL),
                {'query': 'retweets_of:Tesla from:elonmusk Supercharger OR battery',
                 'maxResults': 100})
Twitter's Premium Search doc may be helpful: https://developer.twitter.com/en/docs/tweets/search/api-reference/premium-search.html
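As for getting numbers rather than the tweets themselves: the paid premium tiers (not the sandbox) also expose a counts endpoint. A sketch under that assumption, reusing the api client above:

# counts endpoint -- available on paid premium tiers, not on the sandbox
r = api.request('tweets/search/%s/:%s/counts' % (PRODUCT, LABEL),
                {'query': 'retweets_of:Tesla from:elonmusk Supercharger OR battery',
                 'bucket': 'day'})  # bucket can be 'day', 'hour', or 'minute'
for item in r:
    print(item)  # each item is a {timePeriod, count} pair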

How to download bulk amount of images from google or any website

I need to do a machine learning project, and I want a lot of images for training. I searched for a way to do this, but failed.
Can anyone help me solve this? Thanks in advance.
I used Google Images to download images using Selenium. It is just a basic approach.
from selenium import webdriver
import time
import urllib.request
import os
from selenium.webdriver.common.keys import Keys

browser = webdriver.Chrome("path\\to\\the\\webdriverFile")
browser.get("https://www.google.com")
search = browser.find_element_by_name('q')
search.send_keys(key_words, Keys.ENTER)  # set key_words to the search terms for the images you want

elem = browser.find_element_by_link_text('Images')
elem.get_attribute('href')
elem.click()

# scroll down to load more results
value = 0
for i in range(20):
    browser.execute_script("scrollBy(" + str(value) + ",+1000);")
    value += 1000
    time.sleep(3)

elem1 = browser.find_element_by_id('islmp')
sub = elem1.find_elements_by_tag_name("img")

try:
    os.mkdir('downloads')
except FileExistsError:
    pass

count = 0
for i in sub:
    src = i.get_attribute('src')
    try:
        if src is not None:
            src = str(src)
            print(src)
            count += 1
            urllib.request.urlretrieve(src,
                os.path.join('downloads', 'image' + str(count) + '.jpg'))
        else:
            raise TypeError
    except TypeError:
        print('fail')
    if count == required_images_number:  # set required_images_number as needed
        break
My tip to you: use an image search API. This is my favourite: the Bing Image Search API.
The following text is from Send search queries using the REST API and Python.
Running the quickstart
To get started, set subscription_key to a valid subscription key for the Bing API service.
Python
subscription_key = None
assert subscription_key
Next, verify that the search_url endpoint is correct. At this writing, only one endpoint is used for Bing search APIs. If you encounter authorization errors, double-check this value against the Bing search endpoint in your Azure dashboard.
Python
search_url = "https://api.cognitive.microsoft.com/bing/v7.0/images/search"
Set search_term to look for images of puppies.
Python
search_term = "puppies"
The following block uses the requests library in Python to call out to the Bing search APIs and return the results as a JSON object. Observe that we pass in the API key via the headers dictionary and the search term via the params dictionary. To see the full list of options that can be used to filter search results, refer to the REST API documentation.
Python
import requests
headers = {"Ocp-Apim-Subscription-Key" : subscription_key}
params = {"q": search_term, "license": "public", "imageType": "photo"}
response = requests.get(search_url, headers=headers, params=params)
response.raise_for_status()
search_results = response.json()
The search_results object contains the actual images along with rich metadata such as related items. For example, the following line of code can extract the thumbnail URLs for the first 16 results.
Python
thumbnail_urls = [img["thumbnailUrl"] for img in search_results["value"][:16]]
Then use the PIL library to download the thumbnail images and the matplotlib library to render them on a 4 x 4 grid.
Python
%matplotlib inline
import matplotlib.pyplot as plt
from PIL import Image
from io import BytesIO
f, axes = plt.subplots(4, 4)
for i in range(4):
    for j in range(4):
        image_data = requests.get(thumbnail_urls[i+4*j])
        image_data.raise_for_status()
        image = Image.open(BytesIO(image_data.content))
        axes[i][j].imshow(image)
        axes[i][j].axis("off")
plt.show()
Sample JSON response
Responses from the Bing Image Search API are returned as JSON. This sample response has been truncated to show a single result.
JSON
{
  "_type":"Images",
  "instrumentation":{
    "_type":"ResponseInstrumentation"
  },
  "readLink":"images\/search?q=tropical ocean",
  "webSearchUrl":"https:\/\/www.bing.com\/images\/search?q=tropical ocean&FORM=OIIARP",
  "totalEstimatedMatches":842,
  "nextOffset":47,
  "value":[
    {
      "webSearchUrl":"https:\/\/www.bing.com\/images\/search?view=detailv2&FORM=OIIRPO&q=tropical+ocean&id=8607ACDACB243BDEA7E1EF78127DA931E680E3A5&simid=608027248313960152",
      "name":"My Life in the Ocean | The greatest WordPress.com site in ...",
      "thumbnailUrl":"https:\/\/tse3.mm.bing.net\/th?id=OIP.fmwSKKmKpmZtJiBDps1kLAHaEo&pid=Api",
      "datePublished":"2017-11-03T08:51:00.0000000Z",
      "contentUrl":"https:\/\/mylifeintheocean.files.wordpress.com\/2012\/11\/tropical-ocean-wallpaper-1920x12003.jpg",
      "hostPageUrl":"https:\/\/mylifeintheocean.wordpress.com\/",
      "contentSize":"897388 B",
      "encodingFormat":"jpeg",
      "hostPageDisplayUrl":"https:\/\/mylifeintheocean.wordpress.com",
      "width":1920,
      "height":1200,
      "thumbnail":{
        "width":474,
        "height":296
      },
      "imageInsightsToken":"ccid_fmwSKKmK*mid_8607ACDACB243BDEA7E1EF78127DA931E680E3A5*simid_608027248313960152*thid_OIP.fmwSKKmKpmZtJiBDps1kLAHaEo",
      "insightsMetadata":{
        "recipeSourcesCount":0,
        "bestRepresentativeQuery":{
          "text":"Tropical Beaches Desktop Wallpaper",
          "displayText":"Tropical Beaches Desktop Wallpaper",
          "webSearchUrl":"https:\/\/www.bing.com\/images\/search?q=Tropical+Beaches+Desktop+Wallpaper&id=8607ACDACB243BDEA7E1EF78127DA931E680E3A5&FORM=IDBQDM"
        },
        "pagesIncludingCount":115,
        "availableSizesCount":44
      },
      "imageId":"8607ACDACB243BDEA7E1EF78127DA931E680E3A5",
      "accentColor":"0050B2"
    }
  ]
}
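Since the goal here is bulk-downloading training images rather than displaying thumbnails, here is a minimal sketch (assuming the search_results object from the quickstart above) that saves the full-resolution contentUrl of each result to disk:

import os
import requests

os.makedirs('downloads', exist_ok=True)
for n, img in enumerate(search_results["value"]):
    try:
        data = requests.get(img["contentUrl"], timeout=10)
        data.raise_for_status()
        # derive the file extension from the reported encoding format, defaulting to jpg
        ext = img.get("encodingFormat", "jpg")
        with open(os.path.join('downloads', 'image{}.{}'.format(n, ext)), 'wb') as f:
            f.write(data.content)
    except requests.RequestException:
        print('failed:', img["contentUrl"])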

How to get users tagged in a photo from Instagram API?

How can I get the public content of all the users tagged in a specific picture? Is this possible?
Use this API to get media:
https://api.instagram.com/v1/media/{media-id}?access_token=ACCESS-TOKEN
or
https://api.instagram.com/v1/media/shortcode/{short-code}?access_token=ACCESS-TOKEN
The JSON response will include users_in_photo, which lists all the users tagged in the photo.
https://www.instagram.com/developer/endpoints/media/
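For illustration, here is a minimal sketch with the requests library (MEDIA-ID and ACCESS-TOKEN are placeholders) that reads users_in_photo from the first endpoint:

import requests

media_id = 'MEDIA-ID'          # placeholder
access_token = 'ACCESS-TOKEN'  # placeholder

resp = requests.get('https://api.instagram.com/v1/media/{}'.format(media_id),
                    params={'access_token': access_token})
resp.raise_for_status()
for tagged in resp.json()['data'].get('users_in_photo', []):
    print(tagged['user']['username'])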
Since this feature was added after Instagram had deprecated its official client, the only way to get it is to use a maintained fork.
If you use Python, you can use this one; it allows you to get the data you need.
Here is a sample script:
#install last version of maintained fork
#sudo pip install --upgrade git+https://github.com/MabrianOfficial/python-instagram
from instagram.client import InstagramAPI
access_token = "YOUR-TOKEN"
api = InstagramAPI(access_token = access_token)
count = 33 #max count allowed
max_id = '' #the most recent posts
hashtag = 'cats' #sample hashtag
next_url = '' #first iteration
while True:
    result, next_url = api.tag_recent_media(count, max_id, hashtag, with_next_url=next_url)
    for m in result:
        if m.users_in_photo:
            for uip in m.users_in_photo:
                print("user: {} -> {}".format(uip.user.username, uip.user.id))
                print("position : ({},{})".format(uip.position.x, uip.position.y))

Tweepy Search API Writing to File Error

Noob Python user here:
I've created a file that extracts 10 tweets based on api.search (not the streaming API). I get results on screen, but cannot figure out how to parse the output and save it to CSV. My error is TypeError: expected a character buffer object.
I have tried using .join(str(x)) and get other errors.
My code is:
import tweepy
import time
from tweepy import OAuthHandler
from tweepy import Cursor

# Consumer keys and access tokens, used for Twitter OAuth
consumer_key = ''
consumer_secret = ''
atoken = ''
asecret = ''

# The OAuth process that uses keys and tokens
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(atoken, asecret)

# Creates an instance to execute requests to the Twitter API
api = tweepy.API(auth)

MarSec = tweepy.Cursor(api.search, q='maritime security').items(10)
for tweet in MarSec:
    print " "
    print tweet.created_at, tweet.text, tweet.lang
    saveFile = open('MarSec.csv', 'a')
    saveFile.write(tweet)
    saveFile.write('\n')
    saveFile.close()
Any help would be appreciated. I've gotten my Streaming API to work, but am having difficulty with this one.
Thanks.
tweet is not a string or a character buffer. It's an object. Replace your line with saveFile.write(tweet.text) and you'll be good to go.
saveFile = open('MarSec.csv', 'a')
for tweet in MarSec:
    print " "
    print tweet.created_at, tweet.text, tweet.lang
    saveFile.write("%s %s %s\n" % (tweet.created_at, tweet.lang, tweet.text))
saveFile.close()
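Since the output file is a .csv, a variant using the csv module (same MarSec cursor, assuming Python 2 as in the question) would properly quote any commas and newlines inside the tweet text:

import csv

with open('MarSec.csv', 'a') as saveFile:
    writer = csv.writer(saveFile)
    for tweet in MarSec:
        # encode the unicode text so Python 2's csv writer accepts it
        writer.writerow([tweet.created_at, tweet.lang, tweet.text.encode('utf-8')])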
I just thought I'd put up another version for those who might want to save all the attributes of a tweepy.models.Status object, in case you're not yet sure which attributes of each tweet you want to save to file.
import json

search_results = []
for status in tweepy.Cursor(api.search, q=search_text).items(5000):
    search_results.append(status._json)

with open('search_results.json', 'w') as f:
    json.dump(search_results, f)
The first block stores the search results in a list of dictionaries, and the second block writes all the tweets to a JSON file.
Please beware: this might use up a lot of memory if your search results are very large.
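To work with the saved tweets later, the file loads straight back into the same list of dictionaries:

import json

with open('search_results.json') as f:
    search_results = json.load(f)
print(len(search_results))  # number of saved tweets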
This is Twitter's classic error code when something is wrong with an image being uploaded.
Find the images you are trying to upload and check their format.
All I did was delete the images that Windows' media player couldn't read, and the script ran perfectly.