I am using TwitterAPI in Python 3 for premium search to find archived tweets that were retweeted by user1 from user2 with specific keywords. After some suggestions, I used https://developer.twitter.com/en/docs/tweets/rules-and-filtering/overview/operators-by-product and https://github.com/geduldig/TwitterAPI to write this code, but when I run it I get no output and no error message.
The code works fine when I am not using the retweets_of and from operators, but these are the rules I want to use to get my data.
I know my code shows a premium Sandbox search, but I will upgrade it to premium Full Archive search when I have the right code.
from TwitterAPI import TwitterAPI
#Keys and Tokens from Twitter Developer
consumer_key = "xxxxxxxxxxxxx"
consumer_secret = "xxxxxxxxxxxxxxxxxxx"
access_token = "xxxxxxxxxxxxxxxxxxx"
access_token_secret = "xxxxxxxxxxxxxxxxx"
PRODUCT = '30day'
LABEL = 'MyLABELname'
api = TwitterAPI(consumer_key, consumer_secret, access_token, access_token_secret)
r = api.request('tweets/search/%s/:%s' % (PRODUCT, LABEL),
                {'query': 'retweets_of:user.Tesla from:user.elonmusk Supercharger battery'})
for item in r:
    print(item['text'] if 'text' in item else item)
Does someone know what the problem is with my code, or is there another way to use the retweets_of and from operators for a premium search? Is it also possible to add a count operator to my code so it outputs numbers rather than the full text of every tweet?
You should omit "user." in your query.
Also, by specifying "Supercharger battery", which is perfectly fine, you require both words to be present in the search results. If you want either word to be enough, use "Supercharger OR battery" instead.
Finally, to specify a larger number of results, use the maxResults parameter (10 to 100).
Here is your example with all of the above:
r = api.request('tweets/search/%s/:%s' % (PRODUCT, LABEL),
                {'query': 'retweets_of:Tesla from:elonmusk Supercharger OR battery',
                 'maxResults': 100})
Twitter's Premium Search doc may be helpful: https://developer.twitter.com/en/docs/tweets/search/api-reference/premium-search.html
I'm trying to use the Python Flickr API to upload photos to my Flickr account. I already got the API key and secret and used them to get information about my albums and photos, but I get errors when trying to upload new photos. Here is my code:
import flickrapi
api_key = u'xxxxxxxxxxxxxxxxxxxxxxxx'
api_secret = u'xxxxxxxxxxxxxxxxxxxx'
flickr = flickrapi.FlickrAPI(api_key, api_secret)
filename = 'd:/downloads/_D4_6263-Enhanced.png'
title = 'Fach Halcones'
description = 'Posting image using API'
tags = 'fidae'+','+'aviation'+','+'extra'+','+'air shows'
flickr.upload(filename, title, description, tags)
When I run the script, I got the following error:
File "uploadPhotos.py", line 15, in <module>
    flickr.upload(filename, title, description, tags)
TypeError: upload() takes from 2 to 4 positional arguments but 5 were given
Looking at the Flickr API documentation, it seems to accept up to five arguments (filename, fileobj, title, description, tags), and I'm passing only four, since fileobj is optional.
I have googled for some examples, but I was unable to find something that does the trick. So, any help would be awesome.
Regards,
Marcio
I found the solution, and I'm sharing it here. There were two issues with my code. First, we must pass the arguments as keyword arguments (kwargs); second, tags must be separated by spaces, not commas.
Here is the final version:
import flickrapi
api_key = u'xxxxxxxxxxxxxxxxxxxxxxxx'
api_secret = u'xxxxxxxxxxxxxxxxxxxx'
flickr = flickrapi.FlickrAPI(api_key, api_secret)
params = {}
params['filename'] = 'd:/downloads/_D4_6263-Enhanced.png'
params['title'] = 'Fach Halcones'
params['description'] = 'Posting image using API'
params['tags'] = '''fidae aviation extra "air shows" '''
flickr.upload(**params)
That's it...
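As an aside, the space-separated tag convention (with quotes around multi-word tags, as in the params['tags'] string above) can also be built from a list. The helper below is an illustrative sketch, not part of flickrapi itself:

```python
def build_tag_string(tags):
    """Join tags with spaces, quoting any tag that contains a space."""
    return ' '.join('"%s"' % t if ' ' in t else t for t in tags)

print(build_tag_string(['fidae', 'aviation', 'extra', 'air shows']))
# -> fidae aviation extra "air shows"
```

The result can then be assigned to params['tags'] before calling flickr.upload(**params).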
I have this issue: I have a list of YouTube channels, and I poll the API daily to get some stats for each one, namely total comments, likes, and dislikes (all time, across all videos).
I have implemented the code below. It works, but it loops through every single video one at a time, hitting the API for each.
Is there a way to make one API call with several video IDs?
Or is there a better way to do this and get these stats?
# find stats for all channel videos - how will this scale?
def video_stats(row):
    videoid = row['video_id']
    query = yt.get_video_metadata(videoid)
    vids = pd.DataFrame(query, index=[0])
    df['views'] = vids['video_view_count'].sum()
    df['comments'] = vids['video_comment_count'].sum()
    df['likes'] = vids['video_like_count'].sum()
    df['dislikes'] = vids['video_dislike_count'].sum()
    return 'no'

df['stats'] = df.apply(video_stats, axis=1)

channel['views'] = df['views'].sum()
channel['comments'] = df['comments'].sum()
channel['likes'] = df['likes'].sum()
channel['dislikes'] = df['dislikes'].sum()
According to the docs, you may combine the IDs of several different videos in a single Videos.list API endpoint call:
id: string
The id parameter specifies a comma-separated list of the YouTube video ID(s) for the resource(s) that are being retrieved. In a video resource, the id property specifies the video's ID.
However, the code you have shown is too terse to figure out a way of adapting it to that type of (batch) endpoint call.
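To illustrate the batching idea only (this is a sketch, not an adaptation of the asker's code): the documented limit is 50 IDs per Videos.list call, so a common pattern is to split the ID list into comma-separated batches and make one request per batch. The helper below only builds the id strings; passing each one to an actual Videos.list request is left to whatever client you use:

```python
def chunk_ids(video_ids, batch_size=50):
    """Split a list of video IDs into comma-separated strings of at most
    batch_size IDs each, suitable for the Videos.list `id` parameter."""
    return [','.join(video_ids[i:i + batch_size])
            for i in range(0, len(video_ids), batch_size)]

ids = ['vid%d' % n for n in range(120)]
batches = chunk_ids(ids)
# 120 IDs -> 3 batches (50, 50, and 20 IDs), i.e. 3 API calls instead of 120
```

Each batched response then contains one statistics entry per video, which you can sum once per channel instead of once per video.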
I've created a Python script with tweepy that replies to suicidal tweets with a link to a support website. However, nothing happens when I run the code and then tweet one of the code words from a different account. I'm opening and running the .py file in the command prompt.
Like I said, I've tried using the specific words that should trigger it but it does not reply.
import tweepy

# The following module is a file with the specific keys set in
# a dictionary; not shown here for privacy/security.
from keys import keys

CONSUMER_KEY = keys['consumer_key']
CONSUMER_SECRET = keys['consumer_secret']
ACCESS_TOKEN = keys['access_token']
ACCESS_TOKEN_SECRET = keys['access_token_secret']

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
api = tweepy.API(auth)

twts = api.search(q="suicide")
t = ['suicide',
     'kill myself',
     'hate myself',
     'Suicidal',
     'self-harm',
     'self harm']

for s in twts:
    for i in t:
        if i == s.text:
            sn = s.user.screen_name
            m = "#%s You are loved! For help, visit https://suicidepreventionlifeline.org/" % (sn)
            s = api.update_status(m, s.id)
It should reply with a help link, but it doesn't and I don't know what I did wrong in my code. Any help?
Replace :
if i == s.text:
with :
if i in s.text:
Or, to match words case-insensitively, better still:
if i.lower() in s.text.lower():
Because "Suicidal" (and the other words in the t array) will almost never be exactly equal to the whole tweet text. I guess you want to check whether the text contains the word.
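To illustrate the difference with a standalone sketch (the tweet text here is made up, not from the asker's data):

```python
tweet_text = "Some days I just hate myself"
keywords = ['suicide', 'kill myself', 'hate myself', 'Suicidal']

# Exact equality never fires, because the tweet contains more than the keyword.
exact = [k for k in keywords if k == tweet_text]                   # []

# Case-insensitive substring matching finds the keyword inside the tweet.
found = [k for k in keywords if k.lower() in tweet_text.lower()]   # ['hate myself']
```

The bot's inner if test should therefore use the substring form, not equality.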
I want to download all historical tweets with certain hashtags and/or keywords for a research project. I got the Premium Twitter API for that. I'm using the amazing TwitterAPI to take care of auth and so on.
My problem now is that I'm not an expert developer and I have some issues understanding how the next token works, and how to get all the tweets in a csv.
What I want to achieve is to have all the tweets in one single csv, without having to manually change the dates of the fromDate and toDate values. Right now I don't know how to get the next token and how to use it to concatenate requests.
So far I got here:
from TwitterAPI import TwitterAPI
import csv

SEARCH_TERM = 'my-search-term-here'
PRODUCT = 'fullarchive'
LABEL = 'here-goes-my-dev-env'

api = TwitterAPI("consumer_key",
                 "consumer_secret",
                 "access_token_key",
                 "access_token_secret")

r = api.request('tweets/search/%s/:%s' % (PRODUCT, LABEL),
                {'query': SEARCH_TERM,
                 'fromDate': '200603220000',
                 'toDate': '201806020000'})

csvFile = open('2006-2018.csv', 'a')
csvWriter = csv.writer(csvFile)

for item in r:
    csvWriter.writerow([item['created_at'], item['user']['screen_name'],
                        item['text'] if 'text' in item else item])
I would be really thankful for any help!
Cheers!
First of all, TwitterAPI includes a helper class that will take care of this for you. TwitterPager works with many types of Twitter endpoints, not just Premium Search. Here is an example to get you started: https://github.com/geduldig/TwitterAPI/blob/master/examples/page_tweets.py
But to answer your question, the strategy you should take is to put the request you currently have inside a while loop. Then,
1. Each request will return a next field which you can get with r.json()['next'].
2. When you are done processing the current batch of tweets and ready for your next request, you would include the next parameter set to the value above.
3. Finally, a request will eventually not include a next field in the returned JSON. At that point, break out of the while loop.
Something like the following.
next = ''
while True:
    r = api.request('tweets/search/%s/:%s' % (PRODUCT, LABEL),
                    {'query': SEARCH_TERM,
                     'fromDate': '200603220000',
                     'toDate': '201806020000',
                     'next': next})
    if r.status_code != 200:
        break
    for item in r:
        csvWriter.writerow([item['created_at'], item['user']['screen_name'],
                            item['text'] if 'text' in item else item])
    json = r.json()
    if 'next' not in json:
        break
    next = json['next']
How can I get the public content of all the users tagged in a specific picture? Is that possible?
Use this API to get media:
https://api.instagram.com/v1/media/{media-id}?access_token=ACCESS-TOKEN
or
https://api.instagram.com/v1/media/shortcode/{short-code}?access_token=ACCESS-TOKEN
The JSON response will include users_in_photo, which contains all the users tagged in the photo:
https://www.instagram.com/developer/endpoints/media/
Since this feature was developed after Instagram deprecated their official client, the only way to get it is to use a maintained fork.
If you use Python, you can use this one; it lets you get the data you need.
Here a sample script:
# Install the last version of the maintained fork:
# sudo pip install --upgrade git+https://github.com/MabrianOfficial/python-instagram
from instagram.client import InstagramAPI

access_token = "YOUR-TOKEN"
api = InstagramAPI(access_token=access_token)

count = 33        # max count allowed
max_id = ''       # the most recent posts
hashtag = 'cats'  # sample hashtag
next_url = ''     # first iteration

while True:
    result, next_url = api.tag_recent_media(count, max_id, hashtag, with_next_url=next_url)
    for m in result:
        if m.users_in_photo:
            for uip in m.users_in_photo:
                print "user: {} -> {}".format(uip.user.username, uip.user.id)
                print "position: ({},{})".format(uip.position.x, uip.position.y)