Preview Image Url in tweepy V2 - python-3.x

I have been trying to get the "preview_image_url" from a user's timeline of tweets. I have been using V2 and Paginator to get all the tweets, as the API has a limit of 100 tweets per request. I am not able to access the media fields; they aren't present on the tweet objects and don't show up with the dir() method in Python either. This is the code I am using right now; it only gives me the media key for a particular tweet.
import tweepy
import my_keys  # module holding BEARER_TOKEN

client = tweepy.Client(bearer_token=my_keys.BEARER_TOKEN)
username_list = ["coinfessions"]
user_id = client.get_user(username="coinfessions").data.id
print(user_id)

paginator = tweepy.Paginator(client.get_users_tweets, id=str(user_id), exclude=['retweets', 'replies'],
                             expansions="attachments.media_keys",
                             media_fields=["url", "preview_image_url"], max_results=5)
for tweet in paginator.flatten(10):
    if tweet.attachments is not None:
        print(tweet.attachments)
In one of the questions posted on Stack Overflow, someone suggested working with the Response return type directly. The Response has an "includes" key where you can find the type, media_key and preview_image_url. I tried this method and it worked fine with a plain response; however, it's not working well with the Paginator. I think I am doing something wrong here.
Another answer to a similar question points to the official FAQ page in the Tweepy documentation. Can anyone help me understand how to access the models in a V2 response?

You can't access the includes if you are using the flatten method.
The only way to access them is to iterate through the responses themselves.
paginator = tweepy.Paginator(client.get_users_tweets, [...])  # Without the flatten

for page in paginator:  # Each page is a response from the API
    page.data      # All the tweets in each page
    page.includes  # All the includes in each page
On a side note, I don't understand why you are using max_results=5. Because of that, you retrieve only 5 results per request and you cancel the advantages of the paginator.
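Putting that together, here is a minimal sketch of the full flow. It assumes, as in the question, that my_keys holds the bearer token and that the target account is "coinfessions". It builds a per-page lookup from media_key to the expanded media objects so each tweet's attachments can be resolved to a url or preview_image_url:

import tweepy
import my_keys  # assumed module holding BEARER_TOKEN, as in the question

client = tweepy.Client(bearer_token=my_keys.BEARER_TOKEN)
user_id = client.get_user(username="coinfessions").data.id

paginator = tweepy.Paginator(
    client.get_users_tweets,
    user_id,
    exclude=["retweets", "replies"],
    expansions="attachments.media_keys",
    media_fields=["url", "preview_image_url"],
    max_results=100,  # maximum allowed per request
)

for page in paginator:  # each page is a Response(data, includes, errors, meta)
    # Map media_key -> expanded Media object for this page
    media_by_key = {m.media_key: m for m in page.includes.get("media", [])}
    for tweet in page.data or []:
        if tweet.attachments is None:
            continue
        for key in tweet.attachments.get("media_keys", []):
            media = media_by_key.get(key)
            if media is None:
                continue
            # Photos expose `url`; videos and GIFs expose `preview_image_url`
            print(tweet.id, media.type, media.url or media.preview_image_url)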

Related

LinkedIn Marketing API ignores pagination and return all elements

I am using the Marketing API.
I have an issue with pagination: it seems to ignore start and count.
I am using the start and count query parameters, and no matter what number I put in count, I get a response with all the results.
I followed this document for pagination:
https://learn.microsoft.com/en-us/linkedin/shared/api-guide/concepts/pagination
I use this endpoint:
https://api.linkedin.com/v2/adAnalyticsV2
My parameters include:
start=0&count=10&q=statistics&timeGranularity=MONTHLY ....
In the response, I received 821 elements without any pagination, instead of 10 per page.
Even when I follow the logic in the docs, the values for start and count do not affect the results or the query.
What am I doing wrong?
I don't want to use it without pagination and find out later that I missed records.
Thanks,
Roiy
I have used pagination to retrieve the posts of an organization, and it's working for me.
My API call is:
api_url = "https://api.linkedin.com/v2/shares?q=owners&owners=urn%3Ali%3Aorganization%3A123456&sharesPerOwner=1000&count=100&start=0"
One important thing I noticed is that this does not require the Restli 2.0 version (don't pass it in the header).
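For completeness, here is a rough sketch (not from the original answer) of how that call could be paged through with Python's requests library; the access token and organization URN are placeholders:

import requests

ACCESS_TOKEN = "..."  # placeholder; use your own OAuth 2.0 token
BASE_URL = "https://api.linkedin.com/v2/shares"
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

params = {
    "q": "owners",
    "owners": "urn:li:organization:123456",  # organization URN from the answer
    "sharesPerOwner": 1000,
    "count": 100,
    "start": 0,
}

all_elements = []
while True:
    response = requests.get(BASE_URL, headers=headers, params=params)
    response.raise_for_status()
    elements = response.json().get("elements", [])
    all_elements.extend(elements)
    if len(elements) < params["count"]:  # last page reached
        break
    params["start"] += params["count"]

print(f"Retrieved {len(all_elements)} shares")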

Is it possible to gather multiple values with Twilio IVR?

I have a view function that needs to gather multiple pieces of information in one call (it's a quick outbound call - the user answers and is to be immediately prompted for these data points), based on data pulled from a DB. What I'd like the view function to do is something like the following:
group_id = <get group id>
params = data_element_select_params.DataElementSelectParams(group_id=group_id)
data_elements = worker.select(params)  # function I wrote which returns a list of objects, in this case objects called DataElements
vr = VoiceResponse()
say_msg = 'Enter {element}, then press star.'
for element in data_elements:
    say_message = say_msg.format(element=element.name)
    <Gather input with say_message and save it>
Can this be achieved without routing to the same URL over and over? I have not seen any other solution, and I'd rather not continually redirect to the same URL as we'll have to pull the list of elements from the DB again for each element.
Apologies if anything is unclear - please point it out and I'll clarify as quickly as I can.
Twilio developer evangelist here.
You can only use one <Gather> per TwiML document, so no, you can't ask multiple questions and take multiple inputs within the one webhook.
You will need to route to a URL that receives the input from each <Gather> and then asks the next question.
To avoid pulling all the elements from the DB every time, you could investigate saving the elements to the HTTP session and pulling them back out of there. Twilio is a well behaved HTTP client, so you can use things like cookies to store information about the current call/conversation.
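As an illustration of that approach, here is a minimal Flask sketch; the route names, secret key and placeholder prompts are assumptions for the example, not Twilio requirements. Each <Gather> posts to the same /gather endpoint, and the remaining prompts live in the session so the DB is only queried once:

from flask import Flask, request, session
from twilio.twiml.voice_response import VoiceResponse, Gather

app = Flask(__name__)
app.secret_key = "change-me"  # needed for session cookies

@app.route("/voice", methods=["POST"])
def voice():
    # Load the prompts from the DB once and stash them in the session
    session["elements"] = ["account number", "invoice number"]  # placeholder data
    session["answers"] = []
    return str(next_prompt())

@app.route("/gather", methods=["POST"])
def gather():
    # Save the digits from the previous <Gather>, then ask the next question
    answers = session.get("answers", [])
    answers.append(request.form.get("Digits", ""))
    session["answers"] = answers
    return str(next_prompt())

def next_prompt():
    vr = VoiceResponse()
    elements = session.get("elements", [])
    answered = len(session.get("answers", []))
    if answered >= len(elements):
        vr.say("Thank you, goodbye.")
        vr.hangup()
        return vr
    g = Gather(action="/gather", method="POST", finish_on_key="*")
    g.say(f"Enter {elements[answered]}, then press star.")
    vr.append(g)
    return vr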

SharePoint Rest call is not returning all fields

Goal: Have a Python program pull data from SharePoint so we can store it in a database.
Issue: I am able to connect to SharePoint and return data, but I am not getting all of the fields I can see on the UI page. The UI page I am hitting appears in the list returned by the REST call, but it is a custom view.
Update: Using renderashtml I was at least able to see some of the data points I am looking for. I would hope there is a better solution than this.
Code:
import sharepy
connection = sharepy.connect("https://{site}.sharepoint.com")
r = connection.get("https://{site}.sharepoint.com/{page}/_api/web/Lists/getbytitle('{list_name}')/items")
print(r.content)
print(r.json())
#I have also tried
https://{site}.sharepoint.com/{page}/_api/web/lists('{list_id}')/views('{view_id}')
#I was able to return data as html
https://{site}.sharepoint.com/{page}/_api/web/lists('{list_id}')/views('{view_id}')/renderashtml
Research: I have taken a look at the REST documentation for SharePoint and I am under the impression you cannot return data from a view. The solution I saw was to first hit the view, generate a list of columns from it, and use that to build a query against the list. I have tried that, but those fields are not available when I pull the list, even though they are in the view.
https://social.msdn.microsoft.com/forums/sharepoint/en-US/a5815727-925b-4ac5-8a46-b0979a910ebb/query-listitems-by-view-through-rest-api
https://msdn.microsoft.com/en-us/library/office/dn531433.aspx#bk_View
Are you trying to get the data from known fields, or discover the names of the fields?
Can you get the desired data by listing the fields in a select?
_api/web/lists/getbytitle('Documents')/items?$select=Title,Created,DateOfBirth
or to get all of the fields:
_api/web/lists/getbytitle('Documents')/items?$select=*
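Combining that with the sharepy code from the question, a sketch like the following would pull the selected fields directly; the field names are just the examples above, so swap in your list's internal field names:

import sharepy

connection = sharepy.connect("https://{site}.sharepoint.com")

# Ask for the specific internal field names you need
url = (
    "https://{site}.sharepoint.com/{page}/_api/web/"
    "Lists/getbytitle('{list_name}')/items"
    "?$select=Title,Created,DateOfBirth"
)
r = connection.get(url, headers={"Accept": "application/json;odata=verbose"})
for item in r.json()["d"]["results"]:
    print(item.get("Title"), item.get("Created"))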

YouTube API - Retrieve results specific page

I'm trying to get paginated results from the YouTube Data API v3:
https://www.googleapis.com/youtube/v3/playlistItems?part=snippet&playlistId=UUCj956IF62FbT7Gouszaj9w&key={YOUR_API_KEY}
Now, in the response there's the handy parameter nextPageToken that links to the next page of the result set:
"nextPageToken": "CAUQAA"
Is there any way to jump to a specific page of the results, say the 6th?
Apologies, but no, there is not. This is intentional. You can only move forward one page of results at a time.
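If you really need, say, page 6, the only option is to walk forward through the tokens. A rough Python sketch; the API key placeholder and playlist ID are taken from the question:

import requests

API_KEY = "YOUR_API_KEY"                  # placeholder
PLAYLIST_ID = "UUCj956IF62FbT7Gouszaj9w"  # playlist from the question
TARGET_PAGE = 6

url = "https://www.googleapis.com/youtube/v3/playlistItems"
params = {
    "part": "snippet",
    "playlistId": PLAYLIST_ID,
    "maxResults": 50,
    "key": API_KEY,
}

page = 1
items = []
while True:
    data = requests.get(url, params=params).json()
    if page == TARGET_PAGE:
        items = data.get("items", [])
        break
    token = data.get("nextPageToken")
    if token is None:  # ran out of pages before reaching the target
        break
    params["pageToken"] = token
    page += 1

print(f"Page {page}: {len(items)} items")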

Instagram: Get photos from a tag after a specified photo

I'm working on an Instagram scraper for something and I'm trying to figure out if it's possible to get all photos for a tag that have an ID or timestamp later than the last one I have.
The Instagram API docs are useless in that they don't have any real info on pagination (which I presume I'll have to abuse).
Does anyone have any ideas?
I've been slogging through the Instagram API for the last couple of days so here's my 2 cents worth:
As far as I can see, if you call the API with /tags/tag-name/media/recent it only returns a list of items. If the amount exceeds about 25, you have to make another request with the pagination value returned in the previous request.
In order to gain some control, I am initially iterating through all images and storing the results (just the URL, not the actual image) in a database. Now I can manipulate them however I want. When I feel like updating (I'm doing it manually now, but it could be a cron job or use the real-time API) I re-read all the images, compare them to what I have in my DB, and add any new images. My app then reads the URL and info from my DB (which, by the way, is a heck of a lot faster than going through the Instagram API, which will only return about 25 images per request, regardless of any 'count' parameter value you put in the request URL) and displays it.
I am developing this for a client who is afraid of people posting NSFW or whatever pics using their dedicated hashtag (for a contest) - with the above setup I can offer them an interface where they can check and mark images that are then displayed in the app.
One thing to watch out for is when a user deletes their picture; you will have to find a way to check for this. Currently (since I'm lazy) I load all images and use jQuery to check for an error loading the image. If there is one, I delete the image from the DB (via Ajax).
I'm not sure the pagination is going to help you: as far as I can see, the pagination response has no relation to the IDs of the actual image objects on each page - so theoretically a pagination ID that jumps to a certain page (i.e. date) might not work tomorrow if enough images have been deleted in the meantime.
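For what it's worth, the store-and-diff pattern described above looks roughly like this against the legacy v1 tag endpoint from the question; that API has long since been retired, so treat this purely as an illustration of following pagination.next_url:

import requests

ACCESS_TOKEN = "..."  # placeholder; the legacy API required an access token
TAG = "your-tag-name"
url = f"https://api.instagram.com/v1/tags/{TAG}/media/recent"
params = {"access_token": ACCESS_TOKEN}

seen_urls = set()  # stand-in for the database described above
while url:
    data = requests.get(url, params=params).json()
    for item in data.get("data", []):
        seen_urls.add(item["images"]["standard_resolution"]["url"])
    # Follow the pagination block until it disappears
    url = data.get("pagination", {}).get("next_url")
    params = {}  # next_url already carries the query string

print(f"Stored {len(seen_urls)} image URLs")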
To get all images instead of the latest 20, just append &count=-1 to your API call - it's that simple.
In either case, there is a timestamp on each JSON object - or, if you prefer, you can use max_tag_id.
Check out my post here: Is there any way to show more than 20 photos of the Instagram API?
* Update April 2014: count=-1 is no longer available.
