How to get ALL likes for a given media-id - instagram

I am trying to iterate over the user ID of each like for a given {media-id}.
https://api.instagram.com/v1/media/{media-id}/likes?access_token=ACCESS-TOKEN
returns something like this (a data array of roughly 300 likes):
{
    "data": [{
        "username": "jack",
        "first_name": "Jack",
        "last_name": "Dorsey",
        "type": "user",
        "id": "66"
    },
    {
        "username": "sammyjack",
        "first_name": "Sammy",
        "last_name": "Jack",
        "type": "user",
        "id": "29648"
    }]
}
The problem is that it does not return ALL the likes, and there is no pagination.
Is there any workaround to get ALL likes for a given {media-id}?

You're using the correct API endpoint to get media likes; however, this endpoint has a limitation: it only returns a maximum of 100-120 likes per media item, with no pagination.
Unfortunately, there is no workaround.
The same limitation applies to the comments endpoint.
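For reference, a minimal sketch of calling that endpoint with Python's requests library (the media id and access token below are placeholders), which shows the data list simply capping out:

import requests

MEDIA_ID = "123456789_12345"   # placeholder media id
ACCESS_TOKEN = "ACCESS-TOKEN"  # placeholder token

url = "https://api.instagram.com/v1/media/{}/likes".format(MEDIA_ID)
resp = requests.get(url, params={"access_token": ACCESS_TOKEN})
likes = resp.json().get("data", [])
print(len(likes))  # tops out around 100-120 even if the post has more likes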

Check out the InstagramAPI Python library.
Then you can use this sample code I made; however, it will only get 1000 of the most recent likes.
from InstagramAPI import InstagramAPI

# Log in first; the credentials here are placeholders
API = InstagramAPI("your_username", "your_password")
API.login()

likes_list = []

def get_likes_list(username):
    API.searchUsername(username)               # look up the user by name
    info = API.LastJson
    username_id = info['user']['pk']
    user_posts = API.getUserFeed(username_id)  # fetch the user's feed
    info = API.LastJson
    media_id = info['items'][0]['id']          # most recent post
    API.getMediaLikers(media_id)
    f = API.LastJson['users']
    for x in f:
        likes_list.append(x['username'])

get_likes_list("tailopez")
print(likes_list)

Related

Python: how to replace by blank or NA as exception in a list of properties?

I tried googling for some hours without finding a solution, so I created this account to ask myself.
I am using Python to generate a user list and export it as CSV.
I can print(Users['Id'], Users['Name']) without error, as all users have these two fields.
But if I print(Users['Id'], Users['Name'], Users['Email']), I get errors. If I wrap it in try/except with pass, I get only one user result, S3 alex, as only alex has the Email field.
Is there a way to put all six field names in the table header, and then, whenever a user has no value for a field, just put NA or leave it blank? Thanks.
My code looks like this:
import csv

fo = open('Userlist.csv', 'w', newline='')
data_obj = csv.writer(fo)
data_obj.writerow(['cnt', 'Name', 'Id'])                # header row
cnt = 1
result = get_users()                                    # returns the dict shown below
for user in result['Users']:
    print(user['Name'], user['Id'])                     # to see the result
    data_obj.writerow([cnt, user['Name'], user['Id']])  # write one csv row
    cnt += 1
fo.close()
If I print(result), I get the result below, where different users have different fields:
"Users": [
{
"Id": "S-1xxxx",
"Name": "S1 Peter",
"State": "DISABLED",
"UserRole": "USER"
},
{
"Id": "S-2xxxx",
"Name": "S2 Mary",
"State": "DISABLED",
"UserRole": "USER"
},
{
"Id": "S-3xxxx",
"Email": "alex#domain.com",
"Name": "S3 alex",
"State": "ENABLED",
"UserRole": "USER",
"EnabledDate": "2020-1-5"
},
{
"Id": "S-4xxxx",
"Name": "S3 brand",
"State": "DELETED",
}]
Expected result: https://i.stack.imgur.com/fIMMB.png
import csv

with open("Userlist.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["Id", "Name", "Email"])
    for user in users:  # users = result['Users'] from the question
        w.writerow([user.get("Id"), user.get("Name"), user.get("Email")])
The with keyword causes Python to use a context manager, which means you don't need to worry about closing your file. It's not vital to this solution, but it's good practice.
Using get() looks for the key and returns None if it doesn't exist, so no error is thrown for missing fields.
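If you would rather see the literal string NA than a blank cell, get() also accepts a default value; a minimal variation of the row above:

w.writerow([user.get("Id", "NA"), user.get("Name", "NA"), user.get("Email", "NA")])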

Get all the parent documents based on a child reference Id in Mongo and Node.js

Thank you for your help.
I have been scratching my head all day, and I don't know whether I am heading in the right direction or not.
Problem:
I have a document [Doctor] which contains the reference array [doctorSpecialities].
I have to GET ALL DOCTORS that have this id in their doctorSpecialities reference array:
Id: 5ef58dd048cdd203a0c07ba8
JSON Structure
{
    "doctorSpecialities": [
        "5f00cebc8bcdcd0660c12ce2",
        "5ef58dd048cdd203a0c07ba8"
    ],
    "_id": "5ef31ae80399ac05eb23e555",
    "email": "signup@gmail.com",
    "username": "signup@gmail.com",
    "DOB": null,
    "zip": null,
    "phone": "12657334566",
    "PMDC": "7658493",
    "isVerified": false,
    "aboutMe": "About Me",
    "achievements": "Achievements",
    "address": "padasdad",
    "city": "Lahore",
    "gender": "Male",
    "managePractice": "Manage Practice",
    "practiceGrowth": "Practice Growth",
    "qualiflication": "Qualifcation",
    "state": "eeeeeeee",
    "workExperince": "Work Experince",
    "doctorAvailability": [],
    "doctorReviews": [],
    "degreeCompletionYear": "2019-10-10",
    "institute": "institute",
    "practiceDate": "2020-10-10",
    "services": "Dental"
}
Queries tried:
await Doctor.find({ doctorSpecialities: req.params.id })
await Doctor.find({ doctorSpecialities: { $in: [req.params.id] } })
Specialty collection:
doctorCollection = Doctor.find();
doctorCollection.find({ "doctorSpecialities": specialty.id })
This is how I did it; is it wrong?
I tried to use $lookup, but I don't know how to use it for this requirement.
Please let me know if you need more information.
Thank you.
If you have to get doctor details, then you can use:
db.collection.find({"doctorSpecialities":"5ef58dd048cdd203a0c07ba8"})
It returns all documents whose doctorSpecialities field contains 5ef58dd048cdd203a0c07ba8.
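In a Mongoose/Express route this could look roughly like the sketch below (the route path is made up; Doctor is the model from the question):

const router = require('express').Router();

// Matching a scalar value against an array field returns every document
// whose doctorSpecialities array contains that id
router.get('/doctors/speciality/:id', async (req, res) => {
    const doctors = await Doctor.find({ doctorSpecialities: req.params.id });
    res.json(doctors);
});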

SequelizeJS primaryKey related include

I am currently working on a REST API with SequelizeJS and Express.
I'm used to Django REST Framework and I'm trying to find a similar feature:
I have a table User and a table PhoneNumber.
I want to be able to return a user in JSON, including the list of the primary keys of its phone numbers, like this:
{
    "firstName": "John",
    "lastName": "Doe",
    "phoneNumbers": [23, 34, 54]
}
Is there a way to do this simply and efficiently in Sequelize, or do I have to write functions that transform fields like:
"phoneNumbers": [
    { "id": 23, "number": "XXXXXXXXXX" },
    { "id": 34, "number": "XXXXXXXXXX" },
    { "id": 54, "number": "XXXXXXXXXX" }
]
into what I have above?
Thank you,
Giltho
Sequelize finder methods accept an attributes option that lets you define which properties of a model they should query for. See http://docs.sequelizejs.com/manual/tutorial/querying.html#attributes
That works for joins too:
User.all({
    include: [
        { model: Phonenumber, attributes: ['id'] }
    ]
})
.then(function(users) {
})
will execute roughly
SELECT user.*, phonenumber.id FROM user INNER JOIN phonenumber ON ...
But to turn the phone numbers into an array of integers [1, 2, ...] in your API response, you'd still have to map the ids into an array manually; otherwise you get an array like [{"id": 1}, {"id": 2}, ...].
Actually, I recommend not doing that. An array of objects is more future-proof than an array of integers, because your API client doesn't need to change anything if you decide some day to expand the phone number object with additional attributes.
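If you do decide to return the flat array anyway, here is a minimal sketch of that manual mapping (the phonenumbers property name depends on how your association is defined, so treat it as an assumption):

User.all({
    include: [{ model: Phonenumber, attributes: ['id'] }]
})
.then(function(users) {
    var result = users.map(function(user) {
        var plain = user.get({ plain: true });
        // Replace the array of { id: ... } objects with a plain array of ids
        plain.phoneNumbers = (plain.phonenumbers || []).map(function(p) { return p.id; });
        delete plain.phonenumbers;
        return plain;
    });
    // result items now look like { firstName: ..., phoneNumbers: [23, 34, 54] }
});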

How many bytes are in a Location ID from the Instagram API?

I cannot find any decent documentation on the Instagram API about this. I know the API returns a number that usually fits in a 32-bit int, but once in a while I will get a number that needs 64 bits. I want to store these numbers in my Cassandra database, but I am not sure if I should store them as Int (32-bit), BigInt (64-bit), or even text.
What are your thoughts?
Based on the Instagram API, ids (whether for a User, Media, Location, etc.) are returned as strings (as opposed to the float values for "latitude" and "longitude" or the int values returned for fields like count):
{
    "data": [{
        "id": "788029",
        "latitude": 48.858844300000001,
        "longitude": 2.2943506,
        "name": "Eiffel Tower, Paris"
    },
    {
        "id": "545331",
        "latitude": 48.858334059662262,
        "longitude": 2.2943401336669909,
        "name": "Restaurant 58 Tour Eiffel"
    },
    {
        "id": "421930",
        "latitude": 48.858325999999998,
        "longitude": 2.294505,
        "name": "American Library in Paris"
    }]
}
It may be best to store them as text in Cassandra.
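For example, a minimal sketch with the DataStax Python driver (the keyspace and table names are made up):

from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("instagram")  # hypothetical keyspace

# Store the API's string id verbatim in a text column
session.execute("""
    CREATE TABLE IF NOT EXISTS locations (
        id text PRIMARY KEY,
        name text,
        latitude double,
        longitude double
    )
""")
session.execute(
    "INSERT INTO locations (id, name, latitude, longitude) VALUES (%s, %s, %s, %s)",
    ("788029", "Eiffel Tower, Paris", 48.8588443, 2.2943506),
)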

Instagram API returns only 4 likes data

I'm using the Instagram API to fetch images with a certain hashtag that have been liked by my organization. But when I make the GET call, the response comes back with data like this, where the like count is 83 (!) and the actual like data returned only shows 4 entries (!). I've seen postings here that indicate that Instagram returns about 120 like entries. How come I'm only getting four?
The API call I'm using is:
https://api.instagram.com/v1/tags/mytag/media/recent/?client_id=myclientID
"likes": {
"count": 83,
"data": [
{
"username": "something",
"profile_picture": "picture",
"id": "idhere",
"full_name": "namehere"
},
{
"username": "",
"profile_picture": "",
"id": "",
"full_name": ""
},
{
"username": "",
"profile_picture": "",
"id": "",
"full_name": ""
},
{
"username": "",
"profile_picture": "",
"id": "",
"full_name": ""
}
]
},
When you fetch media from Instagram using these endpoints:
/users/<user-id>/media/recent
/tags/<tag-name>/media/recent
you won't have all the likes in the response; the same goes for comments. It's just a limit set by Instagram. I think it might be really expensive to return all (or many) likes/comments for each media item users fetch.
But don't worry: if you get the media you want, you will have their ids, and you can use this endpoint:
/media/<media-id>/likes
Then you will have all the likes (use pagination to fetch them all) and can do great stuff with them.
Hope it helps.
This could be three things:
1) A bug, but that's unlikely (ha, a pun!).
2) Pagination. You need to ask for more data in another call with MIN_TAG_ID and/or MAX_TAG_ID set.
3) Privacy. Instagram users have privacy settings on their profiles, which would definitely lower the count even with pagination.
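For option 2, a minimal sketch of paging through the tag endpoint by following the pagination block (this assumes the v1 response includes pagination.next_url when more results exist; the tag and client id are the placeholders from the question):

import requests

url = "https://api.instagram.com/v1/tags/mytag/media/recent/?client_id=myclientID"
media = []
while url:
    payload = requests.get(url).json()
    media.extend(payload.get("data", []))
    # Follow next_url until the API stops returning one
    url = payload.get("pagination", {}).get("next_url")
print(len(media))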
