Instagram custom search - filter by number of likes - instagram

I'm trying to understand how to build a page that retrieves all the images from Instagram that used a specific tag and have a minimum of X likes. The current tools on the web don't filter by number of likes.
I found things like instafeed.js, but it seems impossible to use them at the moment because of the new Instagram API limits.
I think that it should be quite easy to do this, but I don't know how to proceed :/

It should be pretty straightforward, but you have no code to go off of in your question. What programming language are you using? Ideally you would use their API for this. I'm not sure which limits you're referring to, though.
Take a look at their docs
https://www.instagram.com/developer/
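For what it's worth, here is a rough Python sketch of that approach, assuming the legacy /v1/tags/{tag}/media/recent endpoint is still reachable with your access token (it may well be blocked by the newer API limits you mention; the token and tag below are placeholders):

    import requests

    ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder
    TAG = "sunset"
    MIN_LIKES = 100

    def fetch_tagged_media(tag, access_token):
        """Fetch recent media for a tag via the legacy Instagram API."""
        url = "https://api.instagram.com/v1/tags/{}/media/recent".format(tag)
        resp = requests.get(url, params={"access_token": access_token})
        resp.raise_for_status()
        return resp.json().get("data", [])

    def filter_by_likes(media_items, min_likes):
        """Keep only items whose like count meets the threshold."""
        return [m for m in media_items if m.get("likes", {}).get("count", 0) >= min_likes]

    items = fetch_tagged_media(TAG, ACCESS_TOKEN)
    for item in filter_by_likes(items, MIN_LIKES):
        print(item["images"]["standard_resolution"]["url"], item["likes"]["count"])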

Related

How to get comments from social media and use them as your data?

I've proposed a title for our thesis, Movie Success Prediction through Social Media Comments using Sentiment Analysis. Is there a way to get the comments on social media (Twitter, Instagram, Facebook, etc.) and use them in your software, like an API or any other way? Is it even possible to use your software on different social media platforms to get the comments for prediction, or should I change my title and stick to one platform like Facebook or Twitter only?
What's a good algorithm for this?
What programming language and framework/IDE should I use?
I've done lots of research on Google and am still hoping for more info here. Thank you.
Edit: I'll only use YouTube and YouTube API.
From the title of your question, it seems that the method you need is distant supervision. You retrieve data with labels you think are appropriate for your task. For instance, a tweet containing the #perfect hashtag would probably be a positive tweet. So you can define a set of hashtags for your task (negative, positive, or even neutral) and then retrieve tweets matching them via the Twitter API. For your task those should be about movies, so your data should contain movie-related information in the first place.
Given that you will deal with text data and you'd like to create your own dataset, it is better to start with Twitter. Its API works for your needs and is very well documented. The language and frameworks are up to you, since the API is supported in many popular languages. Personally, I'd start with Python or Java, where community support makes future problems easier to solve.
For a general survey of this area, you may dive into papers and resources from here:
https://scholar.google.com.tr/scholar?hl=en&q=distant+supervision+sentiment+analysis
Distant supervision can also be used to create a sentiment lexicon out of millions of English tweets by using sets of negative and positive hashtags. Take a look at Chapter 5 of this thesis ( https://spectrum.library.concordia.ca/980377/1/Ozdemir_MCompSc_F2015.pdf ); it may give good insight for your thesis as well.
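To make the idea concrete, here is a minimal sketch of distant labeling in Python. The hashtag sets and tweets are made-up examples; in practice you would fill the list with tweets retrieved via the Twitter API:

    # Distant supervision: use hashtags as noisy labels for sentiment.
    POSITIVE_TAGS = {"#perfect", "#loveit", "#mustwatch"}   # example seed sets, adjust for movies
    NEGATIVE_TAGS = {"#boring", "#waste", "#worstmovie"}

    def distant_label(tweet_text):
        """Return 'positive', 'negative', or None if no seed hashtag is present."""
        tags = {token.lower().strip(".,!?") for token in tweet_text.split() if token.startswith("#")}
        if tags & POSITIVE_TAGS and not tags & NEGATIVE_TAGS:
            return "positive"
        if tags & NEGATIVE_TAGS and not tags & POSITIVE_TAGS:
            return "negative"
        return None  # ambiguous or unlabeled: drop from the training set

    tweets = [
        "That ending was #perfect, already planning a second viewing",
        "Two hours of my life gone #boring #waste",
        "Saw the trailer today",
    ]
    dataset = [(t, distant_label(t)) for t in tweets if distant_label(t)]
    print(dataset)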
Hope this helps.
Cheers

How should I create a URL such as /api/resource/identifier/resource/identifier

I have a page with categories and a page with goods for each category. How should I divide my API endpoints?
Is it correct to use something like this:
'/api/categories'
'/api/categories/:categoryId'
'/api/categories/:categoryId/goods'
'/api/categories/:categoryId/goods/:itemId'
or should I use structure like:
'/api/categories'
'/api/categories/:categoryId'
'/api/goods'
'/api/goods/:itemId'
Both approaches are correct. There is usually no clear answer to questions like this. You can use one style of URLs, you can use the other style, or you can even use both, with either the same resources available at two URLs or with redirects.
The style of /api/categories/:categoryId/goods/:itemId has the advantage of a nice hierarchy and more information in the URL itself.
The style of /api/goods/:itemId, on the other hand, makes it easier to move goods between categories, or even to support multiple categories per item or add tags in the future, etc.
This is really something you have to decide yourself, basing the decision not only on which URL seems nicer in itself but also on how your API will actually be used, e.g. how likely it is that items will change categories, or how likely it is that goods will be accessed without knowing their category.
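As an illustration only (your question doesn't mention a framework, so the Flask routes below are just an assumed example), the two styles map to handlers like this:

    from flask import Flask, jsonify

    app = Flask(__name__)

    # Nested style: the category is part of the item's address.
    @app.route("/api/categories/<int:category_id>/goods/<int:item_id>")
    def get_item_nested(category_id, item_id):
        return jsonify({"category": category_id, "item": item_id})

    # Flat style: goods are addressable on their own; the category is just an attribute.
    @app.route("/api/goods/<int:item_id>")
    def get_item_flat(item_id):
        return jsonify({"item": item_id, "category": lookup_category(item_id)})

    def lookup_category(item_id):
        # Hypothetical helper standing in for a real database query.
        return 1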
I recommend reading the tutorials about API design that were published by Stormpath. See:
https://stormpath.com/blog/fundamentals-rest-api-design
I particularly recommend watching the talks by Les Hazlewood that you can find on the Stormpath website, because he explains a lot of subtle topics like that.

FourSquare vs. Google Places vs. Yelp API

I am trying to create an app that will help users find restaurants/movie theaters/malls/etc. to hang out at, based on ratings and distance. Other than the place itself, I would also like to know more detailed information about it. For example, if I were to look for parks, I would also like to know if there's a basketball or tennis court there. Ratings and popularity would also be important for prioritizing suggestions.
After looking through all three of the APIs, I could not really find any substantial differences other than their search limits. Could anyone really differentiate each API for me? Maybe even recommend one based on my specific need?
Thanks!
The Foursquare API would fit this use case perfectly because you can supply very specific filters through the API. Also, they have extensive coverage around the world, unlike Google or Yelp.
I would check out the venues/explore endpoint and use a categoryId of Parks. You can use a query parameter of "basketball" or "tennis" to find parks that have courts for these.
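A minimal sketch in Python with requests, assuming the Foursquare v2 venues/explore endpoint; the credentials and category ID are placeholders (look up the real Parks category ID in Foursquare's venue category list):

    import requests

    params = {
        "client_id": "YOUR_CLIENT_ID",            # placeholder credentials
        "client_secret": "YOUR_CLIENT_SECRET",
        "v": "20180323",                          # API version date
        "ll": "40.7243,-74.0018",                 # latitude,longitude of the search center
        "categoryId": "PARKS_CATEGORY_ID",        # placeholder: the Parks category ID
        "query": "basketball",                    # or "tennis"
        "limit": 20,
    }
    resp = requests.get("https://api.foursquare.com/v2/venues/explore", params=params)
    resp.raise_for_status()
    for group in resp.json()["response"].get("groups", []):
        for item in group["items"]:
            venue = item["venue"]
            print(venue["name"], venue["location"].get("distance"))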

Extracting user interests from social profiles

This is my first time dabbling in NLP so please excuse my ignorance. I'm looking for a method to extract interests/likes/hobbies from users' social profiles. Here is an example where all the interests/likes/hobbies are in bold:
"I consider myself a pretty diverse character... I'm a professional
wrestler, but I'd take a bullet for Wall•E. I train like a one-man genocide machine in the gym, but I cried at
"Armageddon." I'll head bang to AC/DC, and I'm seriously
considering getting a Legend of Zelda tattoo. I'm 420-friendly. I
like to party it up with the frat crowd one night, hang out with
my Burning Man friends the next, play Halo and World of
Warcraft the next, and jam with friends that aren't any younger than
40 the next. My youngest friend is 16, my oldest friend is 66. I'll
sing karaoke at the bars, and I'm my friends' collective
psychiatrist/shoulder."
The profiles are plain text. There are no meta tags or IDs associated with any of it; it's just a paragraph of text.
My naive idea was to take each noun and match it against Freebase to see if it's an activity/artist/movie/book, etc. The problem is that although most of the entities mentioned will be things the user likes, she will also mention things she doesn't like, and I have no means of distinguishing the two.
I have 2 questions:
What sub field of NLP should I be looking at? Some googleable algorithms/techniques/authors would be greatly appreciated.
How hard is this problem?
Thanks!
First, unless using NLP to do this is a particular objective for you, check your problem domain to see if you can avoid it completely.
For instance:
Do these profiles have tags (supplied either by the site or by the user)?
What does the site's API make available (assuming that's how you are accessing this data; if you are scraping it, this of course doesn't apply)? Facebook is a good example: if you read a user's posts you'll see words like "wrestler", "karaoke", etc., but if you look at what fields are exposed via the Graph API, you'll see that these activities nearly always have an associated FB ID.
I am not a specialist in this field, but I can recommend a couple of resources directed at NLP that are accessible to the non-specialist or novice. The first is a text-processing API. This simple web service uses REST and JSON I/O. It is free and seems to have a fairly generous rate limit.
This API appears to rely heavily on the excellent Natural Language Toolkit (NLTK), a mature, stable Python library that includes modules directed at the problem in your question, e.g., sentiment analysis, tagging, and chunk extraction.
Which particular sub-domain is most relevant to solving the question in the OP? I don't know, but I suspect there's a module somewhere in NLTK that does what you need. Finding that module is hopefully just a matter of skimming the API documentation (which is organized by module) and reading the Getting Started section, which contains an excellent survey of NLTK's modules as well as demos for each of them.
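As a very rough starting point with NLTK (combining noun extraction with a sentence-level sentiment score to separate likes from dislikes is my own suggestion, not a standard recipe), assuming the punkt, averaged_perceptron_tagger, and vader_lexicon data packages have been downloaded:

    import nltk
    from nltk.sentiment.vader import SentimentIntensityAnalyzer

    # One-time downloads (uncomment on first run):
    # nltk.download("punkt")
    # nltk.download("averaged_perceptron_tagger")
    # nltk.download("vader_lexicon")

    text = ("I'll head bang to AC/DC, and I'm seriously considering getting a "
            "Legend of Zelda tattoo. I hate waiting in lines.")

    sia = SentimentIntensityAnalyzer()
    for sentence in nltk.sent_tokenize(text):
        # Candidate nouns to look up later against a knowledge base (Freebase, as in your idea).
        tokens = nltk.word_tokenize(sentence)
        nouns = [word for word, tag in nltk.pos_tag(tokens) if tag.startswith("NN")]
        # Use the sentence's sentiment as a crude like/dislike signal.
        polarity = sia.polarity_scores(sentence)["compound"]
        label = "like" if polarity >= 0 else "dislike"
        print(label, polarity, nouns)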

I can't figure out where to start with GIS application development, or which technology to select

I am very new to GIS development, and to be frank I have no background in it at all. I searched the web, but the tutorials I found seemed to assume the reader has some background information.
The thing is that I am confused about what to read or learn. There seem to be lots of technologies, and I feel lost, since some speak about OpenLayers, GeoServer, MapServer, Google Maps, and OpenStreetMap.
So here is what I am supposed to develop, and I hope you can give me advice about which technology to use and where I should start reading, given that I know almost nothing.
Case 1: a closed system for about 20 users only, who can specify locations on the map; the web application will store the latitude and longitude of the locations and show the markers. I wanted to use the Google Maps API, but I ruled that out since their license requires you to purchase the service if the system is a closed one. So what technology should I use in such a case? I need a free option. Also, I will only be using a web server, so if the solution involves running my own GeoServer or something like that, I won't be able to do it.
Case 2: I am supposed to display the roads and routes between two given points, and probably add some notes on the map. For this case I can use my own map server/GeoServer, but again I want your suggestions.
Of course, the solution needs to be open source.
Finally, I hope you could tell me what to start reading first.
Start by looking over https://gis.stackexchange.com/, beginning with the [web-mapping] tag and related ones.
Some topics in particular you may want to look at are:
https://gis.stackexchange.com/questions/8113/steps-to-start-web-mapping
https://gis.stackexchange.com/questions/8238/where-how-to-learn-about-getting-started-with-web-gis
https://gis.stackexchange.com/questions/13868/looking-for-a-developer-friendly-web-gis
As for skills and tutorials, look at:
https://gis.stackexchange.com/questions/17227/free-gis-workshops-tutorials-and-applied-learning-material
https://gis.stackexchange.com/questions/913/web-gis-development-skill-sets
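Independent of which map library you end up choosing for Case 1, the server side that stores the markers can stay very small. Here is a minimal sketch using Flask and SQLite (the route names and file name are assumptions, not a prescribed stack):

    import sqlite3
    from flask import Flask, request, jsonify

    app = Flask(__name__)
    DB_PATH = "markers.db"  # assumed file name

    def init_db():
        with sqlite3.connect(DB_PATH) as conn:
            conn.execute(
                "CREATE TABLE IF NOT EXISTS markers "
                "(id INTEGER PRIMARY KEY AUTOINCREMENT, lat REAL, lng REAL, note TEXT)"
            )

    @app.route("/markers", methods=["POST"])
    def add_marker():
        data = request.get_json()
        with sqlite3.connect(DB_PATH) as conn:
            conn.execute(
                "INSERT INTO markers (lat, lng, note) VALUES (?, ?, ?)",
                (data["lat"], data["lng"], data.get("note", "")),
            )
        return jsonify({"status": "ok"}), 201

    @app.route("/markers")
    def list_markers():
        with sqlite3.connect(DB_PATH) as conn:
            rows = conn.execute("SELECT id, lat, lng, note FROM markers").fetchall()
        return jsonify([{"id": r[0], "lat": r[1], "lng": r[2], "note": r[3]} for r in rows])

    if __name__ == "__main__":
        init_db()
        app.run()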

Resources