Node.js with Twit - AND operator issue

I am having an issue getting the Twit Node.js bot to recognize multiple parameters: I want it to retweet only when a tweet is both from one of the users listed in the code and contains a specified hashtag.
var stream = bot.stream('statuses/filter', { follow: approved_users, track: '#somehashtag' });
While this seems to work on an either/or basis, how can it be modified so that it is an AND condition?
I've been looking through the documentation, and while I assume this should be a relatively simple fix, I am not familiar enough with Node.js to see immediately where the issue is.
Any help on this would be greatly appreciated!

Per the Twitter documentation, it is not possible to AND these filter types:
The track, follow, and locations fields should be considered to be combined with an OR operator. track=foo&follow=1234 returns Tweets matching "foo" OR created by user 1234.
The only way you'd be able to do something more complicated in real time would be to use the enterprise PowerTrack API, which is a commercial offering.
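If the AND only needs to hold at retweet time, a common workaround (not a change to the filter itself) is to stream on follow only and check for the hashtag in your own tweet handler before retweeting. A minimal sketch, assuming approved_users is an array of user ID strings and bot is a configured Twit instance as in the question:

var stream = bot.stream('statuses/filter', { follow: approved_users });

stream.on('tweet', function (tweet) {
  // entities.hashtags holds the parsed hashtags, without the leading '#'
  var hashtags = (tweet.entities && tweet.entities.hashtags) || [];
  var hasTag = hashtags.some(function (h) {
    return h.text.toLowerCase() === 'somehashtag';
  });

  // Retweet only when the author is an approved user AND the hashtag is present
  if (hasTag && approved_users.indexOf(tweet.user.id_str) !== -1) {
    bot.post('statuses/retweet/:id', { id: tweet.id_str }, function (err) {
      if (err) console.error('Retweet failed:', err);
    });
  }
});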

Related

Instagram custom search - filter by number of likes

I'm trying to understand how to build a page that retrieves all the images from Instagram that use a specific tag and have a minimum number of X likes. The current tools on the web don't filter by number of likes.
I found things like instafeed.js, but it seems impossible to use them at the moment because of the new Instagram API limits.
I think that it should be quite easy to do this, but I don't know how to proceed :/
It should be pretty straightforward, but there's no code to go on in your question. What programming language are you using? You would ideally use their API for this. I'm not sure about the limits, though; which ones are you referring to?
Take a look at their docs
https://www.instagram.com/developer/
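As a rough sketch of the idea in browser JavaScript with jQuery (assuming the older v1 tag endpoint from that documentation link and a valid access token; the exact endpoint and rate limits may differ under the newer API restrictions you mention), you would fetch the recent media for the tag and filter by likes.count yourself:

var ACCESS_TOKEN = 'YOUR_ACCESS_TOKEN'; // placeholder
var TAG_NAME = 'cats';                  // placeholder tag
var MIN_LIKES = 100;                    // minimum number of likes to keep

var url = 'https://api.instagram.com/v1/tags/' + TAG_NAME +
          '/media/recent?access_token=' + ACCESS_TOKEN + '&callback=?';

// JSONP request via jQuery; response.data is the array of media items
$.getJSON(url, function (response) {
  var popular = response.data.filter(function (media) {
    return media.likes.count >= MIN_LIKES;
  });
  popular.forEach(function (media) {
    console.log(media.images.standard_resolution.url, media.likes.count);
  });
});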

Simple Project in ServiceStack

I'm really struggling with the examples and documentation on ServiceStack. I want to do something really simple, but none of the examples given seem to map exactly onto what I need. I'm also thrown by the new API section on the website and whether it renders the rest of the (basic) documentation obsolete.
I'm just trying to wrap a number of database entities in a service that exposes CRUD REST and SOAP endpoints (need to retain some SOAP support for use by legacy clients/applications).
Let's call these entities
x: id, description
y: id, name
(they are not related in any way - think I can cope with related ones once I get my head around the very basics)
So I've built a solution:
MyAPI
    Global.asax
    Web.config
MyAPI.Logic
    DB access code?
MyAPI.ServiceInterface
    MyAPIService.cs?
MyAPI.ServiceModel
    Operations
        x.cs
        x.Response
        y.cs
        y.Response
    Types (don't think I need this, but I like to over-engineer my early projects to make future changes easier)
Hopefully this seems sensible.
Given the very basic format of entity x, what is the best way to structure x.cs and MyAPIService.cs (I assume entity y would just be treated the same) to achieve basic CRUD operations for both REST and SOAP?
A small point, but can I implement two GETs - one that takes an id (and returns a specific x) and one that doesn't (and returns a list of all x's)?
I've looked at every link on Stack Overflow and servicestack.net already, so please no pointers to those - I think I'm just missing the point of the existing documentation!!
Many Thanks in Advance
Andy

Spotify Metadata API: Search By Artist

The original plan was to write this as a blog post entitled "Inefficiencies in the Spotify Metadata API: Or, How the Jackson 5 Killed My Browser", but I changed my mind at the last minute, as I have a habit of missing the obvious in documentation; perhaps an undocumented feature exists that I have missed, or someone else has solved the issue - hence this question has a certain blog-post tone about it!
I am developing a small web app, mostly for a small group of people, which will allow anyone to update a Spotify playlist. As not everyone has Spotify (though I don't know why!), the page will update a database with songs; an app running in Spotify on my laptop polls the database for updates and then, using the Spotify Apps API, updates the playlist, and anyone subscribing to the playlist gets the update. This is OK, though I would like to use push rather than poll, but that's a topic for another day.
I searched around for a JavaScript library to use with the Spotify Metadata API and found one (https://github.com/palmerj3/SpotifyJS), though it was basically a wrapper and still required you to parse the JSON yourself. Thinking I could go one better and put in some basic parsing for the most common fields (title, artist, album, Spotify URI), I started working on my own library/jQuery plugin.
Search by track is not a problem: it's a single call to the Spotify Metadata API, the results are easily parsed, and matching the returned artist with the requested artist (if present) makes for an easy search by title/artist.
Search by artist (obtaining a list of all songs by a particular artist), though, appears to be a pain in the **! As best I can tell from the docs, this is the process:
1. Search for the artist: this will return a list of artists which match the query.
2. For each artist, look up their albums: this will return a list of albums.
3. Look up each album and retrieve a track list.
4. Compare the artist for each track with the search artist; if it matches, output it.
The first step returns a small list of artist matches: Foo Fighters has 2, Silverchair 1, and The Jackson 5 has 4. This small list turns into a larger number of album matches (from memory, Foo Fighters returned 112), which then turns into an even larger number of track lists. From a JavaScript/jQuery perspective, this leads to daisy-chained AJAX requests for each step, and at each step massive numbers of nearly concurrent GET requests against the Spotify servers.
The initial version I wrote cheated and used synchronous AJAX, and it worked OK, as each request had to complete before the next would start. However, this locked the browser up for some time and removed the possibility of giving the user feedback that the system was running. I then switched to asynchronous requests and all hell broke loose! You immediately hit rate limiting on the Spotify end, which returns responses with 502 Bad Gateway (not listed in the Spotify docs as a status, by the way) or 503 - both of which jQuery interpreted as status code 0, which was interesting and required debugging in Firebug. I throttled the requests down on the client side and found that one per second was about right to avoid rate limiting and ensure I got a response containing data each time. However, this then caused massive lock-ups in the browser, as it had upwards of 30 or 40 GET requests in parallel, all returning at pretty much the same time (though some requests responded after 15+ seconds!), and then had to parse all the JSON responses.
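(As a rough sketch of what I mean by throttling on the client side - illustrative names, not my actual code - the approach amounts to draining a queue of lookup URLs at about one request per second:)

var queue = [];         // URLs still to be fetched
var INTERVAL_MS = 1000; // roughly one request per second

function drainQueue(handleResponse) {
  if (queue.length === 0) { return; }
  $.getJSON(queue.shift(), handleResponse);
  // schedule the next request regardless of when this one responds
  setTimeout(function () { drainQueue(handleResponse); }, INTERVAL_MS);
}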
I looked into alleviating the load by using a server-side approach, though this has downsides as well:
1. You don't avoid the basic issue: the API cannot handle the task in an efficient manner.
2. For a busy site, the bandwidth usage counts against the server, which also presents a single IP; with multiple parallel users you will soon hit the rate limit.
The server side does offer caching, though, which could be beneficial. To this end I found a PHP library, metatune (https://github.com/mikaelbr/metatune), advertised as "The ultimate PHP Wrapper for the Spotify Metadata API", but it unfortunately only offers the same basic lookup/search as the Spotify Metadata API itself, i.e. no listing of all songs by an artist.
Thus, I have now disabled searching by artist, until I find a suitable solution.
Assuming I have not missed anything, it seems to me, at least, that this is not an efficient API design, as it encourages you to make large numbers of requests to the Spotify servers, which is not good for me as a client and not good for Spotify as a server. I can't help but think that if there were a request such as:
ws.spotify.com/search/1/artist.json?q=foo+fighters&extras=tracks
then the issues discussed here would be alleviated: a single request would cover what currently requires three sets of multiple requests; rate limiting wouldn't be as big an issue; the overhead of processing the data on the client would be greatly reduced; the overhead for Spotify would be reduced; and the entire service would be more efficient. The fact that the request would return a very large data set is not an issue, as the API already splits data into "pages".
So, my questions to the crowd:
1. Have I missed something obvious in the documentation, or is there a secret request?
2. In the absence of an API request, does anyone have a suggestion on how to make my system more efficient?
3. Has anyone solved this issue before?
Thanks for reading! It took a long time to get to the questions, but I felt it necessary to provide as much reasoning as possible to find the best solution, and it also illustrates the deficiency in the API, which I hope someone from Spotify will notice!
Finally as an aside, projects like this make me feel like we've swapped Flash for Javascript but the performance is still as bad! Anyone else feel the same?
Cheers!
sockThief
Unless I'm missing something, does this do what you want?
http://ws.spotify.com/search/1/track.json?q=artist:foo+fighters
The artist: prefix tells the search service to only match on artist. You can read more about the advanced search syntax (which also works in the client) here.
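As a rough sketch of how that maps onto the common fields you mentioned (field names as I remember them from the ws.spotify.com JSON; the service pages its results, so check info.num_results and request further pages as needed):

function searchByArtist(artist, page, done) {
  var url = 'http://ws.spotify.com/search/1/track.json' +
            '?q=artist:' + encodeURIComponent(artist) +
            '&page=' + (page || 1);
  $.getJSON(url, function (response) {
    // Pull out title, artist, album and Spotify URI for each track
    var tracks = response.tracks.map(function (t) {
      return {
        title:  t.name,
        artist: t.artists.map(function (a) { return a.name; }).join(', '),
        album:  t.album.name,
        uri:    t.href
      };
    });
    done(tracks, response.info); // info includes num_results and page
  });
}

searchByArtist('foo fighters', 1, function (tracks, info) {
  console.log('Got ' + tracks.length + ' of ' + info.num_results + ' tracks');
});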

How to structure module to be closer/adhere to Node.js philosophy

I am relatively new to Node.js and I am trying to get more familiar with it by writing a simple module. The module's purpose is to take an id, scrape a website, and return an array of dictionaries (objects) with the data.
The data on the website is scattered across pages, where every page is accessed by a different index number in the URI. I've defined a function that takes the id and page_number, scrapes the website via http.request() for that page_number, and on the end event passes the data to another function that applies some regexes to get the data into a structured form.
In order for the module to have complete functionality, all the available page_nums of the website should be scraped.
Is it OK by Node.js style/philosophy to create a standard for() loop to call the scraping function for every page, aggregate the results of every call, and then return them all at once from the exported function?
EDIT
I figured out a solution based on help from #node.js on Freenode. You can find the working code at http://github.com/attheodo/katina_node
Thank you all for the comments.
The common method, if you don't want to bother with one of the libraries mentioned by @ControlAltDel, is to set a counter equal to the number of pages. As each page is processed (asynchronously, so you don't know in what order, nor do you care), you decrement the counter. When the counter reaches zero, you know you've processed all pages and can move on to the next part of the process.
The problem you will probably encounter is recombining all of the aggregated results. There are several libraries out there that can help, including Async and Step. Or you can use a promises library like Fibers.Promise. But the latter is not really node philosophy and requires direct code changes / additions to the node executable.
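A minimal sketch of that counter pattern (illustrative names, assuming a scrapePage(id, pageNumber, callback) function like the one described in the question):

function scrapeAllPages(id, totalPages, scrapePage, done) {
  var remaining = totalPages;
  var results = [];

  for (var page = 1; page <= totalPages; page++) {
    (function (p) {
      scrapePage(id, p, function (err, pageData) {
        if (!err) {
          results[p - 1] = pageData; // keep results in page order
        }
        remaining--;                 // one page done, in whatever order it finished
        if (remaining === 0) {
          done(results);             // all pages processed; hand back the aggregate
        }
      });
    })(page);
  }
}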
With the helpful comments from #node.js on Freenode I managed to find a solution by sequentially calling the scraping function and attaching callbacks, as Node.js philosophy requires.
You can find the code here: https://github.com/attheodo/katina_node/blob/master/lib/katina.js
The code block of interest lies between lines 87 and 114.
Thank you all

Is there a way to download all the statistics on events at once from Flurry?

We are bumping into limitations with Flurry. We use events and parameters to track some gameplay info (like the number of KOs per map), but 1) the limit of 15 parameters per event is a problem, and 2) the visualisation is not good (for instance, KO/map is shown by map, so we have to open each event one after another).
We are trying to build a better visualisation in Excel using the CSV files provided by Flurry, but then we need to download the 50+ CSV files, and that's really not convenient.
Is there a way to get all the information in one CSV or to get the information another way?
As a side note Flurry support is not answering any of our emails. :(
thanks for your help!
Have you tried checking out Playtomic instead? It sounds like it might match your problem better.
They have an API to access your data, so you should be able to access it in real time.
You might also want to check out www.parse.com
