Is there a maximum number of playlists, or of tracks within a playlist, that an application, a user, or the API may create?
There are several areas in the docs where the Playlist object is mentioned, but nothing that actually discusses thresholds:
Web API > Object Model > Playlist (Full)
Web API > Playlist Guide
Yes, the limit is 10,000 and has been there for a long time. (Worth mentioning that the limit for the number of saved tracks and albums is also 10,000.) There's an idea thread on the Spotify Community forums to increase it.
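As a rough sketch of working within that limit, the snippet below checks how many tracks a playlist already holds before adding more; PLAYLIST_ID and ACCESS_TOKEN are placeholders, and the 10,000 figure is simply the cap mentioned above:
// Check the playlist's current track count against the 10,000-track cap
var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://api.spotify.com/v1/playlists/PLAYLIST_ID?fields=tracks.total');
xhr.setRequestHeader('Authorization', 'Bearer ACCESS_TOKEN');
xhr.onload = function () {
  var total = JSON.parse(xhr.responseText).tracks.total;
  console.log(total < 10000 ? 'Room for more tracks' : 'Playlist is at the limit');
};
xhr.send();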
No, there is no limit anymore.
// Fetch the 30 most recent issues for the vuejs/vue repository (GitHub REST API v3)
var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://api.github.com/repos/vuejs/vue/issues');
xhr.onload = function () { console.log(JSON.parse(xhr.responseText)); };
xhr.send();
With the above code I can receive the list of the top 30 issues of the vue project. But if I want to get the top 30 issues whose issue number is less than 8000, how can I do that?
In the GitHub v3 API docs there is only a feature that lets you get issues created since a given point in time.
One way using API V3 would be to traverse through the issues and find those that you want. In any case the call to the Issues API returns issues in descending order of the date of creation. Which means you just need to traverse through the issues to find the ones having issue number lower than 8000.
In the particular case of vuejs/vue, you can increase the number of issues displayed per page to 100 and then find the issues with a number less than 8000 on the second page:
https://api.github.com/repos/vuejs/vue/issues?per_page=100&page=2
I feel this is a better option than using the issue Search API (v3), since you do not have to deal with the very low rate limit of the GitHub Search API.
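As a rough sketch under the assumptions above (per_page=100, page 2, and the asker's threshold of 8000), the filtering could look like this; it is illustrative, not a complete pagination solution:
// Fetch page 2 with 100 issues per page, then keep only issues numbered below 8000
var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://api.github.com/repos/vuejs/vue/issues?per_page=100&page=2');
xhr.onload = function () {
  var issues = JSON.parse(xhr.responseText);
  var wanted = issues.filter(function (issue) { return issue.number < 8000; });
  console.log(wanted.slice(0, 30)); // the top 30 matching issues
};
xhr.send();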
I've got over 100 hours of audio associated with video interviews for a documentary that need to be transcribed to text, hopefully with some kind of timecode markers every 30 seconds or so, so that the video can easily be matched up to the text in the edit suite.
The files are BWAV 24-bit 96 kHz and WAV 16-bit 48 kHz and last anywhere from 20 minutes to 2 hours.
What kind of resources need to be set up in a VM to do this kind of activity? I suspect it will be rather compute intensive, so the VM might need 32 cores and a fair amount of memory, but there is no need for realtime response, so it is fine if priorities are low and it takes several hours to process a file. My budget is minuscule: $300 is about the most we can afford for all the files (which is one reason we aren't sending them out to a transcription service at $75+/hour).
I've already got a Cloud Platform account but have never used it. There is no point in my floundering around if someone has already done something similar and can give me some help.
According to the documentation: https://www.instagram.com/developer/limits/
The rate-limit control works on a sliding time window, and the question is:
At what frequency does the remaining-calls HTTP header (x-ratelimit-remaining) increase? Every second? Every minute? Every hour?
Reading the docs ("5000/hr per token for Live apps"; our company app already went Live), I assumed a frequency limiter recalculated each second or minute, but after several days of trying different strategies the value doesn't seem to follow any deducible behaviour.
Possible answers (depending on how it is coded) could be the following; a sketch of the first model appears right after this list:
(a sliding window, like a frequency limiter)
It regains 1 credit every 720 ms (3600 s in 1 hour / 5000 remaining calls) while no request is made, up to 5000, and decays toward 0 otherwise.
If we make 1 request at exactly that frequency, we should never exhaust the 5000 calls, so we could spend them strategically: dispersed, in bursts, or adapted to traffic.
(a limited sink recharging each hour)
Starting with 5000 remaining, it loses 1 credit per request, no matter the frequency; 1 hour after that first request it goes back to 5000.
It renews to 5000 every hour, counting from the moment the token was used for its first request.
It loses 1 credit per request and resets to 5000 at a fixed clock hour, e.g. at 12:00, 13:00, 14:00, 15:00...
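Here is a minimal sketch of the first hypothesis (a sliding-window / token-bucket limiter that refunds one credit every 720 ms up to a cap of 5000); the class name, numbers and API are purely illustrative, not Instagram's actual implementation:
// Token bucket: `capacity` credits, one credit refunded every `refillMs` milliseconds
function TokenBucket(capacity, refillMs) {
  this.capacity = capacity;   // e.g. 5000 calls
  this.refillMs = refillMs;   // e.g. 720 ms per credit
  this.tokens = capacity;
  this.lastRefill = Date.now();
}
TokenBucket.prototype.tryRequest = function () {
  var now = Date.now();
  var refunded = Math.floor((now - this.lastRefill) / this.refillMs);
  if (refunded > 0) {
    this.tokens = Math.min(this.capacity, this.tokens + refunded);
    this.lastRefill += refunded * this.refillMs;
  }
  if (this.tokens > 0) { this.tokens -= 1; return true; } // request allowed
  return false;                                           // rate limited
};
var limiter = new TokenBucket(5000, 720);
console.log(limiter.tryRequest()); // true while credits remain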
I'm using jInstagram 1.1.7.
After a lot of testing....
I have some tentative conclusions...
Starting from 5000, if you fetch at a uniform rate (one request every 720 ms) you will reach about 500 around minute 50; then Instagram begins giving you credit in portions smaller than 500. So at minute 60 you'll have around 150 remaining calls left, Instagram gives you another portion of credit, you generally get back to about 500, and then it goes down again, of course...
If you stop consuming for roughly 30 minutes, you get your 5000 credits back.
Also, although they give you 5000 remaining calls, they seem to keep counters indexed by IP: if you make requests from different IPs with the same credential, each counter acts as if the others didn't exist.
Besides that, Instagram has trouble keeping a consistent value for the x-ratelimit-remaining HTTP header it returns with every HTTP request.
It looks related to some overwriting, and to some kind of race between the servers replicating the last value.
Shame on you, Instagram; I spent a lot of time adapting my throttling algorithm to your buggy behaviour, assuming you had good engineering down there!
Please fix it so we can play fair with you instead of playing hide-and-seek and stealth tricks.
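One simple way to observe the behaviour described above is to log the x-ratelimit-remaining header on every response and chart it over time. This is only a rough sketch: the endpoint shown is a placeholder for any legacy Instagram API call, and ACCESS_TOKEN stands in for a real token:
// Log the rate-limit header returned with each API response
var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://api.instagram.com/v1/users/self/?access_token=ACCESS_TOKEN');
xhr.onload = function () {
  console.log(new Date().toISOString(),
              'x-ratelimit-remaining:', xhr.getResponseHeader('x-ratelimit-remaining'));
};
xhr.send();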
I use livepkgr to show live events to my viewers. Now I'd like to show them how many people are currently watching the live event.
Please advise me.
Many Thanks.
The number of connected users is available in FMS. Assuming you can customize your player, it is possible to poll FMS for the number of connected users every 10 seconds and show that number in the player.
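A rough sketch of the polling side, assuming you expose a server-side endpoint (here called /viewer-count, a made-up URL you would back with the FMS Administration API) that returns the current count as plain text:
// Poll the assumed /viewer-count endpoint every 10 seconds and show the result
function updateViewerCount() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/viewer-count');
  xhr.onload = function () {
    document.getElementById('viewers').textContent = xhr.responseText;
  };
  xhr.send();
}
setInterval(updateViewerCount, 10000); // every 10 seconds
updateViewerCount();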
Basically I'm trying to replicate YouTube's ability to begin video playback from any part of a hosted movie. So if you have a 60-minute video, a user could skip straight to the 30-minute mark without streaming the first 30 minutes of video. Does anyone have an idea how YouTube accomplishes this?
Well, the player opens the HTTP resource as normal. When you hit the seek bar, the player requests a different portion of the file.
It passes a header like this:
Range: bytes=10001-
and the server serves the resource from that byte range. Depending on the codec, the player may need to read until it reaches a sync frame before it can begin playback.
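A minimal sketch of such a range request, assuming the server supports byte ranges; the URL and byte offset are placeholders:
// Ask the server for the resource starting at byte 10001
var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://example.com/video.mp4');
xhr.setRequestHeader('Range', 'bytes=10001-');
xhr.onload = function () {
  // Status 206 Partial Content means the server honoured the range request
  console.log(xhr.status, xhr.getResponseHeader('Content-Range'));
};
xhr.send();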
Video is a series of frames, played at a frame rate. That said, there are some rules about which frames can be decoded and in what order.
Essentially, you have reference frames (called I-frames) and you have modification frames (called P-frames and B-frames)... It is generally true that a properly configured decoder will be able to join a stream at any I-frame (that is, start decoding), but not at a P- or B-frame... So, when the user drags the slider, you're going to need to find the closest I-frame and decode from that...
This may of course be hidden under the hood of Flash for you, but that is what it will be doing...
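As an illustration of that last step, here is a small sketch that, given a list of keyframe (I-frame) timestamps, picks the latest one at or before the requested seek time; the timestamps are made up:
// Find the latest keyframe at or before seekTime, since decoding must start at a keyframe
function nearestKeyframe(keyframeTimes, seekTime) {
  var best = keyframeTimes[0];
  for (var i = 0; i < keyframeTimes.length; i++) {
    if (keyframeTimes[i] <= seekTime) best = keyframeTimes[i];
    else break;
  }
  return best;
}
var keyframes = [0, 2.0, 4.0, 6.0, 8.0];      // illustrative keyframe positions in seconds
console.log(nearestKeyframe(keyframes, 5.3)); // -> 4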
I don't know how YouTube does it, but if you're looking to replicate the functionality, check out Annodex. It's an open standard that is based on Ogg Theora, but with an extra XML metadata stream.
Annodex allows you to have links to named sections within the video or temporal URIs to specific times in the video. Using libannodex, the server can seek to the relevant part of the video and start serving it from there.
If I were to guess, it would be some sort of selective data retrieval, like the Range header in HTTP; that might even be what they use.