Rate limit for libspotify

I just started to observe the following error message from [CocoaLibSpotify didLogMessage] in my iOS App.
I [snd:381] Rate limited. Waiting 3s
Does anyone know what the libspotify API rate limit is? I saw that the web API limit is 10/second/IP. Is it the same for libspotify?
Thanks in advance.

libspotify APIs are executed on your local machine - there's no rate limit in calling them.
You've provided pretty much zero context for this, but I think what you're seeing is that you're playing a track and CocoaLibSpotify has filled its playback buffer. libspotify won't deliver audio data faster than roughly 1.5x realtime, so it waits before trying to refill the buffer.
That message is perfectly normal, and the "I" in front of it means it's only an informational message.

I have seen that message in the normal Linux desktop client as well. I only get it when I am browsing music and skip too many tracks too quickly. There is a server-side rate limit on how fast you can fetch new tracks in Spotify. This info message probably relates to that limit, but it could also be what iKenndac writes.

I'm experiencing the same. When I skip through tracks too quickly, it prints out an informational message: "Rate limited. Waiting 3s".
But maybe they should print out a message in the UI?
Otherwise it will confuse inexperienced users.

Find the source of waves of latency from Redis using Node.js

I am investigating some latency issues on my server, and I've narrowed it down but not enough to solve it. I'm hoping someone with more experience with Redis or Node.js can help.
Within a function that is called a few thousand times per minute, scaling up and down with web traffic, I send a GET command to my Redis client to check whether a process is complete. I've noticed increased latency for my web requests, and it appears that the Redis GET command is taking up the bulk of my server time. That surprised me, as I always thought Redis was wicked fast all the time. And if I look at Redis's "time spent" info, it says everything is under 700 microseconds.
That didn't jibe with what I was seeing from my transaction monitoring setup, so I added some logging to my code:
const start = Date.now();
client.get(`poll:${submittedId}`, (err, res) => {
  // Wall-clock milliseconds spent waiting for this GET to complete
  console.log(`${Date.now() - start}`);
  // other stuff
});
Now my logs print the number of milliseconds spent on each Redis GET. I watch that for a while, and see a surprising pattern.
Most of the time, there are lots of 1s and an occasional number in the 10s or sometimes 100s. Then, periodically, all the gets across the server slow down, reaching up to several seconds for each get to complete. Then after a while the numbers curve back down and things are running smoothly again.
What could be causing this sort of behaviour?
Things I've tried to investigate:
Like I mentioned, I've combed through redis's performance data, as presented on Heroku's redis dashboard, and it doesn't have any complaints or latency spikes.
I confirmed that all these requests are coming from a small number of connections, because I read that opening and closing too many can cause issues.
I looked into connection pooling, thinking maybe the transactions are being queued and causing a backlog, but the internet seems to say this isn't necessary for Redis and Node.
Really appreciate anyone taking the time to advise on this!
It seems like Redis is blocking during a background save (bgsave). A few things to check:
Check whether your Redis used memory is large; forking to perform a bgsave on a big dataset can stall the server.
Use the LASTSAVE command to confirm whether the latency spikes line up with background saves.
If AOF is enabled with appendfsync always, turn that off; an fsync on every write is slow.
Use SLOWLOG to check whether some other command is blocking the server.
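If it helps, here's a quick way to run those checks from Node with the same Redis client used above. This is a minimal sketch: the client setup is assumed to mirror whatever your app already uses, and it relies on node_redis exposing each Redis command as a client method.

const redis = require('redis');
const client = redis.createClient(); // assumed: same host/port settings as the app

// Unix timestamp of the last successful background save; if the latency
// spikes line up with changes here, bgsave forking is a likely culprit.
client.lastsave((err, ts) => console.log('last bgsave:', new Date(ts * 1000)));

// If this returns "always", every write is fsynced, which is slow.
client.config('GET', 'appendfsync', (err, res) => console.log('appendfsync:', res));

// The ten slowest recent commands, to see if something else blocks the server.
client.slowlog('GET', 10, (err, entries) => console.log(entries));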

Linux can-bus excessive retransmit

I'm working on a project involving a linux embedded device with CAN bus-support.
I've noticed that if I try to send a CAN-packet without having anything attached to the CAN-bus, the transmit is automatically reattempted by the kernel an unlimited number of times. I can verify this using a scope - the same message is automatically transmitted over and over. This retransmission persists even if I shut down the process which created the message, and even if this process only ever attempts to transmit one single message.
My question is - is this normal behaviour for a linux CAN bus kernel? My worry is that if there is ever something wrong in the device, and it erroneously concludes that it is alone on the bus, the device might possibly swamp the bus making it unusable for other bus participants. I would have expected there to be some sort of retry-limit.
The device is using linux 4.14.48, and the can-chip is Philips SJA1000.
What you are seeing is likely error frames. Compliant behavior is this:
The node is error active. It attempts to send a data frame but gets no ACK bit set, since nobody is listening to it.
It will send out an error frame, which pretty much only consists of 6 dominant bits to purposely break bit stuffing.
The controller will re-attempt to send the message. If a new attempt is made without receiving an ACK, another error frame will be sent. This will keep repeating automatically.
Once the transmit error counter reaches 128 (it increases by 8 for each failed transmission, so after 16 attempts), the node will go error passive, where it will still send error frames, but now at recessive level where it doesn't disrupt other traffic.
Once the counter reaches 256, the node will go bus off and shut up completely.
This should all be handled by the CAN controller hardware, not by the OS. You might need to reset or power cycle the SJA1000 once it goes bus off. If it never goes bus off, then something in the driver code might be continuously resetting the CAN controller after a certain amount of errors.
Mind that microcontroller implementations might act the same and reset upon errors too, since that's typically the only way to re-establish communication after a bus off. This depends on the nature of the CAN application.
Short answer is yes - if an ACK error is the only transmit error, the counter will stop at 128 and the node will not go into BUS OFF. It will retransmit forever. This happened to me as well, and I just turned off the re-transmit function from the processor side. Not sure if that is a CAN standard function or not.
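To make the counter arithmetic above concrete, here is a rough sketch of the error-counter rules from the CAN 2.0 spec, assuming ACK errors are the only errors on the bus. This is an illustrative simulation, not driver code.

let tec = 0; // transmit error counter
let state = 'error-active';

function failedTransmit() {
  // Spec exception: an error-passive transmitter that only sees an ACK
  // error does not increment its TEC further, so it never reaches bus off.
  if (state === 'error-passive') return;
  tec += 8; // TEC increases by 8 for each transmit error
  if (tec >= 256) state = 'bus-off';
  else if (tec >= 128) state = 'error-passive';
}

for (let attempt = 1; attempt <= 20; attempt++) {
  failedTransmit();
  console.log(`attempt ${attempt}: TEC=${tec}, state=${state}`);
}
// After 16 failed frames the TEC hits 128, the node goes error passive and
// stays there, retransmitting forever - matching the behaviour described.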

Expected performance with getstream.io

The getstream.io documentation says that one should expect to retrieve a feed in approximately 60ms. When I retrieve my feeds, they contain a field named 'duration' which I take to be the server-side processing time. This value is steadily around 10-40ms, with an average around 15ms.
The problem is, I seldom get my feeds in less than 150ms, and the average time is around 200-250ms, sometimes up to 300-400ms. This is the time for getting the feed alone, no enrichment etc., and I have verified with tcpdump that the network round trip is low (around 25ms) and that the time is actually spent waiting for the server to respond.
I've tried to move around my application (eu-west and eu-central) but that doesn't seem to affect things much (again, network roundtrip is steadily around 25ms).
My question is - should I really expect 60ms and continue investigating, or is 200-400ms normal? On the getstream.io site it is explained that developer accounts receive "Low Priority Processing" - what does this mean in practice? How much difference could I expect with another plan?
I'm using the node js low level API.
Stream's APIs use SSL to encrypt traffic. Unfortunately SSL introduces additional network I/O. Usually you need to pay for the increased latency only once, because Stream's HTTP APIs support HTTP persistent connections (aka keep-alive).
Here's a Wireshark screenshot of the TCP traffic of 2 sequential API requests with keep alive disabled client side:
The 4 lines in red highlight that the TCP connection is getting closed each time. Another interesting thing is that the handshaking takes almost 100ms and it's done twice (the first bunch of lines).
After some investigation, it turns out that the library used to make API requests to Stream's APIs (request) does not have keep-alive enabled by default. That change will be part of the library soon and is already available on a development branch.
Here's a screenshot of the same two requests with keep-alive enabled (using the code from that branch):
This time there is no connection reset anymore, and the second HTTP request does not do an SSL handshake.
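For anyone hitting the same thing before that fix lands: a minimal sketch of enabling keep-alive yourself in Node, using a shared https.Agent. The host and paths below are illustrative, not Stream's actual endpoints; with the request library, the equivalent is passing forever: true.

const https = require('https');

// One shared agent keeps the TCP+TLS session open between requests.
const agent = new https.Agent({ keepAlive: true });

function get(path, cb) {
  https.get({ host: 'api.example.com', path, agent }, (res) => {
    let body = '';
    res.on('data', (chunk) => (body += chunk));
    res.on('end', () => cb(null, body));
  }).on('error', cb);
}

// Two sequential requests: only the first one pays the handshake cost.
get('/feed/1', () => get('/feed/2', () => console.log('done')));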

Using Fleck Websocket for 10k simultaneous connections

I'm implementing a websocket-secure (wss://) service for an online game where all users will be connected to the service as long as they are playing the game. This will use a high number of simultaneous connections, although the traffic won't be a big problem, as the service is used for chat, storage and notifications... not for real-time data synchronization.
I wanted to use Alchemy-Websockets, but it doesn't support TLS (wss://), so I have to look for another service like Fleck (or other).
Alchemy has been tested with high number of simultaneous connections, but I didn't find similar tests for Fleck, so I need to get some real info from users of fleck.
I know that Fleck is non-blocking and uses async calls, but I need some real info, because it might be abusing threads, the garbage collector, or some other aspect that isn't visible at lower numbers of connections.
I will use C# for the client as well, so I need neither hybi-XX compatibility nor fallbacks; I just need scalability and TLS support.
I finally added Mono support to WebSocketListener.
Check here how to run WebSocketListener in Mono.
10K connections is no small thing. WebSocketListener is asynchronous and it scales well. I have done tests with 10K connections and it should be fine.
My tests show that WebSocketListener is almost as fast and scalable as the Microsoft one, and performs better than Fleck, Alchemy and others.
I made a test on a Windows machine with a Core2Duo E8400 processor and 4 GB of RAM.
The results were not encouraging: Fleck started delaying handshakes after it reached ~1000 connections, to the point where it took about one minute to accept a new connection.
These results improved when I used XSockets, which reached 8000 simultaneous connections before the same thing happened.
I tried to test on a Linux VPS with Mono, but I don't have enough experience with Linux administration, and a few system settings related to TCP etc. needed to be changed to allow a high number of concurrent connections, so I could only reach ~1000 on the default settings; after that the app crashed (in both the Fleck test and the XSockets test).
On the other hand, I tested node.js, and it seemed simpler to manage a very high number of connections; node didn't crash when it reached the TCP limits.
All the tests were echo tests: the server sends each message back to the client that sent it and to one random other connected client, and each connected client sends a random ~30-character text message to the server at a random interval between 0 and 30 seconds. A sketch of this echo setup follows below.
I know my tests are not generic enough and I encourage anyone to run their own tests instead, but I just wanted to share my experience.
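For reference, a minimal sketch of that echo setup in node.js, using the ws package (npm install ws). The port and message handling details are illustrative.

const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (ws) => {
  ws.on('message', (msg) => {
    // Echo back to the sender...
    ws.send(msg);
    // ...and relay to one random other connected client.
    const others = [...wss.clients].filter(
      (c) => c !== ws && c.readyState === WebSocket.OPEN
    );
    if (others.length > 0) {
      others[Math.floor(Math.random() * others.length)].send(msg);
    }
  });
});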
When we decided to try Fleck, we implemented a wrapper for the Fleck server and a JavaScript client API so that we could send acknowledgment messages back to the server. We wanted to test the performance of the server - message delivery time, percentage of lost messages etc. The results were pretty impressive for us, and we are currently using Fleck in our production environment.
We have 4000-5000 concurrent connections during peak hours. On average 40 messages are sent per second. The acknowledged message ratio (acknowledged messages / total sent messages) never drops below 0.994. The average round trip for messages is around 150 milliseconds (the duration between the server sending a message and receiving its ack). Finally, we have not had any memory-related problems with the Fleck server even under heavy usage.

Does setting visibilitytimeout to 7 days mean automatic deletion for an Azure queue?

It seems that the new Azure SDK extends the visibilitytimeout to <= 7 days. I know that by default, when I add a message to an Azure queue, its time-to-live is 7 days. When I get a message out, I can set the visibilitytimeout to 7 days. Does that mean I don't need to delete the message if I don't care about reliability? The message will disappear after 7 days anyway.
I want to take this approach because DeleteMessage is very slow. If I don't delete messages, will that have any impact on the performance of GetMessage?
Based on the documentation for Get Messages, I believe it is certainly possible to set the VisibilityTimeout period to 7 days so that messages are fetched only once. However I see some issues with this approach instead of just deleting the message once the process is done:
What happens when you get the message and start processing it and somehow the process fails? If you set the visibility timeout to be 7 days, then the message would never appear in the queue again and thus the process it was supposed to do never gets done.
Even though the message is hidden, it is still there in the queue, so you keep incurring storage charges for it. The cost is trivial, but why keep a message you don't really need?
A lot of systems rely on the Approximate Messages Count property of a queue to check the health of the processes performed via queue messages. Note that even though you make a message hidden, it is still in the queue and will be included in the queue's total message count. So if you build a system that relies on this for health checks, it will always look unhealthy, because you never delete the messages.
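For comparison, here is a minimal sketch of the usual get/process/delete flow with the azure-storage node package. The queue name is illustrative, and the connection string is assumed to come from the AZURE_STORAGE_CONNECTION_STRING environment variable.

const azure = require('azure-storage');
const queueService = azure.createQueueService();

// Hide the message for 5 minutes (visibilityTimeout is in seconds)
// while we process it, rather than for the full 7 days.
queueService.getMessages('myqueue', { visibilityTimeout: 300 }, (err, messages) => {
  if (err || messages.length === 0) return;
  const msg = messages[0];
  // ...process the message...
  // Delete only after successful processing, so that a failed worker lets
  // the message reappear once the visibility timeout expires.
  queueService.deleteMessage('myqueue', msg.messageId, msg.popReceipt, (delErr) => {
    if (delErr) console.error('delete failed:', delErr);
  });
});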
I'm curious to know why you find deleting messages to be very slow. In my experience this is quite fast. How are you monitoring message deletion?
Rather than hacking around the problem, I think you should drill into understanding why the deletes are slow. Have you enabled logging and looked at the e2elatency and serverlatency numbers across all your queue operations? Ideally you shouldn't see a large difference between the two for your queue operations. If you do see a large difference, it implies something is happening on the client that you should investigate further.
For more information on logging take a look at the following articles:
http://blogs.msdn.com/b/windowsazurestorage/archive/tags/analytics+2d00+logging+_2600_amp_3b00_+metrics/
http://msdn.microsoft.com/en-us/library/azure/hh343262.aspx
More information on client-side logging can be found in this blog post: http://blogs.msdn.com/b/windowsazurestorage/archive/2013/09/07/announcing-storage-client-library-2-1-rtm.aspx
Please let me know what you find.
Jason
