API usage limit in Flurry?

Flurry says that "The rate limit for the API is 1 request per second. In other words, you may call the API once every second." I could not understand this. Does it mean that whenever an event occurs in the mobile application, a request is sent to the server individually, not as a whole batch? Am I right? Any help, please?

When registering events you don't need to worry about API limits. All events you fire are stored locally, and when your session finishes the whole event package is sent to the Flurry server.

You could create a queue in your application with the events you want to register with the API, and continuously try to send all items in that queue at a one-second interval.
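A minimal sketch of that idea in TypeScript (the sendToApi function and the event shape are hypothetical placeholders, not part of Flurry's SDK):

// Drain a local event queue at one request per second to respect the rate limit.
type ApiEvent = { name: string; payload: unknown };

const queue: ApiEvent[] = [];

// Hypothetical function that performs the actual HTTP call to the API.
async function sendToApi(event: ApiEvent): Promise<void> {
  // fetch(...) or SDK call goes here
}

function enqueue(event: ApiEvent): void {
  queue.push(event);
}

// Fires at most one request per second; failed sends go back on the queue.
setInterval(async () => {
  const event = queue.shift();
  if (!event) return;
  try {
    await sendToApi(event);
  } catch {
    queue.unshift(event); // retry on the next tick
  }
}, 1000);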

Related

IIS application HTTP method stops running

I have a web application on an IIS server.
I have a POST method that takes a long time to run (around 30-40 minutes).
After a period of time the application stops running (without any exception).
I set the idle timeout to 0 and it did not help.
What can I do to solve this?
Instead of doing all the work initiated by the request before responding at all:
Receive the request
Put the information in the request in a queue (which you could manage with a database table, ZeroMQ, or whatever else you like)
Respond with a "Request recieved" message.
That way you respond within seconds, which is acceptable for HTTP.
Then have a separate process monitor the queue and process the data on it (doing the 30-40 minute long job). When the job is complete, notify the user.
You could do this through the browser with a Notification or through a WebSocket or use a completely different mechanism (such as by sending an email to the user who made the request).
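As a minimal Express sketch of the accept-and-respond-immediately half of that pattern (the in-memory jobs array stands in for whatever queue you choose, and the route names are illustrative):

import express from "express";
import { randomUUID } from "crypto";

const app = express();
app.use(express.json());

// Stand-in for a real queue (database table, ZeroMQ, RabbitMQ, ...).
const jobs: { id: string; payload: unknown }[] = [];

app.post("/long-job", (req, res) => {
  const id = randomUUID();
  jobs.push({ id, payload: req.body }); // a separate process works this queue
  // Respond within seconds instead of holding the connection for 30-40 minutes.
  res.status(202).json({ message: "Request received", jobId: id });
});

app.listen(3000);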

PubSub REST subscription pull not returning all messages

We use the REST service API to pull messages from a PubSub subscription. Messages ready to be serviced are acknowledged, leaving other messages unacknowledged to be serviced during a later execution cycle.
During an execution cycle, we send a single request to the pull service REST API with returnImmediately=true and maxMessages=100.
While testing we encountered a situation where only 3 "old" messages would be returned during each execution cycle. Newly published messages were never included in the pull response. We verified that new messages were successfully arriving at the subscription by monitoring the undelivered-messages metric in Stackdriver Monitoring.
Does the pull REST API not include all undelivered messages?
Does it ignore the maxMessages parameter?
How should all messages, up to the maximum specified, be read with the REST API?
Notes:
We worked around the problem by sending 2 parallel requests to the pull API and merging the results. We found the workaround (requiring parallel requests) discussed here.
Update Feb. 22, 2018
I wrote an article on our blog that explains why we were forced to use the PubSub service REST API.
A single pull call will not necessarily return all undelivered messages, especially when returnImmediately is set to true. Pull promises to return at most maxMessages; it does not mean that it will always return maxMessages, even if that many messages are available.
The pull API tries to balance returning more messages with keeping end-to-end latency low. It would rather return a few messages quickly than wait a long time to return more messages. Messages need to be retrieved from storage or from other servers and so sometimes all of those messages aren't immediately available to be delivered. A subsequent pull request would then receive other messages that were retrieved later.
If you want to maximize the chance of receiving more messages with a pull request, set returnImmediately to false. This still will not guarantee that messages will all be delivered in a single pull request, even if maxMessages is greater than the number of messages yet to be delivered. You should still send subsequent pull requests (or even more ideally, several pull requests at the same time) to retrieve all of the messages.
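A rough sketch of the several-pulls-at-the-same-time approach against the REST endpoint (the project and subscription names are placeholders, and how you obtain the OAuth token is left out):

// Issue several pull requests in parallel and merge the results.
const SUBSCRIPTION = "projects/my-project/subscriptions/my-sub"; // placeholder
const PULL_URL = `https://pubsub.googleapis.com/v1/${SUBSCRIPTION}:pull`;

async function pullOnce(token: string): Promise<unknown[]> {
  const res = await fetch(PULL_URL, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    // returnImmediately=false raises the chance of a fuller batch.
    body: JSON.stringify({ returnImmediately: false, maxMessages: 100 }),
  });
  const data = await res.json();
  return data.receivedMessages ?? [];
}

async function pullAll(token: string, parallel = 3): Promise<unknown[]> {
  const batches = await Promise.all(
    Array.from({ length: parallel }, () => pullOnce(token))
  );
  return batches.flat();
}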
Alternatively, you should consider switching to the Google Cloud Pub/Sub client libraries, which handle all of this under the hood and deliver messages to a callback you specify as soon as they are available.

How to make multiple API calls with rate limits per user using RabbitMQ?

In my app I am getting data on behalf of different users via one API which has a rate limit of 1 API call every 2 seconds per user.
Currently I am storing all the calls I need to make in a single message queue. I am using RabbitMQ for this.
There is currently one consumer who is taking one message at a time, doing the call, processing the result and then start with the next message.
The queue is filling up faster than this single consumer can make the API calls (1 call every 2 seconds, since I don't know which user comes next and I don't want to hit API limits).
My problem now is that I don't know how to add more consumers. In theory this would be possible, since the queue holds jobs for different users and the API rate limit is per user; e.g. I could do 2 API calls every 2 seconds if they are from different users.
However, I have no information about the messages in the queue. They could be from a single user, or from many different users.
The only solution I see right now is to create a separate queue for each user. But I have many different users (say 1,000) and would rather stay with 1 queue.
If possible I would stick with RabbitMQ as I use this for other similar tasks as well. But if I need to change my stack I would be willing to do so.
App is using the MEAN stack.
You will need to maintain state somewhere. I had a similar application, and what I did was maintain state in Redis: before every call, check whether the user has made a request in the last 2 seconds, e.g.:
Redis key:
user:<user_id> // value is an epoch timestamp
Update Redis once the request is made.
Reference: the Redis documentation.
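A minimal sketch of that check with the ioredis client (the key naming follows the answer; the requeue step is a placeholder for however your consumer nacks a RabbitMQ message):

import Redis from "ioredis";

const redis = new Redis(); // defaults to localhost:6379

// Returns true and atomically claims the user's 2-second slot, or returns
// false if that user already made a call within the last 2 seconds.
async function tryAcquire(userId: string): Promise<boolean> {
  // SET key value PX 2000 NX: only succeeds if the key does not exist,
  // and the key expires on its own after 2000 ms.
  const result = await redis.set(`user:${userId}`, Date.now(), "PX", 2000, "NX");
  return result === "OK";
}

// In the consumer: process the message if the user is free, otherwise requeue.
async function handleMessage(userId: string, doApiCall: () => Promise<void>) {
  if (await tryAcquire(userId)) {
    await doApiCall();
  } else {
    // Placeholder: nack/requeue the message so a consumer retries it later.
  }
}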

API with Work Queue Design Pattern

I am building an API that is connected to a work queue, and I'm having trouble with the structure. What I'm looking for is a design pattern for a worker queue that is interfaced via an API.
Details:
I'm using a Node.js server and Express to create an API that takes a request and returns JSON. These requests can take a long time to process (very data intensive), which is why we use a queuing system (RabbitMQ).
So for example, let's say I send a request to the API that will take 15 minutes to process. The Express API formats the request and puts it in a RabbitMQ (AMQP) queue. The next available worker takes the request off the queue and starts to process it. After it's done (in this case 15 minutes) it saves the data into MongoDB. ... now what ...
My issue is, how do I get the finished data back to the caller of the API? The caller is a completely separate program that contacts the API via something like an Ajax request.
The worker will save the processed data into a database but I have no way to push back to the original calling program.
Does anyone have any API with a work queue resources?
Please and thank you.
On the client's initiating call, return a task identifier that will persist with the data all the way to MongoDB.
You can then provide an additional API method for the client to check the task's status. This method should take a single parameter, the task identifier, and check if a document with that identifier has made it into your collection in MongoDB. Return false if it doesn't exist yet, true when it does.
The client will have to repeatedly poll the task-status API method (perhaps at a 1-minute interval) until it returns true.
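A sketch of that pattern in Express (the queue publish call, MongoDB URI, and collection names are placeholders for your own setup):

import express from "express";
import { randomUUID } from "crypto";
import { MongoClient } from "mongodb";

const app = express();
app.use(express.json());

const mongo = new MongoClient("mongodb://localhost:27017"); // placeholder URI
const results = mongo.db("app").collection("results");      // placeholder names

app.post("/jobs", (req, res) => {
  const taskId = randomUUID();
  // Placeholder: publish { taskId, ...req.body } to the RabbitMQ queue here.
  res.json({ taskId });
});

// The client polls this until it returns { done: true }.
app.get("/jobs/:taskId/status", async (req, res) => {
  const doc = await results.findOne({ taskId: req.params.taskId });
  res.json({ done: doc !== null });
});

mongo.connect().then(() => app.listen(3000));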

Azure Storage Queue - correlate response to request

When a Web Role places a message onto a Storage Queue, how can it poll for a specific, correlated response? I would like the back-end Worker Role to place a message onto a response queue, with the intent being that the caller would pick the response up and go from there.
Our intent is to leverage the Queue in order to offload some heavy processing onto the back-end Worker Roles in order to ensure high performance on the Web Roles. However, we do not wish to respond to the HTTP requests until the back-end Workers are finished and have responded.
I am actually in the middle of making a similar decision. In my case I have a WCF service running in a web role which should offload calculations to worker roles. When the result has been computed, the web role will return the answer to the client.
My basic data-structure knowledge tells me that I should avoid using something that is designed as a queue in a non-queue way. That means a queue should always be serviced in a FIFO-like manner. So if you use queues for both requests and responses, the threads waiting to return data to the client will have to wait until their calculation message is at the "top" of the response queue, which is not optimal. If you store the responses in Azure tables instead, the threads poll for messages, creating unnecessary overhead.
What I believe is a possible solution to this problem is using a queue for the requests. This enables use of the competing consumers pattern and thereby load balancing. On messages sent into this queue you set the correlationId property of the message. For the reply, the pub/sub part ("topics") of Azure Service Bus is used together with a correlation filter. When your back end has processed the request, it publishes a result to a response topic with the correlationId given in the original request. Now this response can be retrieved by your client by calling CreateSubscription (sorry, I can't post more than two links apparently; google it) using that correlation filter, and it should get notified when the answer is published. Notice that the CreateSubscription part should be done just once, in the OnStart method. Then you can do an async BeginReceive on that subscription and the role will be notified in the given callback when a response for one of its requests is available. The correlationId will tell you which request the response is for. So your last challenge is getting this response back to the thread holding the client connection.
This could be achieved by creating a Dictionary with the correlationIds (probably GUIDs) as keys and responses as values. When your web role gets a request, it creates the GUID, sets it as the correlationId, adds it to the dictionary, fires the message to the queue and then calls Monitor.Wait() on the GUID object. Then have the receive method invoked by the topic subscription add the response to the dictionary and call Monitor.Pulse() on that same GUID object. This awakens your original request thread, and you can now return the answer to your client. (Basically you just want your thread to sleep and not consume any resources while waiting.)
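The same correlate-and-wake idea sketched in TypeScript, with a map of pending promises standing in for the Dictionary plus Monitor.Wait()/Monitor.Pulse() (the queue publish and the response subscription are placeholders for your transport):

import { randomUUID } from "crypto";

// Pending requests keyed by correlationId; the stored resolver wakes the
// waiting request handler when the matching response arrives.
const pending = new Map<string, (response: unknown) => void>();

// Called by the web tier: enqueue the work and wait for the reply.
function sendRequest(payload: unknown): Promise<unknown> {
  const correlationId = randomUUID();
  return new Promise((resolve) => {
    pending.set(correlationId, resolve);
    // Placeholder: publish { correlationId, payload } to the request queue.
  });
}

// Called by the response subscription when a reply message arrives.
function onResponse(correlationId: string, response: unknown): void {
  const resolve = pending.get(correlationId);
  if (resolve) {
    pending.delete(correlationId);
    resolve(response); // wakes the caller awaiting sendRequest
  }
}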
The queues on Azure Service Bus have a lot more capabilities and paradigms, including pub/sub, which can address issues dealing with queue servicing across multiple instances.
One approach with pub/sub is to have one queue for requests and one for the responses. Each requesting instance would also subscribe to the response queue with a filter on the header such that it would only receive the responses targeted for it. The request message would, of course, contain the value to be placed in the response header to drive the filter.
For the Service Bus-based solution there are samples available for implementing the request/response pattern with queues and topics (pub/sub).
Let the worker role keep polling and processing messages. As soon as a message is processed, add an entry in Table storage with the required correlationId (RowKey) and the processing result, before deleting the processed message from the queue.
Then the web roles just need to do a lookup of the table with the desired correlationId (RowKey) and PartitionKey.
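A sketch of that lookup with the @azure/data-tables client (the connection string, table name, and the result column are placeholders):

import { TableClient } from "@azure/data-tables";

const table = TableClient.fromConnectionString(
  "<storage-connection-string>", // placeholder
  "results"                      // placeholder table name
);

// Returns the processing result once the worker has written it, else null.
async function lookupResult(
  partitionKey: string,
  correlationId: string
): Promise<unknown | null> {
  try {
    const entity = await table.getEntity(partitionKey, correlationId);
    return entity.result; // column written by the worker alongside the RowKey
  } catch (err: any) {
    if (err.statusCode === 404) return null; // not processed yet
    throw err;
  }
}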
Have a look at using SignalR between the worker role and the browser client. Your web role puts a message on the queue, returns a quick result to the browser (something simple like 'waiting...'), and hooks the browser up to the worker role with SignalR. That way your web role carries on doing other stuff and doesn't have to wait for a result from the asynchronous processing; only the browser does.
There is nothing intrinsic to Windows Azure queues that does what you are asking. However, you could build this yourself fairly easily. Include a message ID (GUID) in your push to the queue and when processing is complete, have the worker push a new message with that message ID into a response channel queue. Your web app can poll this queue to determine when processing is completed for a given command.
We have done something similar and are looking to use something like SignalR to help reply back to the client when commands are completed.
