Stargate API Rate Limiter - cassandra

Planning a migration from DataStax Enterprise to Astra DB, and I'm curious about a few points:
Is rate limiting imposed on the API endpoints exposed by Stargate?
If so, what kind of rate-limiting algorithm is used?
What are the expected errors if the rate limiting threshold is exceeded?

By default, Astra DB instances on the free tier are limited to 4K operations per second. Paying customers can have the rate limits raised for their databases.
To respond to your questions directly:
Limits are not imposed on Stargate API endpoints directly -- the limits are on the database itself.
We can't share that information publicly here, but if you get in contact with us directly, we would be happy to discuss it.
The clients will get generic timeout and/or authentication errors.
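Since the limit is enforced on the database rather than per endpoint, and clients only see generic timeout errors rather than an explicit 429, it can help to throttle on the client side. A minimal token-bucket sketch (Python; the class is illustrative and not part of any Astra SDK, and the 4,000 ops/sec figure is the free-tier limit quoted above):

```python
import time

class TokenBucket:
    """Simple token bucket: refills at `rate` tokens/sec up to `capacity`."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost=1):
        # Refill proportionally to the elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

# Throttle to the free-tier ceiling of 4,000 ops/sec mentioned above.
bucket = TokenBucket(rate=4000, capacity=4000)
if bucket.allow():
    pass  # issue the Stargate API call here
```

Calls that would exceed the budget can then be delayed or rejected locally instead of surfacing as opaque timeouts.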

Related

Azure Cosmos DB: requests exceeding rate limit when bulk deleting records

I have one user bulk deleting some 50K documents from one container using a stored procedure.
Meanwhile another user is trying to log in to the web app (connected to the same Cosmos DB), but the request fails because the rate limit is exceeded.
What should be the best practice in this case in order to avoid service shortages like the one described?
a) Should I provision RUs by collection?
b) Can I set a cap on the RUs consumed by bulk operations from code when making a request?
c) Is there any other approach?
More details on my current (naive/newbie) implementation:
Two collections : RawDataCollection and TransformedDataCollection
Partition key values are the customer account number
RUs set at the database level (current dev deployment has the minimum 400 RUs)
Bulk insert/delete actions are needed in both collections
User profile data (for login purposes, etc.) is stored in RawDataCollection
Bulk actions are low priority in terms of service level, meaning it could be put on hold or something if a higher priority task comes in.
Normally when a user logs in, they retrieve small amounts of information. This is high priority in terms of service level.
It is recommended not to use stored procedures for bulk delete operations. Stored procedures only operate on the primary replica, meaning they can only leverage 1/4 of the total RU/s provisioned. You will get better throughput usage and more efficiency doing bulk operations with the SDK client in bulk mode.
Whether you provision throughput at the database level or the container level depends on a couple of things. If you have a large number of containers that get roughly the same number of requests and storage, database-level throughput is fine. If the requests and storage are asymmetric, then provision the containers that diverge greatly from the others with their own dedicated throughput. Learn more about the differences.
You cannot throttle requests on a container directly. You will need to implement Queue-based load leveling in your application.
Overall, if you've provisioned 400 RU/s and are trying to bulk delete 50K records, you are under-provisioned and need to increase throughput. In addition, if your workload is highly variable, with long periods of little to no requests and short bursts of high volume, you may want to consider using serverless throughput or autoscale.
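The queue-based load leveling suggested above can be sketched roughly like this (Python; `delete_fn` and the rate are placeholders for whatever SDK call and RU budget you actually use). Low-priority bulk work goes through a queue, and a worker drains it at a capped rate so interactive logins keep most of the provisioned RU/s:

```python
import queue
import threading
import time

# Bulk deletes are enqueued instead of being executed immediately.
bulk_queue = queue.Queue()

def bulk_worker(delete_fn, stop, ops_per_second=50):
    """Drain the bulk queue at a capped rate until `stop` is set."""
    interval = 1.0 / ops_per_second
    while not stop.is_set():
        try:
            doc_id = bulk_queue.get(timeout=0.1)
        except queue.Empty:
            continue
        delete_fn(doc_id)      # e.g. the Cosmos SDK delete call
        bulk_queue.task_done()
        time.sleep(interval)   # spread the RU consumption out over time
```

The worker's rate cap is the knob: lower it when interactive traffic is high, or pause the worker entirely to put bulk work on hold, which matches the "low priority" service level described in the question.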

What is throttled search query in azure search?

I'm using Azure Search for my app, and lately I'm facing performance issues.
Currently I'm investigating the problem, and I came across the following article:
https://learn.microsoft.com/en-us/azure/search/search-performance-optimization#scaling-azure-search-for-high-query-rates-and-throttled-requests
It says:
Scaling Azure Search for high query rates and throttled requests
When you are receiving too many throttled requests or exceed your target latency rates from an increased query load, you can look to decrease latency rates in one of two ways:
Increase Replicas: A replica is like a copy of your data, allowing Azure Search to load balance requests against the multiple copies. All load balancing and replication of data across replicas is managed by Azure Search, and you can alter the number of replicas allocated for your service at any time. You can allocate up to 12 replicas in a Standard search service and 3 replicas in a Basic search service. Replicas can be adjusted either from the Azure portal or PowerShell.
Increase Search Tier: Azure Search comes in a number of tiers, and each of these tiers offers different levels of performance. In some cases, you may have so many queries that the tier you are on cannot provide sufficiently low latency rates, even when replicas are maxed out. In this case, you may want to consider leveraging one of the higher search tiers, such as the Azure Search S3 tier, which is well suited for scenarios with large numbers of documents and extremely high query workloads.
Now I can't figure out what "throttled requests" means.
Google didn't help!
Azure Search starts throttling requests when the error rate (requests failing with 207 or 503 status codes) exceeds a certain threshold. The best strategy is to use an exponential retry policy on 207 and 503 responses to control the load and avoid throttling altogether.
Throttled requests have the throttle-reason response header that contains information about why the request was throttled. It appears we haven't documented that; we'll work on fixing that.
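The exponential retry policy recommended above can be sketched like this (Python; `send_request` is a stand-in for however you actually call the search service):

```python
import random
import time

def search_with_retry(send_request, max_attempts=5, base_delay=0.5):
    """Retry a search call with exponential backoff plus jitter whenever
    the service responds with 207 or 503 (the throttling status codes
    mentioned above)."""
    for attempt in range(max_attempts):
        status, body = send_request()
        if status not in (207, 503):
            return status, body
        if attempt < max_attempts - 1:
            # Double the delay each attempt, with random jitter so that
            # many throttled clients don't all retry in lockstep.
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)
    return status, body
```

Backing off this way reduces the load that triggered the throttling in the first place, rather than hammering the service with immediate retries.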

How to make API rate limit policy in loopback

I want to implement API request rate limiting per account plan. So let's say we have users, and every user has a plan that sets a limit on how many API requests per day they can make.
How can I make an API rate-limit policy in Loopback 3.x?
Thanks
If you're planning on using Loopback on IBM Bluemix hosting you can use their API Connect service that includes customer plan based policies with API level throttling, monitoring, API billing and many other API management features.
StrongLoop API Microgateway is used by API Connect but is now open sourced (Apr 2017).
Since Loopback is just a layer on top of Express, you can alternatively just use an Express lib.
For rate limiting on a single standalone Loopback server you can use one of these Express libs:
express-rate-limit
express-throttle
If you plan to use this on a cluster of Loopback servers, you'll need to store the API call counts as part of the shared server state for each user or user session. The weapon of choice for this is Redis, since it's a high-performance in-memory data store that can be scaled. Rate-limiting Express libs that support Redis include:
strict-rate-limiter
express-brute
express-limiter
Finally, you could also implement rate limiting on a reverse proxy. See Nginx Rate Limiting
This is an access control policy.
You can handle it with custom roles created by a role resolver: create a custom role and, in the resolver callback, check whether the current user has exceeded the rate limit.
Such a policy can only* be made with a database, such as Redis/Memcached. For my projects I rely on redback, which is based on Redis. It has a built-in RateLimit helper (among others), and it takes care of some race conditions and atomic transactions.
* If you don't have a database, you could store it in memory (in a hash or array) and use intervals to flush it, but I'd go with redback :)
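For reference, the fixed-window daily counter that these Redis-backed libraries implement (essentially INCR plus EXPIRE per user per day) looks roughly like this. This is a single-server, in-memory sketch in Python with made-up plan names and limits, just to show the shape of the algorithm, not a Loopback-specific implementation:

```python
import time

PLAN_LIMITS = {"free": 1000, "pro": 100_000}  # requests/day; example values

class DailyQuota:
    """Fixed-window counter per user, mirroring the Redis INCR+EXPIRE
    pattern (in-memory here, so it only works on a single server)."""
    def __init__(self, limits=PLAN_LIMITS, window=86_400, clock=time.time):
        self.limits = limits
        self.window = window
        self.clock = clock
        self.counts = {}  # user_id -> (window_start, count)

    def allow(self, user_id, plan):
        now = self.clock()
        start, count = self.counts.get(user_id, (now, 0))
        if now - start >= self.window:   # the day's window expired: reset
            start, count = now, 0
        if count >= self.limits[plan]:
            return False                 # over quota: reject (e.g. HTTP 429)
        self.counts[user_id] = (start, count + 1)
        return True
```

In a role resolver, you would call something like `allow()` with the current user's id and plan, and deny access when it returns false.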

What's the Azure bandwidth pricing?

I'm a bit confused by the Azure price calculator. In particular, it doesn't explain the bandwidth pricing.
I'm considering Azure for a RESTful API that is going to use blobs for most data storage, together with a SQL Server database for a subset that is easier to manage with a relational approach.
In this application a lot of data will enter the system through the REST API, but only a small fraction will be exposed to the clients (mainly as summary reports). Still, the total bandwidth required should be on the order of 50 GiB/mo.
On Azure's pricing page for data transfer I see that pricing applies only to outgoing data, but I cannot figure out how this relates to a REST API hosted in Azure App Service.
I mean, it could just mean that I'm going to pay for the bandwidth consumed by HTTPS responses (and not by HTTPS requests), but it seems a bit hard to estimate what this pricing is going to be.
Within a given region, there are no transfer costs at all. You mentioned using App Service, blobs, and SQL Database. As long as those services are within a single region, there are zero bandwidth costs as data flows between them and any other service within that region.
Bandwidth is billed specifically for outbound transfer. So, essentially you're metered for all data leaving a given region.
If you look at the page Data Transfers Pricing Details
Data Transfers refer to data moving in and out of Azure data centres other than those explicitly covered by the Content Delivery Network or ExpressRoute pricing.
Inbound data transfers
(i.e. data going into Azure data centres): Free
Outbound data transfer prices are set at a sliding scale depending on location and bandwidth used.
Inbound traffic is free, so the data coming in can be removed from the equation. Outbound is not free, and you saw the pricing page.
Data transfer covers everything that goes out from every operation you execute.
It is hard to estimate the traffic pricing; I would recommend registering for an Azure trial and testing it for a month to see how it goes, because your data is not only what is returned: there are protocol payloads coming with it.
But if you estimate 10 GB/month of outbound traffic, then it will cost $0.087 per GB starting from the fifth GB (because the first 5 GB are free). Different regions are described on the pricing page as well, so you should apply the pricing according to the region where your website is hosted.
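Using just the first pricing tier quoted above (first 5 GB free, then $0.087/GB; higher-volume tiers and regional differences are ignored here), the estimate is simple arithmetic:

```python
def outbound_cost(gb, free_gb=5, rate_per_gb=0.087):
    """Estimate monthly outbound transfer cost from the first pricing
    tier quoted above: the first `free_gb` GB are free, the rest are
    billed at `rate_per_gb` dollars per GB."""
    return max(0.0, gb - free_gb) * rate_per_gb
```

For the 10 GB example that's (10 - 5) x $0.087, about $0.44/month, and for the asker's 50 GiB estimate roughly (50 - 5) x $0.087, about $3.92/month; either way, bandwidth is unlikely to dominate the bill at this scale.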

Limitations of Amazon RDS's Trial

I'm not sure whether this is the right place to ask this question, but I'd like to give Amazon RDS's trial a go. Previously I used Microsoft SQL Azure's trial, and they cut me off as soon as I overshot the limit, preventing me from paying a single cent.
However, with Amazon RDS's trial, it seems that I will be charged as soon as I exceed their limits. I'd just like to know if there's anything in particular I should look out for that I might miss, and be charged for because of it.
Of course, I'd prefer it if there is a way for me to prevent me from exceeding the free-of-charge limits.
Many thanks...
As far as I know, you can't set a hard limit. As far as I can tell, the only limits you can hit are the time limit or the I/O limit: RDS won't magically grow your storage size or your instance size for you.
You can, however, set up a billing alert: Amazon billing charges are available as a metric in CloudWatch (Amazon's monitoring system), so you can create alerts based on them (for example, to send you an email). You can set this up from the account activity page, or you can configure the alerts as you would with any other CloudWatch metric.
