We sometimes observe slowness when retrieving an object from Azure Redis Cache with a key of the pattern "http:____my.website.com", whereas the time to retrieve an object with the key "abc_xyz_def_test_test" is almost consistent, with no spikes like in the first case. The objects stored against "http:____my.website.com" and "abc_xyz_def_test_test" are almost the same size. I have also verified that serializing the object to a custom type is not playing foul here.
Is the slowness caused by the key pattern? Please clarify. Also, how can we overcome this slowness?
The Azure Redis P1 tier (without clustering) is used in this case. Redis metrics such as CPU and memory look normal in the Azure portal.
According to the best practices documentation, there are several ways to improve performance:
Chop up bigger data into multiple keys.
Configure your client library to use a connect timeout of at least 15 seconds (see the sketch after this list).
Scale up to P2 Premium to get higher network bandwidth.
Configure Redis clustering for a Premium Azure Cache for Redis.
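For the connect-timeout item, a minimal sketch using StackExchange.Redis; the host name and access key are placeholders:

```csharp
using StackExchange.Redis;

var options = new ConfigurationOptions
{
    EndPoints = { "mycache.redis.cache.windows.net:6380" }, // placeholder host
    Password = "<access-key>",                              // placeholder key
    Ssl = true,
    ConnectTimeout = 15000,     // connect timeout of at least 15 seconds (ms)
    SyncTimeout = 15000,        // optionally raise the operation timeout too
    AbortOnConnectFail = false  // keep retrying the connection in the background
};

ConnectionMultiplexer connection = ConnectionMultiplexer.Connect(options);
IDatabase db = connection.GetDatabase();
```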
I created a container in CosmosDB that tracks metadata about each API call (timestamp, user id, method name, duration, etc.). The partition key is set to UserId and each id is a random Guid. This container also helps me enforce rate limiting for each user. So far so good. Now, I want to periodically clean up this container by moving records to an Azure Table (or something else) for long-term storage and generate reporting. Migrating records also helps me avoid the 20GB logical partition size limit.
However, I have concerns about whether cross-partition queries will bite me eventually. Say, I want to migrate all records that were created a week ago. Also, let's assume I have millions of active users. Thus, this container sees a lot of activity and I can't specify a partition key in my query. I'm reading that we should avoid cross-partition queries when RU/s and storage size are both big. See this. I have no idea how many physical partitions I'm going to end up dealing with in the future.
Is my design completely off? How can I efficiently migrate records? I'm hoping that the CosmosDB team can see this and help me find a solution to this problem.
The easier approach would be to use a time-to-live (TTL) and just write events/data to both Cosmos DB and Table Storage at the same time, so that the data stays in Table Storage forever but is gone from Cosmos DB when the TTL expires. You can specify the TTL at the document level, so if you need some documents to live longer, that can be done.
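As a rough illustration with the Cosmos DB v3 .NET SDK (which serializes via Json.NET by default), here is a document type with an optional per-item ttl. Note that per-item TTL only takes effect once the container's DefaultTimeToLive is enabled (e.g. set to -1); the class and property names are invented for the example:

```csharp
using Newtonsoft.Json;

public class ApiCallRecord
{
    [JsonProperty("id")]
    public string Id { get; set; }

    [JsonProperty("userId")]
    public string UserId { get; set; }

    [JsonProperty("method")]
    public string Method { get; set; }

    // Per-document TTL in seconds; requires the container's
    // DefaultTimeToLive to be enabled (e.g. -1 = on, no default expiry).
    [JsonProperty("ttl", NullValueHandling = NullValueHandling.Ignore)]
    public int? Ttl { get; set; } // e.g. 7 * 24 * 3600 to keep for one week
}
```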
Another approach might be using the change feed.
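One hedged sketch of that, using the change feed processor from the Cosmos DB v3 .NET SDK and the ApiCallRecord type from the previous sketch. The database, container, processor, and host names are placeholders, a 'leases' container is assumed to exist, and the archival step is a stub for a real Table Storage insert:

```csharp
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class ArchiveFeed
{
    // Starts a change feed processor that copies each changed document
    // to long-term storage as it arrives.
    public static async Task<ChangeFeedProcessor> StartAsync(CosmosClient client)
    {
        Container monitored = client.GetContainer("mydb", "apicalls"); // placeholder names
        Container leases = client.GetContainer("mydb", "leases");

        ChangeFeedProcessor processor = monitored
            .GetChangeFeedProcessorBuilder<ApiCallRecord>(
                processorName: "archiveToTableStorage",
                onChangesDelegate: HandleChangesAsync)
            .WithInstanceName("host-1")
            .WithLeaseContainer(leases)
            .Build();

        await processor.StartAsync();
        return processor;
    }

    static async Task HandleChangesAsync(
        IReadOnlyCollection<ApiCallRecord> changes, CancellationToken ct)
    {
        foreach (ApiCallRecord record in changes)
            await ArchiveAsync(record); // hypothetical Table Storage insert
    }

    static Task ArchiveAsync(ApiCallRecord record) => Task.CompletedTask; // stub
}
```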
Based on your updated comments:
You are writing a Cosmos DB document for each API request.
When an API call is made, you are querying Cosmos DB for all API calls within a given time period, with the partition being the userId. If the document count exceeds the threshold, you return an error such as an HTTP 429.
You want to store API call information for long-term analysis.
If your API is getting a lot of use from a lot of users, Cosmos DB is going to be expensive to scale, both from a storage and a processing standpoint.
For rate limiting, consider this rate-limiting pattern using Redis cache. The StackExchange.Redis package is mature and has lots of guidance and code samples. It'll be a much lighter-weight and more scalable solution to your problem.
So for each API call, you would (see the sketch after this list):
Read the Redis key for the user making the call. Check to see if it exceeds your threshold.
Increment the user's Redis key.
Write the API invocation info to Azure Table Storage, probably with the partition key being the userId and the row key being whatever makes sense for you.
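A minimal sketch of those three steps, assuming StackExchange.Redis plus the classic Microsoft.WindowsAzure.Storage Table SDK. The fixed one-minute window, the threshold, and the key format are illustrative choices rather than part of the pattern:

```csharp
using System;
using Microsoft.WindowsAzure.Storage.Table;
using StackExchange.Redis;

public class ApiCallEntity : TableEntity
{
    public string Method { get; set; }
    public double DurationMs { get; set; }
}

public static class RateLimiter
{
    const int Threshold = 100; // hypothetical: allowed calls per one-minute window

    // Returns false when the caller should get an HTTP 429.
    public static bool TryRecordCall(IDatabase redis, CloudTable table,
                                     string userId, string method, double durationMs)
    {
        // Fixed one-minute window: one counter key per user per minute.
        string key = $"rate:{userId}:{DateTime.UtcNow:yyyyMMddHHmm}";

        // 1. Read the user's counter and check it against the threshold.
        //    (Read-then-increment is not atomic; for strictness, increment
        //    first and compare the value StringIncrement returns.)
        RedisValue current = redis.StringGet(key);
        if (current.HasValue && (long)current >= Threshold)
            return false;

        // 2. Increment the counter and let the key expire with the window.
        redis.StringIncrement(key);
        redis.KeyExpire(key, TimeSpan.FromMinutes(2));

        // 3. Log the invocation to Table Storage, partitioned by user.
        var entity = new ApiCallEntity
        {
            PartitionKey = userId,
            RowKey = $"{DateTime.UtcNow.Ticks:d19}_{Guid.NewGuid():N}",
            Method = method,
            DurationMs = durationMs
        };
        table.Execute(TableOperation.Insert(entity));
        return true;
    }
}
```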
We currently use Redis as the persistent cache for our web application, but given its limited memory and cost I'm starting to consider whether Table Storage is a viable option.
The data we store is fairly basic JSON data with a clear two-part key, which we'd use as the partition and row key in Table Storage, so I'm hoping that would mean fast querying.
I appreciate that one is in memory and one is not, so Table Storage will be a bit slower. But as we scale, I believe there is only one CPU serving data from a Redis cache, whereas with Table Storage we wouldn't have that issue, as throughput would come down to the number of web servers we have running.
Does anyone have any experience of using Table Storage in this way, or comparisons between the two?
I should add that we use Redis in a very minimalist way: get/set and nothing more. We evict our own data and, failing that, leave eviction to Redis when it runs out of space.
This is a fairly broad/opinion-soliciting question. But from an objective perspective, these are the attributes you'll want to consider when deciding which to use:
Table Storage is a durable, key/value store. As such, content doesn't expire. You'll be responsible for clearing out data.
Table Storage scales to 500TB.
Redis is scalable horizontally across multiple nodes (or via the managed Redis service). In contrast, Table Storage provides up to 2,000 transactions/sec on a partition and 20,000 transactions/sec across the storage account; to scale beyond that, you'd need to utilize multiple storage accounts.
Table Storage will have a significantly lower cost footprint than a VM or Redis service.
Redis provides features beyond Azure Storage tables (such as pub/sub, content eviction, etc).
Both Table Storage and Redis Cache are accessible via an endpoint, with many language-specific SDK wrappers around the APIs.
I found some materials about Azure Redis and Table Storage; I hope they can help you. There is a video about Azure Redis that also includes a demo comparing Table Storage and Redis, starting at about the 50th minute.
Perhaps it can serve as a reference, but detailed performance depends on your application, data records, and so on.
The pricing of Table Storage depends on the capacity used; please refer to the pricing details. It is much cheaper than Redis.
There are many differences you might care about, including price, performance, feature set, persistence of data, and data consistency.
Because Redis is an in-memory data store, it is pretty expensive; that is the price of getting low latency. Check out Azure's Redis planning FAQ (linked below) for a general understanding of Redis performance in a throughput sense.
Azure Redis planning FAQ
Redis does have an optional persistence feature that you can turn on if you want your data persisted and restored when the servers have rare downtime. But it doesn't have a strong consistency guarantee.
Azure Table Storage is not a caching solution. It's a persistent storage solution that saves data permanently on some kind of disk. Historically (disclaimer: I have not looked for the latest and greatest performance numbers) it has much higher read and write latency. It is also strictly a key-value store model (with two-part keys). Values can have properties, but with many strict limitations around the size of the objects you can store, the length of properties, and so on. These limitations are inflexible and painful if your app runs up against them.
Redis has a larger feature set. It can do key-value but also has a bunch of other data structures like sets and lists, and many apps can find ways to benefit from that added flexibility.
See 'Introduction to Redis' in the Redis docs.
Cosmos DB could be yet another alternative to consider if you're leaning primarily towards Azure technologies. It is pretty expensive, but quite fast and feature-rich, while also being primarily intended as a persistent store.
I'm designing an application using Azure Storage Blobs/Tables/Queues that handles a massive amount of data.
One important aspect of the application is that work will be done only if a given key doesn't exist, and determining the existence of a key is a frequent and intensive task.
I need to reduce, as much as possible, the billable transactions generated by key existence checks.
The checks could be against either blobs or tables.
I looked at the document Understanding Windows Azure Storage Billing – Bandwidth, Transactions, and Capacity. It seems that 404 errors are not counted as billable, but only for anonymous requests.
I was also thinking of using a TableBatchOperation to check 100 keys at once, maybe using a Replace or Merge, and determining from the results whether each key indeed existed (I haven't tried it; actually, I got the idea while writing this).
Any good hacks are welcome.
You should use Windows Azure Caching:
Load all existing keys in the cache
Each time you add a record to Table Storage, also add it to cache
Once you've done that, have your application check cache first. If the item is not present there, check Table Storage just to be sure (to cover edge cases). But 99% of the time, if the item has already been processed the key will be available in the cache and you won't need to query Table Storage (this will drastically reduce transactions to Table Storage).
If using Windows Azure Caching is not an option, there are alternatives, such as using MemoryCache or saving all the keys in a file.
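A hedged sketch of that cache-first existence check, using in-process MemoryCache in place of Windows Azure Caching and the classic Table Storage SDK as the fallback; the one-hour cache expiry is an illustrative choice:

```csharp
using System;
using System.Runtime.Caching;
using Microsoft.WindowsAzure.Storage.Table;

public static class KeyExistence
{
    static readonly MemoryCache Cache = MemoryCache.Default;

    public static bool Exists(CloudTable table, string partitionKey, string rowKey)
    {
        string cacheKey = partitionKey + "|" + rowKey;

        // 1. Check the local cache first: no storage transaction is billed.
        if (Cache.Contains(cacheKey))
            return true;

        // 2. Edge case: fall back to a point lookup in Table Storage.
        var op = TableOperation.Retrieve<DynamicTableEntity>(partitionKey, rowKey);
        bool exists = table.Execute(op).Result != null;

        // 3. Remember positive results so future checks stay local.
        if (exists)
            Cache.Add(cacheKey, true, DateTimeOffset.UtcNow.AddHours(1));

        return exists;
    }
}
```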
I'm working on an integration project where a third party will call our web service in Azure. For performance reasons I would like to store the data of 2 tables (more than 1,000 records) in the AppFabric Cache.
Could anyone please suggest if this is the right design pattern?
Depending on how much data this is (you don't mention how wide the tables are), you have a couple of options.
You could certainly store it in the Azure cache; this will cost, though.
You might also want to consider storing the data in the HTTP runtime cache, which is free but not distributed (see the sketch below).
Your choice would largely depend on the size of the data, how often it changes, and what effect is caused if someone receives slightly out-of-date data.
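A hedged sketch of the HTTP runtime cache option; the cache key, row type, loader, and five-minute expiry are all assumptions you would tune to how often the data changes:

```csharp
using System;
using System.Collections.Generic;
using System.Web;
using System.Web.Caching;

public class MyRow { public string Id { get; set; } public string Value { get; set; } }

public static class LookupData
{
    // Returns the cached table rows, reloading from storage when absent.
    public static List<MyRow> Get()
    {
        var cached = (List<MyRow>)HttpRuntime.Cache["lookup-rows"];
        if (cached != null)
            return cached;

        List<MyRow> rows = LoadFromTableStorage(); // placeholder for the real query

        // An absolute 5-minute expiry bounds how stale the data can get.
        HttpRuntime.Cache.Insert(
            "lookup-rows", rows, null,
            DateTime.UtcNow.AddMinutes(5), Cache.NoSlidingExpiration);

        return rows;
    }

    static List<MyRow> LoadFromTableStorage() => new List<MyRow>(); // stub
}
```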
I want to host my WCF services in the Azure cloud for scalability reasons. For example, there will be a read-data action, and it will be under high load (1,000+ users/sec).
(Like in my previous question)
I also have a limit of a 1-second timeout for any request.
My service will be connected to SQL Azure. I chose it because of its small latency (not more than 7 ms, according to Microsoft's benchmark).
How many concurrent connections can SQL Azure hold per instance/database?
Is there any way to scale SQL Azure when I reach the limit of connections per instance?
Are there other solutions or options for my scenario?
Thanks.
One thing to keep in mind is that you will need to make sure you are leveraging connection pooling to its maximum. Using a service account instead of different logins is an important step to ensure proper connection pooling.
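For illustration (server, database, and credentials are placeholders): ADO.NET pools per unique connection string, so a single shared service-account string keeps every request in one pool:

```csharp
using System.Data.SqlClient;

public static class Db
{
    // One shared connection string = one connection pool; per-user logins
    // would fragment the pool and defeat reuse. Values are placeholders.
    public const string ConnStr =
        "Server=tcp:myserver.database.windows.net,1433;" +
        "Database=mydb;User ID=serviceaccount;Password=<secret>;" +
        "Encrypt=True;Max Pool Size=200;";

    public static object ReadOne(string sql)
    {
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();               // leased from the pool when possible
            return cmd.ExecuteScalar();
        }                              // Dispose returns the connection to the pool
    }
}
```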
Another consideration is the use of MARS (Multiple Active Result Sets). If you have many requests coming through, you may want to pool them together into a single request, hence a single connection, and return multiple result sets. In this post I discuss how to implement one-way queuing of SQL statements; it may not work for you as-is because you may be expecting a response, but it may give you some ideas on how to batch requests to minimize the number of connections and the wait time.
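A hedged sketch of that batching idea: two statements sent as one command over one pooled connection, read back with NextResult. The table and column names are invented; MARS proper would additionally be enabled by adding MultipleActiveResultSets=True to the connection string:

```csharp
using System.Data.SqlClient;

public static class BatchedReads
{
    public static void ReadCountAndRecent(string connectionString, string userId)
    {
        // Two queries batched into a single round trip on one connection.
        const string sql =
            "SELECT COUNT(*) FROM ApiCalls WHERE UserId = @uid; " +
            "SELECT TOP 10 * FROM ApiCalls WHERE UserId = @uid ORDER BY Ts DESC;";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@uid", userId);
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                reader.Read();
                int total = reader.GetInt32(0); // first result set: the count
                reader.NextResult();            // advance to the second result set
                while (reader.Read())
                {
                    // process the recent rows here...
                }
            }
        }
    }
}
```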
You can also take a look at a tool I wrote last year to test connections/statements against SQL Azure. The tool automatically turns off connection pooling to measure the effects of concurrency. You can download it here.
Finally, I also wrote the Enzo Shard Library on CodePlex. Let me know if you have any questions if you decide to investigate the library for your project. Note that the library will evolve to support the future capabilities of SQL Azure Data Federation as well.
It appears there is no direct limit on the number of connections available per SQL Azure instance, but Microsoft states that it reserves the right to throttle connections in situations where resource use is regarded as "excessive".
There's some information on this here, also details on what may happen in this situation here.
A good workaround is to consider "sharding", where you partition your data on some easily definable criterion and have multiple databases (see the sketch below). This does, of course, incur additional cost. A neat implementation of that is here: http://enzosqlshard.codeplex.com/
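As a rough sketch of the simplest form of sharding, routing each user to one of N databases by hashing the partitioning key; the connection strings and the modulo scheme are purely illustrative (real shard libraries, like Enzo's, support richer maps and re-balancing):

```csharp
public static class ShardRouter
{
    // Illustrative shard map: one connection string per database.
    static readonly string[] Shards =
    {
        "Server=tcp:shard0.database.windows.net;Database=app0;...",
        "Server=tcp:shard1.database.windows.net;Database=app1;...",
    };

    // A stable hash of the sharding key picks the database. Note that
    // changing the shard count re-maps keys, which is why production
    // schemes prefer lookup- or range-based shard maps.
    public static string ConnectionStringFor(string userId)
    {
        uint hash = 2166136261;                // FNV-1a offset basis
        foreach (char c in userId)
            hash = (hash ^ c) * 16777619u;     // FNV-1a prime
        return Shards[hash % (uint)Shards.Length];
    }
}
```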
Also: Azurescope have had some interesting benchmarks here: http://azurescope.cloudapp.net/BestPractices/#ed6a21ed-ad51-4b47-b69c-72de21776f6a (unfortunately, removed early 2012)
Is there any way to scale SQL Azure when I reach the limit of connections per instance?
In addition to the Enzo SQL sharding suggestion, there are a couple of Microsoft products/features under construction to assist with scaling SQL Azure. These are CTP-stage (at best), but they may provide some scalability options for you by allowing you to spread the load across multiple SQL Azure databases:
SQL Azure federations - http://convective.wordpress.com/2011/05/02/sql-azure-federations/
SQL Azure Data Sync - http://www.microsoft.com/windowsazure/sqlazure/datasync/