I need my Web API to have caching to store some data. I have been researching caching on Azure, and I can see that Microsoft recommends using Redis Cache.
Can I use the server's normal in-memory cache to store simple data that will only be accessed by the Web API, or is Redis my only option?
Are there any limitations on server memory in Azure?
Of course you can; Azure Redis Cache is not the only option. You can consider using Microsoft.Extensions.Caching.Memory (e.g., MemoryCache) to store your simple data. Just make sure the size of your data stays below the memory available to your App Service instance, which is restricted by the memory limit of the tier you run on; see App Service limits.
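For illustration, a minimal sketch using MemoryCache from Microsoft.Extensions.Caching.Memory (the key name and size limit are made up):

```csharp
using Microsoft.Extensions.Caching.Memory;

// In-memory cache for simple data; entries are lost on app restarts
// and are not shared across scaled-out instances.
var cache = new MemoryCache(new MemoryCacheOptions
{
    SizeLimit = 1024 // optional cap; units are whatever you assign per entry
});

cache.Set("greeting", "hello", new MemoryCacheEntryOptions
{
    Size = 1, // each entry must declare a size when SizeLimit is set
    AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5)
});

if (cache.TryGetValue("greeting", out string value))
{
    Console.WriteLine(value);
}
```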
Meanwhile, you could even consider implementing your Web API with Azure Functions using an HTTP trigger, which supports more memory; please refer to Functions limits.
Hope it helps.
Related
I have a web app hosted in Azure App Service. The application has an in-memory cache. I'm planning to enable auto-scale in App Service for when server traffic is high.
What will happen to the in-memory cache?
What is the best way to handle this?
Well, you will have n in-memory cache instances. That might be OK, but you might want to look at a distributed cache like Azure Redis Cache, or another ready-made distributed implementation of IDistributedCache as found here (assuming you use .NET):
Distributed SQL Server cache
Distributed Redis cache
Distributed NCache cache
If you keep using the in-memory cache, each new web app instance will start with an empty cache and it will fill based on the requests to that particular instance.
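For example, here's a rough sketch of wiring up a Redis-backed IDistributedCache in ASP.NET Core, assuming the Microsoft.Extensions.Caching.StackExchangeRedis package; the connection string and cache keys are placeholders:

```csharp
using Microsoft.Extensions.Caching.Distributed;

var builder = WebApplication.CreateBuilder(args);

// Register Azure Redis Cache as the IDistributedCache implementation.
builder.Services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = "mycache.redis.cache.windows.net:6380,password=...,ssl=True";
    options.InstanceName = "myapp:";
});

var app = builder.Build();

// All scaled-out instances now read and write the same cache.
app.MapGet("/cached", async (IDistributedCache cache) =>
{
    var value = await cache.GetStringAsync("greeting");
    if (value is null)
    {
        value = "hello";
        await cache.SetStringAsync("greeting", value,
            new DistributedCacheEntryOptions
            {
                AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5)
            });
    }
    return value;
});

app.Run();
```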
In an Azure Web App I need to efficiently query the MaxMind GeoIP2 City database (due to the volume of queries and the latency requirements, we cannot use MaxMind's REST API).
I'm wondering what the best approach is for storing the database (binary MMDB format, accessed via the official .NET API) so that it's easy to update with minimal downtime (we are going to subscribe to monthly updates) and still cost-effective with regard to Azure storage and transactions.
Apparently block blobs are the way to go, but I'm not sure how to handle the monthly updates, and the GeoIP2 API loads the whole database into memory (I don't know whether that would be a problem for the Web App, whether I'd need a worker to keep it loaded, or whether I'd need something else); in fact, I don't yet know how large the file is.
What's the most cost-effective solution that preserves low latency over a huge volume of queries?
According to the API docs, you must have the database available on a file system (the API doesn't know anything about Azure Storage and its REST API). So, regardless of where you permanently store it, you'll need to have it on a disk somewhere.
I have no idea how large the database footprint is, but Web Apps, Cloud Services (web/worker roles), and Virtual Machines (whether Linux or Windows) all have local disks, and you have read/write access to them. So you'd need to copy the database binary file (or CSV) to local disk from somewhere. Then, when you initialize the SDK, you'd create a DatabaseReader and point it at your locally downloaded copy of the database file.
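For instance, once the file is on local disk, initializing the reader with the official MaxMind.GeoIP2 package looks roughly like this (the file path and IP address are illustrative):

```csharp
using MaxMind.GeoIP2;

// The .mmdb file must already be on local disk (downloaded from blob
// storage or bundled with the app); the whole database is memory-mapped.
using (var reader = new DatabaseReader(@"D:\local\GeoLite2-City.mmdb"))
{
    var response = reader.City("203.0.113.10");
    Console.WriteLine($"{response.City.Name} ({response.Location.Latitude}, {response.Location.Longitude})");
}
```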
You mentioned storing the database in blob storage. There's nothing stopping you from doing that and simply downloading a copy to local disk, and there's nothing stopping you from storing multiple versions in multiple blobs. Note: you may also take advantage of Azure File storage (an SMB share). Which you choose is up to you.
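As a sketch, downloading the blob to local disk with the Azure.Storage.Blobs SDK might look like the following; the container name, blob name, and setting name are assumptions:

```csharp
using Azure.Storage.Blobs;

// Connection string comes from app settings; container and blob names are made up.
var blob = new BlobClient(
    Environment.GetEnvironmentVariable("STORAGE_CONNECTION"),
    blobContainerName: "geoip",
    blobName: "GeoLite2-City.mmdb");

// Download the current monthly database to local disk at startup; to update
// with minimal downtime, download to a new file and swap the reader over.
await blob.DownloadToAsync(@"D:\local\GeoLite2-City.mmdb");
```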
As for the most cost-effective solution: you'll need to do the pricing workup yourself to see what's most effective. You'd also need to evaluate how much RAM is available for the VM size/role instance/Web App tier you choose. You mentioned Web Apps in your question: Web App instances scale from 0.5 GB to 14 GB of RAM, depending on the tier (again, you'll need to evaluate this).
I'm working on a distributed application that runs on Windows Azure, but I'm new to this kind of environment. I have a question about server-side state management.
Where should I store global almost static data?
Because it is a distributed environment, if a user makes a request to the application there is no guarantee that subsequent requests will be routed to the same server, so I think I should use the SQL Azure or Table Storage session state provider (though I've read there can be performance issues) to store the data.
I could also use Windows Azure AppFabric Caching, which supports session maintenance.
What is the best solution for storing global information that doesn't need to be secured? Is there something similar to Application state (like Application["key"] = value)?
Thanks
Please see my responses on the following thread:
Microsoft Azure .NET 4.5 WebForms App : Session TimeOut / InProc / Single Instance
Specifics are below:
If you want to maintain session state you have to use one of the following options:
SQL Session State Provider using Azure SQL
Azure Table Session State
Session State with Azure Redis Cache
You can find details on how to do this at the following links:
Session State Management in Windows Azure Web Roles
Session state with Azure Redis cache in Azure App Service
The easiest way in my opinion is using Azure Redis Cache, as noted in the link above.
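As a rough illustration for ASP.NET Core (the thread above targets WebForms, where the RedisSessionStateProvider configured in web.config plays the same role), Redis-backed session state might be wired up like this; the connection string is a placeholder and the Microsoft.Extensions.Caching.StackExchangeRedis package is assumed:

```csharp
var builder = WebApplication.CreateBuilder(args);

// Session state is stored in whatever IDistributedCache is registered,
// here Azure Redis Cache.
builder.Services.AddStackExchangeRedisCache(options =>
    options.Configuration = "mycache.redis.cache.windows.net:6380,password=...,ssl=True");
builder.Services.AddSession();

var app = builder.Build();
app.UseSession();

app.MapGet("/", (HttpContext ctx) =>
{
    ctx.Session.SetString("user", "alice"); // survives restarts and scale-out
    return ctx.Session.GetString("user");
});

app.Run();
```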
Let me know if this helps!
That was definitely true at the time, perhaps, but now: Azure Storage Tables :)
Azure API Management promises 1,000 requests per second per instance. (I don't know whether that is the correct rate, but let's assume it is.) My question is: how can we scale a web service without scaling its infrastructure, just by scaling the API Management instance?
For example, if Azure API Management supports 1,000 requests per second per instance, then the backend service should also support the same request-handling threshold in its infrastructure. If that is the case, what is really meant by scaling up a web service with Azure API Management?
By using Azure API Management you can turn on caching easily, which can significantly reduce the traffic to your backend. In addition, your API Management instance can easily be scaled up to put more VMs behind it. However, if the backend cannot handle the traffic even after caching, then you might need a more scalable backend :)
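For reference, response caching in API Management is enabled through policy XML; a minimal sketch (the vary-by settings and duration are illustrative):

```xml
<policies>
    <inbound>
        <base />
        <!-- Return a cached response if one exists for this request -->
        <cache-lookup vary-by-developer="false" vary-by-developer-groups="false" />
    </inbound>
    <outbound>
        <base />
        <!-- Cache successful responses for one hour -->
        <cache-store duration="3600" />
    </outbound>
</policies>
```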
Miao is correct. However, remember that API Management caching only works for GET requests. Also, the cache size provided by API Management is only 1 GB as of today (this may increase in the future), with no cache monitoring as of today. So if you need monitoring of the API Management cache, use an external cache like Redis.
When you talk about scalability, it applies at all layers. The API Management consumption tier can be a good option for auto-scaling. Then consider Azure VMSS or App Service auto-scale for scaling the backend APIs. And if your backend APIs talk to a database, consider an auto-scaling database option on Azure, such as Azure SQL Hyperscale.
So scalability is not only at the API Management level; think carefully about all the layers.
Sample implementation of Cache in API Management is here - https://sanganakauthority.blogspot.com/2019/09/improve-azure-api-management.html
According to MSDN, an Azure service can contain any number of worker roles. To my knowledge, a worker role can be recycled at any time by the Windows Azure fabric. If that is true, then either:
the worker role should be stateless, or
the worker role should persist its state to the Windows Azure storage services.
But I want to build a service that holds client data, and I do not want to use the Azure storage services. How can I accomplish this?
The caching component of AppFabric (code-named Velocity) is a distributed cache and can be used in these situations.
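A minimal sketch using the AppFabric Caching client (Microsoft.ApplicationServer.Caching); the cache key and value are made up, and the client configuration (endpoints, authentication) is assumed to be in the role's config:

```csharp
using Microsoft.ApplicationServer.Caching;

// Reads cache client settings from configuration.
var factory = new DataCacheFactory();
DataCache cache = factory.GetDefaultCache();

// Data lives in the distributed cache, not on the instance,
// so it survives a role recycle.
cache.Put("client:42", "some client data");
var data = (string)cache.Get("client:42");
```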
Azure's web and worker roles are stateless, meaning all their local data is volatile; if you want to maintain state, you need some external resource to hold it, plus logic in your app to handle that. For simplicity you can use Azure Drive, but again, internally that is blob storage.
You can write to local storage on the worker role using the standard file I/O APIs, but this will be erased upon instance shutdown.
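For example, a sketch using the classic worker role local storage APIs; the resource name "ScratchSpace" is a hypothetical entry in your service definition:

```csharp
using System.IO;
using Microsoft.WindowsAzure.ServiceRuntime;

// Assumes a LocalStorage resource named "ScratchSpace" is declared in the
// service definition (.csdef). Anything written here is wiped when the
// instance is recycled or moved to another machine.
LocalResource scratch = RoleEnvironment.GetLocalResource("ScratchSpace");
string path = Path.Combine(scratch.RootPath, "state.json");

File.WriteAllText(path, "{ \"lastRun\": \"2012-01-01\" }");
string contents = File.ReadAllText(path);
```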
You could also use SQL Azure, or post your data off to another storage service by HTTP (e.g. Amazon S3, or your own server).
However, this is likely to have performance implications. Depending on how much data you'll be storing, how frequently, and how big it is, you might be better off with Azure Storage!
Why don't you want to use Azure Storage?
If the data can be stored in Azure, you have a good number of choices: the Azure distributed cache, SQL Azure, blobs, tables, queues, or Azure Drive. It sounds like you need persistence but can't use any of these Azure storage mechanisms. If data security is the concern, could you encrypt or hash the data? Understanding why would be useful.
One alternative might be not to persist at all, by chaining/nesting synchronous web service calls together, thus achieving reliable messaging.
Another might be to use Azure Connect to domain-join Azure compute resources to your local data centre (if you have one) and use your on-premises storage.