I have an API running in Azure Kubernetes Service (AKS), and for each request I need to create an auto-incrementing ID (UUID is not an option). This needs to be persisted in some way or other. I have thought of two solutions:
Use a simple table and increment the ID for each request.
Use Azure Files to store the value in a file and increment it for each request, perhaps through an Azure API.
However, these two solutions seem like overkill just to get an auto-incrementing ID. Is there a more lightweight solution for this that I don't know of?
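For context, a minimal sketch of what option 1 could look like, assuming an Azure SQL database with a sequence object (the connection string and sequence name here are hypothetical):

    // Option 1 sketch: let a SQL sequence hand out the IDs.
    // Assumes the sequence was created once, e.g.:
    //   CREATE SEQUENCE dbo.RequestId START WITH 1 INCREMENT BY 1;
    using Microsoft.Data.SqlClient;

    public static class RequestIdGenerator
    {
        public static long NextId(string connectionString)
        {
            using var conn = new SqlConnection(connectionString);
            conn.Open();
            using var cmd = new SqlCommand(
                "SELECT NEXT VALUE FOR dbo.RequestId;", conn);
            // The sequence increments atomically on the server, so
            // concurrent pods in AKS can never hand out the same ID.
            return (long)cmd.ExecuteScalar();
        }
    }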
I have managed to get the C# and DB setup working using ListMappings. However, when I try to deploy the split/merge tool to Azure as a classic cloud service, it states 'The requested VM tier is currently not available in East US for this subscription. Please try another tier or deploy to a different location.' We tried a few other regions with the same result. Do you know if there is a workaround or an updated version? Is the split/merge service even still relevant? Has anyone gotten this service to run on Azure lately?
https://learn.microsoft.com/en-us/azure/azure-sql/database/elastic-scale-overview-split-and-merge
The answer to the question of whether it is still relevant is, in my opinion, no. Split/merge is no longer relevant with the maturation of elastic pools. Elastic pools with one database per tenant seem to be the sustainable way to implement multi-tenancy with legacy code. The initial plan was to add keys to each of our tables to have multiple tenants per database. Elastic pools give us the same flexibility without having to make breaking changes to our existing code.
Late post here, but we are implementing ElasticScale for a client to split ~50 clients into a database-per-tenant model. I don't think the SplitMerge tool will be used over the long term, just for the initial data migration from one db to many shards, but it has been handy for that purpose. We are using the ElasticScale SDK to allow a single API to route queries to the appropriate shard(s) based on sharding key. Happy to compare notes with you if you are still working on this.
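For anyone comparing notes, here is a rough sketch of that routing with the Elastic Database client library (Microsoft.Azure.SqlDatabase.ElasticScale.Client); the shard map name and integer key below are placeholders, not necessarily what you would use:

    // Data-dependent routing sketch; shard map name and key type are
    // placeholders. The shard user connection string carries credentials
    // only; the library fills in the server/database for the matching shard.
    using Microsoft.Azure.SqlDatabase.ElasticScale.ShardManagement;
    using System.Data.SqlClient;

    public static class ShardRouting
    {
        public static SqlConnection OpenTenantConnection(
            string shardMapManagerConnectionString,
            string shardUserConnectionString,
            int tenantId)
        {
            ShardMapManager smm = ShardMapManagerFactory.GetSqlShardMapManager(
                shardMapManagerConnectionString, ShardMapManagerLoadPolicy.Lazy);
            ListShardMap<int> map = smm.GetListShardMap<int>("TenantShardMap");
            // Routes to whichever shard holds tenantId; the mappings are
            // cached, so this does not hit the shard map DB on every call.
            return map.OpenConnectionForKey(tenantId, shardUserConnectionString);
        }
    }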
I am exploring Azure functionality and am wondering if Azure Table Storage can be an easy way for holding application configuration for an entire environment. It would be easy to see and change (adding list values etc.). Can someone please guide me on whether this is a good idea? I would expect this table to hold no more than 2000 rows if all our applications were moved over to Azure.
Partition Key --> Project Name + Component Name (Azure Function/Logic App)
Row Key --> Parameter Key
Value column --> Parameter Value
For securing passwords/keys, I can use the Azure Key Vault.
There are different ways of storing application configurations:
Key Vault (as you stated) for sensitive information, e.g. tokens, keys, connection strings. It can be standardized and extended to any type of resource for ease of storing and retrieving these (see the sketch after this list).
Application Settings, found under each App Service. This approach assumes you have an App Service for each of your apps.
Release Pipeline variables, such as in Azure DevOps Services (AzDo). AzDo has variables that can be global to the release pipeline or specific to each stage.
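To make the first option concrete, a minimal sketch of reading a secret with the Azure.Security.KeyVault.Secrets client; the vault URI and secret name are placeholders:

    // Key Vault sketch: vault URI and secret name are hypothetical.
    using System;
    using Azure.Identity;
    using Azure.Security.KeyVault.Secrets;

    var client = new SecretClient(
        new Uri("https://my-vault.vault.azure.net"),
        new DefaultAzureCredential());

    // Response<KeyVaultSecret> converts implicitly to KeyVaultSecret.
    KeyVaultSecret secret = client.GetSecret("Db-ConnectionString");
    string connectionString = secret.Value;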
I am exploring Azure functionality and am wondering if Azure Table Storage can be an easy way for holding application configuration for an entire environment. It would be easy to see and change (adding list values etc.). Can someone please guide me on whether this is a good idea?
Considering Azure Tables is a key/value pair store, it is certainly a good idea to store application configuration values there. The only thing I would recommend is that you incorporate some kind of caching layer between your application and Table Storage, so that you don't end up making a call to Table Storage every time you need to fetch a setting.
I would expect this table to hold no more than 2000 rows if all our applications were moved over to Azure.
Considering the number of entities is going to be fewer than 2000, querying them should have no real performance impact, and I think your design is good. For best performance, please ensure that you include both PartitionKey and RowKey when querying. At the very least, include the PartitionKey in your query.
Please see this for more details: https://learn.microsoft.com/en-us/azure/cosmos-db/table-storage-design-guide.
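Putting both recommendations together, a minimal sketch of a cached point query using the Azure.Data.Tables and Microsoft.Extensions.Caching.Memory packages; the PartitionKey/RowKey layout follows your question, and everything else (table name, cache lifetime) is hypothetical:

    // Caching layer sketch in front of Table Storage. Table name and
    // expiry are assumptions; keys follow the layout from the question.
    using System;
    using Azure.Data.Tables;
    using Microsoft.Extensions.Caching.Memory;

    public class ConfigReader
    {
        private readonly TableClient _table;
        private readonly IMemoryCache _cache = new MemoryCache(new MemoryCacheOptions());

        public ConfigReader(string connectionString) =>
            _table = new TableClient(connectionString, "AppConfiguration");

        public string GetSetting(string projectAndComponent, string parameterKey) =>
            _cache.GetOrCreate($"{projectAndComponent}/{parameterKey}", entry =>
            {
                entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);
                // Point query on PartitionKey + RowKey: the cheapest and
                // fastest lookup Table Storage offers.
                TableEntity e = _table.GetEntity<TableEntity>(
                    projectAndComponent, parameterKey);
                return e.GetString("Value");
            });
    }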
For securing password/keys, I can use the Azure Key Vault.
That's the way to go for storing sensitive data in Azure.
Have you looked at the App Configuration service?
There are client libraries in .NET, Java, TypeScript and Python to interact with the service that you can leverage in your application.
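For example, a minimal sketch with the .NET client (Azure.Data.AppConfiguration); the connection string and key are placeholders:

    // App Configuration sketch: connection string and key are placeholders.
    using System;
    using Azure.Data.AppConfiguration;

    var client = new ConfigurationClient("<app-config-connection-string>");
    ConfigurationSetting setting = client.GetConfigurationSetting("MyApp:MaxRetries");
    Console.WriteLine($"{setting.Key} = {setting.Value}");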
Based on the Microsoft Azure Elastic Scale sample apps online, I have been able to create my Shard Map Manager (SMM) and elastic pool databases in Azure. My architecture is a separate database per tenant. I am using Entity Framework in my web application. I am using a byte[] hash as my shard key, based on an alphanumeric customer name. The customer name is entered as part of the customer login, so I can determine the unique shard key at login time and pass it to the SMM.
My questions are:
1.) Since each tenant has its own database, do I still need to include the hashed customer name/shard key in each row of the customer tables?
2.) I don't understand where the shard key information gets passed to the SMM during a call to the server. Is it within the context of the entity, or does it need to be part of the query itself? Any sample of this would be greatly appreciated!
You access the Shard Map Manager database when finding the connection string for a particular tenant. Once you have the connection string, you connect to a tenant-specific database. Inside the database you don't need to use the shard key at all.
The Elastic Database Tools library has an implementation of data-dependent routing (DDR), but you might find it overkill for a simple single-tenant sharding pattern. You can always just query the shard map database (or a custom configuration store) at startup and load a Dictionary<string,string> to store the CustomerName -> ConnectionString lookup, as sketched below.
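A minimal sketch of that startup lookup, assuming a hypothetical TenantShards table holding the mappings:

    // Load the tenant -> connection string map once at startup instead of
    // using full DDR. Table and column names here are hypothetical.
    using System;
    using System.Collections.Generic;
    using System.Data.SqlClient;

    public static class TenantMap
    {
        public static Dictionary<string, string> Load(string configDbConnectionString)
        {
            var map = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
            using var conn = new SqlConnection(configDbConnectionString);
            conn.Open();
            using var cmd = new SqlCommand(
                "SELECT CustomerName, ConnectionString FROM TenantShards", conn);
            using var reader = cmd.ExecuteReader();
            while (reader.Read())
                map[reader.GetString(0)] = reader.GetString(1);
            return map;
        }
    }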
We have an Azure web app and a DB that we want to replicate all over the world.
So we use Traffic Manager to redirect the user to the closest hosted web app, and with a location setting in the web app, it knows which database to go against.
Now, my question is: since the model is one writable database (the primary) with read-only replicas, how do I (or Azure) handle that at the moment of calling the database?
For example, if my app is going to add a record to the database, I can't use the nearest DB's connection string; I need to go against the primary one.
Should I handle this myself, or can I always go against the nearest one, even if it's read-only, and Azure will handle the write by transferring it to the primary DB?
In the case that I am the one who should manage that, I would need to handle two connection strings, one for the writable primary DB and one for the closest readable DB, and I would have to split my services, categorized by write/read actions.
And following this scenario, if I have a stored procedure which WRITES AND READS, how would I handle that?
This is a common issue when it comes to using Azure SQL in geo-replication mode. You cannot use traditional LB techniques such as Azure Traffic Manager. In this case, you should be using the retry pattern on your database connections, working from the primary down to the alternate names as required.
AFAIK, there is no easy way to tell, after connecting to a database, whether you are on the primary or a read-only secondary. As per this link, there are some stored procs you can call to understand the topology. You can also determine this using Azure PowerShell or the management API, but then you would have to build that logic into your application.
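One lightweight probe that may help (my own suggestion, not from the link above) is to ask the database itself whether it is updatable:

    // Sketch: DATABASEPROPERTYEX(..., 'Updateability') reports READ_WRITE
    // on the primary and READ_ONLY on a geo-replication secondary.
    using System.Data.SqlClient;

    public static class ReplicaProbe
    {
        public static bool IsReadOnlyReplica(string connectionString)
        {
            using var conn = new SqlConnection(connectionString);
            conn.Open();
            using var cmd = new SqlCommand(
                "SELECT CONVERT(nvarchar(60), DATABASEPROPERTYEX(DB_NAME(), 'Updateability'));",
                conn);
            return (string)cmd.ExecuteScalar() == "READ_ONLY";
        }
    }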
In short:
You need to handle your database connections and employ retry patterns, etc.
You should implement CQRS to separate read/write workloads from each other if you want to take advantage of read-only secondaries (a rough sketch follows).
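A bare-bones sketch of that split, with a fallback from the nearest replica to the primary for reads (the connection strings are placeholders):

    // Read/write split sketch: writes always go to the primary; reads go
    // to the nearest read-only replica, falling back to the primary.
    using System.Data.SqlClient;

    public class SqlConnectionRouter
    {
        private readonly string _primary;   // writable primary
        private readonly string _nearest;   // nearest read-only secondary

        public SqlConnectionRouter(string primary, string nearest)
        {
            _primary = primary;
            _nearest = nearest;
        }

        public SqlConnection OpenWrite()
        {
            var conn = new SqlConnection(_primary);
            conn.Open();
            return conn;
        }

        public SqlConnection OpenRead()
        {
            try
            {
                var conn = new SqlConnection(_nearest);
                conn.Open();
                return conn;
            }
            catch (SqlException)
            {
                // Simple fallback: fail over to the primary for reads.
                var conn = new SqlConnection(_primary);
                conn.Open();
                return conn;
            }
        }
    }

And to the stored procedure question: a procedure that both writes and reads has to run against the primary, because any write attempted on a read-only secondary will fail.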
Hope that helps.
I have a REST service hosted as an Azure Web App and another Cloud Service worker role; both need to share a few common pieces of information, like the DB connection string, storage connection string, etc.
What is the right way to do this?
Since your question is rather broad, I will try to answer in a similar way - a good practice in distributed application and microservice architectures is to have services query a single store for their configuration, which keeps your configuration consistent and easy to change.
In these cases you would probably want to set up some kind of database known to all services as they initialize. Depending on how complex your config data is, you can decide between several options on Azure:
Easy, quick store for simple key value pairs such as strings: consider Azure Table Storage
For more complex document like configurations (e.g. JSON): consider DocumentDB
In some rare cases where latency and throughput are a concern, you might even want to consider an in-memory store such as Azure Redis Cache, though for configuration data this is mostly overkill.
Note that all of the suggested services above are Azure managed services meaning you get availability, redundancy and robustness out of the box. This is important since the configuration store you use can be a single point of failure in your system.
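As an illustration of the single-store idea, a sketch where both the Web App and the worker role share one interface backed by Table Storage; all names here are hypothetical:

    // Single configuration store sketch: both services depend on the same
    // interface; a Table Storage implementation backs it. Names are
    // hypothetical.
    using Azure.Data.Tables;

    public interface IConfigStore
    {
        string Get(string service, string key);
    }

    public class TableConfigStore : IConfigStore
    {
        private readonly TableClient _table;

        public TableConfigStore(string storageConnectionString) =>
            _table = new TableClient(storageConnectionString, "SharedConfig");

        public string Get(string service, string key) =>
            _table.GetEntity<TableEntity>(service, key).Value.GetString("Value");
    }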