Azure Table Storage for housing Application Configuration

-- I am exploring Azure functionality and am wondering if Azure Table Storage can be an easy way for holding application configuration for an entire environment. It would be easy to see and change (adding list values etc.). Can someone please guide me on whether this is a good idea? I would expect this table to hold no more than 2000 rows if all our applications were moved over to Azure.
Partition Key --> Project Name + Component Name (Azure Function/Logic App)
Row Key --> Parameter Key
Value column --> Parameter Value
-- For securing password/keys, I can use the Azure Key Vault.

There are different ways of storing application configurations:
Key Vault (as you stated) for sensitive information, e.g. tokens, keys, and connection strings. It can be standardized and extended to any type of resource for ease of storing and retrieving these values (see the sketch after this list).
Application Settings, found under each App Service. This approach assumes you have an App Service for each of your apps.
Release pipeline variables, for example in Azure DevOps Services (AzDo). AzDo has variables that can be global to the release pipeline or specific to each stage.
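As an illustration of the Key Vault option above, here is a minimal sketch using the Azure.Security.KeyVault.Secrets and Azure.Identity client libraries (my choice of libraries, not named in the answer); the vault URL and secret name are placeholders.

```csharp
using System;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

class KeyVaultDemo
{
    static void Main()
    {
        // DefaultAzureCredential works with a managed identity in Azure or a
        // developer login locally. The vault URL below is a placeholder.
        var client = new SecretClient(
            new Uri("https://my-config-vault.vault.azure.net/"),
            new DefaultAzureCredential());

        // Secret name is also a placeholder.
        KeyVaultSecret secret = client.GetSecret("Sql-ConnectionString").Value;
        Console.WriteLine($"Retrieved secret '{secret.Name}' ({secret.Value.Length} chars)");
    }
}
```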

I am exploring Azure functionality and am wondering if Azure Table Storage can be an easy way for holding application configuration for an entire environment. It would be easy to see and change (adding list values etc.). Can someone please guide me on whether this is a good idea?
Considering Azure Tables is a key/value pair store, it is certainly a good idea to store application configuration values there. The only thing I would recommend is that you incorporate some kind of caching layer between your application and Table Storage, so that you don't end up making a call to Table Storage every time you need to fetch a setting.
I would expect this table to hold no more than 2000 rows if all our applications were moved over to Azure.
Considering the number of entities is going to be less than 2000, querying will not be a problem, and I think your design is good. For best performance, please ensure that you include both PartitionKey and RowKey when querying. At the very least, include the PartitionKey in your query.
Please see this for more details: https://learn.microsoft.com/en-us/azure/cosmos-db/table-storage-design-guide.
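To make the point-query and caching advice concrete, here is a minimal sketch assuming the Azure.Data.Tables SDK (my choice, not named in the answer) and the PartitionKey/RowKey scheme from the question; the table name and cache lifetime are arbitrary.

```csharp
using System;
using System.Collections.Concurrent;
using Azure.Data.Tables;

public class ConfigStore
{
    private readonly TableClient _table;
    private readonly ConcurrentDictionary<string, (string Value, DateTimeOffset FetchedAt)> _cache = new();
    private readonly TimeSpan _ttl = TimeSpan.FromMinutes(5);

    public ConfigStore(string connectionString, string tableName = "AppConfiguration")
    {
        _table = new TableClient(connectionString, tableName);
    }

    // PartitionKey = "<Project>-<Component>", RowKey = parameter key, as in the question.
    public string GetSetting(string partitionKey, string parameterKey)
    {
        var cacheKey = $"{partitionKey}|{parameterKey}";
        if (_cache.TryGetValue(cacheKey, out var hit) && DateTimeOffset.UtcNow - hit.FetchedAt < _ttl)
            return hit.Value;   // served from the in-memory cache, no storage call

        // GetEntity is a point query: both PartitionKey and RowKey are specified,
        // which is the most efficient query Table Storage supports.
        TableEntity entity = _table.GetEntity<TableEntity>(partitionKey, parameterKey).Value;
        var value = entity.GetString("Value");

        _cache[cacheKey] = (value, DateTimeOffset.UtcNow);
        return value;
    }
}
```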
For securing password/keys, I can use the Azure Key Vault.
That's the way to go for storing sensitive data in Azure.

Have you looked at the App Configuration service?
There are client libraries for .NET, Java, TypeScript and Python that you can use to interact with the service from your application.
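For illustration, reading a value with the .NET provider (Microsoft.Extensions.Configuration.AzureAppConfiguration) might look roughly like this; the environment variable and key name are hypothetical.

```csharp
using System;
using Microsoft.Extensions.Configuration;

class AppConfigDemo
{
    static void Main()
    {
        // In practice the connection string would come from an environment
        // variable or a managed identity, not from source code.
        var connectionString = Environment.GetEnvironmentVariable("APP_CONFIG_CONNECTION");

        IConfiguration config = new ConfigurationBuilder()
            .AddAzureAppConfiguration(connectionString)
            .Build();

        // Keys are read like any other IConfiguration source.
        Console.WriteLine(config["MyProject:MyFunction:BatchSize"]);
    }
}
```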

Related

Azure Split/Merge Service, is it still relevant?

I have managed to get the C# and DB setup using ListMappings. However, when I try to deploy the split/merge tool to an Azure classic cloud service, it states 'The requested VM tier is currently not available in East US for this subscription. Please try another tier or deploy to a different location.' We tried a few other regions with the same result. Do you know if there is a workaround or an updated version? Is the split/merge service even still relevant? Has anyone got this service to run on Azure lately?
https://learn.microsoft.com/en-us/azure/azure-sql/database/elastic-scale-overview-split-and-merge
The answer to the question of whether it is still relevant is, in my opinion... no. Split/merge is no longer relevant with the maturation of elastic pools. Elastic pools with one database per tenant seem to be the sustainable way to implement multi-tenancy with legacy code. The initial plan was to add keys to each of our tables to have multiple tenants per database. Elastic pools give us the same flexibility without having to make breaking changes to our existing code.
Late post here, but we are implementing ElasticScale for a client to split ~50 clients into a database-per-tenant model. I don't think the SplitMerge tool will be used over the long term, just for the initial data migration from one db to many shards, but it has been handy for that purpose. We are using the ElasticScale SDK to allow a single API to route queries to the appropriate shard(s) based on sharding key. Happy to compare notes with you if you are still working on this.
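For anyone comparing notes, routing a query to the right shard with the Elastic Database client library (the "ElasticScale SDK" mentioned above) looks roughly like the sketch below; the shard map name, key type and connection strings are placeholders.

```csharp
using System;
using System.Data.SqlClient;
using Microsoft.Azure.SqlDatabase.ElasticScale.ShardManagement;

public static class ShardRouting
{
    public static object CountOrdersForTenant(
        int tenantId,
        string shardMapManagerConnectionString,
        string shardCredentialsConnectionString)
    {
        // Load the shard map that was populated when tenants were provisioned.
        ShardMapManager smm = ShardMapManagerFactory.GetSqlShardMapManager(
            shardMapManagerConnectionString, ShardMapManagerLoadPolicy.Lazy);
        ListShardMap<int> shardMap = smm.GetListShardMap<int>("TenantShardMap");

        // Open a connection routed to whichever shard owns this tenant's sharding key.
        using (SqlConnection conn = shardMap.OpenConnectionForKey(
            tenantId, shardCredentialsConnectionString, ConnectionOptions.Validate))
        {
            var cmd = conn.CreateCommand();
            cmd.CommandText = "SELECT COUNT(*) FROM Orders";
            return cmd.ExecuteScalar();
        }
    }
}
```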

How to Share Connection String / Config Items between Azure WebApp & Cloud Service?

I have a REST service hosted as an Azure Web App and another Cloud Service worker role; both need to share some common information such as a DB connection string, storage connection string, etc.
What is the right way to do this?
Since your question is rather broad, I will try to answer in a similar way. A good practice in distributed application and microservice architectures is to have services query a single store for their configuration, which keeps the configuration consistent and easy to change.
In these cases you would probably want to set up some kind of database known to all services as they initialize. Depending on how complex your config data is, you can decide between several options on Azure:
Easy, quick store for simple key value pairs such as strings: consider Azure Table Storage
For more complex, document-like configurations (e.g. JSON): consider DocumentDB (a sketch follows below)
In some rare cases where latency and throughput are a concern, you might even want to consider an in-memory store such as Azure Redis Cache, though for configuration data this is mostly overkill.
Note that all of the suggested services above are Azure managed services meaning you get availability, redundancy and robustness out of the box. This is important since the configuration store you use can be a single point of failure in your system.
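As a rough illustration of the DocumentDB option (the service is now called Azure Cosmos DB), both the Web App and the worker role could load one shared settings document at startup. This sketch uses the newer Microsoft.Azure.Cosmos SDK, and the database, container and document names are hypothetical.

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

// The shape of the shared settings document; property names are illustrative.
public class SharedSettings
{
    public string id { get; set; }                 // e.g. "global"
    public string DbConnectionString { get; set; }
    public string StorageConnectionString { get; set; }
}

public static class SharedConfigReader
{
    // Both the Web App and the worker role call this once at startup.
    public static async Task<SharedSettings> LoadAsync(string cosmosConnectionString)
    {
        using var client = new CosmosClient(cosmosConnectionString);
        Container container = client.GetContainer("SharedConfig", "settings");

        // One JSON document holds the settings common to every service.
        ItemResponse<SharedSettings> response =
            await container.ReadItemAsync<SharedSettings>("global", new PartitionKey("global"));
        return response.Resource;
    }
}
```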

Azure Mobile Services Easy Tables - Am I On The Right Track?

I'm working on a simple mobile application in order to learn more about app development in general. I'm using Xamarin and C# to make a cross-platform app.
The end goal is to make a listing of users that are willing to be contacted to play golf. I want users to be able to enter their name and email address on one page, save the entries in a table using Azure SQL Database, and then display them in a list on another page in the app.
I've done some pretty extensive research on my own, but now I think it's time to get some real-life interaction to help guide me along. So here's my actual question...
It looks like the "Getting Started" tutorial here is close to what I want to do. But it seems like the database the app in the example uses is stored locally, whereas I want to create a table that all users will be able to access. Is following this walkthrough the right move for me? If not, what should I do instead?
Bear in mind that I'm committed to using Azure Mobile Services, so please refrain from answers suggesting I use a different platform.
Thanks guys!
If you use Azure Storage directly from the client app, then make sure you are not using Shared Key authentication. Otherwise, anyone could simply steal the credentials from the app and get full access to your blob account. To learn more, see Shared Access Signatures and the SO question Azure blob storage and security best practices.
From the official documentation:
Exposing either of your account keys opens your account to the possibility of malicious or negligent use. Shared access signatures provide a safe alternative that allows other clients to read, write, and delete data in your storage account according to the permissions you've granted, and without need for the account key.
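For illustration only (not part of the original answer), issuing a short-lived, read-only SAS from your backend with the newer Azure.Storage.Blobs SDK might look like this; the container name and environment variable are placeholders.

```csharp
using System;
using Azure.Storage.Blobs;
using Azure.Storage.Sas;

class SasDemo
{
    static void Main()
    {
        // The client must be built from a connection string containing the
        // account key so it can sign the SAS; the key never leaves the backend.
        var connectionString = Environment.GetEnvironmentVariable("STORAGE_CONNECTION");
        var container = new BlobContainerClient(connectionString, "player-photos");

        // Grant read-only access for one hour instead of shipping the account
        // key inside the mobile app.
        Uri sasUri = container.GenerateSasUri(
            BlobContainerSasPermissions.Read,
            DateTimeOffset.UtcNow.AddHours(1));

        Console.WriteLine(sasUri);
    }
}
```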
For new projects, you should use Azure Mobile Apps instead of Azure Mobile Services. The new service offers a number of features, and it is where all future investments will be.
For instance, there is now support for blob storage syncing along with regular offline data sync, and it uses SAS tokens to connect securely. Here's a tutorial for Xamarin.Forms: Connect to Azure Storage in your Xamarin.Forms app. It includes a sample that you can deploy to your own Azure subscription with one click.
For your specific question, you could modify the Todo sample (or look at the more full-featured Field Engineer sample) and add tables for Players and Games.
There are a number of offerings on the Azure platform that will allow you to store your golf players. However, the page you linked to is for blob storage, and I would not recommend using that.
There is Azure Table storage, which is a NoSQL store on the Azure platform. It's highly scalable and schema-less, so very flexible. You can leverage the Azure SDK to read and write to it, or go REST if that's what you prefer. Check out the tutorial here: https://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-how-to-use-tables/
Then there is Azure SQL, which is SQL Server offered on the Azure platform. This is a traditional relational database store, but more scalable (since it's on the Azure platform). You can also use this solution, but it does require a bit of extra work, since you probably want to use an ORM like Entity Framework.
So in all - I would go for Azure table storage. It's really easy to get started with and will do what you want to do.
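To make the Table Storage suggestion concrete, here is a minimal sketch using the newer Azure.Data.Tables SDK rather than the library from the linked tutorial; the table name and entity properties are just illustrative.

```csharp
using System;
using Azure;
using Azure.Data.Tables;

// A player signup; ITableEntity supplies the keys Table Storage requires.
public class PlayerEntity : ITableEntity
{
    public string PartitionKey { get; set; } = "golf";   // single partition is fine for a small list
    public string RowKey { get; set; } = Guid.NewGuid().ToString();
    public string Name { get; set; }
    public string Email { get; set; }
    public DateTimeOffset? Timestamp { get; set; }
    public ETag ETag { get; set; }
}

class PlayersDemo
{
    static void Main()
    {
        var table = new TableClient(
            Environment.GetEnvironmentVariable("STORAGE_CONNECTION"), "Players");
        table.CreateIfNotExists();

        // Save a new signup from the first page of the app.
        table.AddEntity(new PlayerEntity { Name = "Jane Doe", Email = "jane@example.com" });

        // List everyone for the second page.
        foreach (PlayerEntity p in table.Query<PlayerEntity>(e => e.PartitionKey == "golf"))
            Console.WriteLine($"{p.Name} <{p.Email}>");
    }
}
```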

Azure addon - accessing WADPerformanceCountersTable?

If I write an Azure addon, can it access the WADPerformanceCountersTable table (of the business application that provisioned this addon)? Especially in terms of security/permissions.
E.g. say I wanted my addon to monitor some performance counters, and send an email alert if they pass some thresholds (regardless of whether there are already such commercial products, I'm just interested in the technical capability). What will I have to do? I'm guessing WADPerformanceCountersTable isn't publicly exposed to the entire world - so how can I make it accessible to my addon?
thanks very much
WADPerformanceCountersTable is no different from other Azure tables; it's stored in the storage account defined by Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString in the configuration file. You will need the storage account name/key pair to read from this table.
FYI, here is an article about how to effectively fetching performance counter data from this table: http://gauravmantri.com/2012/02/17/effective-way-of-fetching-diagnostics-data-from-windows-azure-diagnostics-table-hint-use-partitionkey/
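Here is a rough sketch of the technique from that article, written against the newer Azure.Data.Tables SDK: the WAD tables use a partition key of "0" plus the event time in ticks, so filtering on PartitionKey avoids a full table scan. The time window and the column names used below are illustrative.

```csharp
using System;
using Azure.Data.Tables;

class WadQueryDemo
{
    static void Main()
    {
        var table = new TableClient(
            Environment.GetEnvironmentVariable("DIAGNOSTICS_STORAGE_CONNECTION"),
            "WADPerformanceCountersTable");

        // Only fetch counters from the last 15 minutes.
        DateTime from = DateTime.UtcNow.AddMinutes(-15);
        string partitionKeyFrom = "0" + from.Ticks.ToString("d19");

        // Filtering on PartitionKey keeps the query efficient, as the article explains.
        string filter = TableClient.CreateQueryFilter($"PartitionKey ge {partitionKeyFrom}");
        foreach (TableEntity e in table.Query<TableEntity>(filter))
        {
            Console.WriteLine($"{e.GetString("CounterName")} = {e.GetDouble("CounterValue")}");
        }
    }
}
```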

Windows Azure and multiple storage accounts

I have an ASP.NET MVC 2 Azure application that I am trying to switch from being single tenant to multi-tenant. I have been reviewing many blogs and posts and questions here on Stack Overflow, but am still trying to wrap my head around the specifics of what's right for this particular app.
Currently the application stores some information in a SQL Azure database, as well as some other info in an Azure Storage Account. I'm considering writing the tenant provisioning code to simply create a new database for a new tenant, along with a new azure storage account. This brings me to the following question:
How will I go about testing this approach locally? As far as I can tell, the local Azure Storage Emulator only has 1 storage account. I'm not sure if I'm able to create others locally. How will I be able to test this locally? Or will it be possible?
There are many aspects to consider with multitenancy, one of which is data architecture. You also have billing, performance, security and so forth.
Regarding data architecture, let's first explore SQL storage. You have the following options available to you:
Add a CustomerID (or other identifier) that your code will use to filter records.
Use different schema containers for different customers (each customer has its own copy of all the database objects, owned by a dedicated schema in a database).
Linear sharding, in which each customer has its own database.
Federations, a feature of SQL Azure that offers progressive sharding based on performance and scalability needs.
All these options are valid, but they have different implications for performance, scalability, security, maintenance (such as backups), cost and, of course, database design. I couldn't tell you which one to choose based on the information you provided; some models are easier to implement than others if you already have a code base. Generally speaking, a linear shard is the simplest model and provides strong customer isolation, but it is perhaps the most expensive of all. A schema-based separation is not too hard, but it requires a good handle on security requirements and can introduce cross-customer performance issues, because this approach is not shared-nothing (for customers on the same database). Finally, Federations require the use of a customer identifier and have a few limitations; however, this technology gives you more control over performance distribution and long-term scalability (because, like a linear shard, Federations use a shared-nothing architecture).
Regarding storage accounts, using different storage accounts per customer is definitely the way to go. The primary issue you will face if you don't use separate storage accounts is performance limitations, such as the maximum number of transactions per second that can be executed against a single storage account. As you point out, however, testing locally may be a problem. Consider this: the local emulator does not offer 100% parity with an Azure storage account (some functions are not supported in the emulator), so I would only use the local emulator for initial development and troubleshooting. Any serious testing, including multi-tenant testing, should be done using real storage accounts. This is the only way you can fully test an application.
You should consider not creating separate databases, but instead creating different object namespaces within a single SQL database. Each tenant can have their own set of tables.
Depending on how you are using storage, you can create separate storage containers or message queues per client (see the sketch after this answer).
Given these constraints you should be able to test locally with the storage emulator and local SQL instance.
Please let me know if you need further explanation.
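As a small sketch of the per-client container idea (using the Azure.Storage.Blobs SDK, which is my choice here, not the answer's), provisioning a tenant might look like this; the naming scheme is hypothetical, and the same code works against the storage emulator's single built-in account.

```csharp
using System;
using Azure.Storage.Blobs;

class TenantProvisioner
{
    static void Main()
    {
        // Locally this can point at the emulator ("UseDevelopmentStorage=true").
        var service = new BlobServiceClient(
            Environment.GetEnvironmentVariable("STORAGE_CONNECTION"));

        // One container per tenant inside a single storage account.
        string tenantId = "contoso";
        BlobContainerClient container = service.GetBlobContainerClient($"tenant-{tenantId}");
        container.CreateIfNotExists();

        Console.WriteLine($"Provisioned container {container.Name}");
    }
}
```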
