Where to store complex configurations for an Azure Functions app?

I already know about the Azure App Configuration for storing application configurations such as connection strings for my Azure apps. However, I am now working on an Azure Functions app where I have to store a more complex configuration for my application.
The configuration consists of mappings where for each entry I have a key/id and multiple values associated with it. Ideally, I'd like to store this in a database table, but setting up a whole database just to store this configuration seems a bit excessive to me. There will be about 200 entries in this table and I don't expect this number to grow much in the future.
Is there a way to store this so that it can easily be edited later, for example using Azure App Configuration, or do I really need to create a new database just for this purpose? Is there perhaps another alternative that I haven't considered so far?

The following suggestion assumes that you are not going to edit that data frequently.
One way to do it is to build a hash table (the key/id to values mapping), serialize it, and store it in the configuration section of the Function App. At run time you can access the data there. For editing, you just need to copy the whole value out of the config section, edit it (using Notepad++, for example), and update it back into the config section.
Though this is not an ideal way, it's far better than having a dedicated DB just for this purpose (plus the DB cost).
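A rough sketch of that approach, assuming the mapping is serialized as JSON into an application setting (the setting name CONFIG_MAPPINGS and the shape of the data are invented for illustration); app settings surface as environment variables inside the function at run time:

```python
import json
import os

def load_mappings():
    """Read the JSON-serialized mapping from a Function App application setting.

    A value like {"key1": ["a", "b"], "key2": ["c"]} pasted into the portal's
    configuration section can be deserialized here. CONFIG_MAPPINGS is a
    hypothetical setting name.
    """
    raw = os.environ.get("CONFIG_MAPPINGS", "{}")
    return json.loads(raw)

mappings = load_mappings()
values_for_key = mappings.get("key1", [])  # the multiple values for one id
```

Editing then means copying the JSON out of the setting, changing it in a text editor, and pasting it back, exactly as described above.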

Related

Azure portal: what is the best-practice way to store and deal with application settings that are not single-line?

Could someone advise what is the best way to work with large application configurations that look like XML/JSON data? The data contains various information (mostly static, but occasionally it can change), and none of it is security-sensitive.
For instance, it could be the item options for a user control (like a dropdown) on an application page, or static data used as markup from which a web page builds a user control, and so on.
I have considered several approaches:
Key Vault. As I understand it, the main idea of this store is to hold security data such as connection strings, passwords and so on. What about using it for bigger and broader settings? The big plus for me is that it has built-in caching, but it doesn't look like the best-practice way to me.
Storage account / Cosmos DB - as far as I can see, both are used in a similar way and could serve my purpose. The question is which is the most economical and productive way for me, and whether these options are better than the Key Vault approach.
So, what is the most common solution for this?
Thanks.
Hm, I think the answer really depends on your requirements.
First of all, you can store / update and retrieve a complex configuration object in Azure App Services using the appsettings.json.
If you want to stick to files (xml / json) then you could use Azure Blob Storage.
If you just want to store larger configurations in a NoSQL store you could consider using Azure Table Storage (be aware that a single entry in the Table storage can only contain 252 properties and has a size limit of 1 MiB).
If you need to query your configuration by a configuration property (not by key), or you think you will exceed the Azure Table Storage limits, then you could consider using Cosmos DB.
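For the Table Storage option, a minimal sketch using the azure-data-tables Python SDK; the connection string, table name, and entity layout are assumptions for illustration:

```python
from azure.data.tables import TableServiceClient

# Assumed layout: an "AppConfig" table, one partition per configuration area,
# RowKey = the setting id, remaining properties = the setting's values.
service = TableServiceClient.from_connection_string(
    conn_str="<storage-account-connection-string>"
)
table = service.get_table_client(table_name="AppConfig")

# Fetch every entry for one configuration area (subject to the limits noted
# above: 252 custom properties and 1 MiB per entity).
entries = table.query_entities(query_filter="PartitionKey eq 'dropdown-options'")
for entity in entries:
    print(entity["RowKey"], dict(entity))
```

Entities can be edited individually in Azure Storage Explorer or the portal, which keeps the "easily edited later" requirement without a dedicated database.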

aspnet_regsql and deployment to Azure

I'm pretty new to Azure and am trying to deploy an already existing MVC 3 website (I'm late to the project).
It has membership information (the tables should be generated by aspnet_regsql) and it links those tables to application-specific tables. To get it into a working state I need to insert some form of "default data", as the code (unfortunately) makes some assumptions about what should be in the database.
No bother: I have an app that creates a default database and inserts the required data. I can then import that into Azure. Except this doesn't work, because Azure demands clustered indexes, and aspnet_regsql creates some auth table keys as nonclustered, so I'm now left having to alter these tables as part of the process to make the primary keys clustered.
I was just wondering if aspnet_regsql had been superseded somehow, given that Azure demands clustered indexes. Am I missing a trick here, or is writing a script to modify the clustering of these indexes the sensible approach?
Found the solution elsewhere here:
http://support.microsoft.com/kb/2006191/de
If you use the Universal Providers, you don't need the scripts.
Check out Hanselman's post. The Universal Providers will manage the database creation if you are working with SQL Server, Compact Edition, or Windows Azure Database.
There are a lot of references to updated scripts, including some on my own blog, that are no longer needed.
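If you do end up writing the index-clustering script the question mentions rather than switching providers, a sketch of the kind of change involved, driven here via pyodbc; the table, constraint, and column names below are placeholders rather than the real aspnet_regsql names, so look up the actual constraints (e.g. in sys.key_constraints) first:

```python
import pyodbc

# Recreate a nonclustered primary key as clustered so SQL Azure accepts the table.
# "dbo.aspnet_Example", "PK_Example", and "ExampleId" are placeholder names.
sql = """
ALTER TABLE dbo.aspnet_Example DROP CONSTRAINT PK_Example;
ALTER TABLE dbo.aspnet_Example
    ADD CONSTRAINT PK_Example PRIMARY KEY CLUSTERED (ExampleId);
"""

conn = pyodbc.connect("<azure-sql-connection-string>")
# Note: dropping a primary key fails if foreign keys reference it,
# so those have to be dropped and re-created around this step.
conn.execute(sql)
conn.commit()
```

As the answer above says, moving to the Universal Providers avoids this work entirely, so the script is only worth it if you are tied to the aspnet_regsql schema.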

Azure: SQL Compact possible?

I have a RESTful service running on Azure. Currently it has zero persistence (it is just a REST gateway to another API). I run it in a single, minimal Azure instance, and expect this will handle all the load it will ever get.
I now need to add some very lightweight persistence to it: a simple table of 40-200 rows with eight data columns. The data is very static.
Doing the whole SQL Azure thing seems big overkill for my needs.
My thoughts have been to use:
An XML file, loaded into memory as the DB; the XML file is deployed with the code.
Some better way to deploy the XML, so it can be rolled out/updated more easily.
SQL Compact (can I do this on Azure?)
___ ?
What is the right path here?
Thank you!
SQL Server Compact would need to store its data somewhere in a persistent manner, so you would need to sync it regularly to persistent storage. That's a lot of extra work, and I have no idea how to do it reliably, so it's probably not a good idea.
For your simple table the Azure Table Storage might be just enough. If that's not enough then SQL Azure is the next choice.
You can use an XML file as your store; there is no harm in it, and it is a very easy and cost-efficient solution, but there is a catch. As you mentioned, you are currently using only one Azure instance, in which case you can store the XML file in App_Data. But if in the future you want to move to two Azure instances, you will have to replicate the App_Data folder; in other words, you will need to keep the App_Data folders in sync.
Suggestion
Instead of storing the file in App_Data, store it in a blob; you can retrieve it using WebClient and then keep it in memory.
Pros: with a blob, you don't have to sync anything.
Cons: there is a cost associated with the number of storage transactions you make. This will depend on how often you update the file.
Summary
If you are going to work with only one Azure instance, use App_Data.
With more than one Azure instance, use a blob (no syncing needed) or App_Data with sync.
Do not use Azure Table Storage for this; blob storage is the store provided for exactly this purpose.
EDIT
From MSDN post
As far as I know, Windows Azure does not support SQL Compact Edition. SQL Compact Edition stores data in the file system, which will not be synchronized across multiple instances (a web role may be deployed to more than one instance; an instance is similar to a virtual machine), and files stored in the file system are lost when the instance is restarted or reimaged.
Hope this helps.
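To illustrate the blob suggestion above in code: a sketch that downloads the XML once at startup and keeps it in memory. The original answer used WebClient in .NET; this version assumes the azure-storage-blob Python SDK, and the container, blob name, and XML shape are made up:

```python
import xml.etree.ElementTree as ET
from azure.storage.blob import BlobClient

# Download the config XML once at startup and keep the parsed result in memory.
# "config" container and "lookup-table.xml" blob are placeholder names.
blob = BlobClient.from_connection_string(
    conn_str="<storage-account-connection-string>",
    container_name="config",
    blob_name="lookup-table.xml",
)
xml_bytes = blob.download_blob().readall()
root = ET.fromstring(xml_bytes)

# Assuming a shape like <rows><row id="1" col1="..." ... /></rows>
rows = {row.get("id"): row.attrib for row in root.findall("row")}
```

Because every instance pulls the same blob, there is nothing to keep in sync, and updating the data is just re-uploading the file.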

Windows Azure App Fabric Cache whole Azure Database Table

I'm working on an integration project where a third party will call our web service in Azure. For performance reasons I would like to store the data from two tables (more than 1000 records) in the AppFabric Cache.
Could anyone please suggest whether this is the right design pattern?
Depending on how much data this is (you don't mention how wide the tables are), you have a couple of options.
You could certainly store it in the Azure cache; this will cost you, though.
You might also want to consider storing the data in the HTTP runtime cache, which is free but not distributed.
Your choice will largely depend on the size of the data, how often it changes, and what happens if someone receives slightly out-of-date data.
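The HTTP runtime cache mentioned above is an ASP.NET feature; as a rough sketch of the same idea in a neutral language (an in-process cache with a time-to-live, local to each instance and therefore not distributed), with invented names:

```python
import time

class InProcessCache:
    """Tiny per-instance cache: free and fast, but not shared across instances."""

    def __init__(self, ttl_seconds=300):
        self._ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, value)

    def get_or_load(self, key, loader):
        entry = self._store.get(key)
        if entry and entry[0] > time.monotonic():
            return entry[1]                       # still fresh, skip the database
        value = loader()                          # e.g. query the two tables
        self._store[key] = (time.monotonic() + self._ttl, value)
        return value

cache = InProcessCache(ttl_seconds=600)
rows = cache.get_or_load("table_a", lambda: ["...load the ~1000 records..."])
```

The trade-off is the same one described above: each instance may serve data up to one TTL out of date, which is usually acceptable for reference tables of this size.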

GAE: best practices for storing secret keys?

Are there any non-terrible ways of storing secret keys for Google App Engine? Or, at least, less terrible than checking them into source control?
In the meantime, Google added a Key Management Service: https://cloud.google.com/kms/
You could use it to encrypt your secrets before storing them in a database, or store them in source control encrypted. Only people with both 'decrypt' access to KMS and to your secrets would be able to use them.
The fact remains that people who can deploy code will always be able to get to your secrets (assuming your GAE app needs to be able to use the secrets), but there's no way around that as far as I can think of.
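A minimal sketch of that pattern with the google-cloud-kms Python client (project, location, key ring, and key names are placeholders):

```python
from google.cloud import kms

client = kms.KeyManagementServiceClient()

# Placeholder resource names - substitute your own project / key ring / key.
key_name = client.crypto_key_path("my-project", "global", "app-secrets", "api-keys")

# Encrypt a secret before storing it in the datastore or in source control.
ciphertext = client.encrypt(
    request={"name": key_name, "plaintext": b"super-secret-api-key"}
).ciphertext

# Later, only callers with decrypt access on the key can recover it.
plaintext = client.decrypt(
    request={"name": key_name, "ciphertext": ciphertext}
).plaintext
```

As the answer notes, anyone who can both deploy and decrypt still ends up with the secret; KMS narrows who that is rather than eliminating the problem.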
Not exactly an answer:
If you keep keys in the model, anyone who can deploy can read the keys from the model, and deploy again to cover their tracks. While Google lets you download code (unless you disable this feature), I think it only keeps the latest copy of each numbered version.
If you keep keys in a not-checked-in config file and disable code downloads, then only people with the keys can successfully deploy, but nobody can read the keys without sneaking a backdoor into the deployment (potentially not that difficult).
At the end of the day, anyone who can deploy can get at the keys, so the question is whether you think the risk is minimized by storing keys in the datastore (which you might make backups of, for example) or on deployer's machines.
A viable alternative might be to combine the two: Store encrypted API keys in the datastore and put the master key in a config file. This has some potentially nice features:
Attackers need both access to a copy of the datastore and a copy of the config file (and presumably developers don't make backups of the datastore on a laptop and lose it on the train).
By specifying two keys in the config file, you can do key-rollover (so attackers need a datastore/config of similar age).
With asymmetric crypto, you can make it possible for developers to add an API key to the datastore without needing to read the others.
Of course, then you're uploading crypto to Google's servers, which may or may not count as "exporting" crypto with the usual legal issues (e.g. what if Google sets up an Asia-Pacific data centre?).
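The answer describes both symmetric and asymmetric variants; the sketch below shows only the simplest symmetric form of "encrypted keys in the datastore, master key in the config file", using the cryptography package's Fernet, with hypothetical helper names:

```python
from cryptography.fernet import Fernet

# The master key lives in the not-checked-in config file; generate it once
# with Fernet.generate_key(). Only the encrypted token goes in the datastore.
MASTER_KEY = b"<urlsafe-base64-key-from-config-file>"

def encrypt_api_key(api_key: str) -> bytes:
    """Encrypt an API key before writing it to the datastore."""
    return Fernet(MASTER_KEY).encrypt(api_key.encode())

def decrypt_api_key(token: bytes) -> str:
    """Decrypt a token read back from the datastore at run time."""
    return Fernet(MASTER_KEY).decrypt(token).decode()
```

The asymmetric variant mentioned above would replace the single master key with a key pair, so developers holding only the public key can add secrets without being able to read the existing ones.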
There's no easy solution here. Checking keys into the repository is bad both because it checks in irrelevant configuration details and because it potentially exposes sensitive data. I generally create a configuration model for this, with exactly one entity, and set the relevant configuration options and keys on it after the first deployment (or whenever they change).
Alternately, you can check in a sample configuration file, then exclude the real one from version control and keep the actual keys locally. This requires some way to distribute the keys, though, and makes it impossible for a developer to deploy unless they have the production keys (and it is all too easy to accidentally deploy the sample configuration file over the live one).
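The single-entity configuration model described above might look like this with the google-cloud-ndb client library (the model and property names are illustrative):

```python
from google.cloud import ndb

class SiteConfig(ndb.Model):
    """Singleton entity holding keys set by hand after the first deployment."""
    api_key = ndb.StringProperty()
    webhook_secret = ndb.StringProperty()

    @classmethod
    def get(cls):
        # Creates the singleton on first access; afterwards the values are
        # edited in the Datastore admin console, never committed to code.
        return cls.get_or_insert("singleton")

client = ndb.Client()
with client.context():
    config = SiteConfig.get()
    key = config.api_key
```

Nothing sensitive lives in the repository, at the cost of a one-time manual step after deployment.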
Three ways I can think of:
Store it in the Datastore (maybe base64-encode it for one more level of indirection).
Pass it as environment variables through command-line params during deployment.
Keep a configuration file, git-ignore it, and read it on the server (see the sketch below). The file itself can be a .py file if you are using a Python deployment, so there is no reading and storing of .json files.
NOTE: If you take the config-file route, don't store this JSON in the static public folders!
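For the third route, the git-ignored file can be as small as a module of constants; a hypothetical example (file name and keys invented for illustration, with only a sample version committed to the repository):

```python
# secrets_config.py - listed in .gitignore, deployed alongside the app code.
# A placeholder copy (e.g. secrets_config.sample.py) is what gets checked in.
API_KEY = "replace-me"
WEBHOOK_SECRET = "replace-me"
```

Application code then simply does `from secrets_config import API_KEY`, and per the note above the file must live outside any directory served as static content.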
If you are using Laravel and want to store your keys in Datastore, this package can make that easy while managing performance using caching: https://github.com/tommerrett/laravel-GAE-secret-manager
Google App Engine by default creates a credential for App Engine and injects it into the environment.
Google Cloud client libraries use a strategy called Application Default Credentials (ADC) to find your application's credentials. When your code uses a client library, the strategy checks for your credentials in the following order:
First, ADC checks to see if the environment variable GOOGLE_APPLICATION_CREDENTIALS is set. If the variable is set, ADC uses the service account file that the variable points to.
If the environment variable isn't set, ADC uses the default service account that Compute Engine, Google Kubernetes Engine, Cloud Run, App Engine, and Cloud Functions provide, for applications that run on those services.
If ADC can't use either of the above credentials, an error occurs.
So point 2 means that if you grant the permissions to your service account using IAM Admin, you do not have to worry about passing JSON keys; it will work automatically.
e.g.
Suppose your application is running in App Engine Standard and wants access to Google Cloud Storage. To do this you do not have to create a new service account; just grant the access to the default service account that ADC picks up.
REF https://cloud.google.com/docs/authentication/production#finding_credentials_automatically
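For example, a sketch of reading from Cloud Storage on App Engine Standard with no key file at all; the google-cloud-storage client finds the default service account via ADC (bucket and object names are placeholders):

```python
from google.cloud import storage

# No explicit credentials anywhere: ADC resolves the App Engine default
# service account, which only needs the right role (e.g. Storage Object
# Viewer) granted in IAM.
client = storage.Client()
bucket = client.bucket("my-example-bucket")
data = bucket.blob("config/settings.json").download_as_bytes()
```

The same code runs unchanged on Compute Engine, Cloud Run, or Cloud Functions, since ADC falls back to each service's default credentials.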
