Changing Azure .cscfg settings on Role Start

I'm trying to create a startup script or task, executed on role start, that changes the configuration settings in the .cscfg file.
I'm able to access these settings, but haven't been able to successfully change them. I'm hoping for pointers on how to change settings on role start, or whether it's even possible.
Thanks.
EDIT: What I'm trying to accomplish
I'm trying to make a service to more easily configure configuration values on Azure applications. Right now, if I want to change a setting that is the same across 7 different environments, I have to change it in 7 different .cscfg files.
My thought is I can create a webservice that the application will query for its configuration values. The webservice will look in a storage location, like Azure Tables, and return the correct configuration values. This way, I can edit just one value in Tables, and it will be changed in the correct environments much more quickly.
I've been able to integrate this into a deployment script pretty easily (package the app, get the settings, change the cscfg file, deploy). The problem with that is every time you want to change a setting, you have to redeploy.

Black-o, given that your desire appears to be to manage the connection settings among multiple deployments (30+), I would suggest that perhaps your need would be better met by using a separate configuration store. This could be Azure Storage (tables, or perhaps just a config file in a blob container), a relational database, or perhaps even an external configuration service.
These options require only a minimal amount of information to be placed into the .cscfg file (just enough to point at and authorize against the configuration store), and allow you to maintain all the detailed settings side by side.
A simple example might use a single storage account, put the configuration settings into Azure Tables, and use a "deployment" ID as the partition key. The config file for each deployment then just needs the connection info for the storage location (unless you want to get by with a shared access signature) and its deployment ID. The role can then retrieve the configuration settings on startup and cache them locally for better performance (either in a distributed memory cache or perhaps on the temp "local storage" drive for each instance).
The code to pull all this together shouldn't take more than a couple of hours. Just make sure you also account for resiliency in case your chosen configuration provider isn't available.
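To make the pattern concrete, here is a minimal sketch in Python, assuming the azure-data-tables package; the table name, the "Value" column, and the cache location are illustrative assumptions rather than anything prescribed above:

```python
# Sketch: pull one deployment's settings from Azure Tables and cache them
# locally for resiliency. Table name, column names and cache path are
# assumptions for the example.
import json
import os
from azure.data.tables import TableClient

CACHE_PATH = os.path.join(os.environ.get("TEMP", "/tmp"), "config_cache.json")

def load_settings(connection_string: str, deployment_id: str) -> dict:
    """Fetch all settings for one deployment, falling back to the local cache."""
    try:
        client = TableClient.from_connection_string(
            connection_string, table_name="DeploymentConfig")
        entities = client.query_entities(f"PartitionKey eq '{deployment_id}'")
        # RowKey is the setting name, 'Value' holds the setting value.
        settings = {e["RowKey"]: e["Value"] for e in entities}
        with open(CACHE_PATH, "w") as f:  # cache for resiliency
            json.dump(settings, f)
        return settings
    except Exception:
        # Configuration store unavailable: fall back to the last known values.
        with open(CACHE_PATH) as f:
            return json.load(f)
```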

The only way to change the settings at runtime is via the Management API: craft the new settings and execute an "Update Deployment" operation. This will be rather slow because it honors update domains, so depending on your actual problem there might be a much better way to solve it.
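For reference, a rough sketch of what that call could look like from Python with requests; the endpoint shape, API version header and XML body are assumptions based on the legacy Service Management "Change Deployment Configuration" operation, and the subscription ID, service name and certificate paths are placeholders:

```python
# Rough sketch of pushing a new .cscfg via the legacy Service Management API.
# Endpoint, headers and XML body are assumptions; IDs and cert paths are placeholders.
import base64
import requests

subscription_id = "<subscription-id>"
service_name = "<cloud-service-name>"
url = (f"https://management.core.windows.net/{subscription_id}"
       f"/services/hostedservices/{service_name}/deploymentslots/production/?comp=config")

with open("ServiceConfiguration.cscfg", "rb") as f:
    cscfg_b64 = base64.b64encode(f.read()).decode()

body = ('<ChangeConfiguration xmlns="http://schemas.microsoft.com/windowsazure">'
        f'<Configuration>{cscfg_b64}</Configuration></ChangeConfiguration>')

resp = requests.post(
    url,
    data=body,
    headers={"x-ms-version": "2012-03-01", "Content-Type": "application/xml"},
    cert=("management-cert.pem", "management-key.pem"))  # management certificate
resp.raise_for_status()
```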

Related

What is the recommended way to store environment variables in Azure Functions for different environments?

Currently, I'm storing all key/value pairs in Application Settings, but I'm not happy with this approach. What is the recommended way to store settings for dev, test, stage, and prod? I need to make sure that prod settings are not visible to developers. Is there a way to create 4 different JSON files and define access permissions on them? Or do I need to create 4 different Function apps (or subscriptions)?
Azure App Configuration is a relatively new service that sounds like it could help in terms of managing the config values centrally with more control than individual instance App Settings.
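As a sketch of how that central store could be read per environment, assuming the azure-appconfiguration Python package; the key name and the use of labels for dev/test/stage/prod are illustrative:

```python
# Sketch: one shared key stored once per label (dev/test/stage/prod);
# each environment only ever requests its own label.
import os
from azure.appconfiguration import AzureAppConfigurationClient

client = AzureAppConfigurationClient.from_connection_string(
    os.environ["APP_CONFIG_CONNECTION_STRING"])

setting = client.get_configuration_setting(
    key="Payments:ServiceUrl",
    label=os.environ.get("ENVIRONMENT", "dev"))
print(setting.value)
```

Access to the prod label can then be restricted at the App Configuration/identity level rather than in the function apps themselves.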
Beyond that, you could perhaps build segregation by limiting devs to pushing code only, without access to the hosting environment (Azure portal, etc.). The layer in between would be something like Azure DevOps or GitHub Actions that has access to Azure, while devs are limited to pushing code that triggers deployment.
Also worth reminding ourselves that devs ultimately have a lot of access by virtue of writing the code. If they want to get at runtime data, they can, somehow. If you consider the devs untrusted, you may have bigger problems. If it's just a matter of preventing mistakes, a solid devops process is the key.

Is there a good way to share configuration between apps in Azure?

We have a large system built in Azure apps. It is made up of an App Service for our API and several Functions Apps for backend processing.
What's the best way to allow these apps to share configuration?
We use ARM templates currently to set up the environment variables for each app, which is fine for deploy-time, but there's nothing to keep the config in sync between the apps.
A use case might be a feature flag that controls whether a sub-system is operational. We might want this flag to be used in the API and a Functions App. At present we can manually go in and set the variable in each of the apps, but it would be easier to manage if we only had to do it in one location.
Ideally, any update to the config would be detected by Azure and trigger a restart of the service, as currently happens with the native implementation.
Is there a good, off-the-shelf, way to do this? Or will I be rolling my own with a table in a database and a lightweight function?
One way would be to use the new App Configuration service: https://learn.microsoft.com/en-us/azure/azure-app-configuration/overview.
It is meant for sharing configuration settings across components.
Note it is not meant for secrets; that's what Key Vault is for.
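A small sketch of that split, assuming the azure-identity and azure-keyvault-secrets Python packages; the vault URL and secret name are placeholders:

```python
# Sketch: plain settings come from App Configuration (see above), anything
# sensitive is fetched from Key Vault instead. Vault URL and secret name
# are placeholders.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

credential = DefaultAzureCredential()
secret_client = SecretClient(
    vault_url="https://my-vault.vault.azure.net", credential=credential)

db_password = secret_client.get_secret("sql-admin-password").value
```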
There is a guidance/design pattern for this from Microsoft; it can be found here.
Best practice in architecture: you can use the External Configuration Store pattern and a Redis Cache to share the configuration between multiple applications, as described here: https://learn.microsoft.com/en-us/azure/architecture/patterns/external-configuration-store
The approach is that you get this data from the app settings for each environment (this can be automated in the CI/CD pipeline), and on the first connection you store the data in the Redis Cache.
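A sketch of that flow in Python with the redis-py package, assuming the values are seeded from the app's own settings on first use; the host, key names and the setting shown are illustrative:

```python
# Sketch of the external configuration store pattern backed by Redis:
# read the shared config if present, otherwise seed it from this app's
# own settings so the other applications can pick it up.
import json
import os
import redis

r = redis.Redis(
    host="my-cache.redis.cache.windows.net", port=6380,
    password=os.environ["REDIS_KEY"], ssl=True)

def get_config(environment: str) -> dict:
    cache_key = f"config:{environment}"
    cached = r.get(cache_key)
    if cached:
        return json.loads(cached)
    # First connection: publish this app's settings to the shared cache.
    config = {"FeatureXEnabled": os.environ.get("FeatureXEnabled", "false")}
    r.set(cache_key, json.dumps(config))
    return config
```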
For sensitive data: use Key Vault to store the secrets/keys/certificates.

Azure Availability Set: Server Sync

I am sorry if my question is very general - I just started with Azure...
We are working with Azure and we currently have only one VM.
I would like to add a second VM and place them both in the same availability set. The thing is that, using our app, users post files to the server, and then a third party (Twilio) needs to read the files. I need the two servers to be synced immediately to make sure the files exist on both servers.
Is it possible?
I'd be happy to get an outline of how this should be done.
You can use Azure Files, attach the share to both machines, and store your files there.
Availability sets have nothing to do with VM syncing; they're about ensuring your VMs aren't upgraded at the same time or all placed in the same physical location.
You'll need to store your uploaded files outside of your VMs in a common area (whether in blobs, an Azure File share, or even a database). Otherwise, you'll need to create your own way of syncing (copying) content between your VMs.
There's no right way to store your data, and the options I listed each have their pros and cons. You'll need to choose what's right for your app.
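For example, a minimal sketch of writing uploads to a common blob container with the azure-storage-blob Python package (the container name and connection string are placeholders; a mounted Azure Files share would need no code changes at all):

```python
# Sketch: write each uploaded file to a shared blob container so that both
# VMs (and Twilio, via the returned URL) see the same content immediately.
import os
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string(
    os.environ["STORAGE_CONNECTION_STRING"])
container = service.get_container_client("user-uploads")

def save_upload(filename: str, data: bytes) -> str:
    """Write the uploaded file to shared storage and return its URL."""
    blob = container.upload_blob(name=filename, data=data, overwrite=True)
    return blob.url
```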

Getting web page hit count with IIS logs in Azure

I have a website hosted in Azure as a cloud service (not as a website), and I need to get the hit count for every web page of the site.
I enabled Azure Diagnostics, and I see the IIS logs copied to my blob storage; however, these logs contain very little data (only one hit to a JavaScript file).
Furthermore, putting "Verbose" or "All" in the diagnostics configuration of the web role doesn't seem to affect the results; I still get only one line (an access to a CSS file, an image file, etc.).
I'm using Azure SDK 2.0.
Is it possible to use the IIS logs generated by Azure to get a hit count? What do I need to change in the diagnostics configuration?
Or do I need a different approach to achieve this?
The IIS logs it produces are the same ones you'd find on a Windows Server anywhere. Note that depending on the settings you provided to diagnostics, it might take a little while before the data is moved to the storage account. The level of verbosity in the configuration determines what is moved from the instances over to the storage account. Did you give it plenty of time to move the data over before looking at the file in storage again? Sometimes it just brings over what it has, and of course there could be buffering, which means that when the file was brought over not everything was in it yet.
You should be able to get this information from the logs, and yes, you should be able to do it from the IIS logs. That being said, if what you are after is hits per page, I would actually suggest a different approach: look at an analytics provider like Google Analytics or one of its competitors. You'll get a massive amount of information beyond just page hits, with no need to worry about parsing log files.
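If you do stick with the IIS logs, counting hits per page is mostly log parsing; here is a standalone Python sketch, assuming W3C-format log files already downloaded from blob storage (the "pages only" filter is an illustrative assumption):

```python
# Sketch: count page hits from W3C-format IIS logs. The field layout is taken
# from each file's #Fields header line, so it adapts to whatever columns your
# IIS logging is configured with.
from collections import Counter
from pathlib import Path

def count_hits(log_dir: str) -> Counter:
    hits = Counter()
    fields = []
    for log_file in Path(log_dir).glob("*.log"):
        for line in log_file.read_text(errors="ignore").splitlines():
            if line.startswith("#Fields:"):
                fields = line.split()[1:]  # e.g. date time cs-uri-stem ...
            elif line and not line.startswith("#") and fields:
                row = dict(zip(fields, line.split()))
                uri = row.get("cs-uri-stem", "")
                if uri.endswith((".aspx", ".html", "/")):  # pages only
                    hits[uri] += 1
    return hits

print(count_hits(r"C:\logs\iis").most_common(10))
```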

Worker & Web Role in same application

We have a WebRole which deals with requests coming in off a WCF service; it validates them and then puts the messages into an Azure queue in cloud storage. In the same application we have a WorkerRole which reads the information from the queue and makes a call to our persistence layer, which processes the request and returns the result. We were wondering why the worker role didn't pick up any of our configuration settings and hence was not providing the trace information we were looking for. We realised that the worker role was likely looking for an app.config and couldn't find it.
Is there a way to point the worker role to the web.config, or to load our Enterprise Library settings into the ServiceConfiguration.cscfg file, which in either case would mean both could read from a common place?
Many thanks in advance
Kindo
As far as I'm aware there is no way for your worker role to get access to the web config of a web role out of the box.
If you move your configuration items to the ServiceConfiguration.cscfg file and both the worker and web role are in the same cloud project, the settings will be in the same file. But because the web role and the worker role are different projects within that cloud project, their settings are in different sections of that .cscfg file. If you want the settings to be the same for both of them, you will have to duplicate the settings.
Putting your settings in this file gives you the advantage that you can change the settings while the roles are running and have the roles respond however you like; e.g. you might want certain settings to restart the roles, while for others you may just want to update a static variable. In order to update a web.config or app.config you need to redeploy that role.
You do need to be aware, though, that the ServiceConfiguration file is not a replacement for a web.config. If you're using tools that look for their settings in a web or app config, then unless they're particularly smart and aware of the Azure environment, they won't go looking for settings in the ServiceConfiguration file.
I know you didn't ask this question, but if you're expecting your worker role to provide an almost synchronous response to your web role by using a request queue and a response queue, you'll probably find that this won't scale very well. If you want to do synchronous calls to a worker role in Azure you're best off using WCF (this isn't highlighted anywhere in the guides). As you said that all of this is just a WCF service anyway, the answer to your problem may be to just do away with the worker role.
You cannot share web.config files or .cscfg files across roles in Azure, as there is no guarantee that one role is on the same host, in the same cluster, or even in the same datacenter as another role.
If you are simply trying to share items like connection strings, app-specific variables, etc., I would simply create your own "config manager" class that obtains some XML and parses it into a collection of settings. This way, you could store that config in Azure Blob Storage, and changes would be as simple as updating that blob and signaling your apps to reload (very easy using the Service Management API).
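A sketch of that "config manager" idea in Python, assuming the azure-storage-blob package; the container/blob names and the XML shape are assumptions, and the reload signal is reduced to simply calling reload() again:

```python
# Sketch: settings kept as a small XML document in a blob, parsed into a
# dictionary that any role can read. Blob/container names and XML shape are
# assumptions for the example.
import os
import xml.etree.ElementTree as ET
from azure.storage.blob import BlobClient

class ConfigManager:
    def __init__(self, connection_string: str):
        self._blob = BlobClient.from_connection_string(
            connection_string, container_name="config", blob_name="settings.xml")
        self.settings: dict[str, str] = {}

    def reload(self) -> None:
        # Expected shape: <settings><setting name="..." value="..."/></settings>
        root = ET.fromstring(self._blob.download_blob().readall())
        self.settings = {s.get("name"): s.get("value")
                         for s in root.findall("setting")}

# Usage (the connection string is the only thing each role needs in its .cscfg):
config = ConfigManager(os.environ["STORAGE_CONNECTION_STRING"])
config.reload()
print(config.settings.get("TraceLevel"))
```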
