I'm using an Azure DevOps pipeline to push a JSON config file to Azure App Configuration. According to the documentation there's a setting that can be enabled:
Delete all other Key-Values in store with the specified prefix and label: Default value is Unchecked.
Checked: Removes all key-values in the App Configuration store that match both the specified prefix and label before pushing new key-values from the configuration file.
Unchecked: Pushes all key-values from the configuration file into the App Configuration store and leaves everything else in the App Configuration store intact.
When the setting is enabled, it sounds as if the operation performs two steps: a delete and then an update. I don't want the application to check for config in between those steps and find it missing.
Is it possible to update all the config at once atomically, like an HTTP PUT?
From the App Configuration service's perspective, each key-value is always created, updated, or deleted individually via separate requests, so there is no atomic operation when changes span multiple key-values. Applications should be designed to be tolerant of transitional states. Alternatively, you can use another mechanism to notify applications when it is a good time to pick up or refresh configuration.
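One such notification mechanism (a suggested pattern, not an atomic operation provided by the service) is a "sentinel" key: the pipeline pushes all key-values first and only bumps a single sentinel key afterwards, and applications treat that sentinel as the trigger to refresh everything together. A rough Azure CLI sketch, where the store name, file path, and key name are all placeholders:

```shell
# Push the JSON config file first (hypothetical store/file names).
az appconfig kv import --name my-config-store \
  --source file --path ./settings.json --format json --yes

# Bump the sentinel last; apps watching this one key refresh all values at once.
az appconfig kv set --name my-config-store \
  --key "Sentinel" --value "$(date +%s)" --yes
```

Applications still see individual key-values change during the push, but if they only reload on a sentinel change, they never act on a half-pushed state.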
I'm on Terraform 1.0.x, and it's my first time using an artifactory backend to store my state files. In my case it's a Nexus repository, and I followed this article to set up the repository.
I have the following configuration:
terraform {
  backend "artifactory" {
    # URL of Nexus-OSS repository
    url = "http://x.x.x:8081/repository/"
    # Repository name (must be terraform)
    repo = "terraform"
    # Unique path for this particular plan
    subpath = "exa-v30-01"
    # Nexus-OSS creds (must have r/w privs)
    username = "user"
    password = "password"
  }
}
Since the backend configuration does not accept variables for the username and password key/value pairs, how can I hide the credentials so they're not in plain sight when I store my files in our Git repo?
Check out the "Partial Configuration" section of the Backend Configuration documentation. You have three options:
Specify the credentials in a backend config file (that isn't kept in version control) and specify the -backend-config=PATH option when you run terraform init.
Specify the credentials in the command line using the -backend-config="KEY=VALUE" option when you run terraform init (in this case, you would run terraform init -backend-config="username=user" -backend-config="password=password").
Specify them interactively. If you just don't include them in the backend config block, and don't provide a file or CLI option for them, then Terraform should ask you to type them in on the command line.
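A sketch of the first option, assuming a file named backend.hcl (the name is arbitrary) that is kept out of version control:

```hcl
# backend.hcl -- add this file to .gitignore
username = "user"
password = "password"
```

Then remove username and password from the backend block in your configuration and run terraform init -backend-config=backend.hcl.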
For settings related to authentication or identifying the current user running Terraform, it's typically best to leave those unconfigured in the Terraform configuration and use the relevant system's normal out-of-band mechanisms for passing credentials.
For example, the s3 backend supports all of the same credentials sources that the AWS CLI does, so typically we just configure the AWS CLI with suitable credentials and let Terraform's backend pick up the same settings.
For systems that don't have a standard way to configure credentials out of band, the backends usually support environment variables as a Terraform-specific replacement. In the case of the artifactory backend it seems that it supports ARTIFACTORY_USERNAME and ARTIFACTORY_PASSWORD environment variables as the out-of-band credentials source, and so I would suggest setting those environment variables and then omitting username and password altogether in your backend configuration.
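For example, in a shell (placeholder credentials shown; in CI you would populate these from a secret store rather than typing them inline):

```shell
# Placeholder values; the variable names are the ones the artifactory
# backend reads as its out-of-band credentials source.
export ARTIFACTORY_USERNAME="user"
export ARTIFACTORY_PASSWORD="password"

# Run init in the same shell session so the backend sees the variables:
# terraform init
```

With these set, the backend block in your .tf files needs only url, repo, and subpath, and nothing sensitive is committed to Git.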
Note that this out-of-band credentials strategy is subtly different than using partial backend configuration. Anything you set as part of the backend configuration -- whether in a backend block in configuration or on the command line -- will be saved by Terraform into a local cache of your backend settings and into every plan file Terraform saves.
Partial backend configuration is therefore better suited to situations where the location of the state is configured systematically by some automation wrapper, and thus it's easier to set it on the command line than to generate a configuration file. In that case, it's beneficial to write out the location to the cached backend configuration so that you can be sure all future Terraform commands in that directory will use the same settings. It's not good for credentials and other sensitive information, because those can sometimes vary over time during your session and should ideally only be known temporarily in memory rather than saved as part of artifacts like the plan file.
Out-of-band mechanisms like environment variables and credentials files are handled directly by the backend itself and are not recorded directly by Terraform, and so they are a good fit for anything which is describing who is currently running Terraform, as opposed to where state snapshots will be saved.
I have a cluster, and one of the namespaces generates a lot of useless logs that I don't want to funnel to Azure Log Analytics due to cost. Is there any way to configure Azure Log Analytics to not accept or record data from that namespace?
Here are some links to the Azure documentation and a ConfigMap template to control the container agent config:
https://learn.microsoft.com/en-us/azure/azure-monitor/insights/container-insights-agent-config
https://github.com/microsoft/OMS-docker/blob/ci_feature_prod/Kubernetes/container-azm-ms-agentconfig.yaml
You can use settings like the following to exclude specific namespaces:
[log_collection_settings.stderr]
enabled = true
exclude_namespaces = ["kube-system", "dev-test"]
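In context, that TOML section lives in the container-azm-ms-agentconfig ConfigMap from the template linked above. A trimmed sketch, where "noisy-namespace" is a placeholder for the namespace you want to drop (stdout has an analogous section if you want to exclude it there too):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: container-azm-ms-agentconfig
  namespace: kube-system
data:
  log-data-collection-settings: |-
    [log_collection_settings]
       [log_collection_settings.stdout]
          enabled = true
          exclude_namespaces = ["kube-system", "noisy-namespace"]
       [log_collection_settings.stderr]
          enabled = true
          exclude_namespaces = ["kube-system", "noisy-namespace"]
```

Apply it with kubectl apply -f and the agent pods pick up the new settings.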
I created an environment variable in Azure App Service. However, I'm not able to pick up the value from Azure once the app is published.
So I added it to the appsettings.json, and it works.
My question would be, if I add an environment variable in Azure configuration settings, shall I add it in the appsettings.json as well, or is having it in the Azure environment settings enough?
When I navigate to
https://your-web-name.scm.azurewebsites.net/Env.cshtml
I can clearly see the variable is present there. Why is this not being picked up in the code? Am I doing something wrong?
appSettings["MailKitPassword"] <-- This is not being picked up, so I have to hard-code it.
In order to retrieve it you should use Environment.GetEnvironmentVariable("APPSETTING_MailKitPassword")
As Thiago mentioned, you need to use the GetEnvironmentVariable method to retrieve the app settings values,
so your code should be
Environment.GetEnvironmentVariable("APPSETTING_MailKitPassword")
However, I would recommend storing passwords in Azure Key Vault.
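Putting that together, a small helper can try the Azure-injected environment variable first and fall back to the local config for development. This is a sketch, not the only approach; the APPSETTING_ prefix is how App Service surfaces app settings in the environment (as seen on the Kudu Env page), and MailKitPassword is the setting name from the question:

```csharp
using System;
using System.Configuration; // requires a reference to System.Configuration

static class AppSetting
{
    // Prefer the Azure-injected environment variable; fall back to the
    // local web.config/app.config appSettings value when running locally.
    public static string Get(string name)
    {
        return Environment.GetEnvironmentVariable("APPSETTING_" + name)
               ?? ConfigurationManager.AppSettings[name];
    }
}

// Usage:
// var password = AppSetting.Get("MailKitPassword");
```

With this, you only need the value in appsettings/web.config for local runs; in Azure, the portal setting wins without a redeploy.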
Reading through the Azure documentation and various posts on here, I understand there should be a number of settings available to web apps running on Azure, among them WEBSITE_HOSTNAME and WEBSITE_SITE_NAME. These should also overwrite any existing configuration appSettings with the same key.
When I attempt to run my app, it is reading the key from the config file (i.e. it's not being overwritten by Azure). If I remove the value from the config, I get an exception about not being able to pick up a config value.
Is there a step I'm missing? Are these values only available at certain tiers?
Those values are only available as environment variables, so you'll need to read them from there.
App settings set in the Web App blade override config file settings and become environment variables, but WEBSITE_HOSTNAME and WEBSITE_SITE_NAME are only ever environment variables; they never overwrite config file entries.
I want to specify a custom machine key for my websites running on Azure, so I can swap between staging and production and keep the environment consistent between the two, without users being "logged out" whenever I do a swap (otherwise the machine key changes and users' cookies can't be decrypted anymore). I've previously been setting this in the web.config file, but I don't really like having this value stored in source control (I'm continuously deploying changes to the server). Connection strings can be specified in the Azure portal to avoid this problem. Is there a similar solution for machine keys?
In your web.config, reference an external config file for the machineKey section:
<system.web>
  <machineKey configSource="mkey.config"/>
</system.web>
Create a file mkey.config like this:
<machineKey
validationKey="32E35872597989D14CC1D5D9F5B1E94238D0EE32CF10AA2D2059533DF6035F4F"
decryptionKey="B179091DBB2389B996A526DE8BCD7ACFDBCAB04EF1D085481C61496F693DF5F4" />
Upload the mkey.config file to the Azure web site using FTP instead of Web Deploy, so it never has to exist in source control.
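If you need to generate fresh key values, one option (my suggestion, not part of the answer above) is random hex from openssl. The required lengths depend on the validation and decryption algorithms you configure; the sketch below produces 64 uppercase hex characters (32 random bytes), matching the length of the example keys above:

```shell
# Generate 32 random bytes, hex-encoded and uppercased, for each key.
# Adjust the byte counts to suit your configured algorithms.
validation_key=$(openssl rand -hex 32 | tr 'a-f' 'A-F')
decryption_key=$(openssl rand -hex 32 | tr 'a-f' 'A-F')
echo "validationKey=$validation_key"
echo "decryptionKey=$decryption_key"
```

Paste the resulting values into mkey.config; both slots must share the same file for cookies to survive a swap.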
I'm not 100% sure I understand your question correctly, so I'll answer both possible interpretations.
Interpretation #1: Before you swap prod and stage your users got key A when they were accessing the (old) prod. When you do the swap you want users to keep getting key A when they hit the new prod.
Use App Settings. You can set them using the Portal or PowerShell. Those are key-value strings that you can set, and they are accessible as environment variables from your site. When you swap your prod and stage slots, the app settings that were on the old prod all move to the new prod, so your customers will see the same values for them.
Interpretation #2: before you swap prod and stage, your users got key A when they accessed the (old) prod, and key B when they accessed the old staging slot. When you do the swap, you want users to get key B when they hit the new prod and key A when they access the new staging slot.
Use sticky settings. Those are app settings that you configure to stay with the slot they were set on, meaning that from the swapped app's point of view the settings swap along with it. You can make app settings sticky by using the following PowerShell command.
Set-AzureWebsite -Name mysite -SlotStickyAppSettingNames @("mysetting1", "mysetting2")
Full details in this link: http://blog.amitapple.com/post/2014/11/azure-websites-slots/#.VMftXHl0waU