In Azure DevOps: how can I inject my pipeline variables as app settings when deploying a Docker container to Azure?

I'm having trouble setting my appsettings in a deployed docker container on azure.
My setup:
I have a .NET core app
My build pipeline builds a docker image and pushes it to my container registry on azure.
My release pipeline pulls the image based on a tag and deploys it to an azure web app.
I need to deploy the image to multiple environments. Every environment has different app settings. I defined the variables in my pipeline's "Variables" tab, and I need to send these values to Azure so they can be used there.
When I add them manually it works, but I want to pull them from my pipeline variables so I only have to define them once. (see screenshot 1)
Edit: The screenshot above works, but this is not what I'm looking for, as I'd have to edit the app settings in the pipeline each time I add or remove a setting. Also, I believe that removing an app setting here will just leave it on the deployed environment.
I'm deploying an existing Docker image, so I'm unable to edit the appsettings.json file. I also don't want to make different Dockerfiles for each environment.
Is there a way to achieve this? How can I extract / list the variables defined in my pipeline as docker variables or appsettings?

You can define pipeline variables in your pipeline and have them attached to a specific scope (read stage) or the release scope (applied to all stages).
E.g. I have a variable defined as EnvironmentConnectionString which is defined in two scopes:
Scope Test: "EnvironmentConnectionString = server=test-db; ...."
Scope QA: "EnvironmentConnectionString = server=qa-db;..."
Scope Release: "logging_flag=enabled"
Then you can set this up in your "Application and Configuration Settings" like
- ConnectionString $(EnvironmentConnectionString)
- Logging $(logging_flag)
Note the $(variableName) syntax for referencing these variables.
When the different stages of the pipeline run, they automatically pick up the values scoped to that stage and apply them to the Azure app settings.
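For a YAML-based release, the same effect can be sketched with the Web App for Containers task; the service connection, app name, and image below are placeholders, not taken from the question:

- task: AzureWebAppContainer@1
  inputs:
    azureSubscription: 'my-service-connection'     # placeholder service connection
    appName: 'my-web-app'                          # placeholder Web App name
    containers: 'myregistry.azurecr.io/myapp:$(Build.BuildId)'
    # Each "-key value" pair becomes an app setting on the Web App;
    # the $(...) tokens resolve to the values scoped to the current stage.
    appSettings: '-ConnectionString "$(EnvironmentConnectionString)" -Logging "$(logging_flag)"'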

You can have different variable groups for different stages. These variable groups should define the same variables with different values.
For example: the Dev variable group and the Release variable group both have the variables Port, RequestTimeout... The Port in Dev is 4999, while the Port in Release could be 5000. We can link these groups to a specific stage scope: the Dev variable group to the Dev stage and the Release group to the Release stage.
(screenshot: https://i.stack.imgur.com/ukbjs.png)
Make sure all your stages have the same settings like this, and the variables will be replaced with the corresponding values for the different scopes.
Update:
Each stage in the pipeline is independent; the stages represent different environments. So we have to define the settings of each stage, or the settings of the tasks within each stage, one by one, including the app settings input.
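If the pipeline is defined in YAML, the same idea can be sketched by linking a different variable group to each stage (the group, job, and step contents below are illustrative):

stages:
- stage: Dev
  variables:
  - group: Dev              # variable group holding Port=4999, RequestTimeout=...
  jobs:
  - job: Deploy
    pool:
      vmImage: ubuntu-latest
    steps:
    - script: echo "Deploying with Port $(Port)"
- stage: Release
  variables:
  - group: Release          # same variable names, different values, e.g. Port=5000
  jobs:
  - job: Deploy
    pool:
      vmImage: ubuntu-latest
    steps:
    - script: echo "Deploying with Port $(Port)"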

Related

Environments are not automatically created in Azure DevOps when declared in YAML pipeline config

I'm new to Azure DevOps. I would like DevOps pipeline environments to be created automatically during the pipeline flow, so the 5th line below should create the environment if it does not exist:
- deployment: Deploy
  displayName: Deploy job
  pool:
    vmImage: $(vmImageName)
  environment: 'production'
Instead I'm getting an error saying the environment could not be found or has not been authorized for use.
What am I missing?
To automate environment creation I could also use Terraform, but this time I cannot find a Terraform resource config responsible for that.
I had a similar problem and found that the documentation lists some possible reasons for why this can happen:
Quote from learn.microsoft.com:
Q: Why am I getting the error "Job XXXX: Environment XXXX could not be found. The environment does not exist or has not been authorized for use"?

A: These are some of the possible reasons of the failure:

When you author a YAML pipeline and refer to an environment that does not exist in the YAML file, Azure Pipelines automatically creates the environment in some cases:
- You use the YAML pipeline creation wizard in the Azure Pipelines web experience and refer to an environment that hasn't been created yet.
- You update the YAML file using the Azure Pipelines web editor and save the pipeline after adding a reference to an environment that does not exist.

In the following flows, Azure Pipelines does not have information about the user creating the environment: you update the YAML file using another external code editor, add a reference to an environment that does not exist, and then cause a manual or continuous integration pipeline to be triggered. In this case, Azure Pipelines does not know about the user. Previously, we handled this case by adding all the project contributors to the administrator role of the environment. Any member of the project could then change these permissions and prevent others from accessing the environment.

If you are using runtime parameters for creating the environment, it will fail as these parameters are expanded at run time. Environment creation happens at compile time, so we have to use variables to create the environment.

A user with stakeholder access level cannot create the environment, as stakeholders do not have access to the repository.
In our case, the problem was using runtime parameters for creating the environment.
You have the environment name 'production' hardcoded, so your problem might be related to one of the other cases.
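As a sketch of the runtime-parameter point from the quote (the parameter, variable, and job names are made up): an environment name taken from a runtime parameter may not be auto-created, while a literal name or a variable keeps the name resolvable when the run is compiled.

parameters:
- name: envNameParam
  type: string
  default: 'production'

variables:
  envNameVar: 'production'

pool:
  vmImage: ubuntu-latest

stages:
- stage: Deploy
  jobs:
  # Per the quoted docs, auto-creation can fail when the name comes from a runtime parameter
  - deployment: DeployFromParameter
    environment: ${{ parameters.envNameParam }}
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deploying"
  # A variable (or a hard-coded name) lets Azure Pipelines create the environment if missing
  - deployment: DeployFromVariable
    environment: ${{ variables.envNameVar }}
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deploying"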

Azure DevOps dynamic Release Pipeline creation

I am currently planning a type of multi-tenant system, where different resource groups with a set of AppServices are deployed for customers via ARM templates. Hence, each customer has its own resource group and set of AppServices. Currently we use Azure DevOps to deploy to a set of AppServices used for development and quality assurance before it gets to production. I am now trying to incorporate DevOps into the mix, automating pipeline creation of some sort (it would be a copy of an existing pipeline, only changing the target AppServices). Which is where my question comes from: is there a way to dynamically create or edit a release pipeline to add the deployment of those new AppServices, without the need to manually edit or create a pipeline and add those newly created AppServices? I was thinking something along the lines of copying a YAML file template and then replacing the necessary info to point to those AppServices after they have been created, but I am not totally sure where I could store the new YAML file so that it is picked up by Azure DevOps, or how I would accomplish this, with the main idea being that all of this continues to be part of an automated process (if possible).
Thanks a lot for any help, any suggestion is appreciated.
EDIT:
The question is not about how to deploy an ARM template through the DevOps release pipeline (I plan on using a PowerShell script/REST API to accomplish that). Instead, it is about what happens once the AppServices resources are created: I need to deploy code to those newly created AppServices and also update that code when necessary (hopefully through a release pipeline), somehow generating a new release pipeline each time I deploy a new set of resources. So that, when there is a new update, I could easily have that pipeline triggered and that set of AppServices updated (created as part of the automation process "dynamically"). (I already have a similar pipeline that deploys to a "static" set of AppServices.)
This is possible, as you alluded to, with YAML pipelines. Based on the scenario you have described, each repository would have its own pipeline.yml file that defines the trigger, pool, etc. It would also reference a repository that houses your YAML template.
The template would accept whichever parameters you require (resource group, app service name, etc.). The triggering pipeline associated with each repository would pass this information when leveraging the template.
By doing this, CI/CD can be set up to trigger on the individual pipelines and deploy the appropriate code, all while leveraging the same YAML template.
The repository reference would be similar to:
resources:
  repositories:
  - repository: YAMLTemplates
    type: git
    name: OrganizationName/YAML Project Name
With the call to the template being similar to:
- template: azure-ARM-template.yml#YAMLTemplate
parameters:
appServiceName: 'AppServiceName'
resourceGroupName: 'ResourceGroupName'
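The referenced template itself would then declare matching parameters and use them in its steps; a hypothetical azure-ARM-template.yml could start like this:

# azure-ARM-template.yml in the YAMLTemplates repository (illustrative)
parameters:
- name: appServiceName
  type: string
- name: resourceGroupName
  type: string

steps:
- script: echo "Deploying ${{ parameters.appServiceName }} to ${{ parameters.resourceGroupName }}"
  displayName: 'Placeholder for the actual ARM and App Service deployment tasks'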
UPDATE
At a high level, the YAML pipeline would consist of the following. If all App Services are similar, as stated, and the ARM templates are similar, this is how it could be constructed and triggered based on a folder path:
- Build the necessary artifacts
- Publish the pipeline artifacts
- Deploy Azure Resource Group task
- Deploy App Settings task (if applicable)
- Deploy App Service task
Release the deployment pieces for each environment in the appropriate stages. To help alleviate the amount of copying and pasting, each of the above tasks can be part of a template, either individually per task, as a combination of tasks, or all in one. This lets you define the YAML once and reference it, passing the app-specific pieces in as template parameters (see the sketch below).
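Putting those pieces together, a rough outline of one such pipeline (the branch, folder path, environment, and parameter values are placeholders) might be:

trigger:
  branches:
    include:
    - main
  paths:
    include:
    - customers/customer-a/*      # trigger this pipeline only for this customer's folder

resources:
  repositories:
  - repository: YAMLTemplates
    type: git
    name: OrganizationName/YAML Project Name

stages:
- stage: Build
  jobs:
  - job: Build
    pool:
      vmImage: ubuntu-latest
    steps:
    - script: dotnet publish -c Release -o $(Build.ArtifactStagingDirectory)
      displayName: 'Build artifacts'
    - publish: $(Build.ArtifactStagingDirectory)
      artifact: drop

- stage: DeployDev
  jobs:
  - deployment: Deploy
    pool:
      vmImage: ubuntu-latest
    environment: 'customer-a-dev'
    strategy:
      runOnce:
        deploy:
          steps:
          - template: azure-ARM-template.yml@YAMLTemplates
            parameters:
              appServiceName: 'customer-a-dev-app'
              resourceGroupName: 'rg-customer-a-dev'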

Release Azure Functions and file transformations

I have a lot of Azure Functions projects to deploy on Azure. I set up build and release pipelines for them. For example, this is one release for an Azure Function.
Under Variables I defined all variables for the environments (one for dev, one for stage and one for production).
There is only one step for deploying the Azure Functions to Azure. I want to add/replace the right settings for an environment in local.settings.json, but I'm not able to find how to configure that.
In another project, using Azure App Service Deploy, there is a section "File Transforms & Variable Substitution Options".
How can I do the same in the release of an Azure Function? What is the correct strategy or best practice?
Update and Solution
I thought it was more straightforward than it is. I think this is the solution: in the App settings under Application and Configuration Settings, I have to specify each variable and its value using the "..." in that line.
I can type or copy in this field. The syntax is
-variableName "$(variablename)"
I'm using quotes because if the value contains any space (for example, a connection string with Initial Catalog), DevOps raises an error. For arrays, I'm still using the : separator.
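For example, a field value that combines a connection string (which contains spaces) and a nested setting might look like this; the variable names are just placeholders:

-ConnectionStrings:Default "$(DbConnectionString)" -Logging:LogLevel:Default "$(LogLevel)"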
Another way is to use the File Transform task to substitute the variables in the local.settings.json file with pipeline variables. See here for more information.
With the File Transform task, you do not have to specify each variable and its value in the App settings of the deploy Azure Functions task.
You can add a File Transform task before the deploy Azure Functions task, then define the variables (e.g. KeyVaultSettings.ClientId) in your pipeline variables.
Then set the Package or folder, File format and Target files in the File Transform task.
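In YAML form, that File Transform configuration might look roughly like this (the package path and target file pattern are placeholders):

- task: FileTransform@1
  displayName: 'Substitute variables in local.settings.json'
  inputs:
    folderPath: '$(System.DefaultWorkingDirectory)/**/*.zip'   # Package or folder
    fileType: 'json'                                           # File format
    targetFiles: '**/local.settings.json'                      # Target files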
This is what I've done in my Azure Functions pipeline (it's YAML, but you'll get the idea; a sketch follows the list):
- Create one stage per environment in your pipeline.
- Create your pipeline variables and assign a different value based on scope (stage).
- Create a configuration entry (see picture) in your pipeline and assign it the variable value.
- Consume the configuration entry in your Azure Function (in my case I use environment variables for that).
- Use the pipeline environment in your Azure Function configuration.
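A trimmed-down sketch of what that YAML can look like (the stage, variable, and resource names here are invented placeholders, not the author's actual pipeline):

stages:
- stage: Dev
  variables:
    MySetting: 'dev-value'                 # stage-scoped value
  jobs:
  - deployment: DeployFunction
    pool:
      vmImage: ubuntu-latest
    environment: 'dev'                     # pipeline environment per stage
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureFunctionApp@1
            inputs:
              azureSubscription: 'my-service-connection'
              appType: functionApp
              appName: 'my-func-dev'
              package: '$(Pipeline.Workspace)/drop/*.zip'
              # Becomes an app setting, which the function reads as an environment variable
              appSettings: '-MySetting "$(MySetting)"'

A Prod stage would repeat the same job with its own variable values and environment name.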

Azure docker: linuxFxVersion vs DOCKER_CUSTOM_IMAGE_NAME

I have these three release steps in Azure DevOps:
1. AzureRmWebAppDeployment@4 - deploys a docker image to my staging site
2. AzureResourceGroupDeployment@2 - deploys all the app settings and properties with an ARM template
3. AzureAppServiceManage@0 - swaps staging into production
Step 1 is applied so I am sure that the docker image is pulled to the staging slot (without it, only applying the ARM template, the swap begins before the pull is over, and I don't like that). Step 2 is there to be sure that all environment variables and properties are set. Step 1 adds the DOCKER_CUSTOM_IMAGE_NAME environment variable and thereby triggers a docker pull, but in step 2 I manually set the linuxFxVersion property. Both point to the same image tag. I don't set DOCKER_CUSTOM_IMAGE_NAME in my ARM template, so when I deploy the ARM template, only linuxFxVersion is set. But in essence it pulls nothing, because step 1 has already pulled the image.
Is there anything wrong with removing DOCKER_CUSTOM_IMAGE_NAME? What is the difference between linuxFxVersion and DOCKER_CUSTOM_IMAGE_NAME? Do I need both, or is one of them enough?
LinuxFxVersion and DOCKER_CUSTOM_IMAGE_NAME are both ways to specify the image to be used in a Linux function app or a Linux web app.
LinuxFxVersion is given higher precedence. If that value is invalid or empty, DOCKER_CUSTOM_IMAGE_NAME is used instead. LinuxFxVersion is recommended since it can be used to set both custom container images and blessed (built-in) images.
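If you prefer to keep only linuxFxVersion and drop DOCKER_CUSTOM_IMAGE_NAME, one hedged way to set it from the release (resource names below are placeholders) is an Azure CLI step:

- task: AzureCLI@2
  inputs:
    azureSubscription: 'my-service-connection'   # placeholder service connection
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # Point the staging slot at the image; the platform pulls it based on this setting
      az webapp config set \
        --resource-group my-rg \
        --name my-webapp \
        --slot staging \
        --linux-fx-version "DOCKER|myregistry.azurecr.io/myapp:$(Build.BuildId)"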

Azure DevOps Incremental pipeline

I have 2 components in my project and 2 different build and release pipelines. For both of them I am specifying different paths for build and publish. In the release pipeline task "Create or Update Azure Resources" both use the same linked and main templates. But here the problem is: if I deploy the second component, then all the configuration and code of the first component gets removed... Now I can see only the second component's configuration. I have selected the deployment mode as Incremental, both in the pipeline and in the templates.
Since I'm not sure what your template.json/template.parameter.json files and your task configuration look like, I'll just share my configuration for your reference.
In the Azure resource group deployment task, you can use Override template parameters to override a parameter value, and this override only affects the current job:
In the first deploy, I am using the name MerlinAngular. In the second deploy, the web app name and service plan name are Change-Merlin. You can see that the second deployment does not replace or override the first one.
What I am using is a template.parameter.json file, and then the Override template parameters option, to achieve incremental deployment.
You can try it this way.
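In YAML, the equivalent task configuration would be along these lines (the service connection, resource names, and parameter names are placeholders; your ARM template would need to accept them):

- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    deploymentScope: 'Resource Group'
    azureResourceManagerConnection: 'my-service-connection'   # placeholder
    subscriptionId: '$(subscriptionId)'
    action: 'Create Or Update Resource Group'
    resourceGroupName: 'my-resource-group'
    location: 'West Europe'
    templateLocation: 'Linked artifact'
    csmFile: '$(Pipeline.Workspace)/drop/template.json'
    csmParametersFile: '$(Pipeline.Workspace)/drop/template.parameter.json'
    # Override only the values that differ per component, so each pipeline
    # creates or updates its own resources without touching the other component's
    overrideParameters: '-webAppName $(webAppName) -servicePlanName $(servicePlanName)'
    deploymentMode: 'Incremental'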
