Specify local tf state file to azurerm provider in pipeline - azure

I have been working on deploying a Terraform package using an Azure DevOps pipeline.
We keep our tf state file locally and have no plans to move to an Azure storage account. Could you please help with how to define the attribute values in the terraform init step of the pipeline?
- task: TerraformTaskV2@2
  displayName: Terraform init
  inputs:
    provider: 'azurerm'
    command: 'init'
    workingDirectory: 'some directory'
    backendServiceArm: 'some service conn'
    backendAzureRmContainerName: ??
    backendAzureRmResourceGroupName: ??
    backendAzureRmStorageAccountName: ??
    backendAzureRmKey: ??
What should the values be for the resource group, storage account name, and container name? If I don't specify these values, the pipeline fails with the error below:
##[error]Error: Input required: backendAzureRmStorageAccountName
Any help on this is much appreciated. Thanks in advance.

I'm unsure whether you can use TerraformTaskV2 without a cloud provider's backend. The README for that task doesn't show options for a local backend, only the following for terraform init:
... AzureRM backend configuration
... Amazon Web Services(AWS) backend configuration
... Google Cloud Platform(GCP) backend configuration
I haven't had experience with this yet, but you could look at the Azure Pipelines Terraform Tasks extension, which does explicitly support a local backend (see the sketch after the quoted list below):
The Terraform CLI task supports the following terraform backends
local
...
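I haven't tried it, but going by that extension's README, a local-backend init step might look roughly like the following; the task name and the backendType input come from the README, and the directory value is a placeholder:
- task: TerraformCLI@0
  displayName: Terraform init (local backend)
  inputs:
    command: 'init'
    workingDirectory: 'some directory'  # the directory holding your .tf files and local state
    backendType: 'local'                # no storage account inputs required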
Just a note on working in teams:
if you're deploying infrastructure as a team, a local backend can lead to undefined state and/or undesirable outcomes. A good remote backend can "...support locking the state while operations are being performed, which helps prevent conflicts and inconsistencies." - docs
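For reference, if you ever do move to a remote backend, the inputs the error is asking for simply describe an existing storage account that holds the state blob. A hypothetical sketch with placeholder names:
- task: TerraformTaskV2@2
  displayName: Terraform init (azurerm backend)
  inputs:
    provider: 'azurerm'
    command: 'init'
    workingDirectory: 'some directory'
    backendServiceArm: 'some service conn'
    backendAzureRmResourceGroupName: 'rg-tfstate'     # resource group containing the storage account
    backendAzureRmStorageAccountName: 'sttfstate'     # storage account holding the state
    backendAzureRmContainerName: 'tfstate'            # blob container inside that account
    backendAzureRmKey: 'dev.terraform.tfstate'        # name of the state blob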

Related

How to manipulate remote Terraform state files on Azure Blob storage

I'm working with a subscription that has a few different deployed environments (dev, test, staging, etc.). Each environment has its own storage account, containing an associated Terraform state file. These environments get deployed via Azure DevOps Pipelines.
It's easy enough to get at the .tfstate files that have been created this way, through the portal, CLI, etc.
But is it possible to access these state files using the 'terraform state' commands, for example using Azure Cloud Shell? If so, how do you point them at the right location?
I've tried using the terraform state commands in a Cloud Shell, but it's not clear how to point them to the right location or if this is indeed possible.
For this requirement, you can use the AzurePowerShell task.
1. First, if you can achieve your requirement via PowerShell in the Azure portal, then you can achieve the same thing with the AzurePowerShell task (AzurePowerShell runs on the agent using the service connection/service principal you provide):
- task: AzurePowerShell@5
  inputs:
    azureSubscription: 'testbowman_in_AAD' # This service connection relates to a service principal on the Azure side.
    ScriptType: 'InlineScript'
    Inline: |
      # Put your logic here.
    azurePowerShellVersion: 'LatestVersion'
2. Second, you can use AzCopy to download the state file and then operate on it. The Microsoft-hosted DevOps agents include this tool.
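For instance, a rough sketch of downloading a state blob with AzCopy from an inline script; the storage account, container, SAS token, and blob name are all placeholders:
- task: AzurePowerShell@5
  inputs:
    azureSubscription: 'testbowman_in_AAD'
    ScriptType: 'InlineScript'
    Inline: |
      # Download the state blob to the agent's working directory.
      # Account, container, blob, and SAS token below are placeholders.
      azcopy copy "https://mystorageaccount.blob.core.windows.net/tfstate/dev.tfstate?<SAS-token>" "dev.tfstate"
    azurePowerShellVersion: 'LatestVersion'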
You can also run the command terraform state pull > state.tfstate in the Azure Cloud Shell (you can use a name like dev.tfstate; the .tfstate extension is what matters).
All you need to do is move to the directory containing your Terraform files (the one initialized against that environment's backend, so the state commands point at the right storage account) and run:
terraform state pull > dev.tfstate

Can't Deploy Image to Azure App Service via GitHub Actions

I've tried so many different things here, so I'm super excited to see what the issue is :) (but I'm assuming it's something silly on my end)
Here's the pertinent bit of my GitHub Actions YML:
deploy:
  name: Update Azure App Service
  needs: [tests]
  runs-on: ubuntu-latest
  steps:
    - name: Deploy to Azure
      uses: azure/webapps-deploy@v2
      with:
        app-name: $APP_SERVICE_NAME
        images: $IMAGE_NAME
        publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
GitHub Actions logs (debug mode):
##[debug]Evaluating condition for step: 'Deploy to Azure'
##[debug]Evaluating: success()
##[debug]Evaluating success:
##[debug]=> true
##[debug]Result: true
##[debug]Starting: Deploy to Azure
##[debug]Loading inputs
##[debug]Evaluating: secrets.AZURE_WEBAPP_PUBLISH_PROFILE
##[debug]Evaluating Index:
##[debug]..Evaluating secrets:
##[debug]..=> Object
##[debug]..Evaluating String:
##[debug]..=> 'AZURE_WEBAPP_PUBLISH_PROFILE'
##[debug]=> '***'
##[debug]Result: '***'
##[debug]Loading env
Run azure/webapps-deploy@v2
::add-mask::***
::add-mask::***
##[debug][GET] https://SNIP.scm.azurewebsites.net:443/diagnostics/runtime
##[debug]loaded affinity cookie ["ARRAffinity=SNIP;Path=/;HttpOnly;Secure;Domain=SNIP-qa-ause.scm.azurewebsites.net","ARRAffinitySameSite=SNIP;Path=/;HttpOnly;SameSite=None;Secure;Domain=SNIP.scm.azurewebsites.net"]
##[debug]getAppRuntime. Data: {"statusCode":200,"statusMessage":"OK","headers":{"connection":"close","content-type":"application/json; charset=utf-8","date":"Fri, 22 Jul 2022 06:27:05 GMT","server":"Kestrel","set-cookie":["ARRAffinity=SNIP;Path=/;HttpOnly;Secure;Domain=app-be-SNIP.scm.azurewebsites.net","ARRAffinitySameSite=SNIP;Path=/;HttpOnly;SameSite=None;Secure;Domain=app-be-SNIP.azurewebsites.net"],"transfer-encoding":"chunked"},"body":{"nodejs":[],"system":{"os_name":"Unix 4.15.0.169","os_build_lab_ex":"","cores":2}}}
##[debug]App Runtime OS: Unix 4.15.0.169
Error: Deployment Failed with Error: Error: Publish profile is invalid for app-name and slot-name provided. Provide correct publish profile credentials for app.
##[debug]Deployment failed
##[debug]Node Action run completed with exit code 1
##[debug]AZURE_HTTP_USER_AGENT='GITHUBACTIONS_DeployWebAppToAzure_0d7e9cbfcd2b52a8a34111a0798ab22d77203fc1f56b2732d2431bfaeb336577'
##[debug]AZURE_HTTP_USER_AGENT=''
##[debug]Finishing: Deploy to Azure
The obvious solution here is what the error says: the secret is wrong. It's not. I've added the secret to the GitHub repository, with the value of the Azure App Service publish profile. I've done this twice, to make sure I downloaded it correctly and copied it as-is.
Other things I've tried:
Tried setting WEBSITE_WEBDEPLOY_USE_SCM to true in the App Service configuration.
Tried setting slot-name to Production in the GitHub Actions step (even though there are no slots here).
Other things that might help solve my problem:
I'm building a .NET image as a Docker image and pushing it to Azure Container Registry. I'm then trying to 'update' this in Azure App Service.
The Azure App Service is running Linux, not Windows.
"Deployment Center" in the App Service still shows "Source: Container Registry", pinned to a static image. This is because it was required when I set up the App Service. But I assume my GitHub Actions builds will override this?
Any help would be great. :) Thanks in advance!
EDIT
I know I can:
Push to a constant tag, e.g. latest
Set the image name to latest in Azure App Service
Set continuous deployment to "on" in Azure App Service
And this will automatically update the App Service, because of the webhook.
But this will not help production scenarios, as I need to create different image tags for the different versions I can roll back to. Since the version is different each time, I can't "pin" to a constant tag. Therefore, I need some kind of task to update Azure App Service with a specific tag (see the sketch below). I'd prefer to have the same CI for dev/prod.
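To illustrate, the step I have in mind would look something like this; the app name and registry path are placeholders, with a version-specific tag instead of a constant one:
- name: Deploy to Azure
  uses: azure/webapps-deploy@v2
  with:
    app-name: my-app                                        # placeholder
    images: myregistry.azurecr.io/my-api:${{ github.sha }}  # unique, version-specific tag
    publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}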

Environments are not automatically created in Azure Devops when declared in yaml pipeline config

I'm new to Azure DevOps. I would like DevOps pipeline environments to be created automatically during the pipeline flow, so the 5th line below should create the environment if it does not exist:
- deployment: Deploy
  displayName: Deploy job
  pool:
    vmImage: $(vmImageName)
  environment: 'production'
Instead I'm getting an error that the environment could not be found. What am I missing?
To automate environment creation I could also use Terraform, but I cannot find a Terraform resource config responsible for that.
I had a similar problem and found that the documentation lists some possible reasons why this can happen.
Quote from learn.microsoft.com:
Q: Why am I getting the error "Job XXXX: Environment XXXX could not be found. The environment does not exist or has not been authorized for use"?
A: These are some of the possible reasons for the failure:
When you author a YAML pipeline and refer to an environment that does not exist in the YAML file, Azure Pipelines automatically creates the environment in some cases:
You use the YAML pipeline creation wizard in the Azure Pipelines web experience and refer to an environment that hasn't been created yet.
You update the YAML file using the Azure Pipelines web editor and save the pipeline after adding a reference to an environment that does not exist.
In the following flows, Azure Pipelines does not have information about the user creating the environment: you update the YAML file using another external code editor, add a reference to an environment that does not exist, and then cause a manual or continuous integration pipeline to be triggered. In this case, Azure Pipelines does not know about the user. Previously, we handled this case by adding all the project contributors to the administrator role of the environment. Any member of the project could then change these permissions and prevent others from accessing the environment.
If you are using runtime parameters for creating the environment, it will fail, as these parameters are expanded at run time. Environment creation happens at compile time, so we have to use variables to create the environment.
A user with stakeholder access level cannot create the environment, as stakeholders do not have access to the repository.
In our case, the problem was using runtime parameters for creating the environment.
You have the environment name 'production' hardcoded, so your problem might be related to one of the other cases.
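If runtime parameters are the cause, a minimal sketch of the variable-based approach the docs describe could look like this (the variable name is a placeholder); template-expression syntax is expanded at compile time, which is when environments are created:
variables:
  envName: 'production'

jobs:
- deployment: Deploy
  displayName: Deploy job
  pool:
    vmImage: $(vmImageName)
  environment: ${{ variables.envName }}  # resolved at compile time, so the environment can be auto-created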

Azure infrastructure creation methods

I'm wondering what the best way is to create and manage an Azure infrastructure. By infrastructure, I mean a set of resources used by a project, e.g. an App Service plan, a web service, a SQL server, etc.
Currently, I see that there are a couple of ways to do this programmatically in a CD fashion:
By uploading a template with the needed resources
By creating each resource using its own PowerShell Module: E.g. Az.Websites, Az.Sql, Az.IotHub etc.
By using Az CLI, which is approximately the same as 2.
What are the pros and cons of each method?
You can try Azure ARM templates. They support deploying all the applications you mentioned using a simple JSON structure. Once you have prepared the ARM template, you can deploy it using an Azure DevOps release pipeline; for more details, check the Microsoft documentation:
trigger:
- master

pool:
  vmImage: 'windows-latest'

steps:
- task: AzureFileCopy@4
  inputs:
    SourcePath: 'templates'
    azureSubscription: 'copy-connection'
    Destination: 'AzureBlob'
    storage: 'demostorage'
    ContainerName: 'projecttemplates'
  name: AzureFileCopy
- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    deploymentScope: 'Resource Group'
    azureResourceManagerConnection: 'copy-connection'
    subscriptionId: '00000000-0000-0000-0000-000000000000'
    action: 'Create Or Update Resource Group'
    resourceGroupName: 'demogroup'
    location: 'West US'
    templateLocation: 'URL of the file'
    csmFileLink: '$(AzureFileCopy.StorageContainerUri)templates/mainTemplate.json$(AzureFileCopy.StorageContainerSasToken)'
    csmParametersFileLink: '$(AzureFileCopy.StorageContainerUri)templates/mainTemplate.parameters.json$(AzureFileCopy.StorageContainerSasToken)'
    deploymentMode: 'Incremental'
    deploymentName: 'deploy1'
Basically, whichever vehicle you use to build infrastructure, they all do the same job; the difference is speed and convenience. Operations you would do through the portal interface can be done in a shorter time using the CLI. If you are working with multiple cloud providers (AWS, GCP, Azure), I recommend using Terraform, so you don't need to be knowledgeable about every cloud provider's own tooling to build the infrastructure.
We suggest using ARM templates for a couple of reasons. ARM templates use declarative syntax, which lets you state what you intend to deploy without having to write the sequence of programming commands to create it. In the template, you specify the resources to deploy and the properties for those resources. ARM templates are more consistent and are idempotent; if you rerun a PowerShell or CLI command numerous times, you can get different results. More pros can be found here; I am not going to re-write our docs.
The downside of ARM templates is that they can get complex, especially when you start nesting templates or using Desired State Configuration. We have recently released Bicep (preview) to reduce some of the complexity.
PowerShell and CLI are pretty similar in their pros/cons, but there are times I find one easier to use (e.g. it's easier to configure Web with CLI, but Azure AD needs PowerShell). CLI of course is better if you are running on a non-Windows client, but now you can run PowerShell on Linux, so that is not a hard and fast rule.
The downside with PowerShell or CLI is that you must understand the dependencies of your infrastructure and code the script accordingly; ARM templates can take care of this orchestration and deploy everything in the proper order. This can also make PowerShell/CLI slower to deploy resources, since they are not deployed in parallel where possible, unless you code your script in an async manner. The sketch below illustrates the ordering concern.
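To make the ordering point concrete, here is a hypothetical sketch of the CLI approach in a pipeline step; the service connection and resource names are placeholders, and each command depends on the one before it:
- task: AzureCLI@2
  inputs:
    azureSubscription: 'my-service-connection'  # placeholder
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      # Order matters: the group must exist before the plan, the plan before the web app.
      az group create --name demo-rg --location westus
      az appservice plan create --name demo-plan --resource-group demo-rg --sku B1
      az webapp create --name demo-app --resource-group demo-rg --plan demo-plan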
I would be remiss if I didn't mention Terraform. Terraform is great if you want consistency in deployments across clouds like Azure, AWS and GCP.

Is it possible to do continuous deployment CI/CD of an Azure Function through a Linux Environment via Azure DevOps?

When creating a function in Azure through a Linux environment, it seems CI/CD is completely missing from its capabilities, as I can't see any actual files. My VS Code tells me this:
Error: This plan does not support viewing files.
and when I try to deploy my files to the server through the Azure pipeline, everything works except for the Azure App Service Deploy task, which tells me this:
2020-04-21T19:48:37.6676043Z ##[error]Failed to deploy web package to App Service.
2020-04-21T19:48:37.6689536Z ##[error]Error: Error: Failed to deploy web package to App Service. Conflict (CODE: 409)
I did get it working directly through VS Code with a Windows environment and didn't notice any of these issues.
Can you confirm whether this is not possible through Linux, or whether perhaps there is a solution for what I am looking for?
Is it possible to do continuous deployment (CI/CD) of an Azure Function through a Linux environment via Azure DevOps?
The answer is yes.
To deploy an Azure Function, you should use the Azure Function App task instead of the Azure App Service Deploy task. See the example below.
steps:
- task: AzureFunctionApp@1
  inputs:
    azureSubscription: '<Azure service connection>'
    appType: functionAppLinux
    appName: '<Name of function app>'
    # Uncomment the next lines to deploy to a deployment slot.
    # Note that deployment slots are not supported for the Linux Dynamic SKU.
    #deployToSlotOrASE: true
    #resourceGroupName: '<Resource Group Name>'
    #slotName: '<Slot name>'
Please check out the document Continuous delivery by using Azure DevOps for detailed examples.
