Security hole in Azure Pipelines?

I've been researching Azure DevOps and I've come across what looks like a pretty obvious security hole in Azure Pipelines.
So, I'm creating my pipeline as YAML and defining two stages: a build stage and a deployment stage. The deployment stage looks like this:
- stage: deployApiProdStage
  displayName: 'Deploy API to PROD'
  dependsOn: buildTestApiStage
  jobs:
  - deployment: deployApiProdJob
    displayName: 'Deploy API to PROD'
    timeoutInMinutes: 10
    condition: and(succeeded(), eq(variables.isRelease, true))
    environment: PROD
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureWebApp@1
            displayName: 'Deploy Azure web app'
            inputs:
              azureSubscription: '(service connection to production web app)'
              appType: 'webAppLinux'
              appName: 'my-web-app'
              package: '$(Pipeline.Workspace)/$(artifactName)/**/*.zip'
              runtimeStack: 'DOTNETCORE|3.1'
              startUpCommand: 'dotnet My.Api.dll'
The Microsoft documentation talks about securing this by adding approvals and checks to an environment; in the above case, the PROD environment. This would be fine if the protected resource here that allows publishing to my PROD web app - the service connection in azureSubscription - were pulled from the PROD environment. Unfortunately, as far as I can tell, it's not. It's associated instead with the pipeline itself.
This means that when the pipeline is first run, the Azure DevOps UI prompts me to permit the pipeline access to the service connection, which is needed for any deployment to happen. Once access is permitted, that pipeline has access to that service connection for evermore. This means that from then on, that service connection can be used no matter which environment is specified for the job. Worse still, any environment name specified that is not recognized does not cause an error, but causes a blank environment to be created by default!
So even if I set up a manual approval for the PROD environment, if someone in the organization manages to slip a change through our code review (which is possible, with regular large code reviews) that changes the environment name to 'NewPROD' in the azure-pipelines.yml file, the CI/CD will create that new environment and go ahead and deploy immediately to PROD, because the new environment has no checks or approvals!
Surely it would make sense for the service connection to be associated with the environment instead. It would also make sense to have an option to ban the auto-creation of new environments - I don't really see how that's particularly useful anyway. Right now, as far as I can tell, this is a huge security hole that could allow deployments to critical environments by anyone who has commit access to the repo or manages to slip a change to the azure-pipelines.yml file through the approval process, introducing a major single point of failure/weakness. What happened to the much-acclaimed incremental approach to securing your pipelines? Am I missing something here, or is this security hole as bad as I think it is?

In your example, it seems you created/used an empty environment; there is no deployment target. Currently, only the Kubernetes and virtual machine resource types are supported in an environment.
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops
The resource in your example is a service connection, so you need to go to the service connection and define checks for it.
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/approvals?view=azure-devops&tabs=check-pass
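For illustration, a deployment job can also be pinned to a concrete resource type in an environment, so that checks defined on the environment or its resources gate a real target rather than an empty environment. This is only a minimal sketch; the environment and job names are hypothetical:
jobs:
- deployment: deployToProdVMs
  environment:
    name: PROD
    resourceType: VirtualMachine   # only targets VM resources registered in the PROD environment
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo "Runs on each VM registered in the PROD environment"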

Related

Azure YAML Pipeline: deploy to Function App without overwriting existing function

I have one Function App, already created in Azure, to which I need to deploy two separate Azure Functions hosted in different repos:
(A) HttpTrigger
(B) QueueTrigger
I would like to do this using a YAML pipeline.
Each Azure Function has its separate YAML pipeline, but every time I run pipeline B, the deployment works ok but function A is overwritten by function B.
Is there a way to keep both?
Below is the deployment to DEV, which appears in both pipelines. I thought there was a flag to say "don't delete anything you find deployed", but there isn't.
What am I missing?
#Deploy to DEV
- stage: DEV
  displayName: Deploy to DEV
  dependsOn: Build
  variables:
  - group: my-dev-variables
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/dev'))
  jobs:
  - job: Deploy
    steps:
    #Download artifact to make it available to this stage
    - task: DownloadPipelineArtifact@2
      inputs:
        source: 'current'
        path: '$(Pipeline.Workspace)'
    #Deploy
    - task: AzureFunctionApp@1
      displayName: Deploy Linux function app
      inputs:
        azureSubscription: $(azureRmConnection.Id)
        appType: 'functionAppLinux'
        appName: $(functionAppName)
        package: '$(Pipeline.Workspace)/**/*.zip'
        deploymentMethod: auto
I'm not sure how projects are structured in Python, but I think what you are trying to do is not possible with separate repos that you deploy from. However, you should be able to achieve what you want by adding both functions to the same project and then deploying them from the same repo.
Depending on the App Service plan you are running (and your needs), you could also consider having two separate function apps and running them both on the same App Service plan; a sketch of that follows.
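With that alternative, pipeline B's deploy step would simply target its own function app, leaving function A's app untouched. A minimal sketch, mirroring the task from the question; the app name here is hypothetical:
- task: AzureFunctionApp@1
  displayName: Deploy function B to its own function app
  inputs:
    azureSubscription: $(azureRmConnection.Id)
    appType: 'functionAppLinux'
    appName: 'my-function-app-b'   # hypothetical second function app, possibly on the same App Service plan as A
    package: '$(Pipeline.Workspace)/**/*.zip'
    deploymentMethod: auto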

Azure devops service connection and central pipeline

I have a requirement to give multiple teams access to a shared resource in Azure. I therefore want to limit how people can publish changes to the shared resource.
The idea is to limit the use of a service connection to a specific pipeline, as per this documentation. However, if the pipeline is stored in the team's own repo, a developer could change it, which would not give me enough control. I then found that it is possible to use a template from a central repo. Would using a shared repo allow me to have a service connection solely for the template?
So here is how I imagine doing the above: I grant project X a service connection for my BuildTemplates repo. But this is basically just access to the repo and the ability to use the shared templates. Then, in the BuildTemplates repo, I can have a service connection for my template A.
Now the developer in project X creates her deployments and configurations for her pipeline with her own service connection, scoped to her resources. Then she inherits a template from the BuildTemplates repo and passes the relevant parameters to template A.
She cannot alter template pipeline A, and only template pipeline A can publish to the shared resource, because of the scoped service connection. I can therefore create the relevant guards for the shared Azure resource in template pipeline A, restricting how developer X can publish to my shared Azure resource.
Does this make sense, and is it viable?
The pipeline part in A cannot be edited by a developer in X?
The service connection in A will not propagate out, so that a developer in X could use it in an inappropriate way?
Update
The above solution does not seem to be viable since the pipeline template is executed in the source branch scope.
Proposed Solution
The benefits I saw in the above suggestion do not seem attainable because of these issues. However, pipeline triggers offer a viable alternative. This in turn creates a new issue: when a pipeline triggered by developer Y in Y's repository succeeds, a trigger fires in the MAIN repository, and the pipeline in MAIN may fail, e.g. because the artifacts from Y introduced an issue. How does developer Y get notified about the failures in the MAIN pipeline?
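For reference, a minimal sketch of such a pipeline-completion trigger in the MAIN repository's pipeline; the upstream pipeline name 'Y-Pipeline' is hypothetical:
resources:
  pipelines:
  - pipeline: upstream        # local alias for referencing the triggering pipeline and its artifacts
    source: 'Y-Pipeline'      # assumed name of developer Y's pipeline
    trigger:
      branches:
        include:
        - master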
Here is my solution: within the same Azure DevOps organization, we can create a project and then a repo in it to hold a common pipeline template.
All the repos in other projects can access this pipeline template.
UserProject/UserRepo/azure-pipelines.yml
trigger:
  branches:
    include:
    - master
  paths:
    exclude:
    - nuget.config
    - README.md
    - azure-pipelines.yml
    - .gitignore

resources:
  repositories:
  - repository: devops-tools
    type: git
    name: PipelineTemplateProject/CommonPipeline
    ref: 'refs/heads/master'

jobs:
- template: template-pipeline.yml@devops-tools
PipelineTemplateProject/CommonPipeline/template-pipeline.yml
Since a pipeline's inline script has a 5,000-character limit, you can put your script (not only PowerShell, but other languages too) in PipelineTemplateProject/CommonPipeline/scripts/test.ps1:
# Common Pipeline Template
jobs:
- job: Test_Job
  pool:
    name: AgentPoolName
  steps:
  - script: |
      echo "$(Build.RequestedForEmail)"
      echo "$(Build.RequestedFor)"
      git config user.email "$(Build.RequestedForEmail)"
      git config user.name "$(Build.RequestedFor)"
      git config --global http.sslbackend schannel
      echo '------------------------------------'
      git clone -c http.extraheader="AUTHORIZATION: bearer $(System.AccessToken)" -b $(ToolsRepoBranch) --single-branch --depth=1 "https://PipelineTemplateProject/_git/CommonPipeline" DevOps_Tools
      echo '------------------------------------'
    displayName: 'Clone DevOps_Tools'
  - task: PowerShell@2
    displayName: 'Pipeline Debug'
    inputs:
      targetType: 'inline'
      script: 'Get-ChildItem -Path Env:\ | Format-List'
    condition: always()
  - task: PowerShell@2
    displayName: 'Run Powershell Scripts'
    inputs:
      targetType: filePath
      filePath: 'DevOps_Tools/scripts/test.ps1'
      arguments: "$(System.AccessToken)"
Notes:
In Organization Settings --> Pipelines --> Settings, disable 'Limit job authorization scope to current project for release pipelines' and 'Limit job authorization scope to current project for non-release pipelines'.
Check the corresponding options in the project settings as well.
This way, normal users can only access their own repo and cannot access the DevOps project, while only the DevOps owner can edit the template pipeline.
For the notification issue, I use an email extension task, "rvo.SendEmailTask.send-email-build-task.SendEmail@1".
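A rough sketch of wiring that task to run only on failure; the input names below are assumptions based on typical SMTP email tasks, so check the extension's documentation before relying on them:
- task: rvo.SendEmailTask.send-email-build-task.SendEmail@1
  displayName: 'Notify the requester about the failure'
  condition: failed()                      # only runs when a previous step failed
  inputs:
    # input names are assumptions; verify against the extension's docs
    To: '$(Build.RequestedForEmail)'
    From: 'devops@example.com'             # hypothetical sender address
    Subject: 'MAIN pipeline failed: $(Build.DefinitionName) #$(Build.BuildId)'
    Body: 'See $(System.TeamFoundationCollectionUri)$(System.TeamProject)/_build/results?buildId=$(Build.BuildId)'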

Azure infrastructure creation methods

I'm wondering what is the best way to create and manage an Azure infrastructure. By infrastructure, I mean a set of resources used by a project, e.g. an App Service plan, a web service, a SQL server, etc.
Currently, I see that there are a couple of ways to do this programmatically in a CD fashion:
By uploading a template with the needed resources
By creating each resource using its own PowerShell module, e.g. Az.Websites, Az.Sql, Az.IotHub, etc.
By using the Az CLI, which is approximately the same as option 2.
What are the pros and cons of each method?
You can try Azure ARM templates. They support deploying all the resources you mentioned using a simple JSON structure. Once you have prepared the ARM template, you can deploy it using an Azure DevOps release pipeline. For more details, check the Microsoft documentation.
trigger:
- master

pool:
  vmImage: 'windows-latest'

steps:
- task: AzureFileCopy@4
  inputs:
    SourcePath: 'templates'
    azureSubscription: 'copy-connection'
    Destination: 'AzureBlob'
    storage: 'demostorage'
    ContainerName: 'projecttemplates'
  name: AzureFileCopy
- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    deploymentScope: 'Resource Group'
    azureResourceManagerConnection: 'copy-connection'
    subscriptionId: '00000000-0000-0000-0000-000000000000'
    action: 'Create Or Update Resource Group'
    resourceGroupName: 'demogroup'
    location: 'West US'
    templateLocation: 'URL of the file'
    csmFileLink: '$(AzureFileCopy.StorageContainerUri)templates/mainTemplate.json$(AzureFileCopy.StorageContainerSasToken)'
    csmParametersFileLink: '$(AzureFileCopy.StorageContainerUri)templates/mainTemplate.parameters.json$(AzureFileCopy.StorageContainerSasToken)'
    deploymentMode: 'Incremental'
    deploymentName: 'deploy1'
Basically, when you want to build an infrastructure, all the vehicles you use will do the same job; the difference is speed and convenience. Operations you would perform through the portal UI can be done in a shorter time using the CLI. If you are working with multiple cloud providers (AWS, GCP, Azure), I recommend using Terraform, so you don't need to be knowledgeable about every cloud provider to build the infrastructure.
We suggest using ARM templates for a couple of reasons. ARM templates use declarative syntax, which lets you state what you intend to deploy without having to write the sequence of programming commands to create it. In the template, you specify the resources to deploy and the properties for those resources. ARM templates are more consistent and are idempotent; if you rerun a PowerShell or CLI command numerous times, you can get different results. More pros can be found here; I am not going to re-write our docs.
The downside of ARM templates is that they can get complex, especially when you start nesting templates or using Desired State Configuration. We have recently released Bicep (preview) to reduce some of the complexity.
PowerShell and the CLI are pretty similar in their pros and cons, but there are times I find one easier to use (e.g. it's easier to configure Web with the CLI, but Azure AD needs PowerShell). The CLI, of course, is better if you are running on a non-Windows client, but now you can run PowerShell on Linux, so that is not a hard-and-fast rule.
The downside of PowerShell or the CLI is that you must understand the dependencies of your infrastructure and code the script accordingly. ARM templates can take care of this orchestration and deploy everything in the proper order. This can also make PowerShell/CLI slower at deploying resources, since they are not deployed in parallel where possible unless you code your script in an async manner.
I would be remiss if I didn't mention Terraform. Terraform is great if you want consistency in deployments across clouds like Azure, AWS and GCP.

How to connect Azure DevOps Pipeline to new app service?

I've got an app service on Microsoft Azure. I use Azure DevOps to deploy code. This has been working for a while.
Now I need a Beta version of the app. So I've created a new app service in Azure inside a new Resource Group. That was easy to set up and I'm up and running.
But I'm struggling with deployment in DevOps. I've created a new project for my Beta service in DevOps, but I can't figure out how I connect to my new Beta app service.
Edit:
In DevOps I have created a Service Connection that is linked to my Azure Beta Resource. But I don't see how my new pipeline links to this service connection.
Pipeline looks like this (YAML export)
pool:
  name: Azure Pipelines
  demands: npm

steps:
- task: Npm@1
  displayName: 'npm install'
  inputs:
    verbose: false
- task: Npm@1
  displayName: 'npm build'
  inputs:
    command: custom
    verbose: false
    customCommand: 'run build --scripts-prepend-node-path=auto'
- task: ArchiveFiles@2
  displayName: 'Archive files'
  inputs:
    rootFolderOrFile: dist
    includeRootFolder: false
- task: PublishBuildArtifacts@1
  displayName: 'Publish artifacts: drop'
The strange thing is that the pipeline runs without errors, but neither the Beta site nor the Prod site gets any of the code changes. What am I missing?
To deploy your app to an Azure resource (to an app service or to a virtual machine), you will need an Azure Resource Manager service connection first.
Go to the project settings of the new project --> Service connections (under Pipelines) --> New service connection --> select Azure Resource Manager. See here for more information.
Then you need to create a new pipeline to build the Beta service in the new project. You can check which tasks are used in the pipeline for the other app service and add the same tasks to this new pipeline. Here is an example of creating a YAML pipeline. You can also create a Classic UI pipeline.
If you deployed the other app service with a release pipeline, you can create a new release pipeline for the Beta service.
The tasks used in the build and release pipelines can be the same as for your other app service. You might need to change the tasks' configurations a little according to the Beta service project. For example, if you use the Azure Web App task to deploy your service to an Azure app, you need to set the azureSubscription field to the Azure Resource Manager service connection created in the very first step, and set the appName field to the new Azure app service. See the tutorials below for more information.
Deploy an Azure Web App (Yaml Pipeline)
Deploy a web app to Azure App Services (Classic Pipeline)
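For example, a minimal deployment step appended to your pipeline might look like this; the service connection and app names are placeholders for your Beta resources, and the package path assumes ArchiveFiles wrote its zip to the default staging directory:
- task: AzureWebApp@1
  displayName: 'Deploy to Beta app service'
  inputs:
    azureSubscription: 'MyBetaServiceConnection'   # the ARM service connection created above (placeholder name)
    appName: 'my-beta-app'                         # the new Beta app service (placeholder name)
    package: '$(Build.ArtifactStagingDirectory)/**/*.zip'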
As I understand the question, you want to set up an Azure DevOps pipeline for your new App Service deployment. For this, you can follow this tutorial. Note that it is the same process you used with your previous service.

Defining which project should be deployed to an Azure Functions app from source control

We have a collection of Azure Function Apps in c# net core. Each App contains a small number of Azure Functions. All Function Apps reside in a single git repository.
We would like some of our environments to deploy automatically from source (e.g. Bitbucket or GitHub).
How do we configure the project so that Azure knows which project in source relates to which created Function App?
I have searched around this problem for a number of days and have not seen any results that sit outside of "it just works" so can only assume that we are missing something fundamental.
I'd recommend using Azure DevOps (formerly VSTS) to deploy to Azure. You use YAML to define a build pipeline that publishes an artifact from each of your function apps; the artifacts then get picked up by a release pipeline and can be deployed to Azure.
The basic building blocks of this are, firstly, some YAML like this in your build pipeline for each project:
...
steps:
# a script task that lets you use any CLI available on the DevOps build agent; also uses a variable for the build config
- script: dotnet build MyFirstProjectWithinSolution\MyFirstProject.csproj --configuration $(buildConfiguration)
  displayName: 'dotnet build MyFirstProject'
# other steps removed, e.g. run and publish tests
- script: dotnet publish MyFirstProjectWithinSolution\MyFirstProject.csproj --configuration $(buildConfiguration) --output MyFirstArtifact
  displayName: 'dotnet publish MyFirstProject'
# a DevOps-supplied task called CopyFiles (version 2 = @2); DevOps supplies lots of standard tasks you can make use of
- task: CopyFiles@2
  inputs:
    contents: 'MyFirstProjectWithinSolution\MyFirstArtifact\**'
    targetFolder: '$(Build.ArtifactStagingDirectory)'
# now publish the artifact, which makes it available to the release pipeline; publishing into a sub-folder allows multiple artifacts to be dealt with
- task: PublishBuildArtifacts@1
  displayName: 'publish MyFirstArtifact artifact'
  inputs:
    pathtoPublish: '$(Build.ArtifactStagingDirectory)\MyFirstProjectWithinSolution\MyFirstArtifact'
    artifactName: MyFirstArtifact
# now repeat the above for every project you need to deploy, each in its own artifact sub-folder
Next you create a release, which in its simplest form picks up the artifacts and performs one or more deployments; a simple one might deploy two function app projects.
Within a deployment stage, you can define your release process. Again, in its simplest form, you can just deploy straight to production or to a slot, although until function slots are out of preview you could also spin up another function app and deploy and test there.
A simple deployment can use the standard Azure Function App deployment task from Azure DevOps.
Within your deployment stage you can define which artifact is deployed and after running your build pipeline for the first time you'll get to see all the available artifacts that it created.
All or parts of the above can be automated from pushing a branch (or other triggers such as on a schedule). Notifications and "gates" can be added as well if you want manual intervention before release or between release stages.
There are also other ways to cut this up, e.g. with multiple build pipelines; it's basically completely flexible, but the above are the elements you can use to deploy one or more function apps at a time.
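For reference, the YAML equivalent of such a release deployment step might look like this; the service connection and app name are placeholders, and the artifact path assumes the MyFirstArtifact artifact published by the build above:
- task: AzureFunctionApp@1
  displayName: 'Deploy MyFirstArtifact to its function app'
  inputs:
    azureSubscription: 'MyServiceConnection'   # placeholder ARM service connection
    appName: 'my-first-function-app'           # placeholder function app name
    package: '$(Pipeline.Workspace)/MyFirstArtifact/**/*.zip'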
