I have one Function App, already created in Azure, to which I need to deploy two separate Azure Functions hosted in different repos:
(A) HttpTrigger
(B) QueueTrigger
I would like to do this using a YAML pipeline.
Each Azure Function has its own YAML pipeline, but every time I run pipeline B, the deployment succeeds, yet function A is overwritten by function B.
Is there a way to keep both?
Below is the deployment to DEV, which appears in both pipelines. I thought there was a flag to say "don't delete anything you find deployed", but there isn't.
What am I missing?
#Deploy to DEV
- stage: DEV
  displayName: Deploy to DEV
  dependsOn: Build
  variables:
  - group: my-dev-variables
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/dev'))
  jobs:
  - job: Deploy
    steps:
    #Download artifact to make it available to this stage
    - task: DownloadPipelineArtifact@2
      inputs:
        source: 'current'
        path: '$(Pipeline.Workspace)'
    #Deploy
    - task: AzureFunctionApp@1
      displayName: Deploy Linux function app
      inputs:
        azureSubscription: $(azureRmConnection.Id)
        appType: 'functionAppLinux'
        appName: $(functionAppName)
        package: '$(Pipeline.Workspace)/**/*.zip'
        deploymentMethod: auto
I’m not sure how projects are structured in Python, but I think what you are trying to do is not possible when deploying from separate repos. However, you should be able to achieve what you want by adding both functions to the same project and then deploying them from the same repo.
Depending on the app service plan you are running (and your needs), you could also consider having two separate function apps and run them both on the same app service plan.
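To illustrate the single-project approach: in a Python function app, each function lives in its own folder alongside a shared host.json, and deploying the one project publishes all of them together, so neither overwrites the other. A sketch of such a layout (folder names are illustrative):

```
MyFunctionApp/
├── host.json
├── requirements.txt
├── HttpTriggerA/
│   ├── __init__.py
│   └── function.json    # binding "type": "httpTrigger"
└── QueueTriggerB/
    ├── __init__.py
    └── function.json    # binding "type": "queueTrigger"
```

A single pipeline zipping and deploying this folder would then ship both functions in one deployment.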
I am aware that the Azure DevOps "runOnce" deployment strategy does not have an equivalent GitHub Action yet, and I am seeking a workaround since I am migrating my pipelines from Azure DevOps to GitHub Actions.
A sample YAML block showing the kind of pipeline I'm translating from ADO to GitHub Actions is below:
- stage: DEV
  condition: eq(variables['Build.SourceBranch'], 'refs/heads/dev')
  jobs:
  - deployment: Deploy
    environment: dev
    strategy:
      runOnce:
        deploy:
          steps:
          - task: DownloadPipelineArtifact@2
            inputs:
              buildType: 'current'
              artifactName: 'drop'
              targetPath: '$(System.ArtifactsDirectory)'
In simple terms, how do I convert the runOnce deployment strategy in Azure to GitHub Actions?
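For comparison, a plain GitHub Actions job with an `environment` gives roughly the same single-run deploy semantics as runOnce, since an ordinary job already runs its steps exactly once. A minimal sketch, assuming the `dev` environment and `drop` artifact names are carried over from the ADO example:

```yaml
jobs:
  deploy:
    if: github.ref == 'refs/heads/dev'
    runs-on: ubuntu-latest
    # approvals and checks are configured on the environment in repo settings
    environment: dev
    steps:
      # equivalent of DownloadPipelineArtifact for the current run
      - uses: actions/download-artifact@v4
        with:
          name: drop
          path: ${{ github.workspace }}/drop
```

The branch condition moves into an `if:` expression on the job, and the runOnce/deploy nesting simply disappears.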
I have the following YAML in my build pipeline.
#Step 3, Copy Files
- task: CopyFiles@2
  inputs:
    #SourceFolder: '$(Build.SourcesDirectory)'
    Contents: '**'
    TargetFolder: '$(Build.ArtifactStagingDirectory)'
    CleanTargetFolder: true
    OverWrite: true
#Step 4, Publish artifacts
- task: PublishBuildArtifacts@1
  inputs:
    TargetPath: '$(Build.ArtifactStagingDirectory)'
I have a separate release pipeline, which attempts to deploy these files to an Azure Function App. However, once this completes, the functions are missing. Using Kudu I can see the files have made their way up, but I think something is configured incorrectly.
This is what the artifact structure looks like when I browse for a 'Package or Folder', which to me seems incorrect.
Can anyone advise why my artifacts are not being produced in the format required for a function app?
From your screenshot of the artifact contents, it seems that you are directly packaging and publishing your project source to the Azure Function.
As far as I can tell, you are missing a Build step.
You first need to add a build step appropriate to your project type, and then deploy the built output to the Azure function.
For more detailed info, you can refer to this doc: Deploy an Azure Function
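For a .NET function app, a minimal build-then-publish sequence might look like the sketch below (project glob, configuration, and artifact name are placeholders, not taken from the question):

```yaml
steps:
# build and publish the compiled output instead of copying raw sources
- task: DotNetCoreCLI@2
  displayName: 'Build and publish function app'
  inputs:
    command: publish
    publishWebProjects: false
    projects: '**/*.csproj'
    arguments: '--configuration Release --output $(Build.ArtifactStagingDirectory)'
    zipAfterPublish: true
# publish the built zip as a pipeline artifact for the release pipeline
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'
```

The release pipeline then deploys the resulting zip rather than the uncompiled repository contents.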
I've been researching Azure DevOps and I've come across what looks like a pretty obvious security hole in Azure pipelines.
So, I'm creating my pipeline as YAML and defining 2 stages: a build stage, and a deployment stage. The deployment stage looks like this:
- stage: deployApiProdStage
  displayName: 'Deploy API to PROD'
  dependsOn: buildTestApiStage
  jobs:
  - deployment: deployApiProdJob
    displayName: 'Deploy API to PROD'
    timeoutInMinutes: 10
    condition: and(succeeded(), eq(variables.isRelease, true))
    environment: PROD
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureWebApp@1
            displayName: 'Deploy Azure web app'
            inputs:
              azureSubscription: '(service connection to production web app)'
              appType: 'webAppLinux'
              appName: 'my-web-app'
              package: '$(Pipeline.Workspace)/$(artifactName)/**/*.zip'
              runtimeStack: 'DOTNETCORE|3.1'
              startUpCommand: 'dotnet My.Api.dll'
The Microsoft documentation talks about securing this by adding approvals and checks to an environment; in the above case, the PROD environment. This would be fine if the protected resource here that allows publishing to my PROD web app - the service connection in azureSubscription - were pulled from the PROD environment. Unfortunately, as far as I can tell, it's not. It's associated instead with the pipeline itself.
This means that when the pipeline is first run, the Azure DevOps UI prompts me to permit the pipeline access to the service connection, which is needed for any deployment to happen. Once access is permitted, that pipeline has access to that service connection for evermore. This means that from then on, that service connection can be used no matter which environment is specified for the job. Worse still, any environment name specified that is not recognized does not cause an error, but causes a blank environment to be created by default!
So even if I setup a manual approval for the PROD environment, if someone in the organization manages to slip a change through our code review (which is possible, with regular large code reviews) that changes the environment name to 'NewPROD' in the azure-pipelines.yml file, the CI/CD will create that new environment, and go ahead and deploy immediately to PROD because the new environment has no checks or approvals!
Surely it would make sense for the service connection to be associated with the environment instead. It would also make sense to have an option to ban the auto-creation of new environments; I don't really see how that's particularly useful anyway. Right now, as far as I can tell, this is a huge security hole: it could allow deployments to critical environments by anyone who has commit access to the repo, or who manages to slip a change to the azure-pipelines.yml file through the approval process, introducing a major single point of failure. What happened to the much-acclaimed incremental approach to securing your pipelines? Am I missing something here, or is this security hole as bad as I think it is?
In your example, it seems you created/used an empty environment; there is no deployment target. Currently, only the Kubernetes resource and virtual machine resource types are supported in an environment.
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops
The protected resource in your example is a service connection, so you need to go to the service connection and define checks on it.
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/approvals?view=azure-devops&tabs=check-pass
An Azure DevOps YAML-based pipeline has been created from scratch and requires the use of an Azure subscription. The subscription value has been stored in Key Vault and then linked to a Variable Group. The pipeline has access to both the Variable Group and the linked Key Vault. However, the pipeline execution fails with an error that the pipeline does not have access to the required subscription. When the subscription value is moved directly into the Variable Group, the issue is still present. When the subscription value is declared as a pipeline variable, the issue is gone. Clicking the Authorize button next to the error does not help.
- stage: 'DeployDevelopment'
  displayName: ''
  dependsOn: Build
  jobs:
  - deployment: DeployDevelopment
    pool:
      vmImage: 'ubuntu-latest'
    environment: Development
    variables:
    - group: Secrets
    - group: Release
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureRmWebAppDeployment@4
            displayName: ''
            inputs:
              azureSubscription: '$(ConnectedServiceName)'
              appType: 'webAppLinux'
              WebAppName: '$(DevEnvironemntWebAppName)'
              packageForLinux: '$(Pipeline.Workspace)/app/s'
              RuntimeStack: 'NODE|10-lts'
              StartupCommand: '$(StartupCommand)'
              WebConfigParameters: '-Handler iisnode -NodeStartFile server.js -appType node'
              AppSettings: '-WEBSITE_NODE_DEFAULT_VERSION 10.12.0'
From Developer Community:
Thanks for reporting the issue on Developer Community. Azure Key Vault
values are fetched at run time, but resources are authorized before
deployment. The pipeline can't be authorized against a value that is not
yet available, hence this is not a supported scenario.
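In other words, the service connection name has to be resolvable when the pipeline is authorized, before any run-time secret fetching happens. A sketch of the working variant described above, with the name declared as a plain pipeline variable (the value shown is illustrative):

```yaml
variables:
  # literal value, resolvable at authorization time; not a Key Vault secret
  ConnectedServiceName: 'my-dev-service-connection'

steps:
- task: AzureRmWebAppDeployment@4
  inputs:
    azureSubscription: '$(ConnectedServiceName)'
```

Only values that still need to be secret at run time (connection strings, API keys) belong in the Key Vault-linked group.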
We have a collection of Azure Function Apps in c# net core. Each App contains a small number of Azure Functions. All Function Apps reside in a single git repository.
We would like some of our environments to deploy automatically from source (e.g. bitBucket or gitHub).
How do we configure the project so that Azure knows which project in source relates to which created Function App?
I have searched around this problem for a number of days and have not seen any results that sit outside of "it just works" so can only assume that we are missing something fundamental.
I'd recommend using Azure DevOps (formerly VSTS) to deploy to Azure. You use YAML to define a build pipeline which publishes an artifact for each of your function apps; the artifacts then get picked up by a release pipeline and can be deployed to Azure.
The basic building blocks of this are, firstly some YAML like this in your build pipeline for each project:
...
steps:
# a script task that lets you use any CLI available on the DevOps build agent; this one uses a variable for the build config
- script: dotnet build MyFirstProjectWithinSolution\MyFirstProject.csproj --configuration $(buildConfiguration)
  displayName: 'dotnet build MyFirstProject'
# other steps removed, e.g. run and publish tests
- script: dotnet publish MyFirstProjectWithinSolution\MyFirstProject.csproj --configuration $(buildConfiguration) --output MyFirstArtifact
  displayName: 'dotnet publish MyFirstProject'
# a DevOps named task called CopyFiles (version 2 = @2); DevOps supplies lots of standard tasks you can make use of
- task: CopyFiles@2
  inputs:
    contents: 'MyFirstProjectWithinSolution\MyFirstArtifact\**'
    targetFolder: '$(Build.ArtifactStagingDirectory)'
# now publish the artifact, which makes it available to the release pipeline; publishing into a sub-folder allows multiple artifacts to be dealt with
- task: PublishBuildArtifacts@1
  displayName: 'publish MyFirstArtifact artifact'
  inputs:
    pathtoPublish: '$(Build.ArtifactStagingDirectory)\MyFirstProjectWithinSolution\MyFirstArtifact'
    artifactName: MyFirstArtifact
# now repeat the above for every project you need to deploy, each in its own artifact sub-folder
Next you create a release, which in its simplest form picks up the artifacts and does one or more deployments; here's a simple one which deploys two function app projects:
Within a deployment stage (right hand side above), you can define your release process, again in its simplest form you can just deploy straight to production or to a slot, although until function slots are out of preview you could also spin up another function app and deploy and test there.
This screenshot shows a simple deployment which uses a standard Azure Function App deployment from Azure DevOps:
Within your deployment stage you can define which artifact is deployed and after running your build pipeline for the first time you'll get to see all the available artifacts that it created.
All or parts of the above can be automated from pushing a branch (or other triggers such as on a schedule). Notifications and "gates" can be added as well if you want manual intervention before release or between release stages.
There are also other ways to cut this up, e.g. with multiple build pipelines; it's basically completely flexible, but the above are the elements you can use to deploy one or more function apps at a time.
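As a sketch of the release side in YAML form, a single stage could deploy each artifact to its own function app (service connection and app names are assumptions, and MyFirstArtifact/MySecondArtifact follow the build example above):

```yaml
steps:
# deploy the first project's artifact to its own function app
- task: AzureFunctionApp@1
  inputs:
    azureSubscription: 'my-service-connection'
    appName: 'first-function-app'
    package: '$(Pipeline.Workspace)/MyFirstArtifact/**/*.zip'
# deploy the second project's artifact to a second function app
- task: AzureFunctionApp@1
  inputs:
    azureSubscription: 'my-service-connection'
    appName: 'second-function-app'
    package: '$(Pipeline.Workspace)/MySecondArtifact/**/*.zip'
```

Because each artifact targets a different appName, the two deployments cannot overwrite each other.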