Disable automatic deploy on push on Azure Repos

I'm using an Azure DevOps repository for a .NET Core web API, and whenever I push code to the repository, the application is immediately published to an Azure App Service. Since I'm using Azure Pipelines to run some checks before publishing, I need to disable this automatic deployment triggered by the push, but I haven't figured out how. Is there a way to do it?
By accessing the Kudu service at xxxxxxxxxxxxxx.scm.azurewebsites.net I noticed that there's a folder under site/deployments/tool that contains two files:
- deploy.cmd
- deploymentCacheKey
If I remove them manually, they're recreated on the next push and the unwanted deployment happens again.
I have other repositories with the same folder, but there it's empty; when I push, it stays empty and no unwanted deployment is done.
Do you have any suggestion about how to disable this behavior?
Edit: Added screenshots of the continuous deployment trigger in the Release pipeline and of the release history.
[Screenshot: Continuous deployment triggers]
[Screenshot: Release history]

You can check whether the Continuous deployment trigger is disabled by clicking the lightning icon on the Artifacts item of the release pipeline, as shown below.

Related

Why doesn't Azure Pipelines pick up commit on GitHub?

We have been using Azure Pipelines for our CI/CD processes for a few weeks. The CI pipeline gets code from GitHub, builds, tests and creates a deployment package.
At first every commit was detected as intended, but recently that is no longer the case. Manual triggers and scheduled triggers work, but continuous integration does not.
What could be the causes for this?
In the pipeline, we checked the box for "continuous integration", and we use the recommended GitHub App to provide authorization. This is verified to work; we can see the authorized GitHub repos in the pipeline settings.
You can check whether the GitHub branch you committed to is included in the Branch filters. If it is not included, click Add to add the branch.
Check whether there is a skip-CI command (e.g. [skip ci]) in the commit message. See here for more information.
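If the pipeline is defined in YAML rather than in the classic designer, the same two things can be checked in the trigger block at the top of the pipeline file. A minimal sketch, assuming a master branch plus feature branches (the branch names are only examples):

```yaml
# CI trigger with branch filters: commits to branches that are not included
# here will not start a build. A commit whose message contains [skip ci]
# is also ignored by the CI trigger.
trigger:
  branches:
    include:
      - master
      - feature/*
```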
If the CI trigger is not working even though all the settings are correct, you can try the workarounds below:
1. Disable the CI trigger, save, then re-enable it and save again.
2. Clone your build definition. See below screenshot.
3. Create a new build pipeline with the same trigger and settings.
If none of the above works, you can go to this site to see whether there is an Azure DevOps service outage.
We ended up changing how we connect from Azure Pipelines to GitHub. The recommended way is to install the Azure Pipelines app in GitHub and connect using that in Pipelines. My experience is that it worked at first, but then stopped working. I read somewhere that only the first connection works with webhooks, so maybe we tried it somewhere else or did something that broke it. I ended up using a GitHub service account to pull and listen for webhooks, and that works as expected.

Azure Pipeline with Review Apps - Cleanup after merge

I've been setting up a pipeline for our project with Azure Pipelines using YAML. I'm currently adding review apps with Azure Kubernetes Service and was wondering how we should clean up after the PR is merged.
We build Docker images, push them to our registry and deploy them to a new dev space. Then it's all just left there.
After the merge we're never going to need those images again; it's not as if we're going to deploy from an unapproved PR, and there are dozens of them every day. We would also like to delete the review app and take down those extra deployments once we're done with them.
I can't find anything in the documentation for this. Am I missing something?
I've been wondering the same thing: how does one clean up all the "ephemeral" resources? Since the documentation says "ephemeral", I had hoped this functionality was built in.
Even though we trigger on master, I don't believe there is enough information available in the pipeline to know which PR caused the commit to master, so we cannot be guaranteed to tear down the correct review app.
I've been working on this today and think I've got it working.
I made a new pipeline that uses the kubectl task to delete the review app's namespace. It also uses the Azure CLI task to run az acr purge to delete the images created for the PR; their tags are prefixed with the PR number so the purge can identify them.
I set this pipeline not to run on CI triggers and not to download the source; a sketch is shown below.
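A minimal YAML sketch of such a cleanup pipeline, for reference. The service connection, resource group, cluster, registry and repository names are placeholders, the namespace naming convention is an assumption, and the image purge is expressed as az acr run executing the acr purge command, which is how ACR exposes purging from the CLI. The PR number is declared as a runtime parameter here; a queue-time variable, as described above, works as well if it is allowed to be set at queue time.

```yaml
# Review-app cleanup pipeline: only queued via the REST API (trigger: none)
# and it never needs the repository contents (checkout: none).
trigger: none

parameters:
  - name: prNumber        # supplied by the Azure Function when queuing the run
    type: string

steps:
  - checkout: none

  # Delete the review app's namespace in AKS.
  - task: Kubernetes@1
    displayName: Delete review namespace
    inputs:
      connectionType: Azure Resource Manager
      azureSubscriptionEndpoint: 'my-service-connection'   # placeholder
      azureResourceGroup: 'my-resource-group'              # placeholder
      kubernetesCluster: 'my-aks-cluster'                  # placeholder
      command: delete
      arguments: namespace review-${{ parameters.prNumber }}

  # Purge the container images that were tagged with the PR number.
  - task: AzureCLI@2
    displayName: Purge PR images from ACR
    inputs:
      azureSubscription: 'my-service-connection'           # placeholder
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        az acr run --registry myregistry \
          --cmd "acr purge --filter 'my-repo:${{ parameters.prNumber }}-.*' --untagged --ago 0d" /dev/null
```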
Then I made an Azure Function that calls the Run Pipeline API to run it, passing the PR number as a variable to the pipeline.
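The call the function makes is roughly the following (using the same URL convention as the calls further down; the prNumber name matches the sketch above and is an assumption):
POST <azure-dev-ops-base-url>/<organization>/<project>/_apis/pipelines/{pipeline-id}/runs?api-version=6.0-preview.1
with a JSON body along the lines of { "templateParameters": { "prNumber": "123" } } (or "variables" instead of "templateParameters" if the pipeline takes the PR number as a queue-time variable).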
Finally I used Service Hooks to invoke the Azure Function on PR Updated.
The only thing outstanding is that the Environment resource that is created automatically on the DevOps site does not get deleted. I cannot find an API for this, so we might have to live with that.
By using the browser dev tools and clicking the delete button in Azure DevOps, I managed to find out how to delete environment resources using the Azure DevOps REST API.
You can send:
DELETE <azure-dev-ops-base-url>/<organization>/_apis/distributedtask/environments/{environment-id}/providers/kubernetes/{resource-id}?api-version=6.0-preview.1
To find environment-id and resource-id use:
GET: <azure-dev-ops-base-url>/<organization>/_apis/distributedtask/environments?api-version=6.0-preview.1
GET: <azure-dev-ops-base-url>/<organization>/_apis/distributedtask/environments/{environment-id}?expands=resourceReferences&api-version=6.0-preview.1

Azure DevOps Build Pipeline - A failed build still gets deployed to Azure

I'm trying to create a CI/CD pipeline for an example prototype. Thus, I've started simple enough to test my infrastructure: I'm using an almost untouched ASP.NET Framework Web App boilerplate (targeting 4.6.1). The steps I've completed are:
App is deployed to an Azure App Service.
Its version control is hosted with Azure DevOps.
A build pipeline with the following tasks has been created, set up and verified to execute (the tasks and their order come from a template):
The Azure Deployment Options/Settings are bound to the DevOps repository, so builds are also displayed in Azure and should be deployed there if successful.
The Build Pipeline is bound to the correct repository inside DevOps
Builds get triggered by pushing to the master branch
The next step was to verify that a broken build (because of failed tests or any other reason) is not deployed to production in Azure. I've created a failing test for this reason.
And this is where I'm left stumped. Builds do fail as expected and the "App Service Deploy" task is skipped, because the build tasks before it have a failure:
And yet those broken builds still get deployed to Azure and to production without even waiting for the pipeline to finish. I'm verifying that a change has actually happened with small visual updates.
The build starts and finishes in Azure as soon as a push occurs, before the pipeline in DevOps is fully traversed (or even started, if finding an agent takes longer):
(DevOps still not finished):
What am I doing wrong here? Am I understanding the pipeline wrong? Have I missed a set-up step somewhere? I'm lost.
Edit: As asked by Josh, here's my trigger as well:
Edit 2.2 A bit more clarification of my deployment options in my App Service in Azure, related to Daniel's comments:
This turned out to be the issue.
This is the only option I'm allowed to choose when tying my deployment to DevOps. I'm not allowed to choose a pipeline, just a project and a branch. In a tutorial I compared with, the settings are the same (at least in this menu), but the build is not triggered from the repository; it waits for the pipeline to reach the appropriate step first, which is why I hadn't considered it to be the culprit. Is there some additional setup I've missed that indicates it must wait for a pipeline, rather than firing straight away on branch changes?
The deployment you have set up in the Azure portal is tied to source control only, not your build definition. So every time you commit to source control, two things happen that are totally disconnected from each other and start in parallel since they listen to the same repository for changes:
A build fires off in the pipeline.
The Azure website is updated with the version you just pushed to source control, since its deployment options are bound to it.
Remove #2 and your problem will go away. You set the App Service you want updated in the pipeline; you don't need an additional hook in the App Service itself.
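For reference, a minimal sketch of what the deployment step can look like when it lives only in the pipeline, using the AzureWebApp task (the service connection and app name are placeholders). Because it is an ordinary pipeline step, it is skipped automatically when an earlier build or test task fails:

```yaml
steps:
  # ... restore, build and test tasks come first ...

  # Deploy from the pipeline only; no deployment binding on the App Service
  # itself is needed. This step is skipped if any previous step failed.
  - task: AzureWebApp@1
    displayName: Deploy to App Service
    condition: succeeded()   # the default behaviour, shown here for emphasis
    inputs:
      azureSubscription: 'my-service-connection'   # placeholder
      appType: webApp
      appName: 'my-app-service'                    # placeholder
      package: '$(Build.ArtifactStagingDirectory)/**/*.zip'
```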

Best workflow to restart app service after a git push

Short version: How can I deploy a new version without first manually stopping the app-service?
Long version:
I'm using the following workflow to publish a new version of my ASP.NET Core app to an Azure App-Service.
The App-Service is running on a basic instance. I understand this is not intended for real use, but I hope there is a good way to get this workflow running before we go into production (standard instance).
This works, but how can I avoid steps 4 to 7?
1. Publish the solution into a local folder.
2. Move the published content into a local git repo.
3. Commit all files and push to the app-service.
4. Stop the app-service from the portal.
5. Enter the console and delete all files in the wwwroot folder.
6. Redeploy the commit from the portal.
7. Start the app-service.
I was hoping that the push in step 3 would automatically trigger the remaining steps.
After step 3 I can see that the files have been updated; the new static files are served to the browser, but the old binary is still running.
Similarly, I can switch between deployment slots in the portal: the new static files are served, but the previously deployed binary is still answering all calls.
This doesn't work; the static files are changed but the old binary still responds to calls:
1. Redeploy from the portal.
2. Restart the app-service.
The old binary is still served.
This works:
1. Stop the app-service.
2. Deploy from the portal.
3. Start the app-service.
It appears the running binary is blocking the deployment.
How can I automate deployment using git push or from the portal without manually having to stop the service?
Application settings:
You need to enable the MSDeploy flag MSDEPLOY_RENAME_LOCKED_FILES=1 in the Azure App Service application settings. When set, this option allows MSDeploy to rename files that are locked during app deployment.
Click Application settings and scroll down until you see App settings.
Set the key MSDEPLOY_RENAME_LOCKED_FILES and for its value put 1.
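If you prefer to set this from a script rather than the portal, an equivalent Azure CLI call (the resource group and app name are placeholders) would be:
az webapp config appsettings set --resource-group <resource-group> --name <app-name> --settings MSDEPLOY_RENAME_LOCKED_FILES=1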
How can I deploy a new version without first manually stopping the app-service?
When I develop my .NET Core web application in Visual Studio, I leverage the publish wizard, check the option Remove additional files at destination, and use the App Offline support by setting EnableMSDeployAppOffline to true in the publish profile used for publishing my application to the Azure Web App.
Based on your current deployment workflow, I assume that you are using continuous deployment to your Azure App Service from your local Git repository. After I change the source code, commit the changes to the local repository and push to my web app's remote repository, the source code is built and copied to D:\home\site\wwwroot on the Azure side. For details, you could follow Local Git Deployment to Azure App Service.
For your steps 1 to 3, I just push the code changes from the local repository to my App Service remote repository. Azure generates the deployment script for you to build your project and move the built content to D:\home\site\wwwroot. Moreover, you could use a Custom Deployment Script for any additional requirements.

Azure continuous deployment issue when using Node.js and Github

Azure allows you to deploy from source code located in GitHub. I've been able to create my first Node app in Visual Studio, but when trying to set up continuous deployment (meaning that whenever I push changes to the GitHub repository, it is automatically published to my Azure website), it won't update. My Azure portal shows that commits have been made and updates the deployments, but the website won't show the actual changes. What do I need to do to make this work?
