I've been setting up a pipeline for our project with Azure Pipelines using YAML. I'm currently adding review apps with Azure Kubernetes Service and was wondering how we should clean up after the PR is merged.
We are building Docker images, pushing them to our registry, and deploying them to a new dev space. Then it's all just left there.
After the merge we're never going to need those images again; it's not as if we're going to deploy from an unapproved PR, and there are dozens of them every day. We would also like to delete the review app and take down those extra deployments once we're done with them.
I can't find anything in the documentation for this. Am I missing something?
I've been wondering the same thing: how does one clean up all the "ephemeral" resources? Seeing the documentation call them "ephemeral", I had hoped this functionality was built in.
Even though we trigger on master, I don’t believe there is enough information available in the pipeline to know which PR caused the commit to master, thus we cannot be guaranteed to tear down the correct review app.
I've been working on this today and think I've got it working.
I made a new pipeline that uses the Kubectl task to delete the review app namespace. It also uses the Azure CLI task to run az acr purge to delete the images created for the PR; the image tags are prefixed with the PR number, so the purge can identify them.
I set this pipeline to not run with CI and not download the source.
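For anyone looking for a starting point, here is a minimal sketch of what that cleanup pipeline could look like. Everything named here is a placeholder I made up for illustration (the service connection, resource group, cluster, registry, image repository, the prNumber variable, and the review-pr-<number> namespace scheme), and note that acr purge actually runs as an ACR task via az acr run:

```yaml
# Hypothetical cleanup pipeline; all names are placeholders.
trigger: none            # never run on CI; queued only via the Run Pipeline API

steps:
- checkout: none         # no source needed for cleanup

- task: Kubectl@1
  displayName: Delete the review app namespace
  inputs:
    connectionType: Azure Resource Manager
    azureSubscriptionEndpoint: my-azure-service-connection
    azureResourceGroup: my-resource-group
    kubernetesCluster: my-aks-cluster
    command: delete
    arguments: namespace review-pr-$(prNumber)

- task: AzureCLI@2
  displayName: Purge the PR's images from ACR
  inputs:
    azureSubscription: my-azure-service-connection
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # delete every tag in the repository that is prefixed with the PR number
      az acr run --registry myregistry \
        --cmd "acr purge --filter 'my-app:^pr-$(prNumber)-.*' --ago 0d" /dev/null
```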
Then I made an Azure Function that calls the Run Pipeline REST API to run it, passing the PR number as a variable to the pipeline.
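For reference, the call looks roughly like this; the prNumber variable name is my own placeholder, matching whatever the cleanup pipeline expects:

POST <azure-dev-ops-base-url>/<organization>/<project>/_apis/pipelines/{pipeline-id}/runs?api-version=6.0-preview.1

{
  "variables": {
    "prNumber": { "isSecret": false, "value": "<PR number from the service hook payload>" }
  }
}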
Finally, I used Service Hooks to invoke the Azure Function on the "pull request updated" event.
The only thing outstanding is that the Environment resource created automatically on the DevOps site does not get deleted. I cannot find an API for this, so we might have to live with that.
By using the browser dev tools and clicking the delete button in ADO, I managed to find out how to delete environment resources using the Azure DevOps REST API.
You can send:
DELETE <azure-dev-ops-base-url>/<organization>/<project>/_apis/distributedtask/environments/{environment-id}/providers/kubernetes/{resource-id}?api-version=6.0-preview.1
To find environment-id and resource-id use:
GET <azure-dev-ops-base-url>/<organization>/<project>/_apis/distributedtask/environments?api-version=6.0-preview.1
GET <azure-dev-ops-base-url>/<organization>/<project>/_apis/distributedtask/environments/{environment-id}?expands=resourceReferences&api-version=6.0-preview.1
We have been using Azure Pipelines for our CI/CD processes for a few weeks. The CI pipeline gets code from GitHub, builds, tests, and creates a deploy package.
At the beginning I am quite certain that every commit got detected as intended, but recently that is not the case. Manual triggers and scheduled triggers work, but continuous integration does not.
What could be the causes for this?
In the pipeline, we checked the box for "continuous integration", and we use the recommended GitHub App to provide authorization. This is verified to work; we can see the authorized GitHub repos in the pipeline's settings.
You can check whether the GitHub branch you committed to is included in the branch filters; if it is not, click Add to add the branch (for YAML pipelines, the equivalent trigger block is sketched after this list).
Check whether there is a skip-CI command (e.g. [skip ci]) in the commit message. See here for more information.
If the CI trigger is still not working even though all the settings are correct, you can try the workarounds below:
1. Disable the CI trigger, save, then re-enable it and save again.
2. Clone your build definition.
3. Create a new build pipeline with the same trigger and settings.
If none of the above works, you can go to this site to see whether there is an Azure DevOps service outage.
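For a YAML pipeline, the branch-filter check from the first point looks roughly like this (the branch names are just examples):

```yaml
# Commits to branches not matched under "include" will not trigger CI.
trigger:
  branches:
    include:
    - master
    - releases/*
```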
We ended up changing how we connect from Azure Pipelines to GitHub. The recommended way is to install the Azure Pipelines app in GitHub and connect using that in Pipelines. My experience is that it worked at first, but then stopped working. I read somewhere that only the first connection works with webhooks, so maybe we tried it somewhere else or did something that broke it. I ended up using a GitHub service account to pull and listen for webhooks, and that works just as expected.
I have a project that consists of an Azure webapp, a PostgreSQL on Azure, and multiple Azure functions for background ETL workflows. I also have a local Python package that I need to access from both the webapp and the Azure functions.
How can I structure configuration and script deployment for those resources from a single git repo?
Any suggestions or pointers to good examples or tutorials would be very appreciated.
All the Azure tutorials that I've seen are only for small and simple projects.
For now, I've hand-written an admin.py script that handles the webapp and function deployments by building a Python package, creating ZIP files for each resource, and doing ZIP deployments. This is getting messy: now I want to have QA and PROD versions, and I need to pass secrets so that the DB is reachable, so it's getting more complex. Is there either a nice way to structure this packaging/deployment, or a tool to help with it? For me, putting everything in Kubernetes is not the solution; for one thing, the DB already exists. Also, Azure DevOps is not an option, since we are using GitLab CI, so eventually I want a solution that can run on CI/CD there.
Not sure if this will help completely, but here we go.
Instead of using a hand-written admin.py script, try using a YAML pipeline flow. For GitLab, they have https://docs.gitlab.com/ee/ci/yaml/ to get you started. From what you've indicated, I would recommend having several job steps in your YAML pipeline that build and package your web and function apps. For deployment, you can make use of environments. Have a look at https://docs.gitlab.com/ee/ci/multi_project_pipelines.html as well, which illustrates how you can create downstream pipelines.
From a deployment standpoint, the current integration I've found between Azure and GitLab leaves me with two recommendations:
- Leverage the script keyword in the YAML to keep zipping your artifacts, and use the Azure CLI (I would assume you can install the tools during the pipeline) to do ZIP deploys; a sketch follows this list.
- Keep your code inside the GitLab repo and utilize Azure Pipelines to handle the CI/CD for you.
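To make the first option concrete, here is a minimal .gitlab-ci.yml sketch under some invented assumptions: the webapp/ folder layout, the shared_package path, the resource group and app names, and service-principal credentials stored as masked GitLab CI/CD variables:

```yaml
# Hypothetical .gitlab-ci.yml; all names and paths are placeholders.
stages:
  - build
  - deploy

build_webapp:
  stage: build
  image: python:3.11
  script:
    - apt-get update -qq && apt-get install -y -qq zip
    # vendor the shared local package into the app folder so the ZIP is self-contained
    - pip install --target webapp/.packages ./shared_package
    - cd webapp && zip -r ../webapp.zip .
  artifacts:
    paths:
      - webapp.zip

deploy_webapp:
  stage: deploy
  image: mcr.microsoft.com/azure-cli
  script:
    # AZ_SP_ID / AZ_SP_SECRET / AZ_TENANT are masked CI/CD variables
    - az login --service-principal -u "$AZ_SP_ID" -p "$AZ_SP_SECRET" --tenant "$AZ_TENANT"
    - az webapp deployment source config-zip --resource-group my-resource-group --name my-webapp --src webapp.zip
  environment: qa
```

A QA/PROD split can then be two deploy jobs with different environment names and variables, and the function apps can be ZIP-deployed the same way with az functionapp deployment source config-zip.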
I hope you find this helpful.
I'm using an Azure DevOps repository for a .NET Core web API, and it happens that when I push code to the repository, the application is immediately published to an Azure App Service. Since I'm using Azure Pipelines to execute some checks before publishing, I need to disable this automatic deployment triggered by the push, but I couldn't figure out how. Is there a way to do it?
By accessing the Kudu service at xxxxxxxxxxxxxx.scm.azurewebsites.net I noticed that there's a folder under site/deployments/tool that contains two files:
- deploy.cmd
- deploymentCacheKey
If I manually remove them, they're automatically recreated once a push is done and the unwanted deployment operation happens.
I have other repositories that have the same folder, but there it's empty; when I push, it remains empty and no unwanted deployment operation happens.
Do you have any suggestion about how to disable this behavior?
Edit: added screenshots of the continuous deployment trigger in the release pipeline and of the release history.
You can check whether the continuous deployment trigger is disabled by clicking the lightning-bolt icon on the artifact in the release pipeline.
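If the deployments turn out to come from the App Service's own source-control binding rather than from a release pipeline (which the deploy.cmd file seen in Kudu hints at), one way to inspect and then remove that binding is via the Azure CLI; the app and resource group names here are placeholders:

az webapp deployment source show --name <app-name> --resource-group <resource-group>
az webapp deployment source delete --name <app-name> --resource-group <resource-group>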
I'm trying to create a CI/CD pipeline for an example prototype, so I've started simple enough to test my infrastructure: an almost untouched boilerplate of an ASP.NET Framework Web App (targeting 4.6.1). The steps I've completed are:
App is deployed to an Azure App Service.
Its version control is hosted with Azure DevOps.
A build pipeline has been created, set up, and tested to confirm it executes (the tasks and their order come from a template).
The Azure Deployment Options/Settings are bound to the DevOps repository, so builds are also displayed in Azure and should be deployed there if successful.
The Build Pipeline is bound to the correct repository inside DevOps
Builds get triggered by pushing to the master branch
The next step was to verify that a broken build, because of failed tests or any other reason is not deployed to production in Azure. I've created a failing test for this reason.
And this is where I'm left stumped. Builds do fail as expected, and the "App Service Deploy" task is skipped because the build tasks before it have failed.
And yet those broken builds still get deployed to Azure, to production, without even waiting for the pipeline to finish. I verify that a change has actually gone out by making small visual updates.
The build starts and finishes in Azure as soon as a push occurs, before the pipeline in DevOps has been fully traversed (or even started, if finding an agent takes longer).
What am I doing wrong here? Am I understanding the pipeline wrong? Have I missed a set-up step somewhere? I'm lost.
Edit: as asked by Josh, here's a screenshot of my trigger as well.
Edit 2.2: a bit more clarification of the deployment options in my App Service in Azure, related to Daniel's comments.
This turned out to be the issue.
This is the only option I'm allowed to choose when tying my deployment to DevOps: I'm not allowed to choose a pipeline, just a project and a branch. In a tutorial I compared with, the settings are the same (at least in this menu), but the build does not get triggered from the repository; it waits for the pipeline to reach the appropriate step first, which is why I hadn't considered this setting to be the culprit. Is there some additional setup I've missed that makes it wait for a pipeline rather than fire straight away on branch changes?
The deployment you have set up in the Azure portal is tied to source control only, not your build definition. So every time you commit to source control, two things happen that are totally disconnected from each other and start in parallel since they listen to the same repository for changes:
A build fires off in the pipeline.
The Azure website is updated with the version you just pushed to source control, since its deployment options are bound to it.
Remove #2 and your problem will go away. You set the App Service you want updated in the pipeline, you don't need an additional hook in the App Service itself.
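To illustrate, here is a rough sketch of what the deploy step looks like when it lives only in the pipeline, with a placeholder service connection and app name (the task version is just a common one):

```yaml
steps:
# ... build and test tasks come first ...

- task: AzureRmWebAppDeployment@4
  displayName: Deploy to App Service
  condition: succeeded()   # skipped automatically when any previous task fails
  inputs:
    ConnectionType: AzureRM
    azureSubscription: my-azure-service-connection
    appType: webApp
    WebAppName: my-app-service
    packageForLinux: $(Build.ArtifactStagingDirectory)/**/*.zip
```

With the App Service's own deployment hook removed, this task is the only thing that can update the site, and it only runs when the build and tests succeed.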
What I want to accomplish:
I want to deploy an Azure Cloud Service via Release Management. I managed to get this working by following the steps outlined in this post. In the post the Azure publishsettings file is added to the project and used in Release Management to deploy the Azure package to a Cloud Service. So far so good.
What is the issue:
The Azure publishsettings file will also contain information about the production environment. I don't want that information to be available to all the developers, and therefore I would like a more secure alternative.
What did I try:
I created a custom action which takes 3 arguments: subscription id, subscription name and certificate key. This way the Azure information stays in Release Management and can be passed to a script. This didn't work because the action is not shown in the Release Template Toolbox.
What is my question:
What is the best way to pass Azure credentials to a deployment script via Release Management in a secure manner?
We have a solution for Build today that will work for RM in the future.
The publish settings file is a sensitive one: anybody who has it can get access to certain operations, and however you pass the file around, it can be misused if someone tries.
So along with the publish settings file, you need to add a bit of process to the deployment, such as:
Deactivate or remove the management certificate after the deployment, which in turn invalidates the given publish settings, so anyone has to request a new publish settings file before they can actually start any release procedure.
Even though this adds a rough edge to your smooth deployment flow, since it is a live or production system it is always better to tighten the process and make it foolproof.