I read this article: Assign specific agent on Azure DevOps YAML Pipelines.
When I run my CI pipeline, a Dockerfile is built as part of it. The Dockerfile pulls a base image and then builds my application image.
When we investigated, we found that if building the Dockerfile takes X minutes, roughly 90% of that time is spent just downloading the base image.
We are currently using Microsoft-hosted agents.
Are Microsoft-hosted agents recycled each time any user runs a job on them? In other words, if I get the same Microsoft-hosted agent machine again, will it still hold the base image from the last pipeline run?
With Microsoft-hosted agents, each time you run a pipeline you get a fresh virtual machine. The virtual machine is discarded after one job finishes, which means any change a job makes to the virtual machine's file system, such as pulling a base image, will be unavailable to the next job. So if you want to minimize the time spent pulling the image, use a self-hosted agent. You can even use your local PC as a self-hosted agent.
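As a sketch of that approach, a pipeline can target a self-hosted pool so the base image layers stay in the agent's local Docker cache between runs. The pool and repository names below are placeholders, not values from the question:

```yaml
# Hypothetical snippet: run the Docker build on a self-hosted pool so the
# base image survives in the agent machine's local Docker cache.
pool:
  name: MySelfHostedPool      # placeholder: your self-hosted agent pool

steps:
  - task: Docker@2
    displayName: Build image
    inputs:
      command: build
      Dockerfile: '**/Dockerfile'
      repository: my-app      # placeholder repository name
      tags: $(Build.BuildId)
```

After the first run pulls the base image, subsequent builds on the same agent reuse the cached layers instead of downloading them again.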
Specific stage gets stuck in queue only on a specific machine

- Shows "Position in queue: 1", but cannot connect to the agent, even though no other jobs are running and the queue is empty
- All other stages work fine
- The faulty stage works fine on a different machine
- The agent is online and enabled
- All our agents are self-hosted on local machines
- This stage runs SonarQube analysis
- Tried restarting the agent and the machine, and installing a new agent on the same machine
Check whether your Azure DevOps agent has the agent capabilities that your stage may need. A stage may have certain demands that must be satisfied by the agent.
UPDATE: it turned out Java had to be installed not only globally, but also for the user account running the agent.
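For reference, demands can be declared in YAML so the job is only routed to agents advertising the matching capability. This is a minimal sketch; the pool name and capability name are placeholders:

```yaml
# Hypothetical snippet: only schedule this job on agents whose
# capabilities include an entry named "java".
pool:
  name: Default     # placeholder self-hosted pool
  demands:
    - java
```

If no online agent in the pool satisfies the demands, the job stays queued, which matches the "stuck in queue" symptom above.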
I'm currently using a container resource in my Azure pipeline to pull my custom tools image from ACR and use it to create a container that can run several CLI commands in my pipeline.
Pulling this custom tools image takes a long time, roughly around 5 minutes, and I want to avoid that: it is wasted time and a blocker, since I spend most of my time debugging.
Question: Is it possible to create an Azure Container Instance that constantly runs my custom tools image, and then call this container from my pipeline to run some CLI commands?
I'm having a hard time finding documentation, so I'm not really sure if it is possible.
You can try setting up a self-hosted agent in your custom Docker image; when the container runs on ACI, it will install the agent and connect it to your Azure DevOps organization.
Then you can use this self-hosted agent to run the pipeline job that needs the CLI commands. Since the agent is hosted in the container, any job that runs on it will run inside the container.
For how to set up a self-hosted agent in a Docker image, see the document "Run a self-hosted agent in Docker".
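As a hedged sketch of the ACI side, the container built from that document can be created with the agent's connection details passed as environment variables (the `AZP_*` variables come from the "Run a self-hosted agent in Docker" document; the resource group, registry, pool, and service connection names are placeholders):

```yaml
# Hypothetical one-off step that creates the ACI instance hosting the agent.
- task: AzureCLI@2
  inputs:
    azureSubscription: my-service-connection   # placeholder service connection
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      az container create \
        --resource-group my-rg \
        --name tools-agent \
        --image myacr.azurecr.io/tools-agent:latest \
        --environment-variables AZP_URL=https://dev.azure.com/my-org AZP_POOL=aci-pool \
        --secure-environment-variables AZP_TOKEN=$(AgentPat)
```

Once the container registers in the pool, jobs that target that pool run inside the long-lived container, so the 5-minute image pull happens only once.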
I have three Windows self-hosted runners for GitHub Actions.
I need to turn the machines on when a pipeline runs and turn them off once the pipeline finishes.
But I want to make sure no other pipelines are running before turning a machine off.
Also, on Azure, unlike AWS, shutting down the machine from inside the OS doesn't stop the billing, so we need to deallocate it via the Azure portal/API. I thought about running that command at the end of the pipeline, but it needs another runner (either self-hosted or GitHub-hosted) to do that step; if I do it on the same runner, it won't report the return code correctly because the runner will be turned off.
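One common pattern, sketched below under assumptions, is to bracket the self-hosted job with two GitHub-hosted jobs: one starts the VM, one deallocates it (deallocating, unlike an in-OS shutdown, stops the compute billing). All resource names and the `AZURE_CREDENTIALS` secret are placeholders:

```yaml
# Hypothetical workflow: GitHub-hosted jobs start and deallocate the
# runner VM, so the self-hosted runner never has to turn itself off.
jobs:
  start-vm:
    runs-on: ubuntu-latest
    steps:
      - uses: azure/login@v2
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - run: az vm start --resource-group my-rg --name runner-vm

  build:
    needs: start-vm
    runs-on: self-hosted
    steps:
      - run: echo "pipeline work here"   # placeholder for the real job

  stop-vm:
    needs: build
    if: always()        # deallocate even if the build job fails
    runs-on: ubuntu-latest
    steps:
      - uses: azure/login@v2
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - run: az vm deallocate --resource-group my-rg --name runner-vm
```

This serializes jobs within one workflow run; guarding against other concurrent workflows using the same runner would still need an extra check (for example, querying queued runs via the GitHub API) before deallocating.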
As a DevOps beginner, I would like to know how to use one VM for Azure pipeline runs. When an Azure pipeline run starts, it always gets a fresh VM from Azure.
For caching and file-saving purposes, I want to use a reserved VM for pipeline runs.
I appreciate your suggestions and support.
See the screenshot: in Azure DevOps, we can run the pipeline on either a hosted agent or a self-hosted agent.
Azure Pipelines provides a pre-defined agent pool named Azure Pipelines; these are hosted agents, and each time you run a pipeline you get a fresh virtual machine that is discarded after one use.
For caching and file saving purposes, I want to use a reserved VM for pipeline run.
We can refer to this doc to install a self-hosted agent; since the machine persists between runs, the cache is preserved.
You can set up a self-hosted agent. That would be your own VM, over which you have total control. I'm not sure whether this will be any cheaper than hosted agents.
I used a self-hosted agent a while ago and saved some money by booting the VM only when needed. After a while it would shut down again.
Source: Self-hosted agents
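To actually stop the billing when the agent is idle, the VM must be deallocated rather than just shut down from inside the OS. A hedged sketch of a cleanup step that could run on a Microsoft-hosted agent (the service connection and resource names are placeholders):

```yaml
# Hypothetical final step: deallocate the self-hosted agent VM so it
# stops accruing compute charges; a merely "stopped" VM still bills.
- task: AzureCLI@2
  inputs:
    azureSubscription: my-service-connection   # placeholder
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: az vm deallocate --resource-group my-rg --name build-agent-vm
```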
I am working on building a CI/CD pipeline for a .NET application. I am finished with the CI part and everything is working as expected; the pipeline produces an artifact with the website files in it.
Instead of deploying this application to a single VM, I would like to deploy it to a virtual machine scale set.
I understand this is not a direct, explicit question, but I am really trying to understand how people do this. I couldn't find any accurate documentation on it.
From my understanding, there are two different approaches:
1. Build an immutable image and push it to the VMSS using the built-in tasks of the release pipelines
2. Use extension scripts that push the changes to each VM, as described here
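For the immutable-image approach, a release stage could repoint the scale set at the freshly built image version and then roll it out. This is a sketch under assumptions: the resource names and the `imageId` variable (the resource ID of the new image, e.g. from Packer or Azure Image Builder) are placeholders:

```yaml
# Hypothetical deploy step: swap the VMSS image reference to the new
# immutable image, then upgrade the existing instances to it.
- task: AzureCLI@2
  inputs:
    azureSubscription: my-service-connection   # placeholder
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      az vmss update \
        --resource-group my-rg \
        --name my-vmss \
        --set virtualMachineProfile.storageProfile.imageReference.id=$(imageId)
      az vmss update-instances --resource-group my-rg --name my-vmss --instance-ids "*"
```

New instances created by scale-out pick up the new image automatically; `update-instances` brings the existing ones forward.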