We currently have more than one self-hosted Azure Pipelines agent running on one server. Recently we have noticed the pipelines failing with "Network Path Issues": it looks like all the steps run on one agent, but somehow one of the steps jumps to a different agent, causing it to fail. Is there a way to separate this other than creating new servers for each agent?
I can't reproduce the same issue on my side. I assume the self-hosted agents you mentioned above are in the same agent pool. If so, as far as I know DevOps doesn't have an option to separate agents within the same agent pool when the agents are installed on the same machine.
As for the strange behavior you're seeing, you can try the following to resolve it:
1. Since you have more than one self-hosted agent in the same agent pool running on the same server, I suggest you separate those agents into different agent pools. Because your agents run on the same server, one agent pool per agent may be more suitable in this situation.
2. If your steps are not all in the same agent job, check and make sure your different agent jobs use the same agent pool.
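To illustrate point 2, here is a minimal YAML sketch, assuming a hypothetical self-hosted pool named `SelfHostedPool`: both jobs explicitly target the same pool so neither job falls back to a different pool by default.

```yaml
# Hypothetical pool and agent names; both jobs are pinned to one pool.
jobs:
- job: Build
  pool:
    name: SelfHostedPool
  steps:
  - script: echo "build steps"

- job: Deploy
  dependsOn: Build
  pool:
    name: SelfHostedPool      # same pool as the Build job
    # To force a specific agent within the pool, you can add a demand:
    # demands:
    # - Agent.Name -equals myserver-agent-1
  steps:
  - script: echo "deploy steps"
```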
Hope it helps:)
After going through a lot of logs from all the pipelines that were having issues, I was able to find some similarities. Most of the issues occurred on steps that used the PowerShell task, and the task was outdated (it had been replaced with a new one by Azure). After updating all the PowerShell tasks, the issue seems to be gone.
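In YAML, updating an outdated task usually means referencing the current major version explicitly. A minimal sketch (the inline script is just a placeholder):

```yaml
steps:
# Reference the task's current major version explicitly instead of
# relying on an old, deprecated task reference.
- task: PowerShell@2
  inputs:
    targetType: inline
    script: |
      Write-Host "Running on agent: $(Agent.Name)"
```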
I'm having issues troubleshooting long build times for my ADO pipeline. Build steps are taking much longer than expected. We're using self-hosted agents, and I suspect there may be a problem there, but I wanted input on any other directions I should take in investigating.
According to your description, do you mean that the same pipeline run takes much less time on a Microsoft-hosted agent?
I suggest you check the pipeline debug logs from both a Microsoft-hosted agent and your self-hosted agent to see which tasks and steps run at different speeds. You could also share the logs with us for investigation.
You could also configure another self-hosted agent on a different local machine and test again. The time spent on the same pipeline run will usually vary with the performance of the agent.
More generally, a pipeline run on a self-hosted agent is affected by multiple factors, such as hardware specifications and network transmission.
We've got two YAML Pipelines, pull-request.yml and main.yml. As the names suggest, pull-request.yml runs on every PR, and main.yml runs once deployed to main.
I've configured two MS hosted parallel jobs.
In main.yml using deployment jobs, I'm deploying to various Environments. It all works well, except when main.yml is executed twice, in parallel. Then, it will deploy to the same environment in each pipeline, causing issues with our IAC scripts.
Looking at the documentation, it doesn't seem possible to restrict this behavior with YAML pipelines.
My workaround for now is to switch back to 1 parallel job, but I want to have parallel jobs for my pull-request.yml pipelines. Then I thought, let's create another agent pool, but that only allows me to add self-hosted agents. I want to avoid that, as MS-hosted agents are very convenient.
How can I have parallel jobs for my pull-request.yml but only a single instance for main.yml with MS hosted agents only?
It's not supported to have parallel jobs for pull-request.yml but only a single job at a time for main.yml with MS-hosted agents, since Microsoft automatically allocates an agent to the pipeline whenever the requirements are met and uses it to run the job.
But for your main.yml, which deploys to environments, you may be able to use the "Exclusive lock" policy on the environment.
As doc mentioned:
With this update, you can ensure that only a single run deploys to an environment at a time. By choosing the "Exclusive lock" check on an environment, only one run will proceed. Subsequent runs which want to deploy to that environment will be paused. Once the run with the exclusive lock completes, the latest run will proceed. Any intermediate runs will be canceled.
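A minimal sketch of how this could look, assuming a hypothetical environment named `production` with the Exclusive Lock check enabled on it (Environments > Approvals and checks > Exclusive lock). The `lockBehavior` keyword controls what happens to queued runs: `sequential` runs them all in turn, while `runLatest` (the default) runs only the most recent and cancels intermediate runs.

```yaml
# main.yml -- Exclusive Lock check must be enabled on the environment.
lockBehavior: sequential

stages:
- stage: Deploy
  jobs:
  - deployment: DeployInfra
    environment: production   # hypothetical environment name
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "run IaC scripts here"
```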
I've run into a problem that I've been fighting for a few days already, without success. I have a multi-stage pipeline written for Azure DevOps and a self-hosted agent. Is it possible to run multiple concurrent runs, for different branches, in different workspaces?
I mean, I have queued runs for dev, dev2, master, etc., and I want to run three concurrent runs in separate workspaces for them.
You need to install multiple instances of the agent on your build server. One agent only runs one job at a time, but you can install as many copies of the agent on the same server as you want: just extract the agent into a new folder and register it.
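A rough sketch of registering a second agent copy on Linux, assuming you have already downloaded the agent package and created a PAT with agent-pool management scope; the folder, agent names, organization URL, and token are placeholders.

```shell
# Extract a second copy of the agent into its own folder.
mkdir ~/agent-2 && cd ~/agent-2
tar zxf ~/Downloads/vsts-agent-linux-x64-*.tar.gz

# Register this copy as a second agent; each copy needs its own
# agent name and gets its own work folder.
./config.sh --unattended \
  --url https://dev.azure.com/<your-org> \
  --auth pat --token <your-pat> \
  --pool Default \
  --agent myserver-agent-2 \
  --work _work

# Run interactively, or install as a service so both agents run side by side.
./run.sh
# sudo ./svc.sh install && sudo ./svc.sh start
```

Each registered copy shows up as a separate agent in the pool and maintains its own workspace under its `_work` folder, so concurrent runs for different branches don't collide.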
Currently, I am working on a Django-based project which is deployed to Azure App Service. There are two options for deploying to App Service: one using DevOps and another using the VS Code plugin. Both scenarios work fine, but strangely, deploying to App Service via DevOps is slower than the VS Code deployment. Via DevOps it usually takes around 17-18 minutes, whereas via VS Code it takes less than 14 minutes.
Is there any reason behind this?
Assuming you're using Microsoft hosted build agents, the following statements are true:
With Microsoft-hosted agents, maintenance and upgrades are taken care of for you. Each time you run a pipeline, you get a fresh virtual machine. The virtual machine is discarded after one use.
and
Parallel jobs represents the number of jobs you can run at the same time in your organization. If your organization has a single parallel job, you can run a single job at a time in your organization, with any additional concurrent jobs being queued until the first job completes. To run two jobs at the same time, you need two parallel jobs.
Microsoft provides a free tier of service by default in every organization that includes at least one parallel job. Depending on the number of concurrent pipelines you need to run, you might need more parallel jobs to use multiple Microsoft-hosted or self-hosted agents at the same time.
The first statement might make an Azure Pipeline slower because it does not have any cached information about your project. If you're only talking about deploying, the pipeline first needs to download (and possibly extract) an artifact before it can deploy it. If you're also building, it might need to bring in the entire source code and/or external packages before it can build.
The second statement might make it slower because there might be less parallelization possible than on the local machine.
Next to these two possible reasons, the agents will most probably not have the specs of your development machine, causing them to run tasks slower than they can on your local machine.
You could look into hosting your own agents to eliminate these possible reasons.
Do self-hosted agents have any performance advantages over Microsoft-hosted agents?
In many cases, yes. Specifically:
If you use a self-hosted agent, you can run incremental builds. For example, if you define a pipeline that does not clean the repo and does not perform a clean build, your builds will typically run faster. When you use a Microsoft-hosted agent, you don't get these benefits because the agent is destroyed after the build or release pipeline is completed.
A Microsoft-hosted agent can take longer to start your build. While it often takes just a few seconds for your job to be assigned to a Microsoft-hosted agent, it can sometimes take several minutes for an agent to be allocated depending on the load on our system.
More information: Azure Pipelines Agents
When you deploy via a DevOps pipeline, you go through many more steps. See below:
Process the pipeline --> Request an agent (wait for an available agent to be allocated to run the jobs) --> Download all the tasks needed to run the job --> Run each step in the job (download source code, restore, build, publish, deploy, etc.).
If you deploy the project in a release pipeline, the above process is repeated again in the release pipeline.
You can check the document Pipeline run sequence for more information.
However, when you deploy via the VS Code plugin, your project gets restored and built on your local machine, and then it is deployed to the Azure web app directly from your local machine. So deploying via the VS Code plugin is faster, since far fewer steps are needed.
Situation:
I have a pipeline job that executes tests in parallel. I use Azure VMs that I start/stop on each build of the job through PowerShell. Before I run the job, it checks whether there are available VMs on Azure (offline VMs) and then uses those VMs for that build. If there are no available VMs, I fail the job. Now, one of my requirements is that instead of failing the build, I need to queue the job until one of the nodes is offline/available and then use those nodes.
Problem:
Is there any way for me to do this? Any existing plugin or build wrapper that will allow me to queue the job based on the status of the nodes? I was forced to do this because we need to stop the Azure VMs to reduce cost.
At the moment, I am still researching whether this is possible or whether there is another way to achieve it. I am thinking of a Groovy script that will check the nodes and, if none are available, manually add the job to the build queue until at least one is available. The closest plugin I have found is the Run Condition plugin, but I don't think it will work.
I am open to any approach that will help me achieve this. Thanks
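One simple approach is to replace the fail-fast check with a poll-and-wait loop inside the job itself. A minimal sketch in Python, where `list_offline_vms` is a placeholder for whatever query you already run (e.g. your PowerShell/Azure lookup for deallocated VMs):

```python
import time

def wait_for_available_vm(list_offline_vms, timeout_s=1800, poll_s=30):
    """Poll until at least one offline (deallocated) VM is available,
    instead of failing the build immediately.

    list_offline_vms: callable returning a list of available VM names
                      (a placeholder for your existing Azure query).
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        available = list_offline_vms()
        if available:
            return available[0]      # pick the first free VM
        time.sleep(poll_s)           # wait, then re-check
    raise TimeoutError("No VM became available within the timeout")
```

Run this as the first step of the job: the job effectively "queues" itself by blocking until a VM frees up, and only fails if the timeout expires. The same loop is easy to translate to PowerShell if you prefer to keep everything in one script.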