Speeding up solution build in Visual Studio Team Services (was VS Online) - azure

I am using Team Services for my application build/deploy, but I am finding that the build step is extremely slow. It ranges from 6 minutes to sometimes 15 minutes just for the solution build. A large proportion of this time is taken up by the NuGet package restore, which can take up to 5 minutes.
The way I see it, there are two potential ways in which I could speed up the build time, but I am unsure if it is possible to do these things:
Configure Team Services to clone the repository to the same disk location each time it does a build, so that it only needs to restore new / remove old NuGet packages
Upgrade the power of the build agent
Does anyone know if either of these things are possible, or does anyone have any other tips on how to speed up the build step?

If you're using the hosted queue, it has to clone the repo and restore the packages every time; you don't get a dedicated agent, so every build starts from scratch.
You can set up an on-premises build agent if you need capabilities that exceed those of the hosted agent.
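If you go down that route, registering a self-hosted agent is largely a matter of downloading the agent package onto your build machine and running its configuration script. A minimal sketch (the organization URL, token, and names below are placeholders, not real values):

# Run from the unpacked agent folder; see the agent's unattended-config options.
.\config.cmd --unattended `
  --url https://dev.azure.com/yourorganization `
  --auth pat --token YOUR_PERSONAL_ACCESS_TOKEN `
  --pool Default --agent MyBuildAgent --runAsService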

Related

How to restore NuGet packages in an Azure Pipeline?

I am new to Azure DevOps and trying to create my first Azure pipeline. I have an ASP.NET MVC project, and there are a few NuGet packages that need to be restored before the MSBuild step.
Unfortunately, the NuGet restore is failing with the following error:
The pipeline is not valid. Job Job_1: Step 'NuGetCommand' references task 'NuGetCommand' at version '2.194.0' contains an execution handler that relies on NodeJS version '6' which is restricted by your administrator.
NodeJS 6 came disabled out of the box so we are not going to enable it.
My Questions:
Is there an alternative to NuGet restore that does not use NodeJS?
Is there a way to update the NodeJS6 to a higher version?
Update 23-Nov-2021
I have found a workaround for the time being. I am using a custom PowerShell script to restore the NuGet packages and build the Visual Studio project:
$msBuildExe = 'C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\MSBuild\Current\Bin\MSBuild.exe'
Write-Host "Restoring NuGet packages" -ForegroundColor Green
& $msBuildExe $path /p:Configuration=Release /p:Platform=x86 /t:restore
Note: $path here is the path to my .csproj file
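For anyone wiring such a script into a pipeline, a sketch of invoking it as a YAML step (the file path is a placeholder; whether the PowerShell task itself is affected by the NodeJS restriction is discussed in the answer below):

steps:
- task: PowerShell@2
  displayName: Restore and build with MSBuild
  inputs:
    targetType: filePath
    filePath: build/restore-and-build.ps1  # placeholder path to the script above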
Apparently, other people are also hitting the same issue, and it is just a matter of time before the task is updated by the open-source community.
Here are some similar issues being faced in other tasks as well:
https://github.com/microsoft/azure-pipelines-tasks/issues/15526
https://github.com/microsoft/azure-pipelines-tasks/issues/15511
https://github.com/microsoft/azure-pipelines-tasks/issues/15516
https://github.com/microsoft/azure-pipelines-tasks/issues/15525
It's Azure DevOps' NuGetCommand task that uses NodeJS, not NuGet itself. Therefore, the goal is to find a way to restore without using the NuGetCommand task.
Idea 1: use the DotNetCoreCLI task instead. However, this probably won't work for you, since you said your project is ASP.NET MVC rather than ASP.NET Core. It also appears to need NodeJS to run.
Idea 2: Use MSBuild restore. You can test whether this works on your local machine by clearing your global packages folder, or temporarily configuring NuGet to use a different path, and then running msbuild -t:restore My.sln from a Developer PowerShell for Visual Studio prompt. If your project uses packages.config rather than PackageReference, you'll also need to pass -p:RestorePackagesConfig=true (although maybe this is currently broken). I'm not an expert on Azure Pipelines tasks, so I don't know what it means that this task defines both PowerShell and Node execution entry points, but maybe it means it will work even if your CI agent doesn't allow NodeJS.
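As a pipeline step, Idea 2 might look roughly like this (a sketch, untested on a Node-restricted agent; the solution name is a placeholder):

steps:
- task: MSBuild@1
  displayName: Restore via MSBuild
  inputs:
    solution: My.sln  # placeholder
    msbuildArguments: /t:restore /p:RestorePackagesConfig=true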
Idea 3: Don't use any of the built-in tasks; just use - script: or - task: PowerShell@2. Even that is a little questionable, since the powershell task also defines a Node execution entry point. I'm guessing it will work, but I don't have access to a CI agent where NodeJS is forbidden, so I couldn't test even if I wanted to. Anyway, if this works, you can run MSBuild yourself (though it might also be your responsibility to find msbuild.exe if it's not on the path), or you can download nuget.exe yourself and execute it in your script. The point is, if you can get Azure Pipelines' script task working, you can run any script and do everything you need yourself.
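For example, something along these lines (a sketch, assuming a Windows agent; the solution name is a placeholder):

steps:
- powershell: |
    # Download a standalone nuget.exe and restore with it directly.
    Invoke-WebRequest https://dist.nuget.org/win-x86-commandline/latest/nuget.exe -OutFile nuget.exe
    .\nuget.exe restore My.sln  # placeholder solution name
  displayName: Restore with standalone nuget.exe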
Idea 4: Use Microsoft-hosted agents. Microsoft documents all the software pre-installed on those machines, which includes NodeJS. The downside is that once you exceed the free quota it costs money, and I've worked for companies where it's easier to get money for a one-off hardware purchase, pretending that maintenance of the server is free even though it reduces team productivity, than to pay for a monthly service. So I'll totally understand if this is not an option for you.
Idea 5: Talk to whoever maintains your CI agents and convince them to allow and install NodeJS. It's clearly a fundamental part of Azure Pipelines. The tasks are open source on GitHub, and you can see that pretty much all of them use NodeJS to orchestrate whatever work they do. Frankly, I thought the agent software itself was a NodeJS application, so I'm surprised it runs without NodeJS.

Azure Artifacts feed is much slower than Maven Central

I'm working on a project in Azure DevOps and, as recommended in the docs, I created an Artifacts feed with Maven Central as an upstream source to store all my dependencies (I don't really need to publish artifacts for now).
So I configured my local Maven to fetch all the dependencies from my feed instead of Maven Central, and it all works fine, except that it's very slow compared to Maven Central.
When I start from an empty .m2 on my local machine, it takes 1 min 15 sec to build my project when downloading the dependencies from Maven Central, but over 8 minutes to do the same when downloading them from the feed (which already contains all the dependencies).
I could live with that, since the download of everything happens only on the first build.
But the issue is that it's also slower when building my project from Azure Pipelines, which I really didn't expect since it's a connection from Azure to Azure, within the same organization. In this case, it takes at least twice as long when using the feed rather than Maven Central. And this will be true every time, since Azure Pipelines gives you a fresh VM for each build (I'm using a hosted agent), so there's no dependency caching in this case.
It's really annoying since my project is just a HelloWorld so far, so it will only get worse over time.
Using a repository manager/feed is the best practice according to both Maven and Azure, but at this point I'm seriously considering the bad practice of getting everything from Maven Central instead of my feed, at least in my pipeline, to improve performance.
Am I the only one having this issue? What are your thoughts?
Finally, after diving into the Azure Pipelines documentation recently, I found out there is a way to cache the Maven repository between runs, which partially solves my issue since the full download of the dependencies will happen only once.
Here is the doc in question for those who are interested.
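In short, the documented pattern keys a Cache@2 task on your pom.xml files and points Maven's local repository at a folder inside the pipeline workspace, roughly like this (a sketch based on the documented example):

variables:
  MAVEN_CACHE_FOLDER: $(Pipeline.Workspace)/.m2/repository
  MAVEN_OPTS: '-Dmaven.repo.local=$(MAVEN_CACHE_FOLDER)'

steps:
- task: Cache@2
  displayName: Cache Maven local repo
  inputs:
    key: 'maven | "$(Agent.OS)" | **/pom.xml'
    restoreKeys: |
      maven | "$(Agent.OS)"
      maven
    path: $(MAVEN_CACHE_FOLDER)
- script: mvn -B package  # MAVEN_OPTS above makes Maven use the cached folder
  displayName: Build with Maven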

Self-hosted Azure agent - how to configure pipelines to share the same build folder

We have a self-hosted build agent on an on-prem server.
We have a large codebase, and in the past followed this mechanism with TFS 2013 build agents:
Daily check-ins were built to c:\work\tfs\ (taking about 5 minutes)
Each night, a batch file would run the same build to those folders, using the same sources (they were already 'latest' from the CI build), build the installers, copy files to a network location, and send an email to the team detailing the build successes/failures (taking about 40 minutes).
The key thing there is that the nightly build had no need to get the latest sources again, and the disk space required wouldn't grow much, just by the installer sizes.
To replicate this with Azure DevOps, I created two pipelines.
One pipeline that does the CI using MSBuild tasks in the classic editor - works great
Another pipeline in the classic editor that runs our existing PowerShell script, scheduled at 9 pm - works great
However, even though my agent doesn't run builds in parallel, what's happening is:
The CI pipeline's folder is c:\work\1\
The Nightly build folder is c:\work\2\
This doubles the amount of disk space we need (10 GB to 20 GB).
They are the same code files, just built differently.
I have struggled to find a way to tell the agent "please use the same sources folder for all pipelines".
What setting controls this? Otherwise we have to pay our service provider for extra storage.
Or do I need to change my classic pipelines into YAML and somehow conditionally branch the build so it knows it's being scheduled and does something different?
Or maybe stop using a pipeline for the scheduled build, and use Task Scheduler in Windows as before?
(I did try looking for the same question - I'm sure I can't be the only one).
There is "workingDirectory" directive available for running scripts in pipeline. This link has details of this - https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/command-line?view=azure-devops&tabs=yaml
The numbers in the agent's work folders (c:\work\1\, c:\work\2\, ... c:\work\6\) each stand for a particular pipeline.
Agent.BuildDirectory
The local path on the agent where all folders for a given build pipeline are created. This variable has the same value as Pipeline.Workspace. For example: /home/vsts/work/1
If you have two pipelines, there will also be two corresponding work folders. This is expected behavior: you cannot configure pipelines to share the same build folder. It is by design.
If you need to use less disk space to save cost, I'm afraid that stopping using a pipeline for the scheduled build, and using Task Scheduler in Windows as before, is the better way.

In GitLab, is it possible to configure a scheduled pipeline that runs on all branches periodically?

I am using GitLab for Git version control and GitLab CI/CD for my automated builds. Usually, the builds are triggered by Git repository activity, but I also have a weekly build to ensure that projects not under active development continue to work. When there is only a "master" branch on a project, it is easy to ensure a weekly build is run on the latest code. When there are multiple branches in a project, I would like to repeat the pipeline work for each of them in turn.
What I would like to be able to do is schedule a build (weekly, fortnightly or monthly) that runs on all current branches visible in Git. Is that possible within GitLab's Continuous Delivery system?
The motivation behind doing this is to ensure that external activity, such as tool and library updates, does not introduce an issue without it being promptly visible. Assuming there is reasonable automated testing, coverage, and comprehensive builds for the target platforms, a monthly build with the latest tools should highlight problems promptly. This is better than an invisible mountain of problems accumulating while a project is shelved for a few years (or months). Sometimes all that is required is occasional maintenance.
There are only a handful of feature branches and release lines on the projects currently. I would not expect that number to grow significantly. There is time enough over a weekend to run the required pipelines dozens if not hundreds of times at present.
Ideally, I would like something straightforward to set up. I cannot see anything in the admin GUI that would allow this at present. I did look at the API, and I can see there is some scope there to script the addition and removal. Perhaps a script run once a month to create new scheduled pipelines based on Git branches is the only way. A pre-made solution along those lines would be perfectly acceptable. If nothing exists, I might start work on something like that in time.
I am currently running GitLab Community Edition 11.2.3 (06cbee3). If there is an Enterprise Edition-only answer, that is fine and will add to the justification for purchasing the EE version. I would prefer a CE answer over an EE one, though.
You cannot set a schedule for all branches at once, you have to configure one schedule per branch yourself.
Perhaps some script that is run once a month to create new Scheduled pipelines based on git branches is the only way.
I would go that way.
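As a starting point, here is a hypothetical sketch that uses the GitLab API to create one monthly schedule per branch (the host, project ID, and cron expression are placeholders; note the branches endpoint is paginated, so a large project would need to page through the results):

$gitlab  = "https://gitlab.example.com/api/v4"  # placeholder host
$project = 42                                   # placeholder project ID
$headers = @{ "PRIVATE-TOKEN" = $env:GITLAB_TOKEN }

$branches = Invoke-RestMethod -Headers $headers -Uri "$gitlab/projects/$project/repository/branches"
foreach ($branch in $branches) {
    $body = @{
        description = "Monthly build of $($branch.name)"
        ref         = $branch.name
        cron        = "0 2 1 * *"  # 02:00 on the 1st of each month
    }
    Invoke-RestMethod -Method Post -Headers $headers -Body $body `
        -Uri "$gitlab/projects/$project/pipeline_schedules"
}

A companion pass to delete schedules for branches that no longer exist would make this a complete monthly maintenance job.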

How to deploy a Go program from Windows to a CentOS server

I have a Go package running on Windows that is working fine, but now I'm at the stage where I would like to test it on a production CentOS 6.5 server.
What is the best practice to deploy this from Windows to CentOS?
Would I have to use my Git repo to distribute to the Linux operating system, compile, and then deploy the binary to the server?
Also, I have multiple files, so I would imagine go build *.go would suffice, or are there better options for doing the compilation?
What is the best practice to deploy this from Windows to CentOS?
As far as best practices go, I would recommend using continuous integration. You can set up Jenkins, or there are some cloud options out there: codeship.io, travis-ci.org, drone.io, wercker.com, ... Some of them have free plans available.
Basically, you'd commit your code to Git and push it out to GitHub (or Bitbucket if you want free private repos). The continuous integration server will be notified whenever you push changes, and will build, test, and create a release tar archive of your project. You can then take the resulting tar and download it to your CentOS box. On 6.5 you'll need to create an init.d script to keep your program up and running. You can see an example here (the System V script).
CentOS 7 uses systemd now, which would be slightly easier to set up.
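On the compile question above: Go cross-compiles natively, so the CI server (or even your Windows machine) can produce a Linux binary directly. A sketch, assuming a single main package in the current directory:

# Cross-compile a Linux binary from Windows (straightforward since Go 1.5).
$env:GOOS   = "linux"
$env:GOARCH = "amd64"
go build -o myapp .  # builds the main package in the current directory

Building the package (.) rather than listing files is also the idiomatic alternative to go build *.go.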
Taking this one step further, it's also possible to set up continuous deployment, in which the download, extraction, and installation are also automated. Depending on your project, it may or may not make sense to set up continuous deployment (auto-pushing to production might be a little too automatic). You can find an example in wercker here.
Although there is an up-front cost to setting up continuous integration, if this is a project that other people will contribute to, or one that you intend to work on long-term, the cost will definitely be worth it. (Future you will be grateful when you come back to this project six months from now, change one line of code, and don't have to remember all the manual steps it took to deploy.)