Any way to cache git source files? - Azure

There is this great Cache task in Build pipelines. It works great for caching NuGet and npm packages (among others), but I don't see an equivalent for the git sources.
Is there any way to do this, short of disabling the built-in get sources step and running our own git checkout as a task? We have a relatively large repo that takes about 3 minutes to sync, and I'd like to reduce that if possible. I've already tried setting shallow fetch to a depth of 1, and it hasn't appreciably changed the checkout time.

Azure DevOps does not currently have a feature for caching git sources.
However, if you use a self-hosted agent, the git sources are kept on the agent machine by default (in the pipeline's source directory, e.g. C:\agent\_work\1\s). The next time you run your pipeline, only the changed files are checked out to your machine.
So you can create a self-hosted agent to run your pipeline. Check here for detailed steps to create a self-hosted agent.
(If you want to clean the source code folder each time you run the pipeline, you can set clean to true on the checkout step. The clean option of the checkout step will not work for Microsoft-hosted agents, because each time you run a pipeline you get a fresh virtual machine, and the virtual machine is discarded after one use.)
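For reference, the relevant checkout options can be set directly in the YAML; this is only a minimal sketch of the settings mentioned above (and of the shallow fetch the question already tried), not a caching mechanism:

steps:
- checkout: self
  clean: false     # on a self-hosted agent, keep the previously fetched sources in place
  fetchDepth: 1    # shallow fetch; mainly reduces clone time, not incremental sync time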
You can also click here to submit a feature request to the Microsoft development team (click Suggest a feature and select Azure DevOps). Hopefully they will consider adding this feature in the future.

Related

Self hosted azure agent - how to configure pipelines to share the same build folder

We have a self-hosted build agent on an on-prem server.
We typically have a large codebase, and in the past followed this mechanism with TFS2013 build agents:
Daily check-ins were built to c:\work\tfs\ (taking about 5 minutes)
Each night a batch file would run that did the same build in those folders, using the same sources (they were already 'latest' from the CI build), built the installers, copied files to a network location, and sent an email to the team detailing the build successes/failures. (Taking about 40 minutes.)
The key thing there is that for the nightly build there would be no need to get the latest sources, and the disk space required wouldn't grow much, just by the installer sizes.
To replicate this with Azure Devops, I created two pipelines.
One pipeline that did the CI using MSBuild tasks in the classic editor- works great
Another pipeline in the classic editor that runs our existing powershell script, scheduled at 9pm - works great
However, even though my agent doesn't support parallel builds, what's happening is that:
The CI pipeline's folder is c:\work\1\
The Nightly build folder is c:\work\2\
This doubles the amount of disk space we need (10 GB to 20 GB).
They are the same code files, just built differently.
I have struggled to find a way to say to the agent "please use the same sources folder for all pipelines"
What setting controls this? Otherwise we have to pay our service provider for extra GB of storage.
Or do I need to change my classic pipelines into Yaml and somehow conditionally branch the build so it knows it's being scheduled and do something different?
Or maybe, stop using a Pipeline for the scheduled build, and use task scheduler in Windows as before?
(I did try looking for the same question - I'm sure I can't be the only one).
There is "workingDirectory" directive available for running scripts in pipeline. This link has details of this - https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/command-line?view=azure-devops&tabs=yaml
The number '1' '2'...'6' of work folder c:\work\1\, c:\work\2\... c:\work\6\ in your build agent which stands for a particular pipeline.
Agent.BuildDirectory
The local path on the agent where all folders for a given build
pipeline are created. This variable has the same value as
Pipeline.Workspace. For example: /home/vsts/work/1
If you have two pipelines, there will also be two corresponding work folders. This is expected behavior; we cannot configure pipelines to share the same build folder. It is by design.
If you need to use less disk space to save cost, I'm afraid that stopping using a pipeline for the scheduled build and using the Windows Task Scheduler as before is the better way.
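To illustrate the "workingDirectory" directive mentioned above, a minimal hypothetical step might look like this; the script name and the hard-coded path are assumptions, and the path only exists if that pipeline has already run on the agent:

steps:
- powershell: .\build-installers.ps1   # hypothetical nightly build script
  workingDirectory: 'c:\work\1\s'      # assumed to be the CI pipeline's sources folder on this agent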

Good practices for pulling from git repo into production server

I have a DigitalOcean VPS with Ubuntu and a few Laravel projects. For my projects' initial setup, I do a git clone to create a folder with my application files from my online repository.
I do all development work on my local machine, where I have two branches (master and develop). What I do is merge develop into my local master, then push master to my online repository.
Now, back on my production server, when I want to bring all the changes into production I do a git pull from origin. So far this has resulted in git telling me to stash my changes; why is this?
What would be the best approach to pull changes into the production server? Keep in mind that my production server has no working directory per se; all I do on my VPS is either clone or push upgrades into production.
You can take a look at CI/CD (continuous integration / continuous delivery) systems. GitLab, for example, offers a free-to-use plan for small teams.
You can create a pipeline with a manual deploy step (you have to press a button after the code is merged to the master branch) and use whatever tool you like to deploy your code (scp, rsync, ftp, sftp etc.).
And the biggest benefit is that you can have multiple intermediate steps (even for the working branches) where you can run unit tests, which would prevent you from uploading failing builds (whenever you merge non-working code).
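A minimal .gitlab-ci.yml sketch of that idea could look like the following; the test commands, the rsync target, and the SSH setup are placeholders rather than a working configuration:

stages:
  - test
  - deploy

test:
  stage: test
  script:
    - composer install
    - vendor/bin/phpunit          # failing tests block the deploy stage

deploy_production:
  stage: deploy
  when: manual                    # runs only after pressing the button in the GitLab UI
  only:
    - master
  script:
    - rsync -az --delete --exclude '.git' ./ deploy@example.com:/var/www/app/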
For the first problem, do a git status on production to see which files that git sees as changed or added and consider adding them to your .gitignore file (which itself should be a part of your repo). Laravel generally has good defaults for these, but you might have added things or deviated from them in the process of upgrading Laravel.
For the deployment, the best practice is to have something that is consistent, reproducible, loggable, and revertable. For this, I would recommend choosing a deployment utility. These usually do pretty much the same thing:
You define deployment parameters in code, which you can commit as a part of your repo (not passwords, of course, but things like the server name, deploy path, and deploy tasks).
You initiate a deploy directly from your local computer.
The script/utility SSH's into your target server and pulls the latest code from the remote git repo (authorized via SSH key forwarded into the server) into a 'release' folder.
The script does any additional tasks you define (composer install, npm run prod, systemctl restart php-fpm, soft-linking shared files like .env, etc.).
The script soft-links the document root to your new 'release' folder, which results in an essentially zero-downtime deployment. If any of the previous steps fail, or you find a bug in the latest release, you just soft-link to the previous release folder and your site still works.
Here are some solutions you can check out that all do this sort of thing:
Laravel Envoyer: A 1st-party (paid) service that allows you to deploy via a web-based GUI.
Laravel Envoy: A 1st-party (free) package that allows you to connect to your prod server and script deployment tasks. It's very bare-bones in that you have to write all of the commands yourself, but some may prefer that.
Capistrano: This is a (free) tried-and-tested, popular Ruby-based deployment utility.
Deployer: The (free) PHP equivalent of Capistrano. Easier to use, has a lot of built-in tasks (including a Laravel one), and doesn't require Ruby.
Using these utilities is not necessarily exclusive of doing CI/CD if you want to go that route. You can use these tools to define the CD step in your pipeline while still doing other steps beforehand.
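If you do go the CI/CD route, the release-folder/soft-link pattern described above can also be sketched as a manual deploy job; the host, paths, repository URL, and SSH key handling here are assumptions, and the utilities listed above automate these same steps for you:

deploy_production:
  stage: deploy                   # assumes a 'deploy' stage as in the pipeline sketched in the previous answer
  when: manual
  script:
    # each deploy gets its own release folder
    - RELEASE=/var/www/app/releases/$(date +%Y%m%d%H%M%S)
    - ssh deploy@example.com "git clone git@example.com:acme/app.git $RELEASE"
    - ssh deploy@example.com "cd $RELEASE && composer install --no-dev && ln -sfn /var/www/app/shared/.env .env"
    # point the web root at the new release; rolling back is just re-pointing this link at the previous release
    - ssh deploy@example.com "ln -sfn $RELEASE /var/www/app/current"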

Azure Pipelines YAML: How to get rid of "Post-Build: Get Sources" step?

We are trying to get a clean YAML build going, and ran into a quirk: The build has an extra "Get Sources" step at the end, which is not in our YAML file, and can't be removed using the UI.
I created an azure-pipelines.yml file in the root of a new Azure Git (not GitHub) repo. A build definition was automatically created under OurRepo/OurRepo CI in the Builds pane on Azure DevOps.
The build works, but note the extra step at the end:
When I edit the job in Azure's UI via Pipeline Settings, I notice a "Get Sources" task that cannot be removed:
While this non-removable step makes sense for GUI-defined builds, I'm trying to go "pure YAML". The extra pull doesn't take long, so it's not a big deal, just annoying.
Apparently other users have this extra step in YAML builds as well: try googling "Post-job: Get sources".
Am I doing something wrong, or is this just a quirk with Azure Git repos using YAML builds? (The MS Docs tutorial uses a regular GitHub repo, I noticed).
Edit: I have also tried creating a build definition from YAML via New Build Definition > Azure Git Repo > YAML. The resulting page fails to detect the azure-pipelines.yml file (whether that file was empty or had a known working build definition when I committed it; I tried both), so I ended up in the same place.
I doubt you can, since it appears to be built into the pipeline. Does the output of that task look similar to my post-job task below? Although it is labeled "Post-job: Checkout", it looks like a clean-up step to me.
2019-01-30T21:39:38.1940431Z ##[section]Starting: Checkout
2019-01-30T21:39:38.2032443Z ==============================================================================
2019-01-30T21:39:38.2032500Z Task : Get sources
2019-01-30T21:39:38.2032550Z Description : Get sources from a repository. Supports Git, TfsVC, and SVN repositories.
2019-01-30T21:39:38.2032583Z Version : 1.0.0
2019-01-30T21:39:38.2032794Z Author : Microsoft
2019-01-30T21:39:38.2032822Z Help : [More Information](https://go.microsoft.com/fwlink/?LinkId=798199)
2019-01-30T21:39:38.2032852Z ==============================================================================
2019-01-30T21:39:38.5783539Z Cleaning any cached credential from repository: Sandbox (Git)
2019-01-30T21:39:38.5854582Z ##[section]Finishing: Checkout
The post-job step cleans up the checkout on the hosted agent machine (for example, clearing cached credentials from the repository, as the log shows). It is a built-in feature, and there is no way for the user to control it.

configure gitlab to build the source code on another machine

We have two servers in our organisation.
1) server with gitlab
2) Build server
I would like to have an automated build happen on the second machine (the build server) for the source code on the GitLab server.
How can I achieve this using GitLab?
Thanks,
siva
If you are moving from a "pull" continuous integration system (e.g. using a kind of crontab that regularly checks whether the source code on the versioning system has changed and starts the configure/build/test/deploy stages if it has), then know that GitLab has a much better way of doing this.
The GitLab approach is a "push" system: every time the code is updated (in any branch) on the git repository, the script defined in your .gitlab-ci.yml is read to see whether continuous integration jobs have to be launched. Jobs are sent to your configured GitLab runners; GitLab runners are installed on your build server(s) and pick up jobs as they arrive.
The definition of what to do is also described in the .gitlab-ci.yml.
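For example, a minimal .gitlab-ci.yml could look like this; the build and test commands are placeholders for whatever your project actually needs, and the jobs run on whichever registered runner (i.e. your build server) picks them up:

stages:
  - build
  - test

build_job:
  stage: build
  tags:
    - build-server       # optional: only runners registered with this tag (e.g. on your build server) take the job
  script:
    - make               # placeholder for your real build command

test_job:
  stage: test
  tags:
    - build-server
  script:
    - make test          # placeholder for your real test command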
Here is a list of documentation to start learning about GitLab CI:
The official documentation can be helpful.
A general introduction to GitLab CI using Docker can be found in this blog article (the first slides are great). If your build server or your intended build is on Linux, I would recommend using the "docker executor" (i.e. GitLab runners execute jobs inside Docker containers on your build server). It is easy and quick to set up.
Hope this helps you get started...

Bitbucket Pipelines access other node repository

I have enabled Bitbucket Pipelines in one of my node.js repositories to have it run the build on every commit. My repository depends on another node.js repository. For development I've linked the one to the other using npm link.
I've tried a git clone of that repository that is specified in the bitbucket-pipelines.yml file, but the build gets stuck on that command. I guess it's because git is asking for authentication at that point.
Is there a way to allow the container to access other repositories in the same team? Or is there a better way altogether to solve this? I'd also be fine with switching to another CI tool if Bitbucket Pipelines isn't capable of this – the only requirement is that it's free for teams of < 5 people.
Btw. I'd like to avoid paying for npm private packages if possible.
Thanks!
You can set up access to the other repo with an SSH key, as described in the official docs: https://confluence.atlassian.com/bitbucket/access-remote-hosts-via-ssh-847452940.html
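Once the pipeline's SSH key is authorized for the other repository (or for your team's account) as described in those docs, a sketch of the bitbucket-pipelines.yml could look like this; the image, repository names, and the npm wiring are assumptions standing in for your local npm link setup:

image: node

pipelines:
  default:
    - step:
        script:
          # works once the Pipelines SSH key has read access to the other repo
          - git clone git@bitbucket.org:yourteam/shared-library.git shared-library
          - npm install ./shared-library   # stand-in for the local npm link
          - npm install
          - npm test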
