I use GitLab CE and I set up my classic build pipeline (build, test, deploy). The build step creates an application of around 1 GB (500 MB as a zipped artifact). The artifact is uploaded to the server, and the next GitLab runner downloads it again to test it. Is there a way to set an "affinity" for a GitLab runner, so that exactly the machine which just built the binaries can continue using them to run the tests?
One option would be to merge the build and test steps into a single one, but I am looking for alternatives. Thank you!
I had exactly the same issue using CMake and had to combine the build and test steps into one to avoid the time spent zipping and unzipping large build directories.
There is an open issue for sticky runners, where one pipeline would always use the same runner and keep the workspace between jobs; however, it seems that is still a while away from being completed.
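For what it's worth, here is a minimal .gitlab-ci.yml sketch of that combined approach (the job name, make commands, and dist/ path are placeholders, not taken from your pipeline):

```yaml
stages:
  - build_and_test
  - deploy

build_and_test:
  stage: build_and_test
  script:
    - make build          # placeholder: produces the ~1 GB build output
    - make test           # runs the tests against the binaries still on this runner
  artifacts:
    paths:
      - dist/             # only what the deploy stage actually needs
    expire_in: 1 week

deploy:
  stage: deploy
  script:
    - make deploy         # placeholder deploy command
```

Only the (much smaller) deployable output travels between jobs as an artifact; the large intermediate build tree never leaves the runner that produced it.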
We have a self-hosted build agent on an on-prem server.
We have a large codebase, and in the past we followed this approach with TFS 2013 build agents:
Daily check-ins were built to c:\work\tfs\ (taking about 5 minutes)
Each night a batch file would run the same build in those folders, using the same sources (they were already 'latest' from the CI build), build the installers, copy the files to a network location, and send an email to the team detailing the build successes/failures. (Taking about 40 minutes)
The key thing there is that the nightly build had no need to get the latest sources again, and the disk space required wouldn't grow much - just by the installer sizes.
To replicate this with Azure Devops, I created two pipelines.
One pipeline that does the CI using MSBuild tasks in the classic editor - works great
Another pipeline in the classic editor that runs our existing powershell script, scheduled at 9pm - works great
However, even though my agent doesn't support parallel builds, what's happening is this:
The CI pipeline's folder is c:\work\1\
The Nightly build folder is c:\work\2\
This doubles the amount of disk space we need (10 GB to 20 GB)
They are the same code files, just built differently.
I have struggled to find a way to say to the agent "please use the same sources folder for all pipelines"
What setting controls this? Otherwise we have to pay our service provider for extra GB of storage.
Or do I need to change my classic pipelines into YAML and somehow conditionally branch the build so it knows it's being scheduled and does something different?
Or maybe, stop using a Pipeline for the scheduled build, and use task scheduler in Windows as before?
(I did try looking for the same question - I'm sure I can't be the only one).
There is "workingDirectory" directive available for running scripts in pipeline. This link has details of this - https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/command-line?view=azure-devops&tabs=yaml
The numbers '1', '2', ... '6' in the work folders c:\work\1\, c:\work\2\, ... c:\work\6\ on your build agent each stand for a particular pipeline.
Agent.BuildDirectory
The local path on the agent where all folders for a given build pipeline are created. This variable has the same value as Pipeline.Workspace. For example: /home/vsts/work/1
If you have two pipelines, there will also be two corresponding work folders. This is expected behavior; pipelines cannot be configured to share the same build folder. It is by design.
If you need to use less disk space to save cost, I'm afraid that dropping the pipeline for the scheduled build and using Windows Task Scheduler as before is the better way.
This is my first time both learning and implementing automated CI/CD pipelines in Atlassian Bamboo. I have a Node.js project whose build and deployment plan I configured after much research on the net.
In the deployment process, I observed that the deployment takes a very long time because a large number of files have to be transferred, probably due to node_modules. I would like to compress the artifact generated by the build steps and decompress it on the server side once the transfer is complete.
I tried finding a ZIP task among the tool tasks, but there isn't one. Is it possible in any other way? Does doing it via the command line work, and is it feasible?
I have a little experience with Linux commands.
Any help would be highly appreciated.
In my company we use an Ant task (with Ivy) to prepare, zip, and publish our projects as artifacts. In the deployment we use an SCP task to copy the artifact onto our server and an SSH task to unzip it.
So our whole build part is implemented in Ant, and the only thing our Bamboo build does is check out a Git repository and run the Ant script.
That workflow is used for a lot of different projects, including Node.js, Python, Java, C++, and pure text file setups, and it works really well.
But a normal script task for zipping should also do the job, and depending on the scale of your projects, Ant may be overkill.
I think it's possible to use Windows/Linux commands to achieve your requirement. You would need to write a task to compress the files; you can use the shell plugin or any other suitable plugin. Once the artifact is sent to the server, you would need a polling batch program to unzip the artifact at the server end.
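As a rough sketch of that idea (the archive name, paths, and folders are all hypothetical, not taken from your plan): compress on the build side with a Script task, then decompress on the server, whether from an SSH task as in the other answer or from a polling batch script:

```sh
# Build-side Bamboo Script task: compress the generated output into one archive
# before it is published/transferred (all paths and names are hypothetical).
zip -rq app.zip dist/ node_modules/ package.json

# Server side, run from an SSH task (or a polling script) after app.zip has been copied across:
unzip -oq /tmp/app.zip -d /opt/myapp
```

tar -czf / tar -xzf would work just as well if zip/unzip are not installed on both machines.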
I'm very confused about how one of the build tasks currently works.
I have been using Grunt locally in VS Code to minify a JS file, and all seems to be working well. In Azure DevOps, as a build task using the same package.json, the minification takes place, but on the agent VM:
D:\a\1\s\Build\Hello.js
Looking in my repo, this file does not exist. I am assuming that I need to copy the file and upload it to my own repo. Does anyone know how I do this?
A build usually creates a **build artifact** that gets copied to a drop location. You will use the build artifacts inside your release definitions to deploy the binaries / minified or optimized code to an environment.
You probably don't want/need to upload any file back to your repo.
See: What is Azure Pipelines?
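If you do want to keep the minified output, the usual pattern is to stage it and publish it as a build artifact rather than commit it back to the repo. A sketch in YAML form (the Build folder path is assumed from the agent path in your question; the same Copy Files and Publish Build Artifacts tasks exist in the classic editor):

```yaml
# Hypothetical steps added after the Grunt task in the build definition.
steps:
- task: CopyFiles@2
  inputs:
    SourceFolder: '$(Build.SourcesDirectory)/Build'       # where Grunt wrote Hello.js on the agent
    Contents: '**/*.js'
    TargetFolder: '$(Build.ArtifactStagingDirectory)'
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'
```

The published drop artifact then appears on the build result and can be consumed by a release definition.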
We have two servers in our organisation.
1) A server with GitLab
2) A build server
I would like to have an automated build happen on the second machine (the build server) for the source code on the GitLab server.
How can I achieve this using GitLab?
Thanks,
siva
If you are moving from a "pull" continuous integration system (e.g. using a kind of crontab that regularly checks whether the source code on the versioning system has changed and starts the configure/build/test/deploy stages if it has), then know that GitLab has a much better way of doing this.
GitLab's approach is a "push" system: every time the code is updated (in any branch) on the Git repository, the script defined in your .gitlab-ci.yml is read to see whether continuous integration jobs have to be launched. Jobs are sent to your configured GitLab runners. GitLab runners are installed on your build server(s) and pick up the jobs as they come in.
What each job should do is also described in the .gitlab-ci.yml.
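As a minimal sketch (stage names and make commands are placeholders for whatever your project uses), the .gitlab-ci.yml committed to the repository on the GitLab server could look like this; a gitlab-runner registered on the build server then picks up these jobs automatically:

```yaml
stages:
  - build
  - test

build_job:
  stage: build
  script:
    - make build          # placeholder for your real build command
  artifacts:
    paths:
      - output/           # whatever the build produces

test_job:
  stage: test
  script:
    - make test           # placeholder for your real test command
```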
Here is a list of documentation to start learning about GitLab CI:
The official documentation can be helpful.
A general introduction to GitLab CI using Docker can be found in this blog article (the first slides are great). If your build server or your intended build is on Linux, I would recommend using the "docker executor" (i.e. GitLab runner jobs are executed inside Docker containers on your build server). It is easy and quick to set up.
Hope this helps you get started...
I have a VSTS project and I'm setting up CI/CD at the moment. All is fine, but I seem to have two options for the publishing step:
Option 1: it's a task as part of the CI Build, e.g. see build step 3 here:
https://medium.com/@flu.lund/setting-up-a-ci-pipeline-for-deploying-your-angular-application-to-azure-using-visual-studio-team-f686c8f190cf
Option 2: The build phase produces artifacts, and as part of a separate release phase these artifacts are published, see here:
https://learn.microsoft.com/en-us/vsts/build-release/actions/ci-cd-part-1?view=vsts
Both options seem well supported in the MS documentation, is one of these options better than the other? Or is it a case of pros & cons for each and it depends on circumstances, etc?
Thanks!
You should definitely use "Option 2". Your build should not make changes in your environments whatsoever; that is strictly what a "release" is for. The link you have under "Option 1" shows the wrong way to do it: a build should be just that, compiling code and producing artifacts, not actually deploying code.
When you mesh build/releases together, you make it very difficult to debug build issues. Since your code is always being released, you really have to disable the "deploy" step to get any idea of what was built before you deployed.
Also, the nice thing about creating an artifact is you have a deployable package, and if in the future you need to rollback to a previous working version, you have that ready to go. Using the "build only" strategy, you'd have to revert your code or make unnecessary backups to achieve this.
I think you'll find any new Microsoft documentation pointing you toward this approach, and VSTS is completely set up like this. Even the "Configure Continuous Delivery in Azure..." feature in Visual Studio 2017 will create a build and a release.
Almost all build tasks are also available as release tasks, so you can deploy the app after building the project within the build process.
However, there are many differences between a release and a build; for example, a release supports multiple environments and deployment group phases.
So which way is better depends on your detailed requirements; for example, if the build > deploy > other process is simple, you can do it just in the build.
Regarding the Publish Artifact task: it is used to publish files to the VSTS server or another location (e.g. a shared folder), which can then be consumed in a release as an artifact (click Add artifact > Build in the release definition). You can also download the files for troubleshooting; for example, if you are using a hosted agent that you can't access but want to retrieve some files (e.g. the build output), you can add a Publish Artifact task to publish them to the VSTS server and then download them (open the build result > Artifacts).
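To make that separation concrete, here is a minimal sketch in the newer multi-stage YAML syntax (stage names, commands, and the dist/ path are placeholders; the same idea applies to a classic build definition plus a release definition):

```yaml
stages:
- stage: Build
  jobs:
  - job: BuildJob
    steps:
    - script: npm ci && npm run build                    # placeholder build commands
    - publish: $(System.DefaultWorkingDirectory)/dist    # shorthand step that publishes a pipeline artifact
      artifact: drop
- stage: Deploy
  dependsOn: Build
  jobs:
  - job: DeployJob
    steps:
    - download: current                                  # fetch the artifact produced by the Build stage
      artifact: drop
    - script: echo "Deploying from $(Pipeline.Workspace)/drop"   # placeholder deploy step
```

The Deploy stage never compiles anything; it only consumes what the Build stage published, which is what makes the output easy to inspect and to roll back to later.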