Jenkins, Maven, Slaves and artifacts - linux

I have a master Jenkins running on a Windows machine. I'm trying to add a Linux slave, and so far it seems fine. For some reason, if a project runs on the Linux machine and triggers another project due to dependencies, the build fails because it is unable to see the artifacts from the first project.
If I run everything on the master (i.e. disabling the slave) it works fine. I suspect some kind of artifact sharing needs to take place, but I haven't been able to find examples or figure out how to do this synchronization.
Thanks in advance!

Related

boto3 describe_images empty (jenkins/packer)

I am running packer via a jenkins pipeline and want to delete the ami afterwards.
I am using a small python3/boto3 script to do that.
However, when calling describe_images I get an empty list. No errors (via debug).
If I run the same script via the same Docker-based agent (on an EC2 Jenkins node) but from a different pipeline, it works.
I also do not have issues on another project with similar settings.
Sometimes it will work intermittently, but seldom.
I can rule out a general config issue as the same script works perfectly on the same systems (just a different jenkins pipeline).
I can also rule out general issue with the jenkins pipeline, as it will intermittently work - without changes.
What am I missing?
Yikes, this was a stupid mistake on my side. So my script to fetch the ami-id from the packer manifest.json was not returning the correct ami-id (I assumed I'd only find one ami-id in that file).

configure gitlab to build the source code on another machine

We have two servers in our organisation.
1) server with gitlab
2) Build server
I would like to have an automated build happen on the second machine (the build server) for the source code on the GitLab server.
How can I achieve this using GitLab?
Thanks,
siva
If you are moving from a "pull" continuous integration system (e.g. using a kind of crontab that regularly checks if the source code on the versioning system has changed and starts the configure/build/test/deploy stages if it has), then know that GitLab has a much better way of doing this.
GitLab's approach is a "push" system: every time the code is updated (in any branch) in the Git repository, the script defined in your .gitlab-ci.yml is read to see if continuous integration jobs have to be launched. Jobs are sent to your configured GitLab runners. GitLab runners are installed on your build server(s) and pick up the jobs as they come in.
The definition of what to do is also described in the .gitlab-ci.yml.
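For illustration only, a minimal .gitlab-ci.yml along these lines could look like the sketch below; the job names, commands and the runner tag are hypothetical placeholders, not something from the question:

    # Two stages; jobs are picked up by a runner registered on the build server.
    stages:
      - build
      - test

    build_job:
      stage: build
      tags:
        - build-server     # route this job to the runner on the build server
      script:
        - ./configure      # placeholder build commands
        - make

    test_job:
      stage: test
      tags:
        - build-server
      script:
        - make test        # placeholder test command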
Here is a list of documentation to start learning about gitlab CI:
the official documentation can be helpful
A general introduction to GitLab CI using Docker can be found in this blog article (the first slides are great). If your build server or your intended build is on Linux, I would recommend using the "docker executor" (i.e. jobs are run inside Docker containers on your build server). It is easy and quick to set up.
Hope this helps you get started...

What is gitlab runner

I think I'm fundamentally missing something. I'm new to CI/CD and trying to set up my first pipeline ever with gitlab.
The project is a pre-existing PHP project.
I don't want to clean it up just yet. At the moment I've pushed the whole thing into a Docker container, and it's running fine, talking to Google Cloud's MySQL databases etc. as it should, both locally and on a remote Google Cloud testing VM.
The dream is to be able to push to the development branch, and then merge the dev branch into the test branch, which then TRIGGERS automated tests (easy part), and also causes the remote test VM (hosted on Google Cloud) to PULL the newest changes, rebuild the image from the latest Dockerfile (or pull the latest image from the GitLab image registry)... and then rebuild the container with the newest image.
I'm playing around with gitlab's runner but I'm not understanding what it's actually for, despite looking through almost all the online content for it.
Do I just install it on the Google Cloud VM, and then when I push to GitLab from my development machine the repo will 'signal' the runner (which is running on the VM) to execute a bunch of scripts (which might include a git pull of the newest changes)?
Because I already pre-package my app into a container locally (and push the image to the image registry), do I need to use Docker as my executor on the runner, or can I just use shell and run the commands directly?
What am I missing?
TLDR and extra:
Questions:
What is the runner actually for?
Where is it meant to be installed?
Does it care which directory it is run in?
If it doesn't care which directory it's run in, where does it execute its script commands? At root?
If I am locally building my own images and uploading them to GitLab's registry, do I need to set my executor to docker? Shouldn't I just set it to shell, pull the image, and build it? (Assuming the runner is running on the remote VM.)
What is runner actually for?
You have your project along with a .gitlab-ci.yml file. .gitlab-ci.yml defines what stages your CI/CD pipeline has and what to do in each stage. It typically consists of build, test and deploy stages. Within each stage you can define multiple jobs. For example, in the build stage you may have 3 jobs to build on Debian, CentOS and Windows (in GitLab terminology build:debian, build:centos and build:windows). A GitLab runner clones the project, reads the .gitlab-ci.yml file and does what it is instructed to do. So basically a GitLab runner is a Golang process that executes the instructed tasks. A sketch of such a file is shown below.
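As a hedged sketch only (the script commands are hypothetical placeholders, not taken from the answer), a .gitlab-ci.yml with the stages and jobs described above might look like this:

    # Sketch of the stage/job layout described above; commands are placeholders.
    stages:
      - build
      - test
      - deploy

    build:debian:
      stage: build
      script:
        - ./build.sh debian    # placeholder build command

    build:centos:
      stage: build
      script:
        - ./build.sh centos

    build:windows:
      stage: build
      script:
        - build.bat

    test:
      stage: test
      script:
        - ./run_tests.sh       # placeholder test command

    deploy:
      stage: deploy
      script:
        - ./deploy.sh          # placeholder deploy command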
where is it meant to be installed?
You can install a runner in any of the environments listed here: https://docs.gitlab.com/runner/install/
or
you can use a shared runner that is already installed on GitLab's infrastructure.
Does it care which directory it is run in?
Yes. Every task executed by the runner is run relative to CI_PROJECT_DIR, which is defined in https://gitlab.com/help/ci/variables/README. But you can alter this behaviour.
where does it execute its script commands? At root?
Do I need to set my executor to docker? Shouldn't I just set it to shell, pull the image, and build it?
A runner can have multiple executors such as docker, shell, virtualbox etc., with docker being the most common one. If you use docker as the executor you can pull any image from Docker Hub or your configured registry, and you can do loads of stuff with Docker images. In a Docker environment you normally run as the root user (a short sketch follows below).
https://docs.gitlab.com/runner/executors/README.html
See the GitLab access logs; the runner is constantly polling the server.
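As an illustration of the Docker-executor case described above (the registry path and commands are hypothetical placeholders), a job in .gitlab-ci.yml could pull a pre-built image from your registry and run its script inside that container:

    # Sketch only: assumes a runner configured with the Docker executor and an
    # image already pushed to the project's GitLab registry.
    deploy:
      stage: deploy
      image: registry.gitlab.com/your-group/your-project:latest
      script:
        - whoami           # inside the container this is normally root
        - ./deploy.sh      # placeholder deployment command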

which directory should I checkout our java project files into for a team build

We use SVN (Subversion) for our source repository. On the same box, we build our project plus deploy it onto an app server. All the team members (under 10 in number) will log in to the Linux (Ubuntu Server) box and run the build script.
Question: I would like to know which directory is typically used as the home directory for the Subversion checkout and the build. What type of permissions should I be giving so that the team members can go into that directory, update the source code (svn update) and run the build script (ant)?
P.S.: I'm also interested in understanding any best practices.
Thank you,
Sounds like you need a Continuous Integration server. Install Hudson on the server and use that instead.
Hudson will automatically check out changes from Subversion and build them when something is checked in. You can also make it deploy to an app server after a successful build. And you can trigger builds manually if you want, for example for a release.
You'll find it very easy to get started with.

Deployment after CI builds

I'm pretty new to CI, so bear with me here. I have just set up an instance of TeamCity on a local machine, and I can clearly see the benefits.
The one thing we do want to understand is how we can manage the deployment aspect of CI. What we really want to achieve are two builds:
1) We check in to our source repository and the CI server notices the change and compiles the code, tests etc.
2) We manually trigger a build that compiles the code, copies the code to a remote server and updates its IIS mappings.
Now the first build is pretty much wrapped up with TeamCity. But I assume that the deployment aspect of this is going to involve some scripting (NAnt, MSBuild, Rake etc.); is this correct?
If this is the case, I can see that transferring files from the build machine to a remote server will be OK, but will we be able to update IIS mappings without being on the same network? For that matter, where is THE correct place to deploy a CI server? Should it live on the same network as the apps we deploy?
Finally, we have been (rather unorthodoxly) using IronRuby to run Rake scripts as our build runner. This is simply because we like Rake, but if we were to look at NAnt/MSBuild, do they have any baked-in tasks that would simplify what we are trying to achieve?
Cheers, Chris.
We use MSBuild exclusively; that's just a choice. I am sure NAnt and the others do things just as well. We only publish to a dev environment (for dev testing) and a stage environment (where QA actually tests). I would not suggest that you put the production push on this, as the temptation to force builds might be too great for some people.
We use some of the MSBuild Community Tasks
