In the project's settings for merge requests we have the option "Pipelines must succeed" checked, so any merge request that does not produce a pipeline cannot be merged.
However, the majority of files at the repository root do not actually need a pipeline, but because of the condition above, a job has to be defined for changes there as well. What we want is to run a specific empty job only when there are changes to root-level files, and not run it when there are also changes in folders and subfolders (since those changes already trigger the pipeline). For example, given this layout:
folder/subfolder/files
folder/file
file1
file2
If only file1 and/or file2 is changed, run the job.
If file1 and/or file2 and folder are changed, do not run the job.
The current definition looks like this, but it does not work:
root_changes:
  stage: pre_build
  image: alpine:latest
  tags: ...
  variables:
    GIT_STRATEGY: none
  script: date
  rules:
    - if: '$CI_MERGE_REQUEST_ID'
      changes:
        - ./**/*
      when: never
    - if: '$CI_MERGE_REQUEST_ID'
      changes:
        - ./*
      when: always
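One likely cause: `./**/*` also matches root-level files, so the first rule fires for every change and the job never runs. A possible fix (an untested sketch, assuming GitLab's glob matching treats `*` as root-only and `*/**/*` as nested-only) would separate the two patterns:

```yaml
root_changes:
  stage: pre_build
  image: alpine:latest
  variables:
    GIT_STRATEGY: none
  script: date
  rules:
    # Any change inside a folder: skip the job (the regular pipeline covers it).
    - if: '$CI_MERGE_REQUEST_ID'
      changes:
        - "*/**/*"
      when: never
    # Otherwise, a change to a root-level file: run the job.
    - if: '$CI_MERGE_REQUEST_ID'
      changes:
        - "*"
      when: always
```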
I need to automate the CI/CD pipeline for a Spring Boot application. This application has an application.properties file which contains a few obviously env-specific properties, such as DB properties, Kafka properties, etc.
Each env where the GitLab runner runs provides commands that return the properties to be used in that env; e.g. the user that runs the GitLab runner has a script getDBURL which returns the DB URL of that env. The same scripts are available in every env and return that env's specific values.
So, if the build is running in the QA env, I need to replace DBURL, DBPass, etc. in the application.properties file and generate it as an artifact along with the jar/war build for the app.
How do I write a gitlab-ci.yml for such a configuration?
My existing file that generates the jar looks like:
stages:
  - build
  - publish

build-code-job:
  stage: build
  before_script: # some cleanup
  script:
    - echo "build app1"
    - mvn $MAVEN_CLI_OPTS clean package

publish-nexus:
  stage: publish
  script:
    - mvn $MAVEN_CLI_OPTS deploy -Dmaven.test.skip=true
    - echo "Publishing to local repository"
  only:
    - master
I'd appreciate any help generating an application.properties file with env-specific configuration in it on the GitLab runner.
Not sure if there's a more elegant way, but I implemented it with a simple script that finds and replaces the environment-specific values.
Let's say the snippet below is part of your application.properties file:
spring.datasource.url=jdbc:oracle:thin:@//<<DB_SERVER>>:<<DB_PORT>>/<<DB_SERVICE>>
spring.datasource.username=<<DB_USER>>
spring.datasource.password=<<DB_PASSWORD>>
There must be some way to get the above environment-specific values on the server where the runner executes the build job. The script below accepts the source and destination file paths, copies the source file to the build (destination) path, and then does the find and replace using the values retrieved from the server env:
updatePropertiesFile.sh
#!/bin/bash
SOURCE_FILE=$1
DEST_FILE=$2

echo "Extracting env variables"
dbhost=$(get_db_host)
dbport=$(get_db_port)
dbservicename=$(get_db_servicename)
dbuser=$(get_db_user)
dbpassword=$(get_db_password)

echo "Copy original properties file to destination"
cp -v -p "$SOURCE_FILE" "$DEST_FILE"

echo "Updating the env variables in destination file"
sed -i "s/<<DB_SERVER>>/${dbhost}/g" "$DEST_FILE"
sed -i "s/<<DB_PORT>>/${dbport}/g" "$DEST_FILE"
sed -i "s/<<DB_SERVICE>>/${dbservicename}/g" "$DEST_FILE"
sed -i "s/<<DB_USER>>/${dbuser}/g" "$DEST_FILE"
sed -i "s/<<DB_PASSWORD>>/${dbpassword}/g" "$DEST_FILE"
echo "$DEST_FILE updated successfully"
Now that the script is ready to do its job, let's call it from the gitlab-ci.yml like below:
build-code-job:
  stage: build
  before_script: # some cleanup
  script:
    - echo "build app1"
    - mvn $MAVEN_CLI_OPTS clean package
    - sh $BUILD_CHECKOUTDIR/$CI_PROJECT_NAME/bin/updatePropertiesFile.sh $BUILD_CHECKOUTDIR/$CI_PROJECT_NAME/src/main/resources/application.properties $BUILD_CHECKOUTDIR/$CI_PROJECT_NAME/release/application.properties
This will copy the properties file from the project's resources dir to the release dir and do the value replacement, generating an env-specific application.properties on the GitLab runner.
I have a small project on GitLab and I keep a CHANGELOG.md file in it. I want to update it with every merge to master, but occasionally I forget. I'm using GitLab CI and so I'd like to employ it to check if the changelog file was changed. How can I do it?
This was my solution after a lot of trials; hope it helps. Basically, I only want to trigger the job when there is a new merge request. Using git, I get the list of files changed within the merge request, and if the file I want to track is in that list, the job succeeds. If the file is not found, the job fails.
update_file_check:
  stage: test
  script:
    - git fetch
    - FILES_CHANGED=$(git diff --name-only $CI_MERGE_REQUEST_DIFF_BASE_SHA...HEAD)
    - |+
      for i in $FILES_CHANGED
      do
        if [[ "$i" == "<filename>" ]]
        then
          exit 0
        fi
      done
      exit 1
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
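The loop above can also be collapsed into a single grep call. A small sketch (the function name is illustrative, and CHANGELOG.md stands in for the tracked filename):

```shell
#!/bin/sh
# file_was_changed: returns 0 iff the first argument appears, as an exact
# line, in the remaining arguments. Equivalent to the for-loop above.
file_was_changed() {
  target=$1
  shift
  # One changed path per line, matched exactly and whole-line (-x).
  printf '%s\n' "$@" | grep -qx "$target"
}
```

In the job script this would become `file_was_changed CHANGELOG.md $FILES_CHANGED || exit 1`, with the same pass/fail behavior.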
It is possible, and there are several ways to achieve it. I would propose using the same host where GitLab is located as a runner with the shell executor. Basically, you are opening a way to run a few commands on this GitLab runner. There are a lot of resources on the internet, and even in the official GitLab docs, but to sum up, you will need to follow this flow:
1. .gitlab-ci.yml
This file should be in the root of your project. It is read and interpreted by GitLab when running CI/CD tasks. It can be as complex as you wish, but in my opinion I like to keep things simple, so I will just invoke a script when the master branch changes.
The content might be something like this:
Check Changelog:
  script:
    - sh .gitlab/CI-CD-Script.sh ## Execute the script on the gitlab runner.
  only:
    - master ## It will run only when the master branch changes.
2. .gitlab/CI-CD-Script.sh
As mentioned, I prefer to call a script which manages all the CI/CD logic. As previously said, there are multiple ways to achieve the same result. You could build the script in the following way:
#!/bin/bash
# Download the original changelog from the master branch.
wget -O /tmp/CHANGELOG.md http://<yourgitlabAddress>/<pathToProject>/-/raw/master/CHANGELOG.md
if cmp -s /tmp/CHANGELOG.md CHANGELOG.md; then ## Files are identical.
  echo "Changelog not changed"
  exit 1 ## Job will fail
else
  echo "Changelog changed"
  exit 0 ## Job will pass
fi
That would be all so far. I can't try it in your environment, but I hope it helps.
I have a Git pre-commit hook which fetches the local branch name, checks it against my predefined regex, and alerts the developer if it doesn't match.
I am finding the local branch name using
local_branch="$(git rev-parse --abbrev-ref HEAD)"
It works fine during the normal commit process.
But if I made a wrong commit message and edit it using the option "Rebase children of abc interactively" in Sourcetree, my regex match fails because the local branch name is not what I expect.
In the happy case it comes back as origin/feature/XYZ-01,
but while editing the message it comes back as just "HEAD".
I tried git branch --show-current and it gives nothing.
I also tried git branch | sed -n '/\* /s///p' and it gives "no branch, rebasing feature/XYZ-01".
Is there a way to get the current branch in all cases (regular commit, rebase, etc.)?
How do I get the local branch name during a rebase, and what other cases should I also consider?
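One approach that should cover the rebase case, sketched from how git stores rebase state: during a rebase HEAD is detached, but the original branch name is kept in .git/rebase-merge/head-name (or .git/rebase-apply/head-name for older-style rebases). The function name is illustrative:

```shell
#!/bin/sh
# current_branch: print the branch name even in the middle of a rebase,
# when "git rev-parse --abbrev-ref HEAD" would just print "HEAD".
current_branch() {
  git_dir=$(git rev-parse --git-dir) || return 1
  for state in rebase-merge rebase-apply; do
    if [ -f "$git_dir/$state/head-name" ]; then
      # head-name contains e.g. "refs/heads/feature/XYZ-01"; strip the prefix.
      sed 's#^refs/heads/##' "$git_dir/$state/head-name"
      return 0
    fi
  done
  # Not rebasing: the usual lookup works.
  git rev-parse --abbrev-ref HEAD
}
```

In the pre-commit hook, `local_branch="$(current_branch)"` would then give feature/XYZ-01 both during normal commits and mid-rebase. Detached HEAD outside a rebase still prints "HEAD", so the hook may want to handle that case separately.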
I have a crontab containing around 80 entries on a server. And I would like to manage that crontab using Ansible.
Ideally I would copy the server's crontab to my Ansible directory and create an Ansible task to ensure that crontab is set on the server.
But the cron module only seems to manage individual cron entries and not whole crontab files.
Manually migrating the crontab to Ansible tasks is tedious. And even if I find or make a tool that does it automatically, I feel the YAML file will be far less readable than the crontab file.
Any idea how I can handle that big crontab using Ansible?
I managed to find a simple way to do it: I copy the crontab file to the server and then update the crontab with the shell module if the file changed.
The crontab task:
---
- name: Ensure crontab file is up-to-date.
  copy: src=tasks/crontab/files/{{ file }} dest={{ home }}/cronfile
  register: result

- name: Ensure crontab file is active.
  shell: crontab cronfile
  when: result is changed
In my playbook:
- include: tasks/crontab/main.yml file=backend.cron
I solved this problem like this:
- name: Save out Crontabs
  copy: src=../files/crontabs/{{ item }} dest=/var/spool/cron/{{ item }} owner={{ item }} mode=0600
  notify: restart cron
  with_items:
    - root
    - ralph
    - jim
    - bob
The advantage of this method (versus writing to an intermediate file) is that any manual edits of the live crontabs get removed and replaced with the Ansible-controlled version. The disadvantage is that it somewhat hacks around the cron process.
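The "restart cron" handler referenced by notify is not shown in the answer; a minimal sketch, assuming a host where the service is named crond (it is "cron" on Debian-family systems), might be:

```yaml
handlers:
  - name: restart cron
    service:
      name: crond   # assumed service name; use "cron" on Debian/Ubuntu
      state: restarted
```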
Maintain idempotency by doing it this way:
- name: crontab
  block:
    - name: copy crontab file
      copy:
        src: /data/vendor/home/cronfile
        dest: /home/mule/cronfile
        mode: '0644'
      register: result

    - name: ensure crontab file is active
      command: crontab /home/mule/cronfile
      when: result.changed

  rescue:
    - name: delete crontab file
      file:
        state: absent
        path: /home/mule/cronfile
I'm not sure if this is the right place to ask this; please redirect me if not.
I'm new to git, and while learning it I stumbled upon this:
How do git branch branchName and ls work with each other?
For example:
Say I have a master and a test branch, and the test branch has an extra testFile compared to the master branch.
Now, while on the master branch, if I ls, I won't see the testFile, but after switching to the test branch and running ls, I'll see the testFile:
kiran@kiran-desktop:/media/kiran/Linux_Server/dev$ git branch
  master
* test
kiran@kiran-desktop:/media/kiran/Linux_Server/dev$ git checkout master
M	editor/editor_parts/syntax/operator_syntax.js
Switched to branch 'master'
kiran@kiran-desktop:/media/kiran/Linux_Server/dev$ ls
cgi-bin     index.php                    misc           underConstruction
editor      jquery-1.5.2.min.js          php.php        userManage
fileManage  jquery-ui-1.8.11.custom.css  projectManage  userPages
images      login                        test.php
kiran@kiran-desktop:/media/kiran/Linux_Server/dev$ git checkout test
M	editor/editor_parts/syntax/operator_syntax.js
Switched to branch 'test'
kiran@kiran-desktop:/media/kiran/Linux_Server/dev$ ls
cgi-bin     index.php                    misc           test.php
editor      jquery-1.5.2.min.js          php.php        underConstruction
fileManage  jquery-ui-1.8.11.custom.css  projectManage  userManage
images      login                        testFile.txt   userPages
kiran@kiran-desktop:/media/kiran/Linux_Server/dev$
But pwd from both branches shows the same location.
So how does switching branches change the output of ls (which, as I understand it, is a Linux command)?
git checkout switches you from one branch to the other. To do this, it replaces the files in your working directory with the ones from the branch you check out.
Your repository is effectively the .git subdirectory.
If you do a ls -a, you'll see it:
% ls -a -1
.git
.gitignore
...
Tracked files are stored in there. You normally only see the currently checked out branch. When you checkout a different branch, git grabs the files from .git and you can see them with ls.
Have a look at the answers to this question for more information about how git works: Where can I find a tutorial on Git's internals?
Git, unlike SVN (which, as you may know, keeps branches in different directories), keeps only the currently checked-out branch in the repository directory.
All the branches (actually, all the objects) are stored inside the .git folder in the root of the repo, and only the files belonging to a specific branch are present while you have that branch checked out (plus any files not added to the repo).
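This is easy to see in a throwaway repository: a branch is just a tiny file under .git/refs/heads containing a commit hash. A self-contained sketch (all names here are made up for the demo):

```shell
#!/bin/sh
# Build a scratch repo and peek at how git stores branches.
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git symbolic-ref HEAD refs/heads/master          # pin the branch name for the demo
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "first commit"
git branch test                                  # second branch, same commit

cat .git/HEAD                # "ref: refs/heads/master" - the current branch
ls .git/refs/heads           # one small file per branch: master, test
cat .git/refs/heads/test     # the commit hash that branch points to
```

Checking out a branch reads that commit's tree out of .git and rewrites the working directory to match, which is why ls shows different files afterwards.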