gitlab: what is the use of before_script inside a job

I don't see the need for using before_script in a job; it could just be put inside script itself:
deploy-to-stage:
  stage: deploy
  before_script:
    - "which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )"
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com
  script:
    - *** some code here ***
If they are going to run one after another anyway, what does the separate section buy me?
I can understand having a before_script common to all jobs, because that saves some boilerplate.

Effectively, as far as the machine is concerned, the before_script content and the script content are concatenated and executed together in a single shell. But jobs aren't (only) read by machines.
Let's take your current job as an example, and suppose I have to maintain it in the future.
If I have to change something about the way the new code is generated or how an image has to be deployed, I can go straight to the script section, because the job is properly structured and I don't need to touch anything related to the Git/SSH setup. On the other hand, if everything sits in a single section, I'll have to read through all of the code, when the part I'm interested in is probably at the end (though it might not be).
Of course, this only applies when the separation between before_script and script is chosen deliberately and is not a random split made without any consideration.
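For the "common to all jobs" case the question mentions, a minimal sketch would be a top-level before_script that every job inherits (the job names and the echo placeholders below are illustrative; the SSH lines are the ones from the question):
before_script:
  - "which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )"
  - eval $(ssh-agent -s)
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null

deploy-to-stage:
  stage: deploy
  script:
    - echo "stage-specific deploy commands here"

deploy-to-prod:
  stage: deploy
  script:
    - echo "prod-specific deploy commands here"
Each job then only carries the commands that are actually specific to it.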

Related

how to differentiate when the MR is accepted

I'd like to have a branch in the deployment script of my .gitlab-ci.yml file that is based on whether the particular pipeline is running because an MR has been accepted and is being merged into the default branch.
For instance,
Deploy:
  stage: deploy
  image: alpine
  script:
    - apk add openssh-client
    - install -d -D -m 700 -p ~/.ssh
    - eval $(ssh-agent -s)
    - cat "${MY_SSH_PRIVATE_KEY}" | ssh-add -
    - |
      if [ "${CI_DEFAULT_BRANCH}" = "${CI_COMMIT_BRANCH}" ]; then
        rsync $BUILD_DIR/myfile.tar.gz filestore1
      else
        rsync $BUILD_DIR/myfile.tar.gz filestore2
      fi
I don't think that comparing CI_DEFAULT_BRANCH with CI_COMMIT_BRANCH is a safe test. I believe it is true any time a commit is on the default branch, not only when an MR is being merged, so I think I'm missing an important distinction and/or a best practice.
My intent is that as long as the MR is still in development it should be pushed to filestore2, and only pushed to the production filestore1 when the MR is good, tests cleanly, and is accepted.
Related:
Gitlab: How to run a deploy job on a tagged version / release? requires me to tag whenever I need to release something; this may be something I work into my workflow, but it is not our current workflow.
The code you have makes sense to fulfill the purpose you described and should work as-is (assuming the bash code is otherwise well-formed).
When changes are merged to the default branch, CI_COMMIT_BRANCH will be the same as CI_DEFAULT_BRANCH and the condition you have would evaluate to true, causing it to deploy to filestore1 -- for pipelines running on any other branch, filestore2 would be used when this job runs.
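If you want the branch check out of the shell script entirely, one possible variation is to let rules pick the destination. This is only a sketch: it assumes a reasonably recent GitLab (rules:variables is not available on very old versions), and TARGET is a made-up variable name.
Deploy:
  stage: deploy
  image: alpine
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      variables:
        TARGET: filestore1   # production filestore (assumed mapping)
    - if: '$CI_COMMIT_BRANCH'
      variables:
        TARGET: filestore2   # everything else
  script:
    - apk add openssh-client
    - install -d -D -m 700 -p ~/.ssh
    - eval $(ssh-agent -s)
    - cat "${MY_SSH_PRIVATE_KEY}" | ssh-add -
    - rsync $BUILD_DIR/myfile.tar.gz "$TARGET"
This keeps the deploy command itself identical for both destinations.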
Your code could be written as follows; this way the file is pushed to filestore1 only when the MR is merged to main or master, and to filestore2 otherwise.
Deploy:
  stage: deploy
  image: alpine
  script:
    - apk add openssh-client
    - install -d -D -m 700 -p ~/.ssh
    - eval $(ssh-agent -s)
    - cat "${MY_SSH_PRIVATE_KEY}" | ssh-add -
    - |
      if [ "${CI_COMMIT_BRANCH}" = "main" ] || [ "${CI_COMMIT_BRANCH}" = "master" ]; then
        rsync $BUILD_DIR/myfile.tar.gz filestore1
      else
        rsync $BUILD_DIR/myfile.tar.gz filestore2
      fi
The if statement checks for master or main here.

How to handle input (yes/no) in buildspec.yml

I'm trying to run a CodeBuild build in AWS, and within my buildspec.yml I inserted a sudo apt install python3-pip command. AWS CodeBuild runs the buildspec file automatically, but during command execution apt stops at its yes/no confirmation prompt.
I want it to answer with yes, but the commands in the buildspec run automatically and I can't interact with the build log in AWS CodeBuild. What should I do in this case?
Use
sudo apt-get --yes install python3-pip
which will answer yes automatically (the same -y/--yes flag also works with plain apt).
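For context, in a buildspec that might look roughly like this (phase layout per the standard buildspec schema; the DEBIAN_FRONTEND export is an extra, optional guard against other interactive prompts):
version: 0.2

phases:
  install:
    commands:
      # -y answers the confirmation prompt for us
      - export DEBIAN_FRONTEND=noninteractive
      - sudo apt-get update -y
      - sudo apt-get --yes install python3-pip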
Usually your-command <<< $'y\n' should work, but from buildspec.yml in AWS CodeBuild it did not, as my environment was an Ubuntu Docker image.
But I was able to fix a similar issue by:
(1) creating a shell script (which executes that same command) from the buildspec file, and
(2) executing the shell script created in step 1 from the same buildspec file.
e.g.
pre_build:
  commands:
    - echo '#!/bin/bash' > test.sh
    - echo "your command <<< \$'y\n'" >> test.sh
    - chmod 755 ./test.sh
build:
  commands:
    - ls -l
    - echo currentdir $PWD
    # executing the created file
    - ./test.sh
I used this and it worked, in case someone faces a similar issue:
$ printf 'Y\n' |
https://askubuntu.com/questions/338857/automatically-enter-input-in-command-line
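Applied to the command from the question, that pattern would presumably look like this (piping a Y plus a newline into the install so the prompt is answered automatically):
$ printf 'Y\n' | sudo apt install python3-pip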

Why does my gitlab pipeline fail with rsync error

I used this pipeline in another project and there it worked.
But now I have a problem (I'm using exactly the same settings):
staging_upload:
  stage: staging
  only:
    refs:
      - develop
      - schedules
  script:
    - sshpass -e rsync -avz --progress --exclude='.git' --exclude='.gitlab-ci.yml' . $SSH_USERNAME@$HOST:/home/xy/html/project/staging/
Now I get this error:
rsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(644) [sender=3.1.2]
Does anyone have a clue what is going wrong here?
sshpass is very rarely used in gitlab-ci.yml files.
Much more frequent is the use of an ssh-agent (if your private key is passphrase-protected), as seen here.
before_script:
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
  - 'which rsync || ( apt-get update -y && apt-get install rsync -y )'
  - eval $(ssh-agent -s)
  - ssh-add <(echo "$SSH_PRIVATE_KEY_DEV")
  - ssh-add <(echo "$SSH_PRIVATE_KEY_PROD")
  - mkdir -p ~/.ssh
  - '[[ -f /.dockerenv ]] && echo "$SSH_HOSTKEYS" > ~/.ssh/known_hosts'

dev:
  script:
    - rsync --exclude=.gitlab-ci.yml --exclude=.git -avx -e ssh `pwd`/ dbasdev@iaas.ouce.ox.ac.uk:/var/www/html/it_dev/it_monitor_app/auth/
Check if this approach is more reliable, using a masked variable for your private key.
Try running rsync with -v (or two of them, -vv). The verbose output will show you where/why it is stalling. Also validate your escaped characters and absolute paths; they are fairly common reasons for rsync to stall.
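As a rough sketch of that advice applied to the failing job (still assuming the SSHPASS variable is set for sshpass -e), bumping the verbosity would look like:
script:
  - sshpass -e rsync -avz -vv --progress --exclude='.git' --exclude='.gitlab-ci.yml' . "$SSH_USERNAME@$HOST:/home/xy/html/project/staging/"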

Is it possible to do a git push within a Gitlab-CI without SSH?

We want to know if it's technically possible, as in GitHub, to do a git push using the https protocol rather than ssh, and without directly putting a username and password in the request.
I have seen people who seem to think it is possible, but we weren't able to prove it.
Is there any proof out there that can confirm such a feature, one that allows you to push using a user access token or the gitlab-ci-token from within CI?
Here is my before_script.sh, which can be used within any .gitlab-ci.yml:
before_script:
  - ./before_script.sh
All you need is to set a protected environment variable called GL_TOKEN or GITLAB_TOKEN within your project.
if [[ -v "GL_TOKEN" || -v "GITLAB_TOKEN" ]]; then
  if [[ "${CI_PROJECT_URL}" =~ (([^/]*/){3}) ]]; then
    mkdir -p $HOME/.config/git
    echo "${BASH_REMATCH[1]/:\/\//://gitlab-ci-token:${GL_TOKEN:-$GITLAB_TOKEN}@}" > $HOME/.config/git/credentials
    git config --global credential.helper store
  fi
fi
It doesn't require changing the default Git strategy, and it works fine on a non-protected branch using the default gitlab-ci-token.
On a protected branch, you can use the git push command as usual.
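For illustration, a job that pushes over HTTPS once that credential helper is in place might look roughly like this (it relies on the before_script above; the tag name and bot identity are made up, and the stored token needs write access to the repository):
tag-build:
  stage: deploy
  script:
    - git config user.email "ci-bot@example.com"
    - git config user.name "CI Bot"
    - git tag "build-${CI_PIPELINE_ID}"
    - git push "https://${CI_SERVER_HOST}/${CI_PROJECT_PATH}.git" "build-${CI_PIPELINE_ID}"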
We stopped using SSH keys; Vít Kotačka's answer helped us understand why it was failing before.
I was not able to push back via https from a Docker executor when I made changes in the repository that was cloned by gitlab-runner. Therefore, I use the following workaround:
Clone a repository to some temporary location via https with a user access token.
Do some Git work (like merging, or tagging).
Push changes back.
I have a job in the .gitlab-ci.yml:
tagMaster:
  stage: finalize
  script: ./tag_master.sh
  only:
    - master
  except:
    - tags
and then I have a shell script tag_master.sh with Git commands:
#!/usr/bin/env bash
OPC_VERSION=`gradle -q opcVersion`
CI_PIPELINE_ID=${CI_PIPELINE_ID:-00000}
mkdir /tmp/git-tag
cd /tmp/git-tag
git clone https://deployer-token:$DEPLOYER_TOKEN@my.company.com/my-user/my-repo.git
cd my-repo
git config user.email deployer@my.company.com
git config user.name 'Deployer'
git checkout master
git pull
git tag -a -m "[GitLab Runner] Tag ${OPC_VERSION}-${CI_PIPELINE_ID}" ${OPC_VERSION}-${CI_PIPELINE_ID}
git push --tags
This works well.
Just FYI, personal access tokens have account level access, which is generally too broad. The Deploy Key is better as it only has project-level access and can be given write permissions at the time of creation. You can provide the public SSH key as the deploy key and the private key can come from a CI/CD variable.
Here's basically the job I use for tagging:
release_tagging:
  stage: release
  image: ubuntu
  before_script:
    - mkdir -p ~/.ssh
    # Settings > Repository > Deploy Keys > "DEPLOY_KEY_PUBLIC" is the public key of the utilized SSH pair
    # Settings > CI/CD > Variables > "DEPLOY_KEY_PRIVATE" is the private key of the utilized SSH pair, type is 'File' and ends with an empty line
    - mv "$DEPLOY_KEY_PRIVATE" ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - 'which ssh-agent || (apt-get update -y && apt-get install openssh-client git -y) > /dev/null 2>&1'
    - eval "$(ssh-agent -s)"
    - ssh-add ~/.ssh/id_rsa > /dev/null 2>&1
    - (ssh-keyscan -H $CI_SERVER_HOST >> ~/.ssh/known_hosts) > /dev/null 2>&1
  script:
    # .gitconfig
    - touch ~/.gitconfig
    - git config --global user.name $GITLAB_USER_NAME
    - git config --global user.email $GITLAB_USER_EMAIL
    # fresh clone
    - mkdir ~/source && cd $_
    - git clone git@$CI_SERVER_HOST:$CI_PROJECT_PATH.git
    - cd $CI_PROJECT_NAME
    # Version tag
    - git tag -a "v$(cat version)" -m "version $(cat version)"
    - git push --tags

Docker can't write to directory mounted using -v unless it has 777 permissions

I am using the docker-solr image with docker, and I need to mount a directory inside it which I achieve using the -v flag.
The problem is that the container needs to write to the directory that I have mounted into it, but doesn't appear to have the permissions to do so unless I do chmod 777 on the entire directory. I don't think setting the permissions to allow all users to read and write to it is the solution; it's just a temporary workaround.
Can anyone guide me in finding a more canonical solution?
Edit: I've been running docker without sudo because I added myself to the docker group. I just found that the problem is solved if I run docker with sudo, but I am curious if there are any other solutions.
More recently, after looking through some official Docker repositories, I've realized that the more idiomatic way to solve these permission problems is to use something called gosu in tandem with an entrypoint script. As an example, take an existing Docker project, solr, the same one I was having trouble with earlier.
The Dockerfile on GitHub very effectively builds the entire project, but does nothing to account for the permission problems.
So to overcome this, I first added the gosu setup to the Dockerfile (if you implement this, note that version 1.4 is hard-coded; you can check for the latest release here).
# grab gosu for easy step-down from root
RUN mkdir -p /home/solr \
&& gpg --keyserver pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 \
&& curl -o /usr/local/bin/gosu -SL "https://github.com/tianon/gosu/releases/download/1.4/gosu-$(dpkg --print-architecture)" \
&& curl -o /usr/local/bin/gosu.asc -SL "https://github.com/tianon/gosu/releases/download/1.4/gosu-$(dpkg --print-architecture).asc" \
&& gpg --verify /usr/local/bin/gosu.asc \
&& rm /usr/local/bin/gosu.asc \
&& chmod +x /usr/local/bin/gosu
Now we can use gosu, which is basically the exact same as su or sudo, but works much more nicely with docker. From the description for gosu:
This is a simple tool grown out of the simple fact that su and sudo have very strange and often annoying TTY and signal-forwarding behavior.
The other changes I made to the Dockerfile were adding these lines:
COPY solr_entrypoint.sh /sbin/entrypoint.sh
RUN chmod 755 /sbin/entrypoint.sh
ENTRYPOINT ["/sbin/entrypoint.sh"]
just to add my entrypoint file to the docker container.
and removing the line:
USER $SOLR_USER
so that by default you are the root user (which is why we have gosu, to step down from root).
Now as for my own entrypoint file, I don't think it's written perfectly, but it did the job.
#!/bin/bash
set -e

export PS1="\w:\u docker-solr-> "

# step down from root when just running the default start command
case "$1" in
  start)
    chown -R solr /opt/solr/server/solr
    exec gosu solr /opt/solr/bin/solr -f
    ;;
  *)
    exec "$@"
    ;;
esac
A docker run command takes the form:
docker run <flags> <image-name> <passed in arguments>
Basically, the entrypoint says that if I want to run solr as per usual, I pass the argument start at the end of the command, like this:
docker run <flags> <image-name> start
and otherwise run the commands you pass as root.
The start option first gives the solr user ownership of the directories and then runs the default command. This solves the ownership problem because, unlike the Dockerfile setup, which is a one-time thing, the entrypoint runs every single time.
So now if I mount directories using the -v flag, the entrypoint will chown the files inside the container for you before it actually runs solr.
As for what this does to your files outside the container, I've had mixed results, because Docker acts a little weird on OS X. For me it didn't change the files outside of the container, but on another OS, where Docker plays more nicely with the filesystem, it might change your files outside. I guess that's what you have to deal with if you want to mount files inside the container instead of just copying them in.
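To tie it together, usage would look roughly like this (the image tag and the host directory are placeholders):
# build the image with the gosu-based entrypoint baked in
docker build -t my-solr .

# mount a host directory; the entrypoint chowns it to the solr user, then starts solr
docker run -v "$PWD/solr-data:/opt/solr/server/solr" my-solr start

# any other arguments are exec'd as root instead
docker run -it my-solr bash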

Resources