How to run the yq command from an included project in GitLab?

I have two projects, JWT and RELEASE-MGMT, under the same group in GitLab. I have the pipelines as follows.
JWT (.gitlab-ci.yml):

stages:
  - prjname

include:
  - project: 'testing-group/RELEASE-MGMT'
    ref: 'main'
    file:
      - '/scripts/testing-prj-name.yml'
RELEASE-MGMT (/scripts/testing-prj-name.yml):

testyqcommand:
  stage: prjname
  before_script:
    - pip3 install jq
    - pip3 install awscli
    - pip3 install yq
  script:
    - pwd
    - ls -ltr
    - echo $CI_PROJECT_NAME
    - yq -r '.$CI_PROJECT_NAME.projectname' projectnames.yml
I am getting the error below:
yq: error: argument files: can't open './scripts/testing-service-name.yml': [Errno 2] No such file or directory: './scripts/testing-service-name.yml'
I was thinking that since the two projects exist in the same group, we could do this without using multi-project pipelines; RELEASE-MGMT is also the one that is included in all the microservices we have.

include: is a logical mechanism used when rendering a pipeline configuration. It won't actually bring any files into the workspace of the project running the pipeline.
If you want to run yq against a YAML file in another project, you'll have to clone the project first or otherwise retrieve the file as part of your CI job -- for example by using the files API or cloning the repo with the job token:
script:
  - git clone https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.example.com/<namespace>/<project>
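Applied to the original question, the whole job might look like this (a sketch only; it assumes projectnames.yml lives at the root of RELEASE-MGMT, and CI_SERVER_HOST is GitLab's predefined variable for the instance host):

testyqcommand:
  stage: prjname
  before_script:
    - pip3 install yq
  script:
    # Clone the project that actually contains the YAML file;
    # the job token authenticates within the same group.
    - git clone https://gitlab-ci-token:${CI_JOB_TOKEN}@${CI_SERVER_HOST}/testing-group/RELEASE-MGMT.git
    # Double quotes let the shell expand $CI_PROJECT_NAME before yq parses the expression.
    - yq -r ".$CI_PROJECT_NAME.projectname" RELEASE-MGMT/projectnames.yml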

Related

How to view file actions (reads and writes) during a CI build on GitHub Actions?

I am trying to track the file actions (reads and writes) performed by the different steps of a job in a continuous integration workflow on GitHub Actions, for example by using strace or inotifywait.
steps:
  - uses: actions/checkout@v2
  - name: Maven resolve ranges
    run: ./mvnw versions:resolve-ranges -ntp -Dincludes='org.springframework:*,org.springframework.boot:*'
  - name: Cache Maven Repos
    uses: actions/cache@v2
    with:
      path: ~/.m2/repository
      key: ${{ runner.os }}-maven-${{ hashFiles('**/pom.xml') }}
      restore-keys: |
        ${{ runner.os }}-maven-
  - name: Set up JDK 8
    uses: actions/setup-java@v2
    with:
      distribution: 'temurin'
      java-version: 8
  - name: Build with Maven
    run: echo y | ./mvnw -B --no-transfer-progress clean install -Dmaven.javadoc.skip=true -Drat.skip=true -Dcheckstyle.skip=true -Dspotless.apply.skip=true
  - name: Build examples with Maven
    run: echo y | ./mvnw -B -f examples/pom.xml clean package -DskipTests
As an illustration, in the example above, there are 6 steps, and I would like to keep track of file actions performed by each of them separately.
I believe I have two options:
Tracking file actions on GitHub's servers
Tracking file actions on my PC
For the first option, I would probably have to write a plugin and add it to the workflow so that it can watch file actions on GitHub's servers; for the latter option, the challenge is simulating the workflow locally.
I found a tool (nektos/act); however, it simulates the workflow inside a Docker container, so I am not sure whether I can watch file actions inside one.
Do you have any suggestions on which path I should follow?
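One way to approach the first option is to wrap an individual step's command in strace and upload the resulting log as an artifact. A minimal sketch, assuming an Ubuntu runner where strace can be installed (the step names and trace path are illustrative):

- name: Build with Maven (traced)
  run: |
    sudo apt-get update && sudo apt-get install -y strace
    # -f follows child processes; -e limits logging to file-related syscalls
    strace -f -e trace=openat,read,write -o /tmp/maven-build.trace \
      ./mvnw -B --no-transfer-progress clean install
- name: Upload trace
  uses: actions/upload-artifact@v2
  with:
    name: maven-build-trace
    path: /tmp/maven-build.trace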

How to trigger a GitLab CI pipeline manually, when in normal conditions it is triggered by a webhook with commit IDs?

I have a GitLab CI pipeline which is triggered by a Bitbucket webhook with the current and last commit IDs. I also want to re-run the pipeline manually whenever the build triggered by the webhook is not working as expected.
I tried the Run Pipeline option, but it shows the error:
The form contains the following error:
No stages/jobs for this pipeline.
Here is the GitLab CI file. include refers to another project where the standard YAML file for the pipeline is kept:
include:
  - project: Path/to/project
    ref: bb-deployment
    file: /bitbucket-deployment.yaml

variables:
  TILLER_NAMESPACE: <namespace>
  NAMESPACE: testenv
  REPO_REF: testenvbranch
  LastCommitSHA: <commit sha from webhook>
  CurrentCommitSHA: <current commit sha from webhook>
Here is the detailed gitlab-ci file that is provided in the other project, which has the stages:
stages:
  - pipeline
  - build

variables:
  ORG: test
  APP_NAME: $CI_PROJECT_NAME

before_script:
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
  - eval $(ssh-agent -s)
  - echo "$SSH_PRIIVATE_KEY2" | tr -d '\r' | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  - echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts
  - chmod 644 ~/.ssh/known_hosts

Building CI Script:
  stage: pipeline
  image: python:3.6
  only:
    refs:
      - master
  script:
    - |
      curl https://github.com/org/scripts/branch/install.sh | bash -s latest
      source /usr/local/bin/pipeline-variables.sh
      git clone git@bitbucket.org:$ORG/$APP_NAME.git
      cd $APP_NAME
      git checkout $lastCommit
      cp -r env old
      git checkout $bitbucketCommit
      $CMD_DIFF old env
      $CMD_BUILD
      $CMD_INSTALL updatedReposList.yaml deletedReposList.yaml /tmp/test $NAMESPACE $REPO_REF $ORG $APP_NAME $lastCommit $bitbucketCommit
      cat cicd.yaml
      mv cicd.yaml ..
  artifacts:
    paths:
      - cicd.yaml

Deploying Apps:
  stage: build
  only:
    refs:
      - master
  trigger:
    include:
      - artifact: cicd.yaml
        job: Building CI Script
    strategy: depend
In the manual trigger, instead of considering the last and current commit SHAs, it should rebuild the application.
Any help will be appreciated.
Thank you for your comment (below). I see you are using the include directive (https://docs.gitlab.com/ce/ci/yaml/#include) in one .gitlab-ci.yml to include a GitLab CI YAML file from another project.
I can duplicate this error (No stages/jobs for this pipeline) by invoking Run Pipeline on project 1, which is configured to include GitLab CI YAML from project 2, when project 2's GitLab CI YAML is restricted to the master branch but I'm running the pipeline on another branch.
For example, let's say project 1 is called "stackoverflow-test" and its .gitlab-ci.yml is:
include:
  - project: atsaloli/test
    file: /.gitlab-ci.yml
    ref: mybranch
And project 2 is called "test" (in my own namespace, atsaloli) and its .gitlab-ci.yml is:
my_job:
  script: echo hello world
  image: alpine
  only:
    refs:
      - master
If I select "Run Pipeline" in the GitLab UI in project 1 on a branch other than "master", I then get the error message "No stages / jobs for this pipeline".
That's because there is no job defined for my non-master branch, and without any job defined, there is no stage defined either.
I hope that sheds some light on what's going on with your webhook.
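If you want manual UI runs to work regardless of branch, one option (a sketch; adjust to your own branch policy) is to also allow the web pipeline source in only:refs:

my_job:
  script: echo hello world
  image: alpine
  only:
    refs:
      - master
      - web  # also run when started via "Run Pipeline" in the UI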

Is it possible to transfer caches/artifacts between pipelines?

In Gitlab, is it possible to transfer caches or artifacts between pipelines?
I am building a library in one pipeline and I want to build an application with the library in another pipeline.
Yes, it is possible. There are a couple of options to achieve this:
Using Job API and GitLab Premium
The first option is to use the Job API to fetch artifacts. This method is available only if you have GitLab Premium. With this option, you use CI_JOB_TOKEN with the Job API to fetch artifacts from another pipeline; see the GitLab documentation on job artifacts for details.
Here is a quick example of a job you would put in your application pipeline configuration:
build_application:
  image: debian
  stage: build
  script:
    - apt update && apt install -y unzip
    - curl --location --output artifacts.zip "https://gitlab.example.com/api/v4/projects/${PROJECT_ID}/jobs/artifacts/master/download?job=build&job_token=$CI_JOB_TOKEN"
    - unzip artifacts.zip
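Note that ${PROJECT_ID} is the numeric ID of the project that produced the artifacts (shown on its project page), and the job=build and master parts of the URL must match the job name and ref in that project's pipeline.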
Using S3
The second option is to use some third-party intermediate storage, for instance AWS S3. To pass artifacts, follow the example below.
In your library pipeline configuration, create the following job:
variables:
  TARGET_PROJECT_TOKEN: [get token from Settings -> CI/CD -> Triggers]
  TARGET_PROJECT_ID: [get project id from project main page]

publish-artifact:
  image: "python:latest"
  stage: publish
  before_script:
    - pip install awscli
  script:
    - aws s3 cp output/artifact.zip s3://your-s3-bucket-name/artifact.zip.${CI_JOB_ID}
    - "curl -X POST -F token=${TARGET_PROJECT_TOKEN} -F ref=master -F variables[ARTIFACT_ID]=${CI_JOB_ID} https://gitlab.com/api/v4/projects/${TARGET_PROJECT_ID}/trigger/pipeline"
Then, in your application pipeline configuration, retrieve the artifact from the S3 bucket:
fetch-artifact-from-s3:
  image: "python:latest"
  stage: prepare
  artifacts:
    paths:
      - artifact/
  before_script:
    - pip install awscli
  script:
    - mkdir artifact
    - aws s3 cp s3://your-s3-bucket-name/artifact.zip.${ARTIFACT_ID} artifact/artifact.zip
  only:
    variables:
      - $ARTIFACT_ID
Once the fetch-artifact-from-s3 job completes, the artifact is available in the artifact/ directory and can be consumed by other jobs in the application pipeline.
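As an illustration, a later job in the application pipeline could consume it like this (a sketch; the stage name and build commands are placeholders):

build-application:
  stage: build
  needs: ["fetch-artifact-from-s3"]
  script:
    - unzip artifact/artifact.zip -d library/
    # library/ now contains the library build to compile against
    - ls library/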

GitLab - How to add a badge based on pipeline jobs

My goal is to show badges (for example, a Pylint score badge) based on pipeline results.
I have a private GitLab CE Omnibus instance with the following .gitlab-ci.yml:
image: python:3.6

stages:
  - lint
  - test

before_script:
  - python -V
  - pip install pipenv
  - pipenv install --dev

lint:
  stage: lint
  script:
    - pipenv run pylint --output-format=text --load-plugins pylint_django project/ | tee pylint.txt
    - score=$(sed -n 's/^Your code has been rated at \([-0-9.]*\)\/.*/\1/p' pylint.txt)
    - echo "Pylint score was $score"
    - ls
    - pwd
    - pipenv run anybadge --value=$score --file=pylint.svg pylint
  artifacts:
    paths:
      - pylint.svg

test:
  stage: test
  script:
    - pipenv run python manage.py test
So I thought that I would store the image in the artifacts of the lint job and display it via the badge feature.
But I encounter the following issue: when I browse https://example.com/[group]/[project]/-/jobs/[ID]/artifacts/file/pylint.svg, instead of seeing the badge I get the following message:
The image could not be displayed because it is stored as a job artifact. You can download it instead.
And anyway, I feel like this is the wrong approach, because even if I could get the image, there doesn't seem to be a way to get the image from the latest job, since GitLab badge image URLs only support the %{project_path}, %{project_id}, %{default_branch} and %{commit_sha} placeholders.
So how would one add a badge to a GitLab project based on an SVG generated from results in a GitLab pipeline?
My guess is that I could push it to a .badge folder, but that doesn't sound like a clean solution.
You can indeed get the artifact(s) for the latest job (see the GitLab documentation on downloading artifacts), but the trick is that you need to use a slightly different URL:
https://example.com/[group]/[project]/-/jobs/artifacts/[ref]/raw/pylint.svg?job=lint
where [ref] is the reference to your branch/commit/tag.
Speaking of the badge placeholders available in GitLab, you can potentially put %{default_branch} or %{commit_sha} into [ref]. This won't give you the correct badge for every branch, but at least your default branch will get one.
Please also note that the ?job=lint query parameter is required; without it, the URL won't work.
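Putting it together, the badge image URL configured under Settings > General > Badges could look like this (using %{default_branch} as the ref, per the caveat above):
https://example.com/%{project_path}/-/jobs/artifacts/%{default_branch}/raw/pylint.svg?job=lint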

Is it possible to use multiple docker images in bitbucket pipeline?

I have this pipeline file to unittest my project:
image: jameslin/python-test

pipelines:
  default:
    - step:
        script:
          - service mysql start
          - pip install -r requirements/test.txt
          - export DJANGO_CONFIGURATION=Test
          - python manage.py test
but is it possible to switch to another docker image to deploy?
image: jameslin/python-deploy

pipelines:
  default:
    - step:
        script:
          - ansible-playbook deploy
I cannot seem to find any documentation saying either Yes or No.
You can specify an image for each step, like this:
pipelines:
  default:
    - step:
        name: Build and test
        image: node:8.6
        script:
          - npm install
          - npm test
          - npm run build
        artifacts:
          - dist/**
    - step:
        name: Deploy
        image: python:3.5.1
        trigger: manual
        script:
          - python deploy.py
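Note that the artifacts declared in the first step (dist/**) are passed on to subsequent steps, so the Deploy step can use the build output, and trigger: manual means the deployment waits until someone starts it from the Bitbucket UI.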
Finally found it:
https://confluence.atlassian.com/bitbucket/configure-bitbucket-pipelines-yml-792298910.html#Configurebitbucket-pipelines.yml-ci_stepstep(required)
step (required): Defines a build execution unit. Steps are executed in the order in which they appear in the pipeline. Currently, each pipeline can have only one step (one for the default pipeline and one for each branch). You can override the main Docker image by specifying an image in a step.
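(That quoted limitation is out of date: Bitbucket Pipelines now supports multiple steps per pipeline, as the answer above demonstrates.)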
I have not found any information saying yes or no either. What I have assumed is that, since a single image can be configured with all the languages and technology you need, the following method works:
Create your Docker image with all the utilities you need for both the default and deployment pipelines.
Use the branching method shown in their examples: https://confluence.atlassian.com/bitbucket/configure-bitbucket-pipelines-yml-792298910.html#Configurebitbucket-pipelines.yml-ci_branchesbranches(optional)
Use shell scripts or other scripts to run the specific tasks you need, for example:
image: yourusername/your-image

pipelines:
  branches:
    master:
      - step:
          script: # Modify the commands below to build your repository.
            - echo "Starting pipelines for master"
            - chmod +x your-task-configs.sh # necessary to get the shell script to run in BB Pipelines
            - ./your-task-configs.sh
    feature/*:
      - step:
          script: # Modify the commands below to build your repository.
            - echo "Starting pipelines for feature/*"
            - npm install
            - npm install -g grunt-cli
            - npm install grunt --save-dev
            - grunt build
