Does anyone know if it's possible to break down the CircleCI config file into smaller files, where each job, command, workflow, etc. lives in its own file/subdirectory? If so, how would you approach this?
I've looked around and even attempted to build a Python script that assembles a config from all these YAML files, but with no luck: the files reference anchor names that don't exist within each individual file, so the PyYAML library won't load them.
What I'm trying to accomplish is to have this folder structure:
configs/
├── dependencies.yml
├── commands/
│   ├── command_1.yml
│   └── command_2.yml
├── jobs/
│   ├── job_1.yml
│   └── job_2.yml
└── workflows/
    ├── workflow_1.yml
    └── workflow_2.yml
Here dependencies.yml contains a hand-written breakdown of what each workflow requires, in terms of what is used in each step > job > command.
You can do the following:
1. Split your config.yml into the structure defined in CircleCI's "Packing a config" documentation.
2. Use dynamic configuration, where you first generate the config from step 1 and then call the generated config file from the main config.yml.
Example original config.yml to split:
version: 2
orbs:
  sonarcloud: sonarsource/sonarcloud@1.0.3
jobs:
  my-job:
    docker:
      - image: cimg/latest
    steps:
      - checkout
      - run: make
workflows:
  build:
    jobs:
      - my-job
Create the following layout in a new folder called config (output of tree):
.
├── config.yml
└── config
    ├── @orbs.yml
    ├── jobs
    │   └── my-job.yml
    └── @workflows.yml
@orbs.yml contains:
version: 2
orbs:
  sonarcloud: sonarsource/sonarcloud@1.0.3
@workflows.yml contains:
workflows:
  build:
    jobs:
      - my-job
jobs/my-job.yml contains:
docker:
  - image: cimg/latest
steps:
  - checkout
  - run: make
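When packed, each directory becomes a key (so jobs/my-job.yml becomes my-job: under jobs:) and the contents of the @-prefixed files are merged into the top level, so circleci config pack config reproduces the original config:
version: 2
orbs:
  sonarcloud: sonarsource/sonarcloud@1.0.3
jobs:
  my-job:
    docker:
      - image: cimg/latest
    steps:
      - checkout
      - run: make
workflows:
  build:
    jobs:
      - my-job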
And the main config.yml should look like:
version: 2.1
setup: true
orbs:
  continuation: circleci/continuation@0.3.1
jobs:
  generate-and-run-circleci:
    docker:
      - image: 'circleci/circleci-cli:latest' # ships with the circleci CLI preinstalled
    steps:
      - checkout
      - run:
          command: |
            cd .circleci
            circleci config pack config > generated.yml
      - continuation/continue:
          configuration_path: .circleci/generated.yml
workflows:
  build:
    jobs:
      - generate-and-run-circleci
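Optionally, the generate job can validate the packed file before handing it to the continuation; circleci config validate is a standard CLI subcommand, so the run step above could be extended like this (a sketch):
      - run:
          name: Pack and validate config
          command: |
            cd .circleci
            circleci config pack config > generated.yml
            circleci config validate generated.yml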
My file structure consists of two main directories, resources and src. resources has images in a subdirectory plus various JSON files; src has many nested directories with .ts files in each:
├── package.json
├── package-lock.json
├── README.md
│
├── .docker
│ ├── Dockerfile
│ └── aBashScript.sh
│
├── resources
│ ├── data.json
│ └── images
│ └── manyimages.png
│
├── src
│ ├── subdirectory1
│ └── NestedDirectories
│
├── .gitlab-ci.yml
├── tsconfig.eslint.json
├── tsconfig.json
├── eslintrc.json
└── prettierrc.json
My .gitlab-ci.yml has two stages, build and deploy.
What I want:
1- If it's a commit on branches "main" or "dev", and anything that affects the actual project changes, run build. That is: anything under resources or src (and their nested directories), the Dockerfile, package.json, and package-lock.json. I'd be content with "any .ts file changed" too, since the other criteria usually only apply when that happens.
2- If build ran and it's a commit on the default branch ("main"), then run the deploy stage.
Also, for clarification: when I say there's a commit on branch X, I mean an accepted merge request, i.e. an actual change on that branch. At some point in my tinkering it was running on (non-accepted) merge requests, but I forgot what I changed to fix that.
What happens:
1- If I specify the changes rule on build, then it never runs. However, even if build doesn't run, deploy always runs (if on branch "main").
.gitlab-ci.yml
variables:
  IMAGE_TAG: project

stages:
  - build
  - deploy

build_image:
  stage: build
  image: docker:20.10.16
  services:
    - docker:20.10.16-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  before_script:
    - echo $REGISTRY_PASS | docker login -u $REGISTRY_USER --password-stdin
  script:
    - |
      if [[ "$CI_COMMIT_BRANCH" == "$CI_DEFAULT_BRANCH" ]]; then
        tag="latest"
        echo "Running on default branch '$CI_DEFAULT_BRANCH': tag = '$tag'"
      else
        tag="$CI_COMMIT_REF_SLUG"
        echo "Running on branch '$CI_COMMIT_BRANCH': tag = $tag"
      fi
    - docker build -f .docker/Dockerfile -t $REPO_NAME:$IMAGE_TAG-$tag .
    - docker push $REPO_NAME:$IMAGE_TAG-$tag
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH || $CI_COMMIT_BRANCH == "dev"'
      changes:
        - "*.ts"
        - "*.json"
        - Dockerfile

deploy:
  stage: deploy
  before_script:
    - chmod 400 $SSH_KEY
  script:
    - ssh -o StrictHostKeyChecking=no -i $SSH_KEY $VPS "
      echo $REGISTRY_PASS | docker login -u $REGISTRY_USER --password-stdin &&
      cd project &&
      docker-compose pull &&
      docker-compose up -d"
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
This is the most basic version I could cobble together, basically excluding just the README, but the build stage doesn't run (and deploy runs even if build didn't).
Normally this is something I'd be able to brute-force my way through, but to avoid uselessly modifying my files just to test the changes rule, I've only been able to test this when making actual modifications to the project.
There seem to be plenty of examples in questions and tutorials out there, but I think something is off with my file structure, as I've had no luck copying their changes rules.
The changes: entries are glob patterns, not regexes. So in order to match .ts files in any directory, you'll need to use "**/*.ts", not *.ts (which only matches files in the root).
changes:
  - "**/*.ts"
  - "**/*.json"
  # ...
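Applied to the build_image job from the question, the rule becomes the following (note also that, per the tree in the question, the Dockerfile lives under .docker/, so a bare Dockerfile pattern would never match it):
build_image:
  # ...
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH || $CI_COMMIT_BRANCH == "dev"'
      changes:
        - "**/*.ts"
        - "**/*.json"
        - .docker/Dockerfile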
If build ran and it's a commit on the default branch ("main") then run the deploy stage.
To get this effect, you'll want your deploy job to share some of the rules of your build job.
deploy:
  rules:
    - if: "$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH"
      changes:
        - Dockerfile
        - "**/*.ts"
        - "**/*.json"
Or a little fancier way that reduces code duplication:
rules:
  - if: "$CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH"
    when: never # only deploy on default branch
  - !reference [build_image, rules]
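Put together, the deploy job could look like this (a sketch reusing the job names from the question; the script body is a placeholder for the SSH deployment above):
deploy:
  stage: deploy
  rules:
    - if: "$CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH"
      when: never # only deploy on default branch
    - !reference [build_image, rules]
  script:
    - echo "deploy steps here" # placeholder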
This is what my project structure looks like:
├── main_project
│   ├── service1
│   │   └── [...]
│   ├── service2
│   │   └── [...]
│   ├── docs
│   │   └── [...]
│   └── .gitlab-ci.yml
└── Makefile
My .gitlab-ci.yml:
[...]
service1_build:
  image: image
  stage: build
  script:
    - #doing something
  only:
    changes:
      - /service1/**/*
      - /.gitlab-ci.yml
      - /Makefile
  except:
    changes:
      - /docs/**/*

service2_build:
  image: image
  stage: build
  script:
    - #doing something
  only:
    changes:
      - /service2/**/*
      - /.gitlab-ci.yml
      - /Makefile
  except:
    changes:
      - /docs/**/*

test:
  image: image
  stage: test
  needs:
    - service1_build
    - service2_build
  script:
    - #doing something
  except:
    changes:
      - /docs/**/*

service1_docker:
  image: image
  stage: docker
  needs:
    - test
  script:
    - #doing something
  only:
    refs:
      - master
    changes:
      - /service1/**/*
      - /.gitlab-ci.yml
      - /Makefile
  except:
    changes:
      - /docs/**/*

service2_docker:
  image: image
  stage: docker
  needs:
    - test
  script:
    - #doing something
  only:
    refs:
      - master
    changes:
      - /service2/**/*
      - /.gitlab-ci.yml
      - /Makefile
  except:
    changes:
      - /docs/**/*
[...]
I tried this, but it is not working properly. For example, if I change a file in the docs folder, the pipeline still executes the test job.
These are the rules that I want to be applied to each job:
service1_build: run on all branches, when there is any modification within the service1 folder or if Makefile or .gitlab-ci.yml has been changed. But it should not run when files in docs folder have been modified.
service2_build: run on all branches, when there is any modification within the service2 folder or if Makefile or .gitlab-ci.yml has been changed. But it should not run when files in docs folder have been modified.
test: run on all branches every time, except when files in docs folder have been modified.
service1_docker: run only on master, when there is any modification within the service1 folder or if Makefile or .gitlab-ci.yml has been changed. But it should not run when files in docs folder have been modified.
service2_docker: run only on master, when there is any modification within the service2 folder or if Makefile or .gitlab-ci.yml has been changed. But it should not run when files in docs folder have been modified.
Is this achievable like this, or how should I handle it? Also, when I give a path, is it always resolved relative to the folder containing the .gitlab-ci.yml?
Following your project structure, you should change your only/except paths like this:
only:
  changes:
    - main_project/service1/**/*
    - .gitlab-ci.yml
    - Makefile
except:
  changes:
    - main_project/docs/**/*
This includes the main_project directory: changes: paths are always resolved from the repository root, not from the folder containing the .gitlab-ci.yml.
You should also add optional needs to your test job, because if there are only updates to service1, the test job will look for the service2 job and fail:
test:
  stage: test
  needs:
    - job: service1_build
      optional: true
    - job: service2_build
      optional: true
I made some tests with these fixes and your rules work properly: https://gitlab.com/sandbox_fm/ci-rules.
You should also consider moving from only/except to rules, because per the GitLab documentation:
only and except are not being actively developed. rules is the preferred keyword to control when to add jobs to pipelines.
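For example, service1_build rewritten with rules might look like this (a sketch following the corrected paths above; an early when: never rule reproduces the except: behaviour for docs, and the same pattern applies to the other jobs):
service1_build:
  image: image
  stage: build
  script:
    - echo "doing something" # placeholder
  rules:
    - changes:
        - main_project/docs/**/*
      when: never
    - changes:
        - main_project/service1/**/*
        - .gitlab-ci.yml
        - Makefile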
I have a template that I don't want to be modified, as follows:
build:
  script:
    - ...
  artifacts:
    name: $BINFILE
    paths: [$ARTIFACTS_PATH]
and a .gitlab-ci.yml that includes that template and defines the following variables:
BINFILE: demo-bin-files
ARTIFACTS_PATH: "Build"
The artifacts name is substituted correctly from the BINFILE variable, but the ARTIFACTS_PATH variable is not, and the job throws an error when it starts:
Uploading artifacts...
WARNING: --artifact-format: no matching files
ERROR: No files to upload
What I want here is for the user to pass only the paths they want to upload as artifacts. Can I do this, or does GitLab not support it? Thanks.
You can't, but you can achieve this with a shell workaround: set artifacts: paths to a fixed temp folder, and in the last step of your script: (or in after_script:), copy everything you generated dynamically into that temp folder. This will solve your problem until GitLab adds the feature.
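A minimal sketch of that workaround, where .artifacts is an arbitrary temp folder name and make build stands in for the real build commands:
build:
  script:
    - make build # placeholder for the real build
    - mkdir -p .artifacts
    - cp -r "$ARTIFACTS_PATH"/. .artifacts/ # variables do expand inside script
  artifacts:
    name: $BINFILE
    paths:
      - .artifacts/ # fixed path, so no variable needed here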
I've had a similar issue, and there's a simple fix: use the template as an actual template.
template.yml (note the dot in the job name, denoting it as a hidden template job):
.build:
  script:
    - ...
.gitlab-ci.yml
build:
  extends: .build
  artifacts:
    name: demo-bin-files
    paths:
      - Build
You can even define common artifact settings in the template like so:
template.yml
.build:
  script:
    - ...
  artifacts:
    when: always
    expire_in: 6 hours
and .gitlab-ci.yml remains the same.
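For reference, after the extends merge GitLab effectively runs this job (a sketch of the merged result):
build:
  script:
    - ...
  artifacts:
    when: always
    expire_in: 6 hours
    name: demo-bin-files
    paths:
      - Build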
I have a Golang repo with a React client. I want to set up CI for the client using GitHub Actions. The React client lives inside the client folder of the workspace.
I have written the following workflow
name: Node.js CI

on: [push, pull_request]

jobs:
  build:
    name: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          path: client
      - name: Set up Node.js
        uses: actions/setup-node@v1
        with:
          node-version: 12.x
      - run: yarn install
      - run: yarn build
But on committing, it shows the following error:
Run yarn build
yarn run v1.21.1
error Couldn't find a package.json file in "/home/runner/work/evential/evential"
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
##[error]Process completed with exit code 1
The snippet
- uses: actions/checkout@v2
  with:
    path: client
doesn't make the following steps run inside the client folder.
Need help. Thanks in advance.
You can use the working-directory keyword in a run step; see the documentation here. Note that path: on actions/checkout only controls where the repository is checked out, not where later run steps execute, so you can keep the default checkout location and set working-directory instead.
- run: yarn install
  working-directory: client
- run: yarn build
  working-directory: client
Assuming your repository structure looks like this:
.
├── README.md
├── client
│ └── ... # your source files
└── workflows
└── example-ci.yml
You can also set the default working directory for multiple steps using:
defaults:
run:
working-directory: client # The working directory path
This way you don't need to specify it for each steps.
You can also adjust the scope depending on where you put the above snippet, either for:
All steps of all jobs: put it at the base of your workflow.
All steps of one job: put it within the job, in the jobs attribute of your workflow, so it applies to that job's steps.
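Putting it together, a sketch of the workflow from the question with the fix applied (checkout left at its default path, so the repository root contains the client folder):
name: Node.js CI

on: [push, pull_request]

jobs:
  build:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: client
    steps:
      - uses: actions/checkout@v2 # default path: repository root
      - uses: actions/setup-node@v1
        with:
          node-version: 12.x
      - run: yarn install
      - run: yarn build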
I have a GitLab monorepo with some backend Java and frontend Node.js code. To create a CI pipeline, I'm working on a shared approach to build both applications.
In the application repository, let's call it "A", I have the source code as well as a .gitlab-ci.yml file, as below:
A
├── .gitlab-ci.yml
├── backendapi
└── ui
.gitlab-ci.yml file,
---
include:
  - project: 'root/B'
    ref: master
    file: 'top-level.yml'
  - project: 'root/B'
    ref: master
    file: 'maven.yml'
  - project: 'root/B'
    ref: master
    file: 'node.yml'
I have another repository called "B", where I have all my CI functionalities in three different files.
B
├── maven.yml
├── node.yml
└── top-level.yml
top-level.yml file that has my build stage in it,
---
stages:
  - build

variables:
  GIT_SSL_NO_VERIFY: "1"

.build_script: &build_script
  stage: build
  tags:
    - default
    - docker

java_build:
  <<: *build_script
  image:
    name: maven:latest
  script:
    - backend_build

node_build:
  <<: *build_script
  image:
    name: node:slim
  script:
    - frontend_build
maven.yml, which has the mvn build function:
.maven_build: &maven_build |-
  function backend_build {
    cd backendapi
    mvn clean package -DskipTests
  }

before_script:
  - *maven_build
node.yml, with the node function in it:
.node_build: &node_build |-
  function frontend_build {
    cd ui
    npm install
    npm build
  }

before_script:
  - *node_build
When the .gitlab-ci.yml file in repository "A" runs, it pulls in the top-level.yml, maven.yml and node.yml files from repository "B", which is good.
The problem is that when java_build runs, it cannot find the backend_build function from maven.yml; it only gets the frontend_build function from node.yml. Since both maven.yml and node.yml define a top-level before_script, the include merge keeps only the last one, so node.yml's before_script overwrites maven.yml's. The node_build job works as expected because it can find the frontend_build function.
The job log shows:
Skipping Git submodules setup
Authenticating with credentials from /root/.docker/config.json
Authenticating with credentials from /root/.docker/config.json
Authenticating with credentials from /root/.docker/config.json
$ function frontend_build { # collapsed multi-line command
$ backend_build
/bin/bash: line 90: backend_build: command not found
I know that I can copy all the functions into one big YAML file in repository "B" and include it in the .gitlab-ci.yml of repository "A", but here I'm trying to understand whether the above approach is even possible.
Thanks in advance!
OK, I finally found a hack, though not a complete answer, since the YAML files cannot behave the way I described in my question; instead I took a different approach to solve the problem.
There are no more maven.yml or node.yml; there are only four files in repository B: backend.yml, frontend.yml, hybrid.yml and top-level.yml.
backend.yml has all the required functions (build_app, lint_app, unit_test_app and so on), and frontend.yml follows the same pattern with different commands in the functions.
For example, in the backend.yml build_app function I have the maven command, while in the frontend.yml build_app function I have the npm command. The build_app function name is common to both frontend.yml and backend.yml, but the functionality differs.
In the top-level.yml stages, I specified the common function name build_app in the script key.
stages:
  - build

variables:
  GIT_SSL_NO_VERIFY: "1"

.build_script: &build_script
  stage: build
  tags:
    - default
    - docker

build:
  <<: *build_script
  image: maven:latest
  script:
    - build_app
But in the .gitlab-ci.yml, depending on the build I need, I include the specific YAML file. In the example below I want to build the backend, so I include backend.yml; the same applies for the frontend.
include:
  - project: 'root/B'
    ref: master
    file: 'top-level.yml'
  - project: 'root/B'
    ref: master
    file: 'backend.yml'
If I have to build both the backend and the frontend, I use hybrid.yml, with the same build_app function name but containing both the maven and npm commands. I know this is not the ideal approach, but it suffices for the use case I'm trying to solve.
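A sketch of what that hybrid.yml could look like, following the same function-wrapping pattern as the other files (the exact commands are placeholders based on the descriptions above):
.hybrid_build: &hybrid_build |-
  function build_app {
    (cd backendapi && mvn clean package -DskipTests)
    (cd ui && npm install && npm run build)
  }

before_script:
  - *hybrid_build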
Thank you for helping me with the question!
Happy Automation :)
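As a footnote to this workaround: the collision happens because each included file defines a top-level before_script and the last include wins. On GitLab 13.9 and later, the !reference tag can avoid this without merging files, by wrapping each function in a hidden job and pulling it into the job that needs it; a sketch:
# maven.yml — hidden job instead of a top-level before_script
.maven_functions:
  script:
    - |
      function backend_build {
        cd backendapi
        mvn clean package -DskipTests
      }

# top-level.yml
java_build:
  <<: *build_script
  image:
    name: maven:latest
  before_script:
    - !reference [.maven_functions, script]
  script:
    - backend_build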