I'm trying to create a gitlab-ci.yml that will look through a folder and run a job for each subfolder, but only if that subfolder has changed. The folder structure would be:
/.gitlab-ci.yml
/apps/some-app/files
/apps/some-app2/files
What I want is to have the job loop through each folder in the "apps" folder and execute the build script, but only if there are file changes in that app's subdirectory.
My current job script is:
script:
  - |-
    for PACKAGE in `ls apps`; do
      ./build/build.sh $PACKAGE
    done
and this runs on every folder, with or without changes. I don't want to use an only: block here, because I do want it to run if app1 has changes, but skip app2 if it has none.
I've googled, but can't seem to find a way either to pass dynamic variables to a job that extends another, or to call another job from the script I already use while passing it the PACKAGE variable.
If other solutions exist, I'm all ears.
This looks the cleanest, but I can't figure out how to update the PACKAGE variable in the build-loop job.
# Build the docker image & push to the ECR repository
.build-each:
  script: ./build/build.sh $PACKAGE
  only:
    changes:
      - "$PACKAGE/*"
      - "$PACKAGE/**/*"

build-loop:
  extends: .build-each
  variables:
    PACKAGE: some-app
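For reference, one shell-level alternative (a sketch, not part of the original question) is to diff against the commit before the push and only build packages whose files changed. CI_COMMIT_BEFORE_SHA is a predefined GitLab variable; it is all zeros on a branch's first push, and the clone must be deep enough for the diff to resolve:

script:
  - |-
    # Figure out which files changed in this push; fall back to the
    # previous commit when CI_COMMIT_BEFORE_SHA cannot be resolved.
    if git cat-file -e "$CI_COMMIT_BEFORE_SHA" 2>/dev/null; then
      CHANGED=$(git diff --name-only "$CI_COMMIT_BEFORE_SHA" HEAD)
    else
      CHANGED=$(git diff --name-only HEAD~1 HEAD)
    fi
    for PACKAGE in apps/*/; do
      PACKAGE=${PACKAGE%/}                    # e.g. apps/some-app
      if echo "$CHANGED" | grep -q "^$PACKAGE/"; then
        ./build/build.sh "${PACKAGE#apps/}"   # build only changed apps
      fi
    done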
I am facing a problem with GitLab include and I'm wondering whether it's possible to do what I intend to.
I have 2 GitLab repositories:
my-infrastructure
my-prod-deployment
In my-infrastructure I have a YML file that defines a job whose command references a local file. For example, in my-infrastructure I have the following gitlab-infrastructure.yml template:
image: amazon/aws-cli

variables:
  FOO: ReplaceMe

stages:
  - microservices

agent_ui:
  stage: microservices
  script:
    - aws cloudformation deploy --stack-name sample --template-file templates/aws-template.yml
and also a templates/aws-template.yml that contains some CloudFormation code.
Notice that the GitLab template needs access to a local file that exists in the same project, my-infrastructure.
Now, in the other project, my-prod-deployment, I have a .gitlab-ci.yml with:
include:
  - project: mycompany/my-infrastructure
    ref: main
    file: gitlab-infrastructure.yml

variables:
  FOO: Bar
When I run this CI/CD pipeline I can see the FOO variable being properly overridden, and I can see that the included job's script is executed. The problem is that I get a
$ aws cloudformation deploy --stack-name sample --template-file templates/aws-template.yml
Invalid template path templates/aws-template.yml
This is probably because the relative path exists in my-infrastructure but not in my-prod-deployment: the file is not locally available in this project and therefore can't be found.
Is there any solution to this?
Maybe a way to include not only GitLab YML files but also other files?
Or maybe some kind of shortcut or link to a different repo folder?
Or maybe a way to temporarily copy a remote folder into the local CI/CD pipeline execution?
Notice that I cannot use an absolute path or a URL for that script parameter, since that specific tool (the AWS CLI) does not allow it. Otherwise I wouldn't face this relative path issue.
UPDATE 1: I have tried a workaround with git submodules, separating the GitLab template into a different project and adding my-infrastructure as a submodule:
cd my-prod-deployment
git submodule add git@gitlab.com:mycompany/my-infrastructure.git
so that my .gitlab-ci.yml looks like this
include:
  - project: mycompany/my-gitlab-templates
    ref: main
    file: gitlab-infrastructure.yml

variables:
  CLOUDFORMATION_SUBMODULE: my-infrastructure
  FOO: Bar
and my repo has a local folder my-infrastructure, but I am shocked to find that it still complains about the AWS CloudFormation template path, so I've added the AWS CloudFormation tag to the question and edited it.
This is the error:
$ aws cloudformation deploy --stack-name sample --template-file $CLOUDFORMATION_SUBMODULE/templates/aws-template.yml
Invalid template path my-infrastructure/templates/aws-template.yml
Cleaning up project directory and file based variables 00:00
ERROR: Job failed: exit code 1
There is a my-infrastructure/templates/aws-template.yml path under my repo. It's part of the submodule. So I don't understand why this workaround does not work.
Any help would be appreciated.
I fixed the issue with the git submodule approach.
I had to make 2 changes, as per https://docs.gitlab.com/ee/ci/git_submodules.html:
Add the submodule with a relative path:
git submodule add ../my-infrastructure.git
so that .gitmodules shows the relative URL within the same server:
[submodule "my-infrastructure"]
  path = my-infrastructure
  url = ../my-infrastructure.git
and add this variable to .gitlab-ci.yml:

variables:
  # submodule behavior
  GIT_SUBMODULE_STRATEGY: recursive
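Putting the pieces together, the resulting my-prod-deployment/.gitlab-ci.yml would look roughly like this (a sketch assembled from the snippets above; the included template's script must also reference the file through the submodule path, e.g. $CLOUDFORMATION_SUBMODULE/templates/aws-template.yml):

include:
  - project: mycompany/my-gitlab-templates
    ref: main
    file: gitlab-infrastructure.yml

variables:
  # fetch the my-infrastructure submodule before jobs run
  GIT_SUBMODULE_STRATEGY: recursive
  CLOUDFORMATION_SUBMODULE: my-infrastructure
  FOO: Bar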
I have two repositories in GitLab, let's say repositories A and B.
Repo A contains:
read_ci.yml
read_ci.sh
read_ci.yml contains:
stages:
  - initialise

create checksum from pipeline:
  stage: initialise
  script:
    - chmod +x read_ci.sh
    - source ./read_ci.sh
Repo B contains:
gitlab-ci.yml
gitlab-ci.yml contains:
include:
  project: 'Project/project_name'
  file:
    - '.gitlab-ci.yml'
  ref: main
Obviously, this doesn't do what I intend.
What I want to achieve is in the project B pipeline to run the project A script.
The reason is that I want project A to be called from multiple different pipelines and run there.
An alternative to this for GitLab: Azure Pipelines. Run script from resource repo
Submodules would absolutely work as Davide mentions, though it's kind of like using a sledgehammer to hang a picture. If all you want is a single script from the other repository, just download it into your container: use the v4 API with your CI_JOB_TOKEN to download the file, then simply run it using sh (see the sketch below).

If you have many files in your secondary repository and want access to them all, then use submodules as Davide mentions, and make sure your CI job retrieves them by setting the submodule strategy like this:
variables:
  GIT_SUBMODULE_STRATEGY: normal
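A minimal sketch of the single-file download, assuming the script lives at read_ci.sh on the main branch of a project at mycompany/my-scripts (a hypothetical path, URL-encoded as mycompany%2Fmy-scripts) and that the job token has access to that project; CI_JOB_TOKEN and CI_API_V4_URL are predefined GitLab variables:

script:
  # fetch the raw file from the other repository via the v4 API
  - 'curl --header "JOB-TOKEN: $CI_JOB_TOKEN" -o read_ci.sh "$CI_API_V4_URL/projects/mycompany%2Fmy-scripts/repository/files/read_ci.sh/raw?ref=main"'
  - sh read_ci.sh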
If you want to run the project A script in the project B pipeline, you can add repository A as a git submodule in B:

git submodule add -b <branch-A> <git-repository-A> <target-dir>

You also need to add the variable GIT_SUBMODULE_STRATEGY: recursive to the CI job.
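A sketch of what that looks like end to end, with a hypothetical target directory repo-a (the relative URL form keeps the submodule on the same GitLab server, as noted in the earlier answer). First, run this once in repository B and commit the result:

git submodule add -b main ../repo-a.git repo-a

and then in repository B's gitlab-ci.yml:

stages:
  - initialise

run repo A script:
  stage: initialise
  variables:
    GIT_SUBMODULE_STRATEGY: recursive
  script:
    - bash repo-a/read_ci.sh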
I want to deploy a static page using GitLab repos with plain HTML/CSS (actually SCSS). As far as I've learnt, a static page needs at least a .gitlab-ci.yml and a /public folder. The .gitlab-ci.yml file has a minimal requirement like this (an example from the official docs):
pages:
  stage: deploy
  script:
    - mkdir .public
    - cp -r * .public
    - mv .public public
  artifacts:
    paths:
      - public
  only:
    - master
And my question lies in the script section.
(I assume the script below will create a hidden folder named .public, copy all the files into it, then rename it to public. Please correct me if I'm wrong.)
script:
  - mkdir .public
  - cp -r * .public
  - mv .public public
To me, it's similar to a Linux shell script, and the GitLab docs confirm that it's run by the runner. But the problem is: how do I know which shell scripts are installed in GitLab? And is it possible to make one?
I would like to make 2 folders: src and public. The GitLab CI will run the script and compile SCSS from src then move it to public.
I'm using gitlab.com by the way.
So, a few things to consider. Each job in GitLab runs in a container, and generally you specify which image you want to use. Pages is a special case, though, so you don't have to care about the container image.
The pages job will populate your public folder, but you can alter the gitlab-ci.yml file and add steps. This would build an app using Node:
build_stuff:
  stage: build
  image: node:11
  before_script:
    - npm install
  script:
    - npm run build
  artifacts:
    paths:
      - build

pages:
  stage: deploy
  script:
    - mkdir .public
    - cp -r build/ .public
    - mv .public public
  artifacts:
    paths:
      - public
  only:
    - master
Things to note: the first job runs the build steps to generate all the assets for your output folder. It then stores anything declared in the artifacts block, in this case the build folder, and passes it on to the next job. Adjust this step according to what you need to build your app.
The only thing altered in the second job is that you copy the contents of the build folder, instead of the entire repo, into the .public folder. Adjust this to your needs as well.
As for shell scripts, there are none present except for the ones you bring to the repository. The default runner supports Bash so you can execute bash commands just as you would in your terminal.
If you create the file foo.sh in your repo, bash foo.sh will execute the script. To run it directly as ./foo.sh it must be executable, so remember to chmod it before pushing.
There are no "shell scripts installed in GitLab". GitLab supports several shells, and the script part in your example is just plain Bash commands. Since you are most probably using the default Docker runner, you can execute Bash commands from the script part, run scripts in other languages that are in your repo, install packages on the Docker container, and even prepare and run your own Docker images.
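For the SCSS use case in the question, a minimal sketch (the src/styles.scss entry file is hypothetical, and the Dart Sass npm package is just one of several ways to compile SCSS):

pages:
  stage: deploy
  image: node:latest
  script:
    - npm install -g sass
    - mkdir -p public
    - cp -r src/* public/                      # copy HTML and assets
    - sass src/styles.scss public/styles.css   # compile SCSS to CSS
  artifacts:
    paths:
      - public
  only:
    - master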
I have the following gitlab-ci.yml file that reads package.json using the jq processor to dynamically set the name of the artifact folder, something along the lines of:
image: node:latest

stages:
  - build

before_script:
  ## steps ignored for purpose of question
  - export NAME_OF_ARTIFACT_FOLDER=$(cat package.json | jq -r .name)"_"$(cat package.json | jq -r .version)".zip"
  - echo $NAME_OF_ARTIFACT_FOLDER ## prints the expected name, e.g. myApp_1.0.0.zip

prod_build:
  stage: build
  script:
    - echo $NAME_OF_ARTIFACT_FOLDER ## prints the expected name, e.g. myApp_1.0.0.zip
    - yarn run build
  artifacts:
    paths:
      - dist/$NAME_OF_ARTIFACT_FOLDER ## this does not work
    expire_in: 2 hrs
The issue here is that dist/$NAME_OF_ARTIFACT_FOLDER does not work; I'm not sure if I am missing something here.
EDIT
Upon hardcoding the expected path, as in the following, it works fine, which would mean that the folder name is valid and that the artifact is indeed identified appropriately; it just does NOT work when the path comes from $NAME_OF_ARTIFACT_FOLDER:
artifacts:
  paths:
    - dist/myApp_1.0.0.zip ## hardcoding the expected path works just fine
  expire_in: 2 hrs
Well, that is not possible currently. The manual says the following:
The artifacts:name variable can make use of any of the predefined variables.
That is, variables set in the script part of your job cannot be used.
This is an open issue at GitLab: Artifacts Filename Cannot Be Set with Dynamic Variables.
I had a project variable defining the path to a zip file in a script, which I reused at the artifacts:paths level. The linked issue would have been more obvious had the artifacts:paths instance completely failed to get assigned, but in my case it inherited a different value from the one a mere two lines above in my job!
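A common workaround (a sketch, not from the answers above): artifacts:paths accepts glob patterns, so you can match the dynamically named archive with a wildcard instead of a script variable:

prod_build:
  stage: build
  script:
    - yarn run build
  artifacts:
    paths:
      - dist/*.zip ## glob matches myApp_1.0.0.zip without needing the variable
    expire_in: 2 hrs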
GitLab-CI executes the stop-environment script in dynamic environments after the branch has been deleted. This effectively forces you to put all the teardown logic into the .gitlab-ci.yml instead of a script that .gitlab-ci.yml just calls.
Does anyone know a workaround for this? I have a shell script that removes the deployment. This script is part of the repository and can also be called locally (i.e. not only in a CI environment). I want GitLab-CI to call this script when removing a dynamic environment, but it's obviously not there anymore once the branch has been deleted. I also cannot put this script into the artifacts, as it is generated before the build by a configure script and contains secrets. It would be great if one could execute the teardown script before the branch is deleted.
Here's a relevant excerpt from the .gitlab-ci.yml:
deploy_dynamic_staging:
  stage: deploy
  variables:
    SERVICE_NAME: foo-service-$CI_BUILD_REF_SLUG
  script:
    - ./configure
    - make deploy.staging
  environment:
    name: staging/$CI_BUILD_REF_SLUG
    on_stop: stop_dynamic_staging
  except:
    - master

stop_dynamic_staging:
  stage: deploy
  variables:
    GIT_STRATEGY: none
  script:
    - make teardown # <- this fails
  when: manual
  environment:
    name: staging/$CI_BUILD_REF_SLUG
    action: stop
Probably not ideal, but you can curl the script using the GitLab API before running it:
curl \
  -X GET "https://gitlab.example.com/raw/master/script.sh" \
  -H "PRIVATE-TOKEN: ${GITLAB_TOKEN}" > script.sh
GitLab-CI executes the stop-environment script in dynamic environments after the branch has been deleted.
That includes:
An on_stop action, if defined, is executed.
With GitLab 15.1 (June 2022), you can skip that on_stop action:
Force stop an environment
In 15.1, we added a force option to the stop environment API call.
This allows you to delete an active environment without running the specified on_stop jobs in cases where running these defined actions is not desired.
See Documentation and Issue.
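A sketch of that force stop call, with hypothetical project and environment IDs (42 and 7):

curl -X POST \
  -H "PRIVATE-TOKEN: <your_access_token>" \
  "https://gitlab.example.com/api/v4/projects/42/environments/7/stop?force=true"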