Include external files in gitlab-ci file - gitlab

I am trying to make a shared pipeline repository from which I run a Dangerfile. The nice thing would be that each repo could simply include the shared-pipeline repo's Dangerfile and .gitlab-ci.yml file and get all of the shared functionality from it. My problem: from repo1 I am trying to use include to inherit the Dangerfile from the shared-pipeline repository. However, it is only possible to include YAML files. I can include .gitlab-ci.yml, but how do I include other files such as the Dangerfile?
├── shared-pipeline
│   ├── Dangerfile
│   └── .gitlab-ci.yml
├── repo1
│   └── .gitlab-ci.yml
├── repo2
│   └── .gitlab-ci.yml
└── repo3
    └── .gitlab-ci.yml
This is what I have so far:
include:
  - project: 'myproject/shared-pipeline'
    file: '.gitlab-ci.yml'
This is what I was trying:
include:
  - project: 'myproject/shared-pipeline'
    file: '.gitlab-ci.yml'
  - project: 'myproject/shared-pipeline'
    file: 'Dangerfile' # Syntax error here as this is not a yml file

You cannot include files other than YAML files. GitLab merges the included YAML files into one big configuration and executes that on the runners; include only fetches the content of a YAML file and merges it into the pipeline definition. If you do want to use a non-YAML file from your shared pipeline repository, you have multiple options:
- put it in a shared Docker image that is used by your job; this way every job running that image gets the same Dangerfile.
- fetch the file via the API in a pre step and make it available to your job with the artifacts directive (see the sketch after this list).
- you could also put the file in a separate Docker image and use the artifacts directive in an earlier step to hand it over.
- ... and most likely many more.
... but to be clear again, you cannot use the include directive to pull arbitrary file types into your pipeline.
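As an illustration of the API-fetch option, here is a minimal sketch (my addition, not from the answer, and untested): a job in repo1's .gitlab-ci.yml that downloads the Dangerfile from the shared-pipeline project through the GitLab repository files API before running Danger. $SHARED_REPO_TOKEN is a hypothetical CI/CD variable holding a token with read access to myproject/shared-pipeline; the ruby image tag and the ref main are also assumptions, and the Danger token setup is omitted.

danger:
  image: ruby:3.1
  before_script:
    # Download the shared Dangerfile into the job's working directory.
    - >
      curl --fail
      --header "PRIVATE-TOKEN: $SHARED_REPO_TOKEN"
      "$CI_API_V4_URL/projects/myproject%2Fshared-pipeline/repository/files/Dangerfile/raw?ref=main"
      --output Dangerfile
  script:
    - gem install danger danger-gitlab
    - danger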

Related

Gitlab Pages constantly returns status 404 after pipeline succeeds

I'm trying to deploy a simple .html report, which is created after testing, to GitLab Pages. Everything is pretty straightforward according to the official documentation; however, I'm constantly getting a 404 status when I open the https://DOMAIN/$CI_PIPELINE_ID/$REPORT link (this is an example URL).
I did the same yesterday and it worked reasonably well - reports went live, though not for long, maybe because each new pipeline was overwriting the already existing Page.
I have two questions:
1. What may be the reason for a URL to return the 404?
2. Is it possible to keep Pages from previous pipelines accessible by their URLs after a new pipeline succeeds? My approach was to create folders with unique names based on the pipeline ID inside the public/ folder and put the .html files inside. However, I cannot verify it because of the 404 status.
The public/ folder looks something like this:
├── public
│   ├── 1234567
│   │   ├── report_1.html
│   ├── 1234568
│   │   ├── report_.html
.gitlab-ci.yml
stages:
  - test
  - publish

cache:
  paths:
    - public

test:
  stage: test
  ...

pages:
  stage: publish
  script:
    - mkdir -p public/$CI_PIPELINE_ID
    - REPORT=$(ls report*.html)
    - cp report* public/$CI_PIPELINE_ID/
  artifacts:
    name: "$CI_PIPELINE_ID"
    paths:
      - public
    expire_in: never
  rules:
    - if: $TASK_NAME == "load_tests"

How does the syntax to specify file type changes in Gitlab's CI rules work?

My file structure consists of 2 main directories, resources and src. resources has images in a subdirectory and various JSON files; src has many nested directories with .ts files in each:
├── package.json
├── package-lock.json
├── README.md
│
├── .docker
│   ├── Dockerfile
│   └── aBashScript.sh
│
├── resources
│   ├── data.json
│   └── images
│       └── manyimages.png
│
├── src
│   ├── subdirectory1
│   └── NestedDirectories
│
├── .gitlab-ci.yml
├── tsconfig.eslint.json
├── tsconfig.json
├── eslintrc.json
└── prettierrc.json
My gitlab-ci.yml has two stages, build and deploy
What I want:
1- If it's a commit on branch "main" or "dev", and anything that affects the actual project changes, run build.
That is anything under resources or src (and their nested directories), the Dockerfile, package.json and package-lock.json.
I'd be content with "any .ts file changed" too, since the other criteria usually only change when that happens.
2- If build ran and it's a commit on the default branch ("main"), then run the deploy stage.
Also, for clarification: when I say there's a commit on branch X, I mean an accepted merge request, i.e. an actual change on that branch. At some point in my tinkering it was running on (non-accepted) merge requests, but I forgot what I changed to fix that.
What happens:
1- If I specify the changes rule on build, then build never runs; however, even if build doesn't run, deploy always runs (when on branch "main").
.gitlab-ci.yml
variables:
  IMAGE_TAG: project

stages:
  - build
  - deploy

build_image:
  stage: build
  image: docker:20.10.16
  services:
    - docker:20.10.16-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  before_script:
    - echo $REGISTRY_PASS | docker login -u $REGISTRY_USER --password-stdin
  script:
    - |
      if [[ "$CI_COMMIT_BRANCH" == "$CI_DEFAULT_BRANCH" ]]; then
        tag="latest"
        echo "Running on default branch '$CI_DEFAULT_BRANCH': tag = '$tag'"
      else
        tag="$CI_COMMIT_REF_SLUG"
        echo "Running on branch '$CI_COMMIT_BRANCH': tag = $tag"
      fi
    - docker build -f .docker/Dockerfile -t $REPO_NAME:$IMAGE_TAG-$tag .
    - docker push $REPO_NAME:$IMAGE_TAG-$tag
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH || $CI_COMMIT_BRANCH == "dev"'
      changes:
        - "*.ts"
        - "*.json"
        - Dockerfile

deploy:
  stage: deploy
  before_script:
    - chmod SSH_KEY
  script:
    - ssh -o StrictHostKeyChecking=no -i $SSH_KEY $VPS "
        echo $REGISTRY_PASS | docker login -u $REGISTRY_USER --password-stdin &&
        cd project &&
        docker-compose pull &&
        docker-compose up -d"
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
This is the most basic version I could cobble up, basically excluding just the README, but the build stage doesn't run (and deploy runs even if build didn't).
Normally this is something I'd be able to brute-force figure out myself, but to avoid uselessly modifying my files just to test the changes rule, I've only been able to test this when making actual modifications to the project.
There seem to be a lot of examples in questions and tutorials out there, but I think something is off with my file structure, as I've had no luck copying their changes rules.
The changes: entries are glob patterns, not regex. So in order for you to match .ts files in any directory, you'll need to use "**/*.ts" not *.ts (which would only match files in the root).
changes:
  - "**/*.ts"
  - "**/*.json"
  # ...
If build ran and it's a commit on the default branch ("main") then run the deploy stage.
To get this effect, you'll want your deploy job to share some of the rules of your build job.
deploy:
  rules:
    - if: "$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH"
      changes:
        - Dockerfile
        - "**/*.ts"
        - "**/*.json"
Or a little fancier way that reduces code duplication:
rules:
  - if: "$CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH"
    when: never # only deploy on default branch
  - !reference [build_image, rules]

Lambda Function can't resolve layer code (nodejs)

Introduction
I am using layers to avoid duplicate code. Currently I have one lambda function and one layer. But the function has problems using the layer.
I am using AWS SAM to deploy it.
Project
The project looks like this:
backend/
├── lambdas/
│   └── onSignup/
│       ├── app.ts
│       └── package.json
├── layers/
│   └── SQLLayer/
│       └── nodejs/
│           ├── package.json
│           └── index.ts
└── template.yaml
template.yaml:
Resources:
  SQLLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: sql-dependencies
      Description: sql dependencies
      ContentUri: layers/SQLLayer/
      CompatibleRuntimes:
        - nodejs14.x
      LicenseInfo: 'MIT'
      RetentionPolicy: Retain
    Metadata: # Manage esbuild properties
      BuildMethod: nodejs14.x

  onSignup:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: lambdas/onSignup/
      Handler: app.lambdaHandler
      Runtime: nodejs14.x
      Timeout: 10
      Architectures:
        - x86_64
      Layers:
        - !Ref SQLLayer
    Metadata: # Manage esbuild properties
      BuildMethod: esbuild
      BuildProperties:
        Minify: false # set True on release
        Target: "es2020"
        Sourcemap: true
        EntryPoints:
          - app.ts
Problem
I am now trying to import this layer into app.ts in the Lambda function like this, but it can't find the module.
var executeSqlStatement = require("/opt/SQLLayer/")
I have also tried countless other file paths, but it doesn't work. Almost every resource I tried used a different file path, yet none worked for me. When I deploy, the files are uploaded correctly; I am just not sure how they are put together at runtime.
Other mentions
I am not sure if this is a problem, but when building I get this output:
Building layer 'SQLLayer'
package.json file not found. Continuing the build without dependencies. <=== COULD THIS BE A PROBLEM?
Running NodejsNpmBuilder:CopySource
Building codeuri: C:\coding\apps\iota\backend\lambdas\onSignup runtime: nodejs14.x metadata: {'BuildMethod': 'esbuild', 'BuildProperties': {'Minify': False, 'Target': 'es2020', 'Sourcemap': True, 'EntryPoints': ['app.ts']}} architecture: x86_64 functions: ['onSignup']
Running NodejsNpmEsbuildBuilder:CopySource
Running NodejsNpmEsbuildBuilder:NpmInstall
Running NodejsNpmEsbuildBuilder:EsbuildBundle
Build Succeeded
It says package.json was not found, but the build continues and also succeeds. The layer can also be found in the build folder, so I don't think this is a problem.
The other weird thing is that in the build folder for the layer there are two nodejs folders nested inside each other. That could be normal, but I wanted to mention it for completeness.
I hope you have enough information to help me.
Thanks!
It's because, in fact, that path doesn't really exist.
You can do the following.
Do the normal import:
var executeSqlStatement = require("/opt/SQLLayer/")
and in tsconfig.json change the settings to the following:
{
  "compilerOptions": {
    "baseUrl": "./",
    "paths": {
      "/opt/SQLLayer/": ["your local path"]
    }
  }
}
You did not specify which version of SAM CLI you were using. The information I am providing below is based on SAM CLI version 1.55.0.
So your layer code doesn't really follow your local hierarchy, nor does it include the name of the layer itself. The warning about the missing package.json tells you the code is not being picked up from where you expect.
You need to remove the nodejs folder so your code looks like this
├── layers/
│   └── SQLLayer/
│       ├── package.json
│       └── index.ts
This will create a layer with the following hierarchy:
├── nodejs/
│   ├── package.json
│   └── index.ts
In your actual Lambda function this will look like this:
├── /opt/
│   └── nodejs/
│       ├── package.json
│       └── index.ts
This means that you can import this module in your Lambda function with require('/opt/nodejs').
I hope you find this information useful for understanding how to structure the source code of your Lambda layers when using the SAM CLI.

CircleCI Dynamic Config / Config breakdown

Does anyone know if it's possible to break down the config file for CircleCI into smaller files, where each job, command, workflow, etc. is in its own file/subdirectory, and if so, how would you approach this?
I've been looking around and even attempted to build a Python script to assemble a config from all these YAML files, but with no luck, because reference/anchor names don't exist across these various files, so the pyyaml library won't load them.
What I'm trying to accomplish is to have this folder structure
configs/
  dependencies.yml
  commands/
    command_1.yml
    command_2.yml
  jobs/
    job_1.yml
    job_2.yml
  workflows/
    workflow_1.yml
    workflow_2.yml
Where dependencies.yml contains a breakdown of what each workflow requires in terms of what is used in each step > job > command. And this file would be hand written.
You can do the following:
1. Split your config.yml into the structure defined in Packing a config
2. Use dynamic configuration, where you first generate the config from step 1 and then call the generated config file from the main config.yml
Example original config.yml to split:
version: 2.1
orbs:
  sonarcloud: sonarsource/sonarcloud@1.0.3
jobs:
  my-job:
    docker:
      - image: cimg/latest
    steps:
      - checkout
      - run: make
workflows:
  build:
    jobs:
      - my-job
Create the following layout in a new folder called config (output of tree):
.
├── config.yml
└── config
    ├── @orbs.yml
    ├── jobs
    │   └── my-job.yml
    └── @workflows.yml
@orbs.yml contains:
version: 2.1
orbs:
  sonarcloud: sonarsource/sonarcloud@1.0.3
@workflows.yml contains:
workflows:
  build:
    jobs:
      - my-job
my-job.yml contains:
docker:
  - image: cimg/latest
steps:
  - checkout
  - run: make
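For reference (my addition, not part of the original answer): running circleci config pack config over that layout should reassemble roughly the original config, which is what ends up in generated.yml. A sketch, assuming the usual pack behaviour where @-prefixed files merge into their parent level and directory/file names become keys:

version: 2.1
orbs:
  sonarcloud: sonarsource/sonarcloud@1.0.3
jobs:
  my-job:
    docker:
      - image: cimg/latest
    steps:
      - checkout
      - run: make
workflows:
  build:
    jobs:
      - my-job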
And the main config.yml should look like:
version: 2.1
setup: true
orbs:
  continuation: circleci/continuation@0.3.1
jobs:
  generate-and-run-circleci:
    docker:
      - image: 'circleci/circleci-cli:latest'
    steps:
      - checkout
      - run:
          command: |
            cd .circleci
            circleci config pack config > generated.yml
      - continuation/continue:
          configuration_path: .circleci/generated.yml
workflows:
  build:
    jobs:
      - generate-and-run-circleci

Managing multiple terraform statefiles with gitlab

Just as an example, my git repository would look like so:
└── git_repo_example
    ├── modules
    │   ├── module_1
    │   │   ├── main.tf
    │   │   └── var.tf
    │   └── module_2
    │       ├── main.tf
    │       └── var.tf
    └── projects
        ├── project_1
        │   ├── main.tf
        │   ├── terraform.tfstate
        │   └── vars.tf
        └── project_2
            ├── main.tf
            ├── terraform.tfstate
            └── vars.tf
7 directories, 10 files
My team wants to make our terraform state files gitlab-managed, so that the statefiles would be locked in case multiple people want to run or modify a single project at the same time.
All of the examples I can find for managing terraform via gitlab only seem to assume 1 tfstate file and project, but my repository has multiple. Breaking this up into multiple repositories would be difficult to manage since they all reference the same modules, and it seems that placing everything into one folder is against terraform best-practices.
How would one best go about managing one repository with multiple terraform projects / statefiles via gitlab?
I have a similar-ish directory structure that works well with GitLab managed state per project directory.
I'd recommend replacing local TF development with GitLab CI/CD pipelines, using the provided GitLab container image as it supports the GitLab backend by default.
I use environments (representing each project directory) to manage the pipeline (CI/CD variables are managed per environment). The TF state file is named according to the TF_ADDRESS variable:
image: registry.gitlab.com/gitlab-org/terraform-images/stable:latest

variables:
  TF_ROOT: ${CI_PROJECT_DIR}/${ENVIRONMENT}
  TF_ADDRESS: ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/${ENVIRONMENT}
Here a build job is defined to create the TF plan and run only when the development directory is modified and merged to the default branch. An identical job for the production directory is also defined. Each environment references a unique TF state file managed by GitLab:
.plan:
  stage: build
  environment:
    name: $ENVIRONMENT
  script:
    - gitlab-terraform plan
    - gitlab-terraform plan-json
  cache:
    policy: pull # don't update the cache
  artifacts:
    name: plan
    paths:
      - ${TF_ROOT}/plan.cache
    reports:
      terraform: ${TF_ROOT}/plan.json

Development Build Terraform Plan:
  extends: .plan
  variables:
    ENVIRONMENT: development
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      changes:
        - "*"
        - "development/**/*"

Production Build Terraform Plan:
  extends: .plan
  variables:
    ENVIRONMENT: production
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      changes:
        - "*"
        - "production/**/*"
I have exactly the same kind of Terraform scripts repository, with a "run" script at the top which, for each application, does:
cd modules/modulex
terraform init -backend-config=backend.tf -reconfigure
With backend.tf (here for Azure):
container_name      = "tfbackend"
key                 = "aStorageAccount/aFileShare/path/to/modulex.tfstate"
resource_group_name = "xxx"
That does create a modules/modulex/.terraform/terraform.tfstate
However, this file is local; it is neither versioned nor "locked".
