I have an Angular app and I have written a pipeline for it like this:
image: node:16.13.2

variables:
  DOCKER_HOST: myurl
  GIT_STRATEGY: clone
  TAG_LATEST: latest
  TAG_COMMIT: $CI_COMMIT_REF_NAME-$CI_COMMIT_SHORT_SHA

.login_into_nexus: &login_into_nexus
  - echo "Login Into Nexus...."
  - docker login -u $NEXUS_USERNAME -p $NEXUS_PASS $NEXUS_URL

services:
  - docker:dind

stages:
  - build

install-dependency:
  stage: .pre
  script:
    - npm i --prefer-offline # install dependencies
  cache:
    key: "{$CI_JOB_NAME}"
    paths:
      - node_modules
    policy: pull-push
  artifacts:
    paths:
      - node_modules/

build:
  stage: build
  needs:
    - job: install-dependency
      artifacts: true
  script:
    - npm run build:aot
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push"'
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_PIPELINE_SOURCE != "push" && $CI_PIPELINE_SOURCE != "merge_request_event" && $CI_COMMIT_REF_NAME == "master"'
    - if: '$CI_PIPELINE_SOURCE != "push" && $CI_PIPELINE_SOURCE != "merge_request_event" && $CI_COMMIT_REF_NAME == "develop"'
My runners' config.toml:
concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "runner-global-1"
  output_limit = 10000000
  url = "myurl"
  token = "QmeDZw6u2Qa48n6asVHE"
  executor = "docker"
  cache_dir = "/tmp/build-cache"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "alpine"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/tmp/build-cache:/cache:rw"]
    shm_size = 0

[[runners]]
  name = "runner-global-2"
  output_limit = 10000000
  url = "myurl"
  token = "YYaXwQfLZ-2zSL8eHMGP"
  executor = "docker"
  cache_dir = "/tmp/build-cache"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "alpine"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/tmp/build-cache:/cache:rw"]
    shm_size = 0

[[runners]]
  name = "runner-global-3"
  output_limit = 10000000
  url = "myurl"
  token = "-EUSye1c7h7tQyEk2VfH"
  executor = "docker"
  cache_dir = "/tmp/build-cache"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "alpine"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/tmp/build-cache:/cache:rw"]
    shm_size = 0

[[runners]]
  name = "runner-global-4"
  output_limit = 10000000
  url = "myurl"
  token = "S7gPu3r2xVzc2CTZzy7z"
  executor = "docker"
  cache_dir = "/tmp/build-cache"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "ruby:2.6"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/tmp/build-cache:/cache:rw"]
    shm_size = 0

[[runners]]
  name = "runner-global-6"
  output_limit = 10000000
  url = "myurl"
  token = "U_VQCMkj_AN5AfVuWyCR"
  executor = "docker"
  cache_dir = "/tmp/build-cache"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "ruby:2.6"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/tmp/build-cache:/cache:rw"]
    shm_size = 0
As seen above, the pipeline downloads the node modules from my Nexus registry and installs them in the install-dependency job.
I also have 5 runners on this project, and each one of them can pick up the job. But each runner saves the cache for itself, and when I run the pipeline on another branch it won't use the cache that was saved for the other branch.
My GitLab version is 13.3.5-ee.
You must enable distributed caching in order for all of your runners to share the same cache. Otherwise, by default the cache is local to each runner.
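For example, a distributed cache backed by S3-compatible storage is configured per runner in config.toml. This is only a minimal sketch, assuming an S3 bucket you control; the endpoint, credentials, bucket name, and region below are placeholders, not values from your setup:

  [runners.cache]
    Type = "s3"
    Shared = true   # share the cache across runners instead of keeping it per-runner
    [runners.cache.s3]
      ServerAddress = "s3.amazonaws.com"
      AccessKey = "YOUR_ACCESS_KEY"
      SecretKey = "YOUR_SECRET_KEY"
      BucketName = "gitlab-runner-cache"
      BucketLocation = "eu-west-1"

The same [runners.cache] block would have to be added to each of your five [[runners]] entries so that they all read from and write to the same bucket.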
I have the following CI configuration:
...
cache:
  key: ${CI_PROJECT_NAME}
  paths:
    - ${TF_ROOT}/.terraform

before_script:
  - echo -e "credentials \"$CI_SERVER_HOST\" {\n token = \"$CI_JOB_TOKEN\"\n}" > $TF_CLI_CONFIG_FILE
  - cd ${TF_ROOT}
  - export TF_LOG_CORE=TRACE
  - export TF_LOG_PATH=terraform_logs.txt

stages:
  - initialize
  - validate

init:
  stage: initialize
  script:
    - terraform -v
    - terraform init
    #- terraform validate

validate:
  stage: validate
  script:
    - terraform validate
My init stage runs totally fine; however, I get the following in the next stage, i.e. validate:
$ terraform validate
╷
│ Error: Missing required provider
│
│ This configuration requires provider registry.terraform.io/datadog/datadog,
│ but that provider isn't available. You may be able to install it
│ automatically by running:
│ terraform init
In provider.tf:
terraform {
  required_version = ">= 0.14"
  required_providers {
    datadog = {
      source  = "DataDog/datadog"
      version = "2.24.0"
    }
  }
}
In config.toml:
concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "some rummer"
  url = "****"
  token = "***"
  executor = "shell"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
If I run validate as a subsequent command in the init stage itself, it works fine, just not in a different stage.
If I do ls -al in the next stage before validate, I can even see the .terraform folder present, which should have the providers inside.
My second guess was a caching issue; however, I believe I have specified the cache correctly (${TF_ROOT}/.terraform).
I am running gitlab-runner with the shell executor.
Any idea what is wrong here?
There are a lot of answers about this topic, but I cannot find a solution to my problem. Here is my log:
Waiting for services to be up and running...
*** WARNING: Service runner-hgz7smm8-project-3-concurrent-0-c2b622f72cceadc3-docker-0 probably didn't start properly.
Health check error:
service "runner-hgz7smm8-project-3-concurrent-0-c2b622f72cceadc3-docker-0-wait-for-service" timeout
Health check container logs:
Service container logs:
2021-12-07T16:13:47.326235886Z mount: permission denied (are you root?)
2021-12-07T16:13:47.326275450Z Could not mount /sys/kernel/security.
2021-12-07T16:13:47.326284427Z AppArmor detection and --privileged mode might break.
My docker version inside the runner:
root@gitlab-runner-2:~# docker -v
Docker version 20.10.7, build 20.10.7-0ubuntu5.1
Gitlab-runner:
root@gitlab-runner-2:~# gitlab-runner -v
Version: 14.5.1
Git revision: de104fcd
Git branch: 14-5-stable
GO version: go1.13.8
Built: 2021-12-01T15:41:35+0000
OS/Arch: linux/amd64
The runner is an LXD container running inside Proxmox and is configured like this, with the "docker" executor:
concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "gitlab-runner-2"
  url = "http://gitlab.XXXXXX.com"
  token = "XXXXXXXXXX"
  executor = "docker"
  pre_build_script = "export DOCKER_HOST=tcp://docker:2375"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "alpine:latest"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
Any advice?
The solution I arrived at, for GitLab 14.10, to resolve those warnings/errors was to make the following changes.
In the gitlab-runner config.toml:
concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "runnertothehills"
  url = "https://someexample.com/"
  token = "aRunnerToken"
  executor = "docker"
  [runners.docker]
    image = "docker:20.10.14"
    privileged = true
    disable_cache = false
    volumes = ["/cache:/cache", "/var/run/docker.sock:/var/run/docker.sock", "/builds:/builds"]
    group = 1000
    environment = ["DOCKER_AUTH_CONFIG={\"auths\":{\"some.docker.registry.com:12345\":{\"auth\":\"AdockerLoginToken=\"}}}"]
    extra_hosts = ["one.extra.host.com:100.111.120.231"]
The main configuration here is the docker executor and the volume mount "/var/run/docker.sock:/var/run/docker.sock".
In .gitlab-ci.yml, instead of using

services:
  - docker:dind

use docker commands directly.
Example:

deploy:
  stage: deploy
  script:
    - docker login -u myuser -p my_password
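Building on that, a job that builds and pushes an image can talk to the host daemon through the mounted /var/run/docker.sock, with no docker:dind service at all. This is only a rough sketch under the same setup; the image name myapp is a placeholder, and the registry address is the one used in the DOCKER_AUTH_CONFIG example above:

build-and-push:
  stage: deploy
  image: docker:20.10.14
  # no `services: docker:dind` needed; the docker CLI reaches the host daemon
  # through the /var/run/docker.sock volume mounted in config.toml
  script:
    - docker login -u myuser -p my_password some.docker.registry.com:12345
    - docker build -t some.docker.registry.com:12345/myapp:latest .
    - docker push some.docker.registry.com:12345/myapp:latest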
This solved the following problems:
** WARNING: Service runner-3tm987o3-project-131-concurrent-0-ce49f8u8c582bf56-docker-0
probably didn't start properly.
It also solved the problem of the docker group not being found:
2022-05-23T14:24:57.167991289Z time="2022-05-23T14:24:57.167893989Z"
level=warning msg="could not change group /var/run/docker.sock to
docker: group docker not found"
and
2022-05-23T14:24:57.168164288Z failed to load listeners: can't create
unix socket /var/run/docker.sock: device or resource busy
When running my CI pipeline, my GitLab runner shows that access to the repository is denied (although the project is internal and all users of the server are maintainers, including the admin)!
remote: You are not allowed to download code.
fatal: unable to access 'https://gitlab.<omitted>.me/S0urC10ud/eaglesheetmusicbackend.git/': The requested URL returned error: 403
I noticed that there is no token in the URL above, although there is one in the requests before:
21:29:18.702836 git.c:439 trace: built-in: git fetch origin +38682fb8a487f8dca7baa5107a5a021b6f8391c7:refs/pipelines/12 +refs/heads/master:refs/remotes/origin/master --depth 50 --prune --quiet
21:29:18.702963 run-command.c:663 trace: run_command: GIT_DIR=.git git-remote-https origin https://gitlab-ci-token:<omitted>@gitlab.<omitted>.me/S0urC10ud/eaglesheetmusicbackend.git
Is any special configuration needed for the auth to be set up? My runner config looks like the following:
concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "shared-runner"
  url = "https://gitlab.<omitted>.me"
  token = "<omitted>"
  executor = "docker"
  clone_url = "https://gitlab.<omitted>.me"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    network_mode = "br0"
    tls_verify = false
    image = "ruby:2.6"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
    dns = ["192.168.1.251"]
Before you ask: yes, I am accessing the GitLab backend via an NGINX reverse proxy, but my config should not yield a 403.
I ended up needing to create a loopback in our firewall/DNS, and that resolved the issue.
I am using the distributed cache (S3) for gitlab-runner. It is working fine, but every time the runner executes a job and stores the cache file in S3, the size of the file doubles: the new cache file includes the older cache.zip inside it.
gitlab-ci.yml file:
cache:
  key: "$CI_COMMIT_REF_NAME"
  untracked: true
  paths:
    - .m2/repository/
Runner cache configuration in config.toml:
[runners.cache]
  Type = "s3"
  Path = "runners-cache"
  [runners.cache.s3]
    ServerAddress = "s3.amazonaws.com"
    AccessKey = "***"
    SecretKey = "***"
    BucketName = "***"
    BucketLocation = "***"
If I try to get data for a module using calling_class, the data doesn't reach the Puppet manifests; if I put the variable into the common or osfamily YAML file, the value is available from the manifests.
My environment:
Puppet Master 3.7.4 + Foreman 1.7 + Hiera 1.3.4
Hiera configs:
---
:backends:
  - yaml
:hierarchy:
  - "%{::environment}/node/%{::fqdn}"           # node settings
  - "%{::environment}/profile/%{calling_class}" # profile settings
  - "%{::environment}/%{::environment}"         # environment settings
  - "%{::environment}/%{::osfamily}"            # osfamily settings
  - common                                      # common settings
:yaml:
  :datadir: '/etc/puppet/hiera'
In /etc/puppet/hiera/production/profile/common.yaml:
profile::common::directory_hierarchy:
  - "C:\\SiteName"
  - "C:\\SiteName\\Config"
profile::common::system: "common"
And in the profile module manifest, /etc/puppet/environments/production/modules/profile/manifests/common.pp:
class profile::common (
  $directory_hierarchy = undef,
  $system              = undef
) {
  notify { "Dir is- $directory_hierarchy my fqdn is $fqdn, system = $system": }
}
Puppet config, /etc/puppet/puppet.conf:
[main]
    logdir = /var/log/puppet
    rundir = /var/run/puppet
    ssldir = $vardir/ssl
    privatekeydir = $ssldir/private_keys { group = service }
    hostprivkey = $privatekeydir/$certname.pem { mode = 640 }
    autosign = $confdir/autosign.conf { mode = 664 }
    show_diff = false
    hiera_config = $confdir/hiera.yaml

[agent]
    classfile = $vardir/classes.txt
    localconfig = $vardir/localconfig
    default_schedules = false
    report = true
    pluginsync = true
    masterport = 8140
    environment = production
    certname = puppet024.novalocal
    server = puppet024.novalocal
    listen = false
    splay = false
    splaylimit = 1800
    runinterval = 1800
    noop = false
    configtimeout = 120
    usecacheonfailure = true

[master]
    autosign = $confdir/autosign.conf { mode = 664 }
    reports = foreman
    external_nodes = /etc/puppet/node.rb
    node_terminus = exec
    ca = true
    ssldir = /var/lib/puppet/ssl
    certname = puppet024.novalocal
    strict_variables = false
    environmentpath = /etc/puppet/environments
    basemodulepath = /etc/puppet/environments/common:/etc/puppet/modules:/usr/share/puppet/modules
    parser = future
And the more interesting thing is that if I deploy the same code without Foreman, it works.
Maybe I've missed some configuration or plugin?
You need to have an environment (production in your sample) folder structure as below:
/etc/puppet/hiera/environments/production/node/%{::fqdn}.yaml
/etc/puppet/hiera/environments/production/profile/%{calling_class}.yaml
/etc/puppet/hiera/environments/production/production/*.yaml
/etc/puppet/hiera/environments/production/%{::osfamily}.yaml
/etc/puppet/hiera/environments/common.yaml
So the environment path you pasted is also wrong:
/etc/puppet/hiera/production/profile/common.yaml
Side notes
At first view, you shouldn't mix hieradata with the module path, so if you can, move the modules out of basemodulepath:
basemodulepath = /etc/puppet/environments/common
With the puppet.conf you pasted, the real profile module path is one of these three folders:
/etc/puppet/environments/common/modules/profile
/etc/puppet/modules/profile
/usr/share/puppet/modules/profile