I wanted to try out caching on my GitLab project, following the documentation here - https://docs.gitlab.com/ee/ci/caching/#how-archiving-and-extracting-works. I have a project-specific runner and am using the Docker executor, but I get this error:
cat: vendor/hello.txt: No such file or directory
How would I go about troubleshooting this problem? I set disable_cache = false in my runner config, but that did not help.
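The setup from that docs section is roughly the following (reconstructed, so details may differ from the current docs; the vendor/hello.txt path matches the error above):

```yaml
stages:
  - build
  - test

build:
  stage: build
  script:
    - mkdir -p vendor
    - echo "build" > vendor/hello.txt
  cache:
    key: build-cache
    paths:
      - vendor/

test:
  stage: test
  script:
    - cat vendor/hello.txt
  cache:
    key: build-cache
    paths:
      - vendor/
```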
EDIT: using a private GitLab instance, version 12.3.
I achieved this using distributed caching, which I found easy. First of all, you need an S3 bucket or S3-compatible storage like MinIO. You can set up MinIO locally, on the host where the GitLab runner exists, with the following commands:
docker run -it --restart always -p 9005:9000 \
-v /.minio:/root/.minio -v /export:/export \
--name minio \
minio/minio:latest server /export
Check the IP address of the server:
hostname --ip-address
Your cache server will be available at MY_CACHE_IP:9005
Create a bucket that will be used by the Runner:
sudo mkdir /export/runner
runner is the name of the bucket in this case. If you choose a different bucket name, the directory will be different. All caches will be stored in the /export directory.
Read the Access and Secret Key of MinIO and use it to configure the Runner:
sudo cat /export/.minio.sys/config/config.json | grep Key
The next step is to configure your runner to use the cache. The following is a sample config.toml:
[[runners]]
limit = 10
executor = "docker+machine"
[runners.cache]
Type = "s3"
Path = "path/to/prefix"
Shared = false
[runners.cache.s3]
ServerAddress = "s3.example.com"
AccessKey = "access-key"
SecretKey = "secret-key"
BucketName = "runner"
Insecure = false
I hope this answer helps you.
References:
https://docs.gitlab.com/runner/install/registry_and_cache_servers.html
https://docs.gitlab.com/runner/configuration/autoscale.html#distributed-runners-caching
I managed to solve the issue thanks to this post https://gitlab.com/gitlab-org/gitlab-runner/-/issues/336#note_263931046.
Basically, I added
variables:
GIT_CLEAN_FLAGS: none
and it worked.
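For context, combined with the cache configuration it might look like this (the cache key and paths are placeholders, not taken from my actual file):

```yaml
variables:
  # "none" disables git clean, so untracked files such as the extracted
  # cache are not wiped before the job script runs
  GIT_CLEAN_FLAGS: none

test:
  script:
    - cat vendor/hello.txt
  cache:
    key: build-cache
    paths:
      - vendor/
```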
Bilal's answer is definitely correct, but I was looking for a slightly different solution.
I want to use the Gitlab Docker registry. I am using GitLab CE 15.7
I created my own CA and signed a certificate. GitLab UI and GitLab runners are working fine!
When it comes to the Docker Registry I have some issues. I configured the gitlab.rb like this:
registry_external_url 'https://198.18.133.100:5000'
registry['enable'] = true
registry['username'] = "registry"
registry['group'] = "registry"
registry['registry_http_addr'] = "127.0.0.1:5000"
registry['debug_addr'] = "localhost:5001"
registry['env'] = {
'SSL_CERT_DIR' => "/etc/gitlab/ssl/"
}
registry['rootcertbundle'] = "/etc/gitlab/ssl/198.18.133.100.crt"
What also confuses me are the options for registry versus registry_nginx.
I am not sure if I configured it correctly, and the documentation doesn't help me a lot. I didn't spin up any Docker container for the registry or anything; I believe the registry ships with the GitLab package (if I am not mistaken). Port 5000 is open and I can telnet to it.
However, while pushing the image to the registry I get the following error:
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get "https://198.18.133.100:5000/v2/": x509: certificate signed by unknown authority
Cleaning up project directory and file based variables
00:00
ERROR: Job failed: exit status 1
Any ideas? Thanks a lot!
I have already tried quite a lot of different configs and reconfigured the GitLab server.
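To see what the x509 error actually checks, here is a self-contained sketch: it creates a throwaway CA, signs a server certificate with it, and runs the same chain verification the Docker daemon performs. "certificate signed by unknown authority" means exactly that this verification fails because the CA is not in the daemon's trust store. (All paths are temporary; the CN is the registry IP from above.)

```shell
tmp=$(mktemp -d)

# throwaway CA (stand-in for the private CA from the question)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$tmp/ca.key" -out "$tmp/ca.crt" -subj "/CN=My Test CA" 2>/dev/null

# server key + CSR for the registry address
openssl req -newkey rsa:2048 -nodes \
  -keyout "$tmp/server.key" -out "$tmp/server.csr" \
  -subj "/CN=198.18.133.100" 2>/dev/null

# sign the server certificate with the CA
openssl x509 -req -in "$tmp/server.csr" -CA "$tmp/ca.crt" -CAkey "$tmp/ca.key" \
  -CAcreateserial -days 1 -out "$tmp/server.crt" 2>/dev/null

# the trust check: succeeds only when the CA is known, which is what the
# daemon cannot do until the CA is installed on the client machine
openssl verify -CAfile "$tmp/ca.crt" "$tmp/server.crt"
```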
It was fixed by copying the CA certificate into the following path:
mkdir -p /etc/docker/certs.d/<your_registry_host_name>:<your_registry_host_port>
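Concretely, the Docker daemon looks for the CA as ca.crt inside that directory; with the host/port from the question it would be something like this (the ca.crt file name under /etc/gitlab/ssl is an assumption):

```shell
sudo mkdir -p /etc/docker/certs.d/198.18.133.100:5000
sudo cp /etc/gitlab/ssl/ca.crt /etc/docker/certs.d/198.18.133.100:5000/ca.crt
```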
As well as the right config in the gitlab.rb
registry_nginx['enable'] = true
registry_nginx['listen_https'] = true
registry_nginx['redirect_http_to_https'] = true
registry_external_url 'https://registry.YOUR_DOMAIN.gtld'
Thanks all for your help!
I start SignServer with Docker:
docker run -it --rm --name signserver \
-p 80:8080 -p 443:8443 \
-e CRYPTO_SERVER_IP=**** \
-v /ca-cert.pem:/mnt/external/secrets/tls/cas/ManagementCA.crt \
signserver:1.0
Now I need to connect SignServer to PKCS#11 on an HSM. I have changed the signserver-deploy configuration:
cryptotoken.p11.lib.30.name=Utimaco
cryptotoken.p11.lib.30.file=/opt/utimaco/p11/libcs_pkcs11_R3.so
Then I added a PKCS#11 crypto worker from the template and changed its configuration:
WORKERGENID1.SHAREDLIBRARYNAME=Utimaco
The PKCS#11 crypto worker's status is offline, so I activated it and entered the authentication code, but I get this error:
- Failed to initialize crypto token: SHAREDLIBRARYNAME Utimaco is not referring to a defined value
Could you please help me
Thank you so much!
This is being discussed on the SignServer CE project's GitHub Discussions page, where the answer given is:
The current SignServer CE container does not support changing configuration in the signserver_deploy.properties.
A theoretical short-term solution for doing this could be something like this:
Find where the signserver.ear is in the container (probably under the appserver's deployments folder; it might be a folder instead of a ZIP file).
Find the JAR file which has the configuration, likely lib/SignServer-Common.jar
Find the properties file in that JAR file, something like org/signserver/common/.../signservercompile.properties
Change the property in that file and save it back into the archive.
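In shell terms the four steps might look roughly like this. Every path and file name below is a guess to be confirmed inside your container, and the real properties path contains components elided above:

```shell
# paths below are hypothetical; locate the real ones first, e.g. with
# find / -name 'signserver.ear' and unzip -l
cd /opt/wildfly/standalone/deployments            # guessed appserver path

# 1. + 2. pull the common JAR out of the EAR, then the properties out of the JAR
unzip -o signserver.ear lib/SignServer-Common.jar
unzip -o lib/SignServer-Common.jar 'org/signserver/common/*.properties'

# 3. change the property (properties file name is guessed)
echo 'cryptotoken.p11.lib.30.name=Utimaco' >> org/signserver/common/signservercompile.properties
echo 'cryptotoken.p11.lib.30.file=/opt/utimaco/p11/libcs_pkcs11_R3.so' >> org/signserver/common/signservercompile.properties

# 4. write the edited file back into the JAR, and the JAR back into the EAR
zip lib/SignServer-Common.jar org/signserver/common/signservercompile.properties
zip signserver.ear lib/SignServer-Common.jar
```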
I am using Ubuntu 18.04 and I want to launch a Spark cluster on EC2.
I used the export command to set environment variables
export AWS_ACCESS_KEY_ID=MyAccesskey
export AWS_SECRET_ACCESS_KEY=Mysecretkey
but when I run the command to launch the Spark cluster, I get:
ERROR: The environment variable AWS_ACCESS_KEY_ID must be set
I put all the commands I used in case I made a mistake:
sudo mv ~/Downloads/keypair.pem /usr/local/spark/keypair.pem
sudo mv ~/Downloads/credentials.csv /usr/local/spark/credentials.csv
# Make sure the .pem file is readable by the current user.
chmod 400 "keypair.pem"
# Go into the spark directory and set the environment variables with the credentials information
cd spark
export AWS_ACCESS_KEY_ID=ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=SECRET_KEY
# To install Spark 2.0 on the cluster:
sudo spark-ec2/spark-ec2 -k keypair --identity-file=keypair.pem --region=us-west-2 --zone=us-west-2a --copy-aws-credentials --instance-type t2.micro --worker-instances 1 launch project-launch
I am new to these things and any help is really appreciated
You can also retrieve the values of AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY by using the get subcommand of aws configure:
AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)
On the command line:
sudo AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id) AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key) spark-ec2/spark-ec2 -k keypair --identity-file=keypair.pem --region=us-west-2 --zone=us-west-2a --copy-aws-credentials --instance-type t2.micro --worker-instances 1 launch project-launch
source: AWS Command Line Interface User Guide
Environment variables can simply be passed after sudo in the form ENV=VALUE, and they will be visible to the following command. I don't know whether there are restrictions on this usage, but my example problem can be solved with:
sudo AWS_ACCESS_KEY_ID=ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=SECRET_KEY spark-ec2/spark-ec2 -k keypair --identity-file=keypair.pem --region=us-west-2 --zone=us-west-2a --copy-aws-credentials --instance-type t2.micro --worker-instances 1 launch project-launch
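The single-command scope of the ENV=VALUE prefix is easy to see even without sudo (demo is an arbitrary value):

```shell
# the prefix sets the variable for that one command only
AWS_ACCESS_KEY_ID=demo sh -c 'echo "$AWS_ACCESS_KEY_ID"'   # prints: demo

# the surrounding shell is unaffected (assuming the variable is not
# already exported elsewhere)
sh -c 'echo "${AWS_ACCESS_KEY_ID:-unset}"'
```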
I'm running a Laravel API on my server, and I wanted to use gitlab-runner for CD. The first two runs were good, but then I started to see this problem: listen_address not defined, session endpoints disabled builds=0
I'm running a Linux server on shared web hosting, so I can access a terminal and get some privileges, but I can't do sudo things like installing a service. That's why I've been running gitlab-runner in user mode.
Error info
Configuration loaded builds=0
listen_address not defined, metrics & debug endpoints disabled builds=0
[session_server].listen_address not defined, session endpoints disabled builds=0
.gitlab-runner/config.toml
concurrent = 1
check_interval = 0
[session_server]
session_timeout = 1800
[[runners]]
name = "CD API REST Sistema SIGO"
url = "https://gitlab.com/"
token = "blablabla"
executor = "shell"
listen_address="my.server.ip.address:8043"
[runners.custom_build_dir]
[runners.cache]
[runners.cache.s3]
[runners.cache.gcs]
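For what it's worth, the session address is read from the [session_server] section rather than [[runners]]; a sketch (the address is a placeholder, and the message itself is informational if you don't use the interactive web terminal):

```toml
concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800
  listen_address = "0.0.0.0:8093"
```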
I have literally wasted two days on this. I followed the steps below to get the runners configured and executing jobs successfully.
I am using macOS 10.13 and GitLab 12. However, people on other OSes can also check this out.
I stopped the runners and uninstalled them, then deleted all references and files related to gitlab-runner, including the gitlab-runner executable itself.
I got to know the GitLab Runner executable paths from https://docs.gitlab.com/runner/configuration/advanced-configuration.html
I installed them again using the official GitLab documentation.
Then the runners showed as online in the GitLab portal. However, the jobs were not getting executed; they simply showed as stuck. I tried to get information from the logs using
gitlab-runner --debug run
From that I learned that listen_address was not defined. After a lot of trying, I found that simply enabling Run untagged jobs did the trick: the jobs started and completed successfully. I still see listen_address not defined in the debug output, so that message misled me.
Though it seems that the last step alone solved my problem, doing all of the steps as a batch is what did the trick for me.
Alternatively, another option besides Avinash's solution is to include the tags you created when registering the runner in your .gitlab-ci.yml file:
stages:
- testing
testing:
stage: testing
script:
- echo 'Hello world'
tags:
- my-tags
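Those tags come from runner registration; a hedged example of registering with a tag (the URL and token are placeholders):

```shell
gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "REGISTRATION_TOKEN" \
  --executor "shell" \
  --tag-list "my-tags"
```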
I have a GitLab installation running in Kubernetes, and suddenly, my connections to dind have stopped working. This problem started appearing in a single project out of ~30 and is working in the other ones, and no change has been made.
The builds give the following errors:
*** WARNING: Service runner-c542f8fe-project-3-concurrent-0-docker-0 probably didn't start properly.
Health check error:
service "runner-c542f8fe-project-3-concurrent-0-docker-0-wait-for-service" timeout
Health check container logs:
Service container logs:
2018-08-13T08:40:53.274661600Z mount: permission denied (are you root?)
2018-08-13T08:40:53.274713900Z Could not mount /sys/kernel/security.
2018-08-13T08:40:53.274730800Z AppArmor detection and --privileged mode might break.
2018-08-13T08:40:53.275949300Z mount: permission denied (are you root?)
*********
I am running the container privileged, as can be seen in my /etc/gitlab-runner/config.toml:
metrics_server = ":9252"
concurrent = 10
check_interval = 30
[[runners]]
name = "mothy-jackal-gitlab-runner-bb76cb464-7fq6z"
url = "[redacted]"
token = "[redacted]"
executor = "kubernetes"
[runners.cache]
[runners.kubernetes]
host = ""
image = "ubuntu:16.04"
namespace = "gitlab"
namespace_overwrite_allowed = ""
privileged = true
cpu_request = "100m"
memory_request = "128Mi"
service_cpu_request = "100m"
service_memory_request = "128Mi"
service_account_overwrite_allowed = ""
[runners.kubernetes.volumes]
The only other solution I've found that doesn't pertain to making sure the runner is privileged is this one. I've tried setting the variables in my .gitlab-ci.yaml to this:
variables:
DOCKER_HOST: "tcp://docker:2375"
DOCKER_DRIVER: overlay
The error remains the same.
Worth noting is the output of the following commands, run in accordance with the other post:
bash-4.3# find /lib/modules/`uname -r`/kernel/ -type f -name "overlay*"
find: /lib/modules/4.4.111-k8s/kernel/: No such file or directory
bash-4.3# lsmod | grep overlay
overlay 45056 12
Note the "No such file or directory" error.
I'm stumped, and with my builds failing in the registry stage, I can't make releases. Any pointers as to where to go?
Thanks.
EDIT
It's not a solution, but I noticed that this occurred because I had assigned a dedicated runner to this project. Once I removed that, it worked again. Not a fix, but important info for anyone having the same issue.