I am self-hosting a private GitLab 15.0.2 instance on Gentoo using this overlay: https://gitlab.awesome-it.de/overlays/gitlab
It's basically an installation from source (no Omnibus). Now I have also configured a GitLab Runner (Docker-based) and a CI pipeline in one of my projects (a homepage generated with Hugo). The pipeline works fine up to the point where it is supposed to upload the artifact, which is currently about 11 GB in size.
Initially this gave me a "413 Request Entity Too Large" error, so I raised the artifact size limits in GitLab and increased client_max_body_size in Nginx. Now I am seeing this error instead:
Uploading artifacts for successful job
Using docker image sha256:c20c992e5d83348903a6f8d18b4005ed1db893c4f97a61e1cd7a8a06c2989c40 for registry.gitlab.com/gitlab-org/gitlab-runner/gitlab-runner-helper:x86_64-latest with digest registry.gitlab.com/gitlab-org/gitlab-runner/gitlab-runner-helper#sha256:edc1bf6ab9e1c7048d054b270f79919eabcbb9cf052b3e5d6f29c886c842bfed ...
Uploading artifacts...
public: found 907 matching files and directories
WARNING: Uploading artifacts as "archive" to coordinator... 404 Not Found id=112 responseStatus=404 Not Found status=404 token=X8QjapaV
WARNING: Retrying... context=artifacts-uploader error=invalid argument
WARNING: Uploading artifacts as "archive" to coordinator... 404 Not Found id=112 responseStatus=404 Not Found status=404 token=X8QjapaV
WARNING: Retrying... context=artifacts-uploader error=invalid argument
WARNING: Uploading artifacts as "archive" to coordinator... 404 Not Found id=112 responseStatus=404 Not Found status=404 token=X8QjapaV
FATAL: invalid argument
ERROR: Job failed: exit code 1
It tries three times before eventually giving up, and each attempt takes a few minutes.
I am not seeing any messages related to this in GitLab's production.log, which leaves me a bit stumped. A 404 error code does not seem to make much sense in this context. I have tested the build pipeline by branching and removing a lot of content to produce a much smaller artifact. In that branch the upload works on the first try, so the upload URL must be fine.
Are there any other configuration settings I need to be aware of? Perhaps some timeout for the upload?
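For reference, the nginx change I made looks roughly like this (a sketch; the exact vhost file varies on a source install), and I suspect timeout directives in the same block may also matter:

# nginx vhost for GitLab (exact file and location vary on a source install)
server {
    # raised for the ~11 GB artifact; 0 disables the body size check entirely
    client_max_body_size 0;

    # candidates I am considering for an upload timeout (assumptions, not verified)
    # proxy_read_timeout 3600;
    # proxy_send_timeout 3600;
}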
EDIT:
Here's my current .gitlab-ci.yml to give you a better idea of what I am doing. It's rather ugly with those Node.js dependencies being installed every time the pipeline runs, but that's not the issue at hand.
image: cibuilds/hugo

variables:
  GIT_SUBMODULE_STRATEGY: recursive

build:
  stage: build
  script:
    - curl -sL https://deb.nodesource.com/setup_16.x -o /tmp/nodesource_setup.sh
    - sudo bash /tmp/nodesource_setup.sh
    - sudo apt update
    - sudo apt install nodejs
    - npm install autoprefixer postcss-cli
    - hugo
  artifacts:
    paths:
      - public
I am planning to add another step to the pipeline for deployment via rsync over SSH.
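A minimal sketch of what I have in mind, assuming the private key is stored in an SSH_PRIVATE_KEY CI/CD variable and that DEPLOY_USER, DEPLOY_HOST, and the target path are placeholders:

deploy:
  stage: deploy        # 'deploy' is one of GitLab's default stages, so no stages: block is needed
  dependencies:
    - build            # pull in the 'public' artifact from the build job
  before_script:
    # assumes ssh-agent and rsync are available in the job image
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | ssh-add -
  script:
    - rsync -az --delete -e "ssh -o StrictHostKeyChecking=no" public/ "$DEPLOY_USER@$DEPLOY_HOST:/var/www/homepage/"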
Related
In my project I have two submodules (both maintained by us). For some reason, when we push our code, the CI runners all fail with:
Running with gitlab-runner 14.4.0 (4b9e985a)
on bv1 runner docker DcUNN4JN
Resolving secrets
00:00
Preparing the "docker" executor
Using Docker executor with image registry.gitlab.com/visionary.ai/brightervision/gitlab-cuda-trt-v2 ...
Authenticating with credentials from job payload (GitLab Registry)
Pulling docker image registry.gitlab.com/<company-name>/<repository>/gitlab-cuda-trt-v2 ...
Using docker image sha256:ASDFASDFSADFSDAF for registry.gitlab.com/<company-name>/<repository>/gitlab-cuda-trt-v2 with digest registry.gitlab.com/<company-name>/<repository>/gitlab-cuda-trt-v2#sha256:fgsdgsdfgsdfgdsfgsdfg ...
Preparing environment
00:07
Running on runner-dcunn4jn-project-30307801-concurrent-0 via bv1...
Getting source from Git repository
00:16
Fetching changes with git depth set to 1...
Reinitialized existing Git repository in /ci_builds/DcUNN4JN/0/<company-name>/<repository>/.git/
Checking out 09e4b628 as ci-add-bokeh...
Updating/initializing submodules recursively with git depth set to 1...
Synchronizing submodule url for 'jetson/isp-arc-implementations'
Synchronizing submodule url for 'models'
Entering 'jetson/isp-arc-implementations'
Entering 'models'
Entering 'jetson/isp-arc-implementations'
HEAD is now at b9480b7 cleanup
Entering 'models'
HEAD is now at 9c17371 Merge branch 'mar-22' into 'main'
fatal: refusing to merge unrelated histories
Unable to merge 'c90d7c8a3564ff09bb5e02513f28e64a688b325b' in submodule path 'jetson/isp-arc-implementations'
Uploading artifacts for failed job
00:09
Uploading artifacts...
WARNING: untracked: no files
ERROR: No files to upload
Cleaning up project directory and file based variables
00:06
ERROR: Job failed: exit code 1
Our .gitmodules looks like this:
[submodule "models"]
    path = models
    url = ../models.git
    branch = main
[submodule "jetson/isp-arc-implementations"]
    path = jetson/isp-arc-implementations
    url = ../isp-arc-implementations.git
    branch = main
We are all using Linux (Ubuntu 18.04). I couldn't find anything in the GitLab documentation or anywhere else, so I wanted to try my luck here.
If anyone has encountered this and knows how to solve it, it would be much appreciated.
A few notes:
For some reason the submodule commit SHAs don't match the ones in our repositories, but as far as I understand the checkout should fetch the most recent one? (See the sketch after these notes.)
When going to the directory where the runner builds everything, I can't manually pull and update the submodules there (I get authentication errors; I read that the runner can't pull/push in GitLab).
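For what it's worth, my understanding is that a plain checkout pins each submodule to the commit recorded in the superproject rather than to the branch tip; bumping those pins would look roughly like this (a sketch of standard Git commands, not our actual workflow):

# fetch the branch tips configured in .gitmodules and move the pins to them
git submodule update --remote models jetson/isp-arc-implementations
# record the new pinned commits in the superproject
git add models jetson/isp-arc-implementations
git commit -m "Bump submodule pins"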
I'm running a deployment using GitLab CI and I keep getting this error:
secret/review-swagger-rn2vs9-secret replaced
No helm values file found at '.gitlab/auto-deploy-values.yaml'
Deploying new stable release...
UPGRADE FAILED
Error: "review-swagger-rn2vs9" has no deployed releases
ROLLING BACK
Error: timed out waiting for the condition
Uploading artifacts...
00:01
WARNING: environment_url.txt: no matching files
WARNING: tiller.log: no matching files
ERROR: No files to upload
ERROR: Job failed: exit code 1
Any idea why?
Review your cluster credentials:
delete the token,
recreate the token, or more likely try to use the "default" token,
recreate the whole GitLab <-> Kubernetes connection,
try the connection not at project level, but at group or root level on GitLab (this was my issue).
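If you want to check the token part by hand, it is roughly this (a sketch; the namespace and service account names are assumptions about your cluster, and it relies on clusters where service accounts still get token secrets):

# inspect the service accounts the GitLab integration could use
kubectl -n kube-system get serviceaccounts
# print the token of the "default" service account to paste into GitLab's cluster settings
kubectl -n kube-system get secret \
  $(kubectl -n kube-system get serviceaccount default -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 -d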
I have the following logic in my .gitlab-ci.yml:
stages:
  - build
  - deploy

job_make_zip:
  tags:
    - test123
  image: node:10.19
  stage: build
  script:
    - npm install
    - make
    - make source-package
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/
  artifacts:
    when:
    paths:
      - test.bz2
    expire_in: 2 days
When the job runs, I see the following message:
Restoring cache
Checking cache for master...
No URL provided, cache will not be downloaded from shared cache server. Instead a local version of cache will be extracted.
Successfully extracted cache
I'm new to GitLab, so I can't tell whether this is an error or not. I basically don't want to download the same npm modules every single time this build runs.
I found a similar post here: GitLab CI caching key
But I'm already using the correct GitLab CI variable.
Any suggestions would be appreciated.
In my GitLab CI setup at home I am getting this warning (in my case I am not considering it an error) in all of my build jobs. According to https://gitlab.com/gitlab-org/gitlab/-/issues/201861 and https://gitlab.com/gitlab-org/gitlab-runner/-/issues/16097 there seem to be cases where this message is to be taken seriously.
This is especially true if you upload (and later download and extract) the cache to a particular URL that several runners use to share and sync the cache. In the general case, though, i.e. when the cache is stored on a single GitLab Runner rather than on a shared source used by several runners, I don't think this message has any real meaning. On my GitLab Runners, which are usually project- or group-specific, this was never a problem, and the cache was always properly extracted locally.
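For comparison, the shared-cache setup the message is really about would be a runner whose config.toml points at an object store, along these lines (a sketch; server address, keys, and bucket name are placeholders):

# /etc/gitlab-runner/config.toml (excerpt)
[runners.cache]
  Type = "s3"
  Shared = true
  [runners.cache.s3]
    ServerAddress = "minio.example.com"
    AccessKey = "ACCESS_KEY"
    SecretKey = "SECRET_KEY"
    BucketName = "runner-cache"
    Insecure = false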
I'm aware of this question, which shows how to amend an existing project's .gitlab-ci.yml to prepare and deploy some content for a project's Pages.
I can see that the job runs:
Uploading artifacts...
Runtime platform arch=amd64 os=linux pid=4464 revision=de7731dd version=12.1.0
public: found 2 matching files
Uploading artifacts to coordinator... ok id=1968 responseStatus=201 Created token=p2LyB8MW
Job succeeded
My problem is now finding the URL to access the pages. We are running our own GitLab instance.
The docs say to go to Project -> Settings -> Pages, but I see no such "Pages" setting. I'm on v11.5.3 - could that be the problem? Since the build works so nicely, I'm thinking that Pages should work on this version.
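For what it's worth, my understanding is that on a self-managed instance the Pages settings entry only appears once Pages is enabled instance-wide; on a source install that would be something like this in gitlab.yml (a sketch based on gitlab.yml.example; host and port are placeholders):

## gitlab.yml (source install), pages section
pages:
  enabled: true
  host: pages.example.com
  port: 80
  https: false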
Occasionally the first upload of artifacts during a GitLab pipeline fails.
I'm getting the following error message in the logs:
2019-08-01 13:43:14,149 [http-nio-8082-exec-187] [ERROR] (o.j.s.b.p.t.FilePersistenceHelper:87) - Failed moving 'path_to_artifactory\filestore_pre\dbRecord123.bin' to 'path_to_artifactory\filestore\5e\5ecc5f719b4442b9b04f9010646d34917aca8ca2'. Access to file denied null
2019-08-01 13:43:14,149 [http-nio-8082-exec-187] [ERROR] (o.a.w.s.RepoFilter :251) - Upload request of products-stage-qa:file_to_upload failed due to {} java.nio.file.AccessDeniedException: Failed to persist file with sha1: 5ecc5f719b4442b9b04f9010646d34917aca8ca2
This seems to happen only during builds, not during other uploads performed directly by a user.
It doesn't happen all the time, and only on first tries, but I haven't found any pattern for when the first try fails or succeeds. It doesn't seem to be related to file types or the like. I can't really tell whether it has anything to do with network speeds, since I only have access to part of the infrastructure.
I found an open ticket with the same error message, but only for Conan; for us it only happens with Ivy repositories.
We are using Artifactory 6.9.1 and GitLab 12.0.3 Starter.
This looks to be a permission issue. The error message states that the move failed because "Access to file denied".
You can try to log in to the server as the "artifactory" user and manually move the file "path_to_artifactory\filestore_pre\dbRecord123.bin" to "path_to_artifactory\filestore\5e\5ecc5f719b4442b9b04f9010646d34917aca8ca2" to see whether you run into any issues. To log in as the "artifactory" user you can use the command "sudo -s -u artifactory".
You will also need to make sure that the filestore and all of its subdirectories are owned by the "artifactory" user and have the correct permissions, for example as sketched below.
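Something along these lines (a sketch; "path_to_artifactory" is the placeholder from your logs, so substitute your actual filestore path):

# become the artifactory user and try the move manually
sudo -s -u artifactory
mv 'path_to_artifactory/filestore_pre/dbRecord123.bin' \
   'path_to_artifactory/filestore/5e/5ecc5f719b4442b9b04f9010646d34917aca8ca2'

# ensure the whole filestore tree is owned by artifactory (run as root)
chown -R artifactory:artifactory 'path_to_artifactory/filestore'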
Hope this helps.