GitLab pipeline fails with "has no deployed releases"

I'm running a deployment using GitLab CI and I keep getting this error:
secret/review-swagger-rn2vs9-secret replaced
No helm values file found at '.gitlab/auto-deploy-values.yaml'
Deploying new stable release...
UPGRADE FAILED
Error: "review-swagger-rn2vs9" has no deployed releases
ROLLING BACK
Error: timed out waiting for the condition
Uploading artifacts...
00:01
WARNING: environment_url.txt: no matching files
WARNING: tiller.log: no matching files
ERROR: No files to upload
ERROR: Job failed: exit code 1
Any idea why?

Review your cluster credentials:
delete the token
recreate the token, or more likely, try to use the "default" token
recreate the whole GitLab <-> Kubernetes connection
try the connection at group or root level instead of project level on GitLab (this was my issue)
If those all check out, inspecting the Helm release itself can help, as sketched below.
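The "has no deployed releases" message itself usually means an earlier attempt left the Helm release in a FAILED state with no successful revision, so the upgrade has nothing to upgrade. A minimal sketch for checking and clearing that, assuming Helm 2 (the job log mentions tiller.log); the Tiller namespace depends on your setup, so it is a placeholder here:
# list the release in every state, not just DEPLOYED
helm ls --all review-swagger-rn2vs9 --tiller-namespace <tiller-namespace>
# a FAILED release with no successful revision triggers this error;
# purging it lets the next pipeline run install cleanly
helm delete --purge review-swagger-rn2vs9 --tiller-namespace <tiller-namespace>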


What does this RangeError mean in my Release pipelines?

When the release pipeline runs, it ends with this error:
RangeError [ERR_INVALID_OPT_VALUE]: The value "4294967295" is invalid for option "size".
I don't know what this error means; 4294967295 is 2^32 - 1, the maximum unsigned 32-bit value, which suggests some size value is overflowing a 32-bit limit.
(Screenshot: log of the failed task from the release run)
Same issue for me using Web Deploy to a Node.js app service.
Web Deploy worked at first, until we added some packages to our package.json (e.g. "@fortawesome/free-brands-svg-icons").
Then our release pipeline suddenly outputs "64-bit = +" in the deployment log.
If we remove the packages, the "64-bit = +" is gone, and our Web Deploy works again.
(Screenshot: log from the Release pipeline)
Not sure how to resolve it, but in our case we used Zip Deploy instead, as @Ari Gunawan recommended.
I had the same issue when trying to deploy a Node.js app to Azure App Service from a release pipeline. I'm not sure what the error means, as there are no further details about it, but I resolved it by using the "Zip Deploy" deployment method and "Continue on error".
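For reference, the same zip deploy can also be run from a shell step instead of the Web Deploy task. A minimal sketch, assuming the Azure CLI with an authenticated session; the resource group, app name, and archive name are hypothetical placeholders:
# deploy a prepackaged zip of the built app with zip deploy
az webapp deployment source config-zip --resource-group my-rg --name my-app --src app.zip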

Failed to install chrony on the bootstrap node during DC/OS installation on Azure via Terraform

I am installing DC/OS on Azure via Terraform. My installation fails and I cannot get to the DC/OS dashboard due to this error.
The exact output is:
Error:module.dcos.module.dcos-core.module.dcos-install.null_resource.run_ansible_from_bootstrap_node_to_install_dcos (remote-exec):
Terraform exits citing an error in the script that keeps changing. The latest output was:
Error: error executing "/tmp/terraform_1565201352.sh": Process exited with status 2
I created a main.tf file as provided here.
These outputs appear after running terraform apply "plan.out" to execute the infrastructure plan.
What is causing this error & how can I resolve this?
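One way to get more detail than "Process exited with status 2" is Terraform's debug logging, which records what the remote-exec provisioner ran on the bootstrap node. A sketch, assuming the same plan file as above:
# write verbose logs to a file while re-applying the plan
TF_LOG=DEBUG TF_LOG_PATH=./terraform-debug.log terraform apply "plan.out"
# then search the log around the remote-exec steps for the failing script's output
grep -n "remote-exec" terraform-debug.log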

Terraform Cloud in Git mode - failed to read schema, permission denied

I'm new to Terraform and trying to use a 'custom' provider with Terraform Cloud. To be clear, if I use it on my Windows machine without Terraform Cloud, everything works just fine.
On Terraform Cloud I've got a workspace synchronized to my Git repo. The custom provider is uploaded to the Git repo: \terraform.d\plugins\zscaler.com\zpa\zpa\2.0.5\linux_amd64\terraform-provider-zpa_v2.0.5.
I've run the chmod command to compensate for Windows' inability to set the provider as executable:
git update-index --chmod=+x .\terraform.d\plugins\zscaler.com\zpa\zpa\2.0.5\linux_amd64\terraform-provider-zpa_v2.0.5
I've also updated the lock file to allow both Windows and Linux provider hashes, to deal with the "local provider doesn't match any of the checksums" issue:
terraform providers lock -fs-mirror="C:\Users\user1\AppData\Roaming\terraform.d\plugins\" -platform=windows_amd64 -platform=linux_amd64 zscaler.com/zpa/zpa
When I run terraform plan from VSCode (on my Windows machine) on the repo that's initialized to the TCloud I get the following error:
> terraform plan -var-file terraform.tfvar
. . .
2022-02-02T10:14:28.328-0600 [INFO] cloud: starting Plan operation
Terraform v1.1.4
on linux_amd64
Configuring remote state backend...
Initializing Terraform configuration...
╷
│ Error: failed to read schema for zpa_provisioning_key.iot_edge_key in zscaler.com/zpa/zpa: failed to instantiate provider "zscaler.com/zpa/zpa" to obtain schema: fork/exec .terraform/providers/zscaler.com/zpa/zpa/2.0.5/linux_amd64/terraform-provider-zpa_v2.0.5: permission denied
Enabling debug doesn't give me any more clues about what's wrong. I'd appreciate any suggestions.
Thank you
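One thing worth verifying, given the setup above: git update-index --chmod=+x only changes the index, so the executable bit must be committed and pushed before Terraform Cloud clones the workspace. A sketch of that check (the path is the one from the repo layout above, written with forward slashes):
# the mode column should read 100755 for the provider binary
git ls-files --stage terraform.d/plugins/zscaler.com/zpa/zpa/2.0.5/linux_amd64/terraform-provider-zpa_v2.0.5
# commit and push so the Linux worker sees the executable bit
git commit -m "Mark zpa provider binary as executable"
git push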

Upload from GitLab to Artifactory during pipeline fails occasionally

Occasionally the first upload of artifacts during a GitLab pipeline fails.
I'm getting the following error message in the logs:
2019-08-01 13:43:14,149 [http-nio-8082-exec-187] [ERROR]
(o.j.s.b.p.t.FilePersistenceHelper:87) - Failed moving
'path_to_artifactory\filestore_pre\dbRecord123.bin' to
'path_to_artifactory\filestore\5e\5ecc5f719b4442b9b04f9010646d34917aca8ca2'.
Access to file denied null 2019-08-01 13:43:14,149
[http-nio-8082-exec-187] [ERROR] (o.a.w.s.RepoFilter :251) - Upload
request of products-stage-qa:file_to_upload failed due to {}
java.nio.file.AccessDeniedException: Failed to persist file with sha1:
5ecc5f719b4442b9b04f9010646d34917aca8ca2
This seems to happen only during builds, not during other uploads made directly by a user.
It doesn't happen all the time, and only on first tries, but I haven't found any pattern to when the first try fails or succeeds. It doesn't seem to have anything to do with file types or the like. I can't really determine whether it has anything to do with network speeds, though, since I only have access to part of the infrastructure.
I found an open ticket with the same error message, but only for Conan, and for us it only happens with Ivy repositories.
We are using Artifactory 6.9.1 and GitLab 12.0.3 Starter.
This looks to be a permission issue; the error message states that the move failed due to "Access to file denied".
You can try to log in to the server as the "artifactory" user and manually move the file "path_to_artifactory\filestore_pre\dbRecord123.bin" to "path_to_artifactory\filestore\5e\5ecc5f719b4442b9b04f9010646d34917aca8ca2" and see whether you run into any issues. To log in to the server as the "artifactory" user you can use the command "sudo -s -u artifactory".
You will also need to make sure that the filestore and all of its subdirectories are owned by the "artifactory" user and have the correct permissions.
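A sketch of those checks on a Linux host, keeping path_to_artifactory as the placeholder it is in the log:
# become the artifactory service user
sudo -s -u artifactory
# retry the exact move that failed in the log
mv path_to_artifactory/filestore_pre/dbRecord123.bin path_to_artifactory/filestore/5e/5ecc5f719b4442b9b04f9010646d34917aca8ca2
# afterwards (as root): make sure the filestore tree is owned correctly
chown -R artifactory:artifactory path_to_artifactory/filestore path_to_artifactory/filestore_pre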
Hope this helps.

VSTS - Build a Docker Image

I have a .NET Core repo in VSTS. I'm trying to create a Build pipeline that builds a Docker image and adds it to my Azure Container Registry. My Build pipeline has a Docker task. This task has the "Build an image" action selected. This action relies on my Dockerfile, which looks like this:
FROM microsoft/dotnet:2.1.2-runtime-nanoserver-1803
# Install .NET Core
ENV DOTNET_VERSION 2.1.2
When my Build pipeline runs, I get an error that says:
failed to register layer: re-exec error: exit status 1: output: ProcessUtilityVMImage \\?\C:\ProgramData\docker\windowsfilter\82aba535faccd8bf0e5ce3c122247672fa671214000a12c5481972212c5e2ca0\UtilityVM: The system cannot find the path specified.
##[error]C:\Program Files\Docker\docker.exe failed with return code: 1
Why am I getting this error? How do I fix it?
It should be the same issue as this one: https://github.com/Microsoft/vsts-tasks/issues/6510
It seems there are still some issues with nanoserver-1803.
Try to set up and host a custom agent on an Azure VM, then check it again.
https://github.com/Microsoft/vsts-tasks/issues/6510#issuecomment-370152300
I may have found an explanation for this error: VSTS hosted agents don't seem to
support nanoserver-1709 at the moment. Maybe this will change with the next
version, 1803.
See details here: Microsoft/vsts-agent#1393
When I set up and host a custom agent on a machine on Azure, it
works. So it's not a bug with this task. I'm closing this issue. Thanks!
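For background: Windows containers require the base image's Windows build (nanoserver-1803 here) to be compatible with the host's, which is why a custom agent on a matching Azure VM works while the hosted agent did not. A quick sanity-check sketch for a self-hosted agent; the registry and image names are hypothetical:
# confirm the Windows build the Docker engine reports on the agent
docker version
# on a host that supports the 1803 base image, build and push as usual
docker build -t myregistry.azurecr.io/myapp:latest .
docker push myregistry.azurecr.io/myapp:latest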
