Container Builder times out while uploading container image - google-container-builder

I am using Container Builder with the Container Registry. It seems to work well; however, I get a timeout in an unexpected place: the Preparing stage of pushing an image.
Pushing gcr.io/my-project/nimbl/foo
The push refers to a repository [gcr.io/my-project/nimbl/foo]
c0657b8960f1: Preparing
298d55719be3: Preparing
2c92aca44757: Preparing
456bc265a0bd: Preparing
a787b259a3da: Preparing
9f23e5ebcfc2: Preparing
dab134dd71e8: Preparing
9911896abb59: Preparing
e4a87d9fa1bb: Preparing
5881b25780f5: Preparing
da4d37ad6531: Preparing
004e69f3a081: Preparing
164d3ebb8cbe: Preparing
9f23e5ebcfc2: Waiting
dab134dd71e8: Waiting
9911896abb59: Waiting
e4a87d9fa1bb: Waiting
5881b25780f5: Waiting
da4d37ad6531: Waiting
004e69f3a081: Waiting
164d3ebb8cbe: Waiting
a787b259a3da: Pushed
298d55719be3: Pushed
456bc265a0bd: Pushed
c0657b8960f1: Pushed
dab134dd71e8: Pushed
9f23e5ebcfc2: Pushed
9911896abb59: Pushed
5881b25780f5: Pushed
da4d37ad6531: Pushed
004e69f3a081: Pushed
164d3ebb8cbe: Pushed
Your build timed out. Use the [--timeout=DURATION] flag to change the timeout threshold.
ERROR: (gcloud.container.builds.submit) build e21dc778-8f29-49e9-a17c-a9d52a5cc982 completed with status "TIMEOUT"
I have tried increasing the timeout without success; the push never seems to leave the Preparing stage.
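For reference, a minimal sketch of passing the flag the error message mentions, using the project and image name from the log above (the default build timeout is 10 minutes, so a push that takes longer will fail exactly like this even when the layers themselves upload fine):

```shell
# Raise the build timeout explicitly when submitting the build
gcloud container builds submit --timeout=30m \
    --tag gcr.io/my-project/nimbl/foo .
```

If the build is driven by a cloudbuild.yaml instead, the equivalent is a top-level `timeout: '1800s'` entry in that file; the flag on the command line is ignored for steps it does not cover.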


How to sync mirrors?

I'm using RabbitMQ for a project, and there's a problem with a mirrored queue I've been using.
[screenshot: RabbitMQ management UI showing the queue's mirror status]
As you can see in the picture, the master node (rabbit#HSDRABPAP01) went unsynced at some point and a mirror node (rabbit#HSDRABPAP03) became the new master.
However, the node that recovered and rejoined (rabbit#HSDRABPAP01) is still in an unsynced state.
How can I sync rabbit#HSDRABPAP01?
I'd appreciate it if anybody could help me.
I've tried to sync rabbit#HSDRABPAP01 in two ways:
click the Synchronise button in the management UI
run rabbitmqctl sync_queue <queue_name>
But neither works: the Synchronise button still appears, while the RabbitMQ log says the queue is already synced.
2023-01-26 22:48:25.586 [info] <0.6790.26> Mirrored queue 'M10QM-Q-LM-11' in vhost '/': Synchronising: 0 messages to synchronise
2023-01-26 22:48:25.586 [info] <0.6790.26> Mirrored queue 'M10QM-Q-LM-11' in vhost '/': Synchronising: batch size: 4096
2023-01-26 22:48:25.586 [info] <0.18067.1237> Mirrored queue 'M10QM-Q-LM-11' in vhost '/': Synchronising: all mirrors already synced
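One way to see what the broker itself thinks, as a sketch (the queue name is taken from the log above; the column names are the classic mirrored-queue fields):

```shell
# Show each queue's master pid, its mirrors, and which mirrors are synchronised
rabbitmqctl list_queues name pid slave_pids synchronised_slave_pids

# Trigger synchronisation of the specific queue again
rabbitmqctl sync_queue 'M10QM-Q-LM-11'
```

If `synchronised_slave_pids` already lists the mirror on rabbit#HSDRABPAP01, the queue is in fact synced and the lingering Synchronise button is a display issue; if the mirror is missing from `slave_pids` entirely, the problem is the mirroring policy rather than synchronisation.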

Error: Failed to persist state to backend

I have a Terraform module that manages Snowflake resources. When CircleCI tries to apply it, I often get:
Error: Failed to save state
Error saving state: Error uploading state: Conflict
This workspace is locked by Run
run-[ ID ] and will only accept state uploads from
them while locked.
and
Error: Failed to persist state to backend
The error shown above has prevented Terraform from writing the updated state to the configured backend. To allow for recovery, the state has been written to the file "errored.tfstate" in the current working directory.
Running "terraform apply" again at this point will create a forked state, making it harder to recover.
To retry writing this state, use the following command:
terraform state push errored.tfstate
However, I am able to run terraform apply from my local machine without any issues - only the CI seems to have this problem.
What might cause this error, and how can I fix it?
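The lock message points at another run holding the remote workspace, which usually means an earlier CI job died (or was cancelled) while it held the lock. A recovery sketch, assuming a Terraform Cloud/Enterprise-style remote backend:

```shell
# First, discard or cancel the stuck run (via the backend's UI or API) so it
# releases the workspace lock. Then push the state Terraform saved locally
# when the upload failed:
terraform state push errored.tfstate

# Some backends also allow releasing a stale lock directly; the lock ID is
# the run ID shown in the error message (use with care - only when you are
# sure no apply is actually running):
terraform force-unlock <LOCK_ID>
</imports>
```

That it works locally but not in CI is consistent with this: the local machine never competes with the half-finished CI run for the lock in the same way concurrent or aborted CI jobs do.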

Updating the image in an existing Azure registry does not work

I am new to Docker and I am trying to update an existing web service on an Azure website. After building the image, this is what I did:
docker login <regname>.azurecr.io # Successfully logged in
docker tag <myimage> <regname>.azurecr.io/<servicename>
docker push <regname>.azurecr.io/<servicename>
And this is what I get:
C:\Users\user> docker push <regname>.azurecr.io/<servicename>
The push refers to repository [<regname>.azurecr.io/<servicename>]
8338876046a2: Preparing
9b4cb369a379: Preparing
769a276cd781: Preparing
486305c59459: Preparing
c36e2873b733: Preparing
130ae36f8cc8: Preparing
bc6b4902b79e: Preparing
f3d44e887388: Preparing
4a39ef7ed1bb: Preparing
4c5aab3548b9: Preparing
ec348085b0e6: Preparing
c2be8853e0b2: Preparing
0f1151f5fc99: Preparing
00399b079947: Preparing
c82d454eb914: Preparing
b25487d1db04: Preparing
e367fb455ccf: Preparing
bc6b4902b79e: Pushed
57df5852e66c: Layer already exists
d788ea03fce1: Layer already exists
1ffa9e6f04f1: Layer already exists
377e5b96eca6: Layer already exists
90dd0108373f: Layer already exists
eb8fe74986a4: Layer already exists
e2a005b711f9: Layer already exists
3a29b9e0627a: Layer already exists
ca4c28881d11: Layer already exists
33614d3265ba: Layer already exists
270f4d759cc3: Layer already exists
0fa80309f3d6: Layer already exists
4e1d0b4d1868: Layer already exists
910d7fd9e23e: Pushed
4230ff7f2288: Pushed
2c719774c1e1: Layer already exists
ec62f19bb3aa: Layer already exists
f94641f1fe1f: Layer already exists
latest: digest: sha256:5d2729ae576349b158acc6c480acdde3899e2c6a9445966bb7e8d291677e11dd size: 7866
Note: The 'Layer already exists' lines are from a previous push. I had to push twice because some layers kept retrying, eventually reached EOF, and stopped; the first push uploaded most of the layers and the second push uploaded the rest. Could the problem lie here?
The new image I want to push is completely different from the old one (they are both Flask apps).
After the above, I went to the Azure portal and restarted the service for this resource, but nothing happened: the Azure service remains the same and the new functionality hasn't appeared.
I've read other posts suggesting the problem lies in the tag names. I can't see a way around this, since I want to update the existing image in the Azure registry (does that mean the tag names have to stay the same?).
Has anyone else encountered this problem, or does anyone have an idea what I am doing wrong?
For your issue, you need something to notify the Web App that the image has changed: create a webhook for your Web App before pushing the updated image. For the detailed steps, see Push an updated container image to a geo-replicated container registry for regional web app deployments.
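A common alternative to the webhook, sketched below: avoid reusing the implicit :latest tag entirely and repoint the Web App at an explicit new tag. The registry, service, app, and resource-group names below are the placeholders from the question; the `--docker-custom-image-name` flag name matches older Azure CLI versions:

```shell
# Push under an explicit, fresh tag instead of reusing :latest
docker tag <myimage> <regname>.azurecr.io/<servicename>:v2
docker push <regname>.azurecr.io/<servicename>:v2

# Point the Web App at the new tag so it actually pulls the updated image
az webapp config container set \
    --name <appname> --resource-group <resourcegroup> \
    --docker-custom-image-name <regname>.azurecr.io/<servicename>:v2
```

Restarting the app without either a webhook or a tag change often does nothing, because the platform sees the same tag and keeps serving its cached image.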

Openshift 3 App Deployment Failed: Took longer than 600 seconds to become ready

I have a problem with my OpenShift 3 setup, based on the Node.js + MongoDB (Persistent) template: https://github.com/openshift/nodejs-ex.git
Latest App Deployment: nodejs-mongo-persistent-7: Failed
--> Scaling nodejs-mongo-persistent-7 to 1
--> Waiting up to 10m0s for pods in rc nodejs-mongo-persistent-7 to become ready
error: update acceptor rejected nodejs-mongo-persistent-7: pods for rc "nodejs-mongo-persistent-7" took longer than 600 seconds to become ready
Latest Build: Complete
Pushing image 172.30.254.23:5000/husk/nodejs-mongo-persistent:latest ...
Pushed 5/6 layers, 84% complete
Pushed 6/6 layers, 100% complete
Push successful
I have no idea how to debug this. Can you help, please?
Check what went wrong from the console: oc get events
Failed to pull the image? Make sure you included a proper image-pull secret.
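Expanding on that advice, a debugging sketch (the deployment names are taken from the question; the `deploymentconfig` label is the one OpenShift 3 puts on pods it creates):

```shell
# Events in chronological order usually show why pods never became ready
oc get events --sort-by='.lastTimestamp'

# Readiness-probe details and recent logs for the failing deployment
oc describe pods -l deploymentconfig=nodejs-mongo-persistent
oc logs dc/nodejs-mongo-persistent
```

Since the build and push succeeded, the usual suspects are a failing readiness probe, the pod crash-looping on startup (for example, unable to reach MongoDB), or the persistent volume claim staying unbound - all three show up clearly in the events and pod descriptions.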

Testing if puppet apply is finished: xxx_controller.pp

I am installing OpenStack with the all-in-one installer:
packstack --allinone
But it gets stuck here:
Testing if puppet apply is finished: 103.xxx_controller.pp
It takes 30 minutes or more, depending on the RAM and CPU of the machine on which you are installing packstack/devstack. Let it finish and wait it out; an error or success message will appear once it completes.
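While waiting, you can confirm it is actually making progress rather than hung. A sketch, assuming the default packstack log location (the exact paths vary by release):

```shell
# Follow packstack's own log to see which manifest is currently applying
tail -f /var/tmp/packstack/latest/openstack-setup.log

# Each manifest's Puppet output lands alongside it; watch the controller one
tail -f /var/tmp/packstack/latest/manifests/*controller.pp.log
```

If the controller manifest log stops growing for a long stretch, that step really is stuck (commonly on package downloads or a service failing to start) and the log's last lines say which resource.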
