I am currently trying to represent a new Google Cloud project in Terraform. To complete this, I still need to create the domain mapping of my Cloud Run instance in Terraform.
The domain mapping has already been created in the UI, and Terraform cannot overwrite it. When I run terraform apply, I get an error message saying the domain mapping already exists.
I first tried to import the domain mapping with terraform import. That command returns a success message, but as soon as I run terraform apply I get an error saying the domain mapping already exists but is flagged for deletion, and that I should retry once the deletion completes. A few hours later, the resource has still not been deleted.
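For reference, an import of this resource looks roughly like this (the resource address, location, project, and domain are placeholders, not my real values):

    terraform import google_cloud_run_domain_mapping.default \
        locations/europe-west1/namespaces/my-project/domainmappings/example.com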
Next I tried to delete the domain mapping in the UI. This does not work either; I get the message:
Delete domain mapping failed. Deletion in progress. Resource readiness deadline exceeded.
I have also tried it via the command line interface:
gcloud beta run domain-mappings delete --domain=<domain> --namespace=<namespace>
But I get the same error message as before.
Does anybody know another way of deleting / importing / fixing / working around this problem? I'd appreciate any help!
So, if you have been working with Terraform for some time, you have probably faced this error message more than once:
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Error refreshing state: state data in S3 does not have the expected content.
This may be caused by unusually long delays in S3 processing a previous state
update. Please wait for a minute or two and try again. If this problem
persists, and neither S3 nor DynamoDB are experiencing an outage, you may need
to manually verify the remote state and update the Digest value stored in the
DynamoDB table to the following value:
Usually this is caused by something getting screwed up during a destroy operation, leaving a mismatch between the state and the lock.
The known solution for this is to delete the lock from DynamoDB and run terraform init again. If that doesn't resolve it, also delete the tfstate from S3, which at this point doesn't hold any data anyway, since the infrastructure was destroyed.
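For completeness, that cleanup can also be done from the CLI; a rough sketch, assuming the lock table is called terraform-locks and the state lives at mybucket/project/terraform.tfstate (both placeholders):

    # The S3 backend stores its digest under the key "<bucket>/<key>-md5"
    aws dynamodb delete-item \
        --table-name terraform-locks \
        --key '{"LockID": {"S": "mybucket/project/terraform.tfstate-md5"}}'
    terraform init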
Surprisingly, neither is working now, and I don't have a clue why. There is no tfstate in the bucket; in fact, I even deleted every old version stored (the bucket has versioning enabled). There is no lock in DynamoDB either.
Changing the tfstate name works without issues, but I can't change it as it would break the naming convention I'm using for all my tfstates.
So, any ideas what's going on here? On Friday, the infrastructure was deployed and destroyed without issues (as part of the destroy, I always check that no lock is left behind and delete the tfstate from S3). But today I'm facing this error; it's been a while already and I can't figure it out.
Holy ****!
So, it turns out DynamoDB was being a serious troll here.
I searched for the key with the md5 of this tfstate, and it wasn't returned. But then I noticed a message saying there were more items to be returned... After clicking that button about six times, it eventually returned a hidden lock for this tfstate.
Deleted it, and everything is back to normal again.
So, in summary: if you ever face this issue and can't find the lock in DynamoDB, be sure that all items are returned by the query, as it can take many attempts to return them all.
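A quick way to double-check this from the CLI (the table name is a placeholder); unlike the console view, a plain scan here follows the pagination for you, so hidden locks show up:

    # List every LockID in the table; the CLI follows pagination automatically
    aws dynamodb scan \
        --table-name terraform-locks \
        --query 'Items[].LockID.S' \
        --output text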
I have a running Google Cloud Function which I use in my code, and it works fine.
But when I open the function in the console to see the source code, it shows:
Archive not found in the storage location
Why can't I see my source code? What should I do?
Runtime: Node.js 10
There are two possible reasons:
You may have deleted the source bucket in Google Cloud Storage. Have you perhaps deleted a GCS bucket named something like gcf-sources-xxxxxx? That is the storage location where your code is archived. If you are sure you deleted the source bucket, there is no way to restore your source code.
Much more likely, though, is that you did not delete anything, but the bucket was renamed instead, for example by choosing a nearby city in the location settings. If the GCS bucket's region does not match your Cloud Function's region, this error is thrown. You should check both services' regions.
You can check the Cloud Function's region under Details -> General information.
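If it helps, both regions can also be checked from the command line; a quick sketch (the bucket suffix is a placeholder):

    # Each function is listed together with its region
    gcloud functions list
    # Show the location of the source bucket
    gsutil ls -L -b gs://gcf-sources-XXXXXXXXXXXX | grep -i location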
This error had appeared before, when I browsed the Google Storage location used by the Cloud Function without deleting anything there. It may, however, have happened that I changed the location / city of the bucket to Region MY_REGION (MY_CITY). In my case, the CF was likely already in the chosen region, so bullet point 2 of the answer above probably does not cover the whole issue.
I guess a third point could be added to the list:
3. If you choose a region for the first time, the bucket name gets a new suffix that was not there before, going from gcf-sources-XXXXXXXXXXXX to gcf-sources-XXXXXXXXXXXX-MY_REGION. The CF is then no longer able to find its source code at the old bucket address. That would explain the first error.
That first error aside, the error in question now appears again, and this time I have not done anything except run into Google App Engine deployment fails - Error while finding module specification for 'pip' (AttributeError: module '__main__' has no attribute '__file__'). I left it for two days without touching anything, only to get the error in question afterwards. So it seems you can sometimes simply lose your deployed script out of nowhere; better keep a backup before each deployment.
Solution:
Create a new Cloud Function, or
edit the existing Cloud Function: choose Inline Editor as the source code option, manually create the default files for the Node.js 10 runtime, and fill them with your backup code.
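If you would rather redeploy from the command line than through the inline editor, a sketch along these lines should work (the function name, trigger, and source directory are assumptions about your setup):

    # Redeploy the backed-up code from a local directory
    gcloud functions deploy my-function \
        --runtime nodejs10 \
        --trigger-http \
        --source ./my-function-backup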
Our Terraform state ended up broken after we accidentally applied an out-of-date branch.
Two of the databases got deleted, and since GCP doesn't let you reuse a database name after deletion, Terraform couldn't recreate the databases and bailed out.
The problem is that Terraform thinks the databases are still there and tries to read their users:
Error when reading or editing SQL User "xxx" in instance "xxx": googleapi: Error 400: Invalid request: Invalid request since instance is not running., invalid
The instance is simply not there!
I tried to taint the database and the user, but I still get exactly the same error.
Does anyone know how to fix this? I cannot afford to destroy and recreate the environment.
terraform state rm can be used to remove the no-longer-existing databases from the state file, which should fix this. Details at https://www.terraform.io/docs/commands/state/rm.html
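A minimal sketch of what that looks like (the resource addresses below are assumptions; use terraform state list to find your real ones):

    # Find the addresses of the dead resources, then drop them from state
    terraform state list | grep sql
    terraform state rm google_sql_database_instance.main
    terraform state rm google_sql_user.app
    terraform apply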
I am trying to queue an existing build that worked fine up until a few days ago. The message I am getting is the following:
The pipeline is not valid. Job Phase_1: Step NuGetInstaller_1 input
externalEndpoints references service connection
{guid} which could not be found. The
service connection does not exist or has not been authorized for use.
For authorization details, refer to https://aka.ms/yamlauthz.
From the links Troubleshooting authorization for a YAML pipeline and Build Error: The pipeline is not valid, I understand there is a chance that someone created a duplicate endpoint that somehow blocks the build ("A duplicate endpoint (which points to a KeyVault) was created by someone not on our team and this duplicated endpoint was preventing any builds that referenced the endpoint."), but I can't really find what exactly to look for, and the suggestion from the first link didn't work.
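For anyone in the same spot, the service connections can be enumerated from the CLI while hunting for a duplicate; a sketch (organization and project are placeholders):

    # Requires the Azure DevOps CLI extension: az extension add --name azure-devops
    az devops service-endpoint list \
        --organization https://dev.azure.com/my-org \
        --project MyProject \
        --query '[].{name:name, id:id, type:type}' \
        --output table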
Can anyone suggest what to look for or what might be the problem in my case?
Many thanks.
I am trying to deploy a Node app to Azure. During the last step of the deployment, git push azure master, the error below occurred.
fatal: unable to access 'https://lalit@wittyparrot.com@node-deploy-to-azure.scm.azurewebsites.net/node-deploy-to-azure.git/': Couldn't resolve host 'wittyparrot.com@node-deploy-to-azure.scm.azurewebsites.net'
Please help me resolve it.
One solution is the one @evilSnobu mentioned: use the URL https://{appname}.scm.azurewebsites.net.
What you have encountered is caused by your deployment user name, since you used the format https://{username}@{appname}.scm.azurewebsites.net:443/{appname}.git
In your case, you set it to lalit@wittyparrot.com. You may have done this in the Azure Cloud Shell, where it showed no error.
But in fact, the name can only contain letters, numbers, hyphens, and underscores. Otherwise your URL can't be resolved correctly.
You can see the tip here.
Azure Cloud Shell may be missing some of the necessary pattern checks, so an invalid user name produces no error there.
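To get going again, a rough sketch of the fix (the new user name below is a placeholder; pick anything containing only valid characters):

    # Set a deployment user name without @ or . characters
    az webapp deployment user set --user-name lalit-wittyparrot
    # Point the git remote at the corrected URL and push again
    git remote set-url azure \
        https://lalit-wittyparrot@node-deploy-to-azure.scm.azurewebsites.net/node-deploy-to-azure.git
    git push azure master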