I set up a workspace and I am following the Enforce Policy with Sentinel hands-on guide.
I see the following message in the run tab:
As soon as I try to press the Queue Plan button, I receive this error:
My configured variables are:
Is there something else I need to configure to be able to queue a plan?
Executing from the CLI, I was able to trigger a run (in Terraform Cloud) that only included the plan step. The run can be viewed if I access the specific run URL directly.
Any help or suggestions are more than welcome!
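For context, the CLI-driven run was triggered from a working directory configured with a remote backend roughly like the sketch below; the organization and workspace names are placeholders, not the actual ones from my setup.

    terraform {
      backend "remote" {
        organization = "my-org"        # placeholder organization name
        workspaces {
          name = "sentinel-demo"       # placeholder workspace name
        }
      }
    }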
I guess you've already resolved this issue, but I'll post my resolution anyway.
When this issue occurred, my workspace name settings were wrong.
There were two workspaces with similar names (I don't know why that happened), so I deleted one of them, and the issue then occurred on the remaining one.
In the end, when I deleted that workspace and recreated it, the issue no longer occurred.
In my case I didn't have duplicate or similarly named workspaces (only one workspace). I found that after running terraform apply locally once, the UI controls started to work as expected.
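For reference, the local run was nothing special; a minimal sketch of the commands, assuming the workspace is already connected to the repository and the CLI is authenticated against Terraform Cloud:

    terraform login    # authenticate the CLI against Terraform Cloud (only needed once)
    terraform init     # initialize the remote backend and provider plugins
    terraform apply    # with a remote backend this queues the run in Terraform Cloud and streams the output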
I have set up atlantis and configured multiple projects.
I am not using workspaces (therefore, for each project only the default workspace should be applicable).
However, when creating a GitHub Pull Request that includes changes to multiple projects, I get the following error(s)
    dir: terragrunt/path1/to/something workspace: default
    The default workspace is currently locked by another command that is running for this pull request.
    Wait until the previous command is complete and try again.

    dir: terragrunt/path1/to/anotherthing workspace: default
    dir: terragrunt/path2/to/anotherthing workspace: default
    The default workspace is currently locked by another command that is running for this pull request.
    Wait until the previous command is complete and try again.
This is despite the fact that the docs state:
Only the directory in the repo and Terraform workspace are locked, not the whole repo.
Any idea why this is happening?
I saw something similar after setting

    parallel_plan: true
    parallel_apply: true

in my atlantis.yaml.
Removing these fixed the issue for me, and I assume setting them to false would achieve the same thing.
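As a hedged sketch only (the version number and project paths below are placeholders, not taken from the original post), an atlantis.yaml with those flags explicitly disabled might look like this:

    version: 3
    parallel_plan: false      # run plans for the projects sequentially
    parallel_apply: false     # run applies sequentially as well
    projects:
      - dir: terragrunt/path1/to/something      # placeholder project directory
      - dir: terragrunt/path1/to/anotherthing   # placeholder project directory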
I am not 100% clear on WHY this is happening yet, but it appears to have something to do with the way Atlantis locks and Terraform workspaces interact: every project here uses the workspace named default, and the locks Atlantis takes appear to be related, at least in part, to the workspace name.
It looks like you have already reported the issue here and that the maintainers responded that there is a merged but currently (2022-04-29) unreleased fix.
Hope this helps others who happen to stumble upon this...
I am a beginner with Terraform, working in a (dangerous) live environment.
I ran a script to create 3 new accounts in AWS Organizations. Two got created, and due to a service limit error I couldn't create the third.
On top of that, there was a mistake in the parent-id in the script. I rectified the accounts in the console by moving them to the right parent ID.
That leaves me with one account to be created.
After making the necessary changes to the service limit, I tried running the script again. The plan shows 3 accounts to add and 2 to destroy. There's no way these accounts can be deleted and recreated. (Since the script is now version controlled, I can't run it just for this one account.)
Here's what I did: I modified the Terraform state (the parent id) in the S3 bucket and ensured that terraform show reflects the new changes. terraform plan still shows 3 accounts to add and 2 to destroy.
How do I get this fixed? Any help is deeply appreciated.
Thanks.
The code is the source of truth when working with Infrastructure as Code; even if you change the state file, you need to update the code as well.
There is no way Terraform can update your source code when it detects drift on your resources.
So you need to:
1. Write the manual changes you made in AWS into the Terraform code (see the sketch below).
2. Run terraform plan. It will refresh the state and show you whether there is still a difference.
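As an illustration only (the resource name, addresses, and values below are hypothetical, not taken from your script), step 1 amounts to making the code describe what the console now shows:

    # Hypothetical sketch: point the account at the OU it was moved to in the console.
    resource "aws_organizations_account" "third_account" {
      name      = "third-account"               # placeholder account name
      email     = "third-account@example.com"   # placeholder account email
      parent_id = aws_organizations_organizational_unit.correct_ou.id  # the parent it actually lives under now
    }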
If you're modifying the state file like I did, do it at your own risk. I followed "how to clean your terraform state" and performed the surgery!
Ensure that the code reflects the changes properly so Terraform picks them up.
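For anyone attempting similar surgery, a rough sketch of the state commands typically involved; the resource address and account ID are placeholders, and you should back up the state file first:

    terraform state list                                          # see what Terraform is currently tracking
    terraform state show aws_organizations_account.third_account  # inspect the recorded attributes of one resource
    terraform state rm aws_organizations_account.third_account    # stop tracking it without destroying anything
    terraform import aws_organizations_account.third_account 123456789012  # re-adopt the real account by its ID
    terraform plan                                                # confirm the plan no longer wants to destroy/recreate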
I'm running into a strange problem whenever I start a particular build, and I can't get my head around it.
I just imported an existing VSTS repository into my new Git repository on Azure DevOps. My next step is to create a build pipeline which should produce an artifact I can deploy. For the company I work for I've done this many times, but I've never seen this error before.
The build pipeline is set up, and as soon as I start a build it immediately fails with the following error:
Hopefully somebody can help out in resolving this.
UPDATE - Added settings for retrieving sources
After posting the second screenshot and going through everything again properly, I saw that I hadn't pointed the build pipeline to the proper Git repository in Azure DevOps. After updating this, the issue was resolved.
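For anyone using YAML pipelines rather than the classic editor, a hedged sketch of a minimal pipeline whose sources come from the repository the pipeline definition lives in (the trigger branch, image, and step are placeholders):

    trigger:
      - main
    pool:
      vmImage: ubuntu-latest
    steps:
      - checkout: self                      # sources come from the Azure DevOps Git repo this pipeline belongs to
      - script: echo "Build steps go here"  # placeholder build step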
I have a custom deployment script (a *.sh script) defined for my Azure deployment.
Just today I found that I am unable to publish. I updated my Bitbucket repository, and after a while I get an error similar to the following:
    Command 'starter.cmd deploy_pvl_cont ...' was aborted due to no output nor CPU activity for 180 seconds.
    You can increase the SCM_COMMAND_IDLE_TIMEOUT app setting (or WEBJOBS_IDLE_TIMEOUT if this is a WebJob) if needed.
    starter.cmd deploy_pvl_content.sh
I have tried a number of things to try to diagnose the problem.
Increase SCM_COMMAND_IDLE_TIMEOUT to 300 (a CLI sketch follows this list)
Run the script locally (Works)
Set up a new fresh deployment slot and try publishing same commit (Same error)
Tried publishing the previously successful commit (Same error)
Look for useful error messages in a diagnostic log dump (Couldn't find anything more useful)
Tried running the deployment script from the Kudu Console (No output returned, like it didn't actually run)
Tried reverting git to a previous version as suggested by @david-ebbo
Tried simplifying my script to a single echo command with the same results
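Regarding the first item above, a hedged sketch of how that app setting can be bumped from the Azure CLI (the resource group and app names are placeholders):

    # Placeholder resource names; adjust to your app service and resource group.
    az webapp config appsettings set \
      --resource-group my-rg \
      --name my-web-app \
      --settings SCM_COMMAND_IDLE_TIMEOUT=300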
Not sure what I can do to debug this further. Ideally I would like to get the output of the shell script on the azure host but don't know how to get it. Any ideas?
Updated answer
This is a regression caused by the move to git 2.8.x in Azure. The issue is tracked by https://github.com/projectkudu/kudu/issues/2041.
Here is a very simple workaround (and you don't need to bring in the old git tools): instead of setting your COMMAND to deploy_pvl_content.sh, set it to bash deploy_pvl_content.sh
We'll address the issue, but this workaround will get you going.
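As a hedged sketch, if the command is driven from a .deployment file at the repository root, the workaround would look something like this (assuming that is where COMMAND is set in your case):

    ; .deployment (sketch) -- Kudu custom deployment configuration
    [config]
    command = bash deploy_pvl_content.sh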
Original answer (only leaving for context)
You could be running into some flavor of this issue, which is caused by the upgrade to git 2.8.1 that we just did.
While we're trying to get to the bottom of it, please try this workaround to see if that helps:
Go to Kudu Console
Create a d:\home\bin folder
Copy the old Windows git 1.8.x folder in there. You can get the content from here. If you drag and drop the zip into Kudu console, there is a special unzip drop area that will expand it.
Try your deployment again
I have deleted a workspace from AccuRev. Now, when I try to create the workspace again, it gives an error that the workspace already exists. How can I resolve this?
Update: Is there any AccuRev plugin through which I can directly promote code to an AccuRev stream using IBM RSA?
Workspaces and streams are never really removed due to the time-safe architecture; they are deactivated and can later be reactivated. This also means that a workspace or stream owns every name it's ever had. You will need to create the new workspace with a new, unique name.
Via the command line, "accurev reactivate wspace workspaceName"
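A short sketch of that from the command line; the workspace name is a placeholder, and the listing flags are from memory, so treat them as an assumption:

    # List workspaces, including deactivated/hidden ones (the -fix format flags are assumed here).
    accurev show -fix wspaces
    # Bring the deactivated workspace back under its original name.
    accurev reactivate wspace my_old_workspace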