Terraform / Atlantis creates locks for no apparent reason

I have set up atlantis and configured multiple projects.
I am not using workspaces (therefore, for each project only the default workspace should be applicable).
However, when I create a GitHub pull request that includes changes to multiple projects, I get the following errors:
dir: terragrunt/path1/to/something workspace: default
The default workspace is currently locked by another command that is running for this pull request.
Wait until the previous command is complete and try again.
dir: terragrunt/path1/to/anotherthing workspace: default
dir: terragrunt/path2/to/anotherthing workspace: default
The default workspace is currently locked by another command that is running for this pull request.
Wait until the previous command is complete and try again.
This is despite the fact that docs state:
Only the directory in the repo and Terraform workspace are locked, not the whole repo.
Any idea why this is happening?

I saw something similar after setting
parallel_plan: true
parallel_apply: true
in my atlantis.yaml.
Removing these fixed the issue for me, and I assume setting them to false would achieve the same thing.
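For reference, a minimal repo-level atlantis.yaml with the flags disabled might look like the sketch below; this is not the asker's actual config, and the dirs are just copied from the error output above:
# atlantis.yaml (repo-level config) – minimal sketch
version: 3
parallel_plan: false
parallel_apply: false
projects:
- dir: terragrunt/path1/to/something
  workspace: default
- dir: terragrunt/path1/to/anotherthing
  workspace: default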
I am not 100% clear why this is happening yet, but it appears to have something to do with how Atlantis locks and Terraform workspaces interact: the default workspace is called default, and the Atlantis locks appear to be related to the workspace name.
It looks like you have already reported the issue here and that the maintainers responded that there is a merged but currently (2022-04-29) unreleased fix.
Hope this helps others who happen to stumble upon this...

Related

GitLab: Option to create branch from the issue is missing

I am trying to create a branch linked to an issue on GitLab. The option to create a branch from the issue, however, is missing on this particular project. I have an access level of Maintainer on this project.
The current project I'm working on:
I have checked the other project I made a few months back, on which I have exactly the same access level, and the option I'm looking for is there.
My previous project (This is a different project btw, not the source of the fork)
The difference is that the current project I am working on is a forked version of the old repo, so I could keep historical branches from the previous version of the project. I also imported the issues from the previous repo into the new one. I tried to create a new test issue, but I still can't see the menu.
It seems like I configured something wrong, could you please help me identify why I cannot access this menu? Any help would be appreciated.
Thanks!
After some digging, I found that this may be a currently known issue on GitLab. It only happens on forked projects, similar to what VonC's answer describes; however, that doesn't show how to resolve the issue.
To resolve the issue, you have to remove the project's fork relationship, found under Settings > General > Advanced. If you forked the repo from another project, you should see the Remove fork relationship button there. This essentially removes the fork relationship of the project from the original repository. Once done, the Create merge request option should appear immediately upon refreshing the page. Note that you need Owner access to see the Remove fork relationship option.
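If you prefer not to click through the UI, the same fork relationship can be removed with the GitLab REST API (DELETE /projects/:id/fork); the token, host, and project ID below are placeholders:
curl --request DELETE --header "PRIVATE-TOKEN: <your_access_token>" \
  "https://gitlab.example.com/api/v4/projects/<project_id>/fork"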
For more details, please refer to this issue and this solution was from here.
Check first if this is similar to issue 39778, which refers to an issue explaining:
I disable the button for projects which are forked.
The context is when it references (from a fork) an issue from the original project.
No "Create merge request" in that case.

GitLab error fetching variables after restoring backup

Yesterday, I moved my GitLab installation to another machine.
It was installed with docker-compose, and I followed the official GitLab guide to back up and restore GitLab including the 'secrets' files.
Everything works so far, except the CI/CD variables in the admin area.
I get the error 'There was an error fetching the variables.' when I navigate to this site.
Can you give me a hint about which log I can find more information about this error in?
I was finally able to solve the problem.
With the Doctor Rake tasks I could determine where the problem was.
Afterwards, I followed the steps to reset the runner registration tokens.
Finally, I deleted all the instance variables from the database via the dbconsole.
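For anyone following the same path, here is a rough sketch of the commands involved, assuming an Omnibus install; the exact SQL for resetting the runner registration tokens lives in GitLab's secrets troubleshooting docs and may differ between versions:
sudo gitlab-rake gitlab:doctor:secrets          # reports which encrypted attributes can no longer be decrypted
sudo gitlab-rails dbconsole --database main
gitlabhq_production=> UPDATE projects SET runners_token = null, runners_token_encrypted = null;
gitlabhq_production=> UPDATE namespaces SET runners_token = null, runners_token_encrypted = null;
gitlabhq_production=> UPDATE ci_runners SET token = null, token_encrypted = null;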
Check first if this is similar to gitlab-org/gitlab issue 218913 which includes two possible root causes:
Either you have an adblocker on, which could affect that functionality
Or:
go to the project settings general -> Visibility, project features, permissions
In Pipelines (Build, test, and deploy your changes) select Only Project Members
I had the same issue after restoring a backup.
My solution was to delete the variables from the database:
sudo gitlab-rails dbconsole --database main
gitlabhq_production=>delete from ci_instance_variables;
gitlabhq_production=>delete from ci_variables;
Then it worked.

Configuration Version is missing Terraform Cloud

I set up a workspace and I am following the Enforce Policy with Sentinel hands on guide.
I see the following message in the run tab:
As soon as I try to press the queue plan button I receive this error:
My configured variables are:
Is there something else I need to configure to be able to queue a plan?
Executing from the CLI, I was able to trigger a run (in Terraform Cloud) that only included the plan step. The run execution can be viewed if I access the specific run URL directly.
Any help, suggestions are more than welcome!
I guess you've already resolved this issue, but I'll post my resolution anyway.
When this issue occurred, my workspace name settings were wrong.
There were two workspaces with similar names (I don't know why that happened), so I deleted one of them.
The issue then occurred on the remaining one.
In the end, when I deleted that workspace and recreated it, the issue no longer occurred.
In my case I didn't have duplicate or similarly named workspaces (only one workspace). I found that after running terraform apply locally once, the UI controls started working as expected.
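For completeness, the "run it locally once" workaround is just the standard CLI-driven workflow; the organization and workspace come from whatever cloud/remote backend block is in your own configuration:
terraform login    # authenticate against app.terraform.io
terraform init     # attach the working directory to the Terraform Cloud workspace
terraform apply    # uploads a configuration version and queues a remote run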

Issue with Gitlab - Projects scheduled for deletion but never deleted

We've been using GitLab at my current job for a while now, and have encountered some instability that expresses itself in various ways.
The most recent one: projects that should be deleted are flagged as such but actual deletion never occurs.
Some research has allowed me to see the probable cause of the problem, but not how to resolve it: the ProjectDestroyWorker hasn't run for over 10 days.
Could someone point me to some documentation on the mechanism(s) that trigger the workers, and how to monitor them?
Version: GitLab Community Edition 8.5.0 a513e09
There are a few existing issues about this kind of problem: issue 15334, issue 20984.
Checking the backtrace in sidekiq.log can help.
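For example, on an Omnibus install something like the following shows whether ProjectDestroyWorker jobs are being picked up or are failing; the log path varies by GitLab version and install method:
sudo grep -i ProjectDestroyWorker /var/log/gitlab/sidekiq/current
sudo grep -i ProjectDestroyWorker /var/log/gitlab/gitlab-rails/sidekiq.log    # older versions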
Merge Requests 5695 and 4341 (for GitLab 8.11) should fix some of those issues, like:
There is a race condition in DestroyGroupService now that projects are deleted asynchronously:
User attempts to delete group
DestroyGroupService iterates through all projects and schedules a Sidekiq job to delete each Project
DestroyGroupService destroys the Group, leaving all its projects without a namespace
Projects::DestroyService runs later but the can? (current_user, :remove_project) is false because the user no longer has permission to destroy projects with no namespace.
This leaves the project in pending_delete state with no namespace/group.
I came across this issue during the migration to hashed storage in v13.5.7.
Dropping to the Rails console and destroying the projects that are pending deletion fixed it (see the sketch below).
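A sketch of that console step; Projects::DestroyService and the pending_delete flag are GitLab internals and may differ between versions, and the admin lookup is just an example:
sudo gitlab-rails console
# then, inside the console:
admin = User.find_by(admin: true)                      # any admin user will do
Project.where(pending_delete: true).find_each do |project|
  Projects::DestroyService.new(project, admin, {}).execute
end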
In my particular case this allowed gitlab-rake gitlab:storage:legacy_projects to correctly show 0 so further upgrades were possible.

Freeing up disk space on azure web apps

I've recently been on a support ticket with Azure, and they've recommended turning on Local caching to eliminate occasional outage blips.
The problem with that is that you need to watch your disk space, since more than 1 GB is not allowed. And if you deploy from git, like I do, that's an issue, because the whole repository is checked out, then built locally, and then Kudu-synced.
I've looked at trimming my repo down, but that's only going to yield small savings. What I'd like to do is remove my repository folder once the deployment has completed. Is that a sensible idea, or are there other solutions to this problem?
There is an upcoming change to the Local Caching behavior that will make it skip the repository folder (since it's not needed at runtime). This should be in the next couple of weeks.
Once that change is out, this issue should automatically go away for you.
The repository folder only contains a copy of your repo. It is OK to remove it if you want to save some space; it will be re-created when there is a new deployment.
There is one side effect when you delete your repository folder: your next deployment will take longer, since it will need to sync your entire repository.
Besides the repository folder, you can also clean up the files under D:\home\LogFiles to save some more space.
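For example, from the Kudu debug console (https://<yourapp>.scm.azurewebsites.net/DebugConsole), assuming the standard D:\home layout; files that are currently in use may be skipped:
rmdir /S /Q D:\home\site\repository
del /S /Q D:\home\LogFiles\*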
I would recommend using a build pipeline in Visual Studio Team Services; there you can do anything you want and include extra operations in the pipeline (build trigger => delete the folder).
