One can (and often should) target a single module rather than the whole infrastructure when running destroy. Something like this:
terraform destroy -target=module.<module_name>
Or, better, first see what you are going to destroy:
terraform plan -destroy -target=module.<module_name>
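For example, with a hypothetical module named "vpc" in the root configuration, the preview-then-destroy sequence would be:
# Preview which resources would be removed (module name "vpc" is hypothetical)
terraform plan -destroy -target=module.vpc
# Destroy only that module once the preview looks right
terraform destroy -target=module.vpc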
I agree with the above comment.
How do I schedule a destroy to run after a certain condition is met?
How do I destroy (or reflect changes to) other dependent modules?
Context: even though this question is related to "How to avoid 'Objects have changed outside of Terraform'?", it's not exactly the same.
I can't share my exact Terraform configuration, but the gist is that I'm getting an empty "Objects have changed outside of Terraform" warning message:
$ terraform plan
Note: Objects have changed outside of Terraform
Terraform detected the following changes made outside of Terraform since the last "terraform apply":
Unless you have made equivalent changes to your configuration, or ignored the relevant attributes using ignore_changes, the following plan may include actions to undo or respond to
these changes.
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
No changes. Your infrastructure matches the configuration.
In other words, the warning doesn't display any actual changes.
When I copy my current state and then compare it against the new state after running terraform apply --auto-approve, there are no changes either:
diff terraform.tfstate old.tfstate
4c4
< "serial": 25,
---
> "serial": 24,
217d216
< "data.foo.test",
219c218,219
< "data.bar.test2"
---
> "data.bar.test2",
> "data.bar.test2"
It seems the only difference is the ordering of resources in the Terraform state. Is this a Terraform bug or something else?
$ terraform version
Terraform v0.15.4
I also found a related issue on GitHub: https://github.com/hashicorp/terraform/issues/28776
This sort of odd behavior can occur in response to quirks in the way particular providers handle the "refresh" step. For backward compatibility with Terraform providers written using the older SDK designed for Terraform v0.11 and earlier, the plan renderer will suppress certain differences that tend to arise due to limitations/flaws in that SDK, such as a value being null before refresh but "" (empty string) after refresh.
Unfortunately, if that sort of change is the only change, it can confuse Terraform in this way: Terraform can see that there is a difference, but when it tries to render the difference it gets suppressed as legacy SDK noise, and so there ends up being no difference to report.
This behavior was refined in later versions of Terraform, and so upgrading to the latest v1.x.y release may avoid the problem, assuming it's one of the quirks that the Terraform team already addressed.
I think the reason you don't see any change to the state here is that, since there were no changes to make in response to differences in your configuration, Terraform skipped the "apply" step and thus didn't actually commit the refreshed state. You can force Terraform to treat the refresh changes as something to be committed by creating and applying a refresh-only plan:
terraform apply -refresh-only
Assuming that the provider that's misbehaving here is consistent in how it misbehaves (that is, once it has returned a marginally different value, it will keep returning that value), after applying the refresh-only plan you should no longer see this message, because future refreshes of the same object will not detect any such immaterial differences.
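A minimal sketch of that workflow, if you want to review the refresh-only plan before committing it:
# Preview which refreshed values would be saved to the state
terraform plan -refresh-only
# Commit the refreshed values without changing any real infrastructure
terraform apply -refresh-only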
I'm trying to revise our codebase, which seems to be using libgit2 incorrectly (at least TSAN is going crazy over how we use it).
I understand that most operations are object-based (i.e., operations on top of a repo are localized to that repo), but I'm unclear when it comes to global state and which operations need to be synchronized globally.
Is there a list of functions that require global synchronization?
Also, when it comes to git_repository_open(), do I need to ensure that one path is only ever held by a single thread? I.e., do I need to prevent multiple threads from accessing the same repo?
This seems like a trivial question, but in fact it isn't. I'm using Puppet 3.7 to deploy and configure artifacts from my project onto a variety of environments. A Puppet 5.5 upgrade is on the roadmap, but without an ETA so far.
One of the things I'm trying to automate is the incremental changes to the underlying database. It's not SQL, so standard tools are out of the question. These changes will come in the form of shell scripts contained in a special module that will also be deployed as an artifact. For each release we want to have a file whose content lists the shell scripts to execute in scope of that release. For instance, if in version 1.2 we had implemented JIRA-123, JIRA-124 and JIRA-125, I'd like to execute the scripts JIRA-123.sh, JIRA-124.sh and JIRA-125.sh, but no other ones that will still be in that module from previous releases.
So my release "control" file would be called something like jiras.1.2.csv and have one line looking like this:
JIRA-123,JIRA-124,JIRA-125
The task for Puppet here seems trivial: read the content of this file, split it on the "," character, and go on to build exec tasks for each of the JIRAs. The problem is that the Puppet function that should help me do it,
file("/somewhere/in/the/filesystem/jiras.1.2.csv")
gets executed at the time of building the Puppet catalog, not at the time the catalog is applied. However, since this file is part of the release payload, it isn't there yet: it will be downloaded from Nexus in a tar.gz package of the release and extracted later. I have an anchor I can hold on to, which I use to synchronize the exec tasks, but can I attach the reading of the file content to that anchor?
Maybe I'm approaching the problem incorrectly? I was thinking the module with the pre-implementation and post-implementation tasks that constitute the incremental db upgrades could be structured so that for each release there's a subdirectory matching the version name, but then I'd need to list the contents of that subdirectory to build my exec tasks, and that too (at least to my limited Puppet knowledge) can't be deferred until a particular anchor.
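To illustrate what I'm after (a sketch only; the script directory path is hypothetical), the apply-time logic would amount to this shell fragment, which is exactly what I can't express with a catalog-time file() call:
#!/bin/sh
# Hypothetical wrapper that a single exec task could invoke at apply time.
# Reads the release control file and runs each listed script.
CONTROL_FILE="/somewhere/in/the/filesystem/jiras.1.2.csv"
SCRIPT_DIR="/somewhere/in/the/filesystem/scripts"  # hypothetical location of the JIRA-*.sh scripts
tr ',' '\n' < "$CONTROL_FILE" | while read -r jira; do
  sh "$SCRIPT_DIR/$jira.sh"
done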
--- EDITED after one of the comments ---
The problem is that the upgrade to Puppet 5.x is beyond my control: it's another team handling this in a huge organisation, so I have no influence over that, and I'm stuck on 3.7 for the foreseeable future.
As for what I'm trying to do: for a bunch of different software packages that we develop and release, I want to create three new ones: pre-implementation, post-implementation and checks. The first will hold any tasks that are performed prior to releasing new code in our actual packages; this is typically things like backing up the database. Post-implementation will deal with issues that need to be addressed after we've deployed the new code; an example operation would be modifying older data because, for instance, we've changed the type of a column in a table. Checks are just validations performed to make sure the release is implemented 100% correctly, for instance running a select query and asserting on the type of data in the column whose type we've just changed. Today all of these are daunting manual operations performed by whoever is unlucky enough to be doing a release. Above all else, being manual, these operations are by definition error-prone.
The approach taken is that for every JIRA ticket that is part of the release, the responsible developer has to decide what steps (if any) are needed to release their work, and script them. Puppet is supposed to orchestrate the execution of all of this.
Can I modify state to convince Terraform to report only actual changes, not scope changes?
Goal
Easily verify the conversion of a plan to a module by reducing the delta.
Process
1. Create a module from the plan
2. Modify the plan to use the module, passing all its required variables
3. Run terraform plan and expect NO changes
Spoiler: all items in the plan get destroyed and recreated due to the shift in scope:
- myplan.aws_autoscaling_group.foo
+ module.mymodule.aws_autoscaling_group.foo
Possible Solution
If I could update the state as if I had run an apply without changing the infrastructure, then I could run a plan and see just the difference between the actual infrastructure and my plan, with no scope changes.
terraform state list
for each item
terraform state mv <oldval> module.mymodule.<oldval>
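A minimal shell sketch of that loop (assuming the module is named mymodule, as above; addresses containing indexes may need extra quoting):
for item in $(terraform state list); do
  terraform state mv "$item" "module.mymodule.$item"
done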
This works, but I'm moving from a plan that uses another module to a module that uses a module, and state mv on a module doesn't seem to work:
terraform state mv module.ecs_cluster module.mymodule.ecs_cluster
Error moving state: Unexpected value for InstanceType field: "ecs_cluster"
Please ensure your addresses and state paths are valid. No state was persisted. Your existing states are untouched.
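Possibly the target needs the nested-module address form (module.<parent>.module.<child>), which I haven't verified:
terraform state mv module.ecs_cluster module.mymodule.module.ecs_cluster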
I need some guidance on a use case I've run into when using Perforce Streams. Say I have the following structure:
//ProductA/Dev
share ...
//ProductA/Main
share ...
import Component1/... //Component1/Release-1_0/...
//ProductA/Release-1_0
share ...
//Component1/Dev
share ...
//Component1/Main
share ...
//Component1/Release-1_0
share ...
ProductA_Main imports code from Component1_Release-1_0. Whenever Component1_Release-1_0 gets updated, it will automatically be available to ProductA (but read-only).
Now, the problem I'm running into is that since ProductA_Release-1_0 inherits from Main, and thus also imports Component1_Release-1_0, any code or changes made to the component will immediately affect the ProductA release. This sort of side effect seems very risky.
Is there any way to isolate the code such that, in the release stream, ALL code changes are tracked (even code that was imported) and there are zero side effects from other stream depots, while for the main and dev streams the code is still imported? This way the release will have zero side effects, while main and dev conveniently import any changes made in the depot.
I know one option would be to create some sort of product specific release stream in the Component1 depot, but that seems a bit of a kludge since Component1 shouldn't need any references to ProductA.
If you are just looking to be able to rebuild previous versions, you can use labels to sync the stream back to the exact situation it was in at the time, by giving a changelist number (or label) to p4 sync.
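For example (the changelist number and label name here are hypothetical):
# Sync the release stream back to a specific changelist
p4 sync //ProductA/Release-1_0/...@12345
# Or back to a label
p4 sync //ProductA/Release-1_0/...@release-1_0-build-7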
If you are looking for explicit change tracking, you may want to branch the component into your release line. This will make the release copy of the library completely immune to changes in the other stream, unless you choose to branch and reconcile the data from there. If you think you might make independent changes to the libraries in order to patch bugs, this might be something to consider. Of course, Perforce won't copy the files in your database on the server, just pointers to them in the metadata, and since you're already importing them into the stream, you're already putting copies of the files on your build machines, so there shouldn't be any "waste" except on the metadata front.
In the end, this looks like a policy question. Rebuilding can be done by syncing back to a previous version, but if you want to float the library fixes into the main code line, leave it as is; if you want to lock down the libraries and make the changes explicit, I'd just branch the libraries in.
Integrating into your release branch
In answer to the question in the comments: if you choose to integrate directly into your release branch, you'll need to remove the import line from the stream specification and replace it with an isolate line, which will then place the code only in the release branch. Then you would use the standard p4 integrate command (or P4V) to integrate from //Component1/Release-1_0/... to //ProductA/Main/Component1/....
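A sketch of those two steps, using the paths from this answer (the submit description is a placeholder):
# In the release stream spec, replace the inherited import with:
#   isolate Component1/...
# Then branch the component in explicitly and submit:
p4 integrate //Component1/Release-1_0/... //ProductA/Main/Component1/...
p4 submit -d "Branch Component1 Release-1_0 into the product stream"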
Firming up a separate component stream
One final thought is that you could create another stream on the //Component1 line to act as a placeholder for the release.