Is there a way to specify a workspace for a remote state provider in HCL? How do I ensure collaborators use the proper workspace? I'd expect to be able to write something like:
terraform {
  backend "s3" {
    workspace = "someworkspace"
    ...
  }
}
Terraform's documentation describes how to use workspace_key_prefix, but that's not what I'm looking for.
For example, if one team member runs terraform workspace select dev and then terraform apply, and a different team member later runs terraform apply without first running terraform workspace select, Terraform will redeploy the defined resources (because the proper workspace wasn't selected).
Found a workaround. You can commit the environment file (.terraform/environment) to VCS. Others running terraform apply will then target the workspace named in that file.
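As a rough sketch, assuming a workspace named someworkspace and that your VCS ignore rules don't exclude the file:
terraform workspace select someworkspace  # writes the name into .terraform/environment
git add -f .terraform/environment         # force-add, since .terraform/ is commonly gitignored
git commit -m "Pin Terraform workspace to someworkspace"
Collaborators who pull this file and run terraform apply will then operate on someworkspace without selecting it manually.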
Currently, I am using the "Mongey/kafka" provider and now I have to switch to the "confluentinc/confluent" provider with my existing Terraform pipeline. How can I do this?
Steps I am currently following to switch the provider:
Changing the provider in the main.tf file and running the following command to replace the provider:
terraform state replace-provider Mongey/kafka confluentinc/confluent
and after that I run
terraform init
to install the new provider.
But when I then run
terraform plan
it gives the error "no schema available for module.iddn_news_cms_kafka_topics.kafka_acl.topic_writer[13] while reading state; this is a bug in terraform and should be reported".
Is there any way to change the Terraform provider without disturbing the existing resources created by the Terraform pipeline?
The terraform state replace-provider command is intended for switching between providers that are in some way equivalent to one another, such as the hashicorp/google and hashicorp/google-beta providers, or when someone forks a provider into their own namespace while keeping it compatible with the original provider.
Mongey/kafka and confluentinc/confluent do both have resource types that seem to represent the same concepts in the remote system:
Mongey/kafka    confluentinc/confluent
kafka_acl       confluent_kafka_acl
kafka_quota     confluent_kafka_client_quota
kafka_topic     confluent_kafka_topic
However, despite representing the same concepts in the remote system these resource types have different names and incompatible schemas, so there is no way to migrate directly between them. Terraform has no way to understand which resource types in one provider match with resource types in another, or to understand how to map attributes from one of the resource types onto corresponding attributes of the other.
Instead, I think the best thing to do here would be to ask Terraform to "forget" the objects and then re-import them into the new resource types:
terraform state rm kafka_acl.example to ask Terraform to forget about the remote object associated with kafka_acl.example. There is no undo for this action.
terraform import confluent_kafka_acl.example OBJECT-ID to bind the OBJECT-ID (as described in the documentation) to confluent_kafka_acl.example.
I suggest practicing this in a non-production environment first so that you can be confident about the behavior of each of these commands, and learn how to translate from whatever ID format the Mongey/kafka provider uses into whatever import ID format the confluentinc/confluent provider uses to describe the same objects.
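As an illustration only, using the resource address from the error above and a placeholder import ID (the real ID format is defined by the confluentinc/confluent provider's import documentation):
terraform state rm 'module.iddn_news_cms_kafka_topics.kafka_acl.topic_writer[13]'
terraform import 'module.iddn_news_cms_kafka_topics.confluent_kafka_acl.topic_writer[13]' '<IMPORT-ID>'
The second address assumes the module has already been updated to declare a confluent_kafka_acl resource; the pair of commands would then be repeated for every resource instance in the state.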
I have 'development', 'staging', and 'production' workspaces within the Terraform Cloud organisation.
I'm attempting to interact with them as per documentation here.
Particularly this:
If you associate the directory with multiple workspaces (using
workspace tags), you can use the terraform workspace commands to
select which remote workspace to use.
Locally, I also have the exact same three terraform workspaces created.
Screenshots (not reproduced here): local workspaces; remote workspaces and their tags.
It is 100% the same organisation: I was able to interact with the workspaces when I hardcoded the workspace name instead of using tags.
My Terraform cloud backend definition:
terraform {
  cloud {
    organization = "<<myorgname>>"
    workspaces {
      tags = ["development", "staging", "production"]
    }
  }
}
When I run a simple terraform init I get greeted by:
No workspaces found.
There are no workspaces with the configured tags (development, production, staging)
in your Terraform Cloud organization. To finish initializing, Terraform needs at
least one workspace available.
Terraform can create a properly tagged workspace for you now. Please enter a
name to create a new Terraform Cloud workspace.
I've been going through the documentation here that covers CLI-driven runs in this context, but I can't figure out the right way to do this.
What I want is:
run a terraform plan or terraform apply whilst in the locally-selected development workspace
and then:
the cloud terraform to perform a run on the remote development workspace.
If I just go ahead and write 'development' as a name, it will then apply all 3 tags in the static definition to the remote 'development' workspace, thus defeating the whole purpose of using tags instead of a name.
What's the right way of doing this?
That is true; however, there is also this part of the documentation [1]:
tags - (Optional) A set of Terraform Cloud workspace tags. You will be able to use this working directory with any workspaces that have all of the specified tags, and can use the terraform workspace commands to switch between them or create new workspaces. New workspaces will automatically have the specified tags. This option conflicts with name.
EDIT: As mentioned in the comments, in order for local workspaces to be usable in Terraform Cloud as well (i.e., to be able to apply the code in Terraform Cloud), there has to be a "common" or a "main" tag across all workspaces created in the Terraform Cloud.
[1] https://www.terraform.io/cli/cloud/settings#arguments
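A minimal sketch of that setup, assuming a shared tag named "app" is added to all three remote workspaces (the tag name is illustrative):
terraform {
  cloud {
    organization = "<<myorgname>>"
    workspaces {
      tags = ["app"]
    }
  }
}
With one common tag in the configuration, terraform init lists the matching remote workspaces, and terraform workspace select development picks the one to run against.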
I'm building a CI/CD pipeline using GitHub Actions and Terraform. I have a main.tf file like the one below, which I'm calling from GitHub Actions for multiple environments. I'm using https://github.com/hashicorp/setup-terraform to interact with Terraform in GitHub Actions. I have a MyService component and I'm deploying to DEV, UAT and PROD environments. I would like to reuse main.tf for all of the environments and dynamically set the workspace name like so: MyService-DEV, MyService-UAT, MyService-PROD. Usage of variables is not allowed in the terraform/cloud block. I'm using Terraform Cloud to store state.
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.0"
    }
  }
  cloud {
    organization = "tf-organization"
    workspaces {
      name = "MyService-${env.envname}" #<== not allowed to use variables
    }
  }
}
Update
I finally managed to get this up and running with the help of the comments. Here are my findings:
TF_WORKSPACE needs to be defined upfront, e.g. TF_WORKSPACE=service-dev.
I didn't get tags to work the way I wanted when running in automation. If I define a tag in cloud.workspaces.tags as 'service', there is no way to set a second tag like 'dev' dynamically. Both tags ['service', 'dev'] are needed during init in order for TF to select the workspace service-dev automatically.
I ended up using the tfe provider in order to set up workspaces (with tags) automatically. In the end I still needed to set TF_WORKSPACE=service-dev.
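A rough sketch of that last step, using the hashicorp/tfe provider's tfe_workspace resource with the organization and names from above (details such as the API token configuration are omitted):
terraform {
  required_providers {
    tfe = {
      source = "hashicorp/tfe"
    }
  }
}

# Creates the workspace with both tags so that a later init can find it
resource "tfe_workspace" "service_dev" {
  name         = "service-dev"
  organization = "tf-organization"
  tag_names    = ["service", "dev"]
}
The pipeline then exports TF_WORKSPACE=service-dev before terraform init so the correct workspace is selected non-interactively.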
It doesn't make sense to refer to terraform.workspace as part of the workspaces block inside a cloud block, because that block defines which remote workspaces Terraform will use and therefore dictates what final value terraform.workspace will have in the rest of your configuration.
To declare that your Terraform configuration belongs to more than one workspace in Terraform Cloud, you can assign each of those workspaces the tag "MyService" and then use the tags argument instead of the name argument:
cloud {
  organization = "tf-organization"
  workspaces {
    tags = ["MyService"]
  }
}
If you assign that tag to hypothetical MyService-dev and MyService-prod workspaces in Terraform Cloud and then initialize with the configuration above, Terraform will present those two workspaces for selection using the terraform workspace commands when working in this directory.
terraform.workspace will then appear as either MyService-dev or MyService-prod, depending on which one you have selected.
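For example, a sketch of how terraform.workspace could then drive environment-specific naming (the resource and naming scheme here are illustrative, not from the question):
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "main" {
  # Resolves to e.g. "rg-MyService-dev" or "rg-MyService-prod"
  name     = "rg-${terraform.workspace}"
  location = "westeurope"
}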
I am trying to migrate a project's CLI workspaces to Terraform Cloud. I am using Terraform version 0.14.8 and following the official guide here.
$ terraform0.14.8 workspace list
default
* development
production
staging
Currently, the project uses this S3 remote state backend configuration:
terraform {
  backend "s3" {
    profile              = "..."
    key                  = "..."
    workspace_key_prefix = "environments"
    region               = "us-east-1"
    bucket               = "terraform-state-bucketA"
    dynamodb_table       = "terraform-state-bucketA"
    encrypt              = true
  }
}
I changed the backend configuration to:
backend "remote" {
hostname = "app.terraform.io"
organization = "orgA"
workspaces {
prefix = "happyproject-"
}
}
and execute terraform0.14.8 init in order to begin the state migration process. Expected behaviour would be to create 3 workspaces in Terraform Cloud:
happyproject-development
happyproject-staging
happyproject-production
However, I get the following error:
$ terraform0.14.8 init
Initializing modules...
Initializing the backend...
Backend configuration changed!
Terraform has detected that the configuration specified for the backend
has changed. Terraform will now check for existing state in the backends.
Terraform detected that the backend type changed from "s3" to "remote".
Error: Error looking up workspace
Workspace read failed: invalid value for workspace
I also enabled TRACE level logs and just before it throws the error I can see this: 2021/03/23 10:08:03 [TRACE] backend/remote: looking up workspace for orgA/.
Notice the empty string after orgA/ and the omission of the prefix! I am guessing that TF tries to query Terraform Cloud for the default workspace, which is an empty string, and it fails to do so.
I have not been using the default workspace at all and it just appears when I am executing terraform0.14.8 init. The guide mentions:
Some backends, including the default local backend, allow a special default workspace that doesn't have a specific name. If you previously used a combination of named workspaces and the special default workspace, the prompt will next ask you to choose a new name for the default workspace, since Terraform Cloud doesn't support unnamed workspaces:
However, it never prompts me to choose a name for the default workspace. Any help would be much appreciated!
I had a similar issue, and what helped me was to create the empty workspace with the expected name in advance and then run terraform init.
I also copied the .tfstate file from the remote location to the root directory of the project before running init. Hope this helps you as well.
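As an illustration only, with the bucket and workspace prefix taken from the question but a placeholder object key, since the real key was redacted:
aws s3 cp s3://terraform-state-bucketA/environments/development/<key> ./terraform.tfstate
terraform0.14.8 init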
What I ended up doing was:
Created the empty workspaces in Terraform Cloud
For every CLI workspace, I pointed the backend to the respective TFC workspace and executed terraform init. That way, the Terraform state was automatically migrated from the S3 backend to TFC (see the sketch after this list)
Finally, after all CLI workspaces were migrated, I used the prefix argument of the workspaces block instead of the name argument to manage the different TFC workspaces
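A sketch of the intermediate backend block used while migrating a single workspace (here development; the organization and naming follow the question):
backend "remote" {
  hostname     = "app.terraform.io"
  organization = "orgA"
  workspaces {
    name = "happyproject-development"
  }
}
After repeating this for staging and production, the block is switched back to the prefix = "happyproject-" form shown in the question.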
I have created resources in different workspaces in Terraform, but I am not able to destroy resources from one specific workspace. Is there any way to destroy resources by specifying the workspace?
I have tried switching to that specific workspace while destroying resources, but it is still pointing to the other workspace's state file.
I am using the below commands:
terraform workspace new test
terraform apply -var-file terraform.test.tfvars
terraform destroy
You first have to select the workspace with the following command:
terraform workspace select <workspace_name>
Then you can destroy the resources in that workspace with:
terraform destroy -refresh=false
If you want to list the workspaces you have created, use:
terraform workspace list
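For example, with the workspace and variable file from the question (assuming the same variables used for apply are also needed for destroy):
terraform workspace select test
terraform destroy -var-file terraform.test.tfvars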