I am trying to retrieve the deployed manifest and metadata information using the Terraform
"helm_release" resource.
The metadata values come out fine; however, it fails when retrieving the manifest.
Sample code:
`output "show_release_md" {
value=helm_release.testdeploy.metadata
}` -- This works
output "show_release_manifest" { value=helm_release.testdeploy.manifest } - Manifest fails
Unsupported attribute. This object has no argument,nested block or exported attributes named
"manifest"
Any ideas?
You must first enable the manifest feature flag in the experiments block of the Helm provider. For reference, see the Helm provider experiments block documentation and the helm_release attribute reference.
e.g.
provider "helm" {
kubernetes {
config_path = "~/.kube/config"
}
experiments {
manifest = true
}
}
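With the experiment enabled, the manifest output from your question should then work; a minimal sketch reusing your release name:
output "show_release_manifest" {
  value = helm_release.testdeploy.manifest
}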
I want to create two different workspaces on Terraform Cloud: one for the DEV environment, the other for the PROD environment.
I am trying to create them using just a single configuration file. The infrastructure will be the same, just in two different Azure subscriptions with different credentials.
Here is the code I am trying:
terraform {
  required_version = ">= 1.1.0"

  required_providers {
    # https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.40.0"
    }
  }

  cloud {
    organization = "mycompany"
    workspaces {
      tags = ["dev", "prod"]
    }
  }
}
I have been reading the documentation. It seems that inside the cloud -> workspaces block I can only use either the name or the tags attribute, and at least one of them is required in my configuration.
Now in my Terraform Cloud account, I have two workspaces: one with the tag prod and one with the tag dev.
I set the environment variable:
$Env:TF_WORKSPACE="mycompany-infrastructure-dev"
And I try to initialize Terraform Cloud:
terraform init
But I get this error:
Error: Invalid workspace selection
Terraform failed to find workspace "mycompany-infrastructure-dev" with the tags specified in your configuration: [dev, prod]
How can I create one configuration that I can use with different environment/workspaces?
Thank you
First, I ran code similar to yours in my environment and received an error: it prompted me to use terraform login to generate a token for accessing the organization on Terraform Cloud.
The login was successful, and the browser generated an API token.
Token received and entered.
Logged into Terraform Cloud.
In Terraform Cloud -> Organizations, I created a new organization.
Script for creating different workspaces from a single configuration file:
cloud {
  organization = "mycompanyone"
  workspaces {
    tags = ["dev", "prod"]
  }
}
I took your script and made a few changes, roughly as seen below:
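A sketch of the adjusted configuration (the organization name mycompanyone is from my test setup; yours will differ):
terraform {
  required_version = ">= 1.1.0"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.40.0"
    }
  }

  cloud {
    organization = "mycompanyone"
    workspaces {
      tags = ["dev", "prod"]
    }
  }
}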
Terraform will prompt for a few confirmations while initializing.
Now run terraform init or terraform init -upgrade.
Terraform initialized successfully.
I created this simple example:
terraform {
  required_version = "~> 1.0"

  required_providers {
    tfe = {
      source  = "hashicorp/tfe"
      version = "~> 0.40"
    }
  }

  cloud {
    organization = "myorg"
    workspaces {
      name = "test-management"
    }
  }
}

resource "tfe_workspace" "test" {
  organization = "myorg"
  name         = "test"
}
When I run the init command, it creates the test-management workspace as expected. The problem is that when I run the apply I get this error:
│ Error: Error creating workspace test for organization myorg: resource not found
│
│ with tfe_workspace.test,
│ on main.tf line 17, in resource "tfe_workspace" "test":
│ 17: resource "tfe_workspace" "test" {
Is it not possible to have Terraform Cloud manage its own resources?
You must define the API token. At first, I thought Terraform Cloud would handle it automatically, but it doesn't.
You can set the token argument in the provider "tfe" {} block or use the TFE_TOKEN environment variable. I recommend the environment variable, marked as sensitive, so you don't leak the API token.
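A minimal sketch of the provider-argument approach, assuming a sensitive input variable (the variable name tfe_token is just an example):
variable "tfe_token" {
  type      = string
  sensitive = true
}

provider "tfe" {
  # API token for Terraform Cloud; prefer supplying it via TFE_TOKEN instead of hard-coding
  token = var.tfe_token
}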
(I'm assuming that the problem is credentials-related due to the other existing answer.)
The hashicorp/tfe provider is (like all providers) a separate plugin program from Terraform itself, but it supports mostly the same methods for searching for credentials that Terraform CLI does.
If you use one of the following options then the credentials will work across both Terraform CLI and the hashicorp/tfe provider at the same time:
Run terraform login to generate a credentials file for the host app.terraform.io (Terraform Cloud's hostname) in your home directory.
Set the environment variable TF_TOKEN_app_terraform_io to an existing Terraform Cloud API token. Terraform CLI and hashicorp/tfe both treat this environment variable as equivalent to a credentials configuration like what terraform login would create.
(You only need to do one of these things; they both achieve a similar result.)
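For reference, the credentials that terraform login stores (or that you can place by hand in the CLI configuration file, e.g. ~/.terraformrc) take roughly this form; the token value here is a placeholder:
credentials "app.terraform.io" {
  # Terraform Cloud API token; generate one in the Terraform Cloud UI
  token = "xxxxxx.atlasv1.zzzzzzzzzzzzz"
}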
My GitLab CI pipeline's Terraform configuration requires a couple of providers to be declared in the required_providers block. These are "hashicorp/azuread" and "hashicorp/vault", so in my provider.tf file I have given the declaration below:
terraform {
  required_providers {
    azuread = {
      source  = "hashicorp/azuread"
      version = "~> 2.0.0"
    }
    vault = {
      source  = "hashicorp/vault"
      version = "~> 3.0.0"
    }
  }
}
When my GitLab pipeline runs the terraform plan stage however, it throws the following error:
Error: Invalid provider configuration
Provider "registry.terraform.io/hashicorp/vault" requires explicit configuraton.
Add a provider block to the root module and configure the providers required
arguments as described in the provider documentation.
I realise my required provider block for hashicorp/vault is incomplete/not properly configured but despite all my efforts to find an example of how it should be configured, I have simply run into a brick wall.
Any help with a very basic example would be greatly appreciated.
It depends on the version of Terraform you are using. However, on each provider's registry page there is a Use Provider button (in the top right corner) which explains how to add the required blocks of code to your files.
Each provider has additional configuration parameters; some are optional and some are required.
So based on the error, I think you are missing the second part of the configuration:
provider "vault" {
# Configuration options
}
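For example, a minimal configuration pointing at a Vault server might look roughly like this (the address and the vault_token variable are placeholders for your own setup):
provider "vault" {
  # Address of your Vault server (placeholder)
  address = "https://vault.example.com:8200"

  # Authentication token; can also be supplied via the VAULT_TOKEN environment variable
  token = var.vault_token
}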
There is also an explanation of how to upgrade to version 3.0 of the provider. You might also want to take a look at the HashiCorp Learn examples and the GitHub repo with example code.
I have a diagnostic setting configured on my master db, as shown below in my main.tf:
resource "azurerm_monitor_diagnostic_setting" "main" {
name = "Diagnostic Settings - Master"
target_resource_id = "${azurerm_mssql_server.main.id}/databases/master"
log_analytics_workspace_id = azurerm_log_analytics_workspace.main.id
log {
category = "SQLSecurityAuditEvents"
enabled = true
retention_policy {
enabled = false
}
}
metric {
category = "AllMetrics"
retention_policy {
enabled = false
}
}
lifecycle {
ignore_changes = [log, metric]
}
}
If I don't delete it in the resource group before I run Terraform, I get the error:
Diagnostic Settings - Master" already exists - to be managed via
Terraform this resource needs to be imported into the State
I know that if I delete the SQL Server the diagnostic setting remains - but I don't know why that is a problem with Terraform. I have also noticed that it is in my tfplan.
What could be the problem?
If I don't delete it in the resource group before I run Terraform, I get the error:
Diagnostic Settings - Master" already exists - to be managed via Terraform this resource needs to be imported into the State
I know that if I delete the SQL Server the diagnostic setting remains but I don't know why that is a problem with Terraform.
If you have created the resource in Azure in a different way (i.e. Portal/Templates/CLI/PowerShell), Terraform is not aware that the resource already exists in Azure. So, during terraform plan, it shows you a plan of what will be created from what you have written in main.tf. But when you run terraform apply, the azurerm provider checks the resource names against the existing resources of the same resource providers and gives an error that the resource already exists and needs to be imported to be managed by Terraform.
Also, if you have created everything from Terraform, then running terraform destroy deletes all the resources present in main.tf.
Well, it's in the .tfplan and also it's in main.tf - so it's imported, right?
Mentioning the resource and its details in main.tf and .tfplan doesn't mean that you have imported the resource or that Terraform is aware of it. Terraform is only aware of the resources that are stored in the Terraform state file, i.e. .tfstate.
So, to overcome the error without deleting the resource from the Portal, you will have to keep the resource in main.tf as you have already done and then use the terraform import command to import the Azure resource into the Terraform state file, like below:
terraform import azurerm_monitor_diagnostic_setting.example "{resourceID}|{DiagnosticsSettingsName}"
So, for you it will be like:
terraform import azurerm_monitor_diagnostic_setting.main "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroupName>/providers/Microsoft.Sql/servers/<SQLServerName>/databases/master|Diagnostic Settings - Master"
After the import is done, any changes you make to that resource from Terraform will be reflected in the Portal as well, and you will also be able to destroy the resource from Terraform.
I'm building a CI/CD pipeline using GitHub Actions and Terraform. I have a main.tf file like the one below, which I'm calling from a GitHub Action for multiple environments. I'm using https://github.com/hashicorp/setup-terraform to interact with Terraform in GitHub Actions. I have a MyService component and I'm deploying it to the DEV, UAT and PROD environments. I would like to reuse main.tf for all of the environments and dynamically set the workspace name like so: MyService-DEV, MyService-UAT, MyService-PROD. Usage of variables is not allowed in the terraform/cloud block. I'm using Terraform Cloud to store state.
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.0"
    }
  }

  cloud {
    organization = "tf-organization"
    workspaces {
      name = "MyService-${env.envname}" #<== not allowed to use variables
    }
  }
}
Update
I finally managed to get this up and running with the help of the comments. Here are my findings:
TF_WORKSPACE needs to be defined up front, e.g. service-dev.
I didn't get tags to work the way I wanted when running in automation. If I define a tag in cloud.workspaces.tags as 'service', there is no way to set a second tag like 'dev' dynamically. Both tags ['service', 'dev'] are needed during init in order for Terraform to select the workspace service-dev automatically.
I ended up using the tfe provider to set up the workspaces (with tags) automatically, as sketched below. In the end I still needed to set TF_WORKSPACE=service-dev.
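A rough sketch of that tfe provider setup (the workspace and tag names here are illustrative):
resource "tfe_workspace" "service_dev" {
  organization = "tf-organization"
  name         = "service-dev"
  tag_names    = ["service", "dev"]
}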
It doesn't make sense to refer to terraform.workspace as part of the workspaces block inside a cloud block, because that block defines which remote workspaces Terraform will use and therefore dictates what final value terraform.workspace will have in the rest of your configuration.
To declare that your Terraform configuration belongs to more than one workspace in Terraform Cloud, you can assign each of those workspaces the tag "MyService" and then use the tags argument instead of the name argument:
cloud {
  organization = "tf-organization"
  workspaces {
    tags = ["MyService"]
  }
}
If you assign that tag to hypothetical MyService-dev and MyService-prod workspaces in Terraform Cloud and then initialize with the configuration above, Terraform will present those two workspaces for selection using the terraform workspace commands when working in this directory.
terraform.workspace will then appear as either MyService-dev or MyService-prod, depending on which one you have selected.
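For example, elsewhere in the configuration you could then derive an environment name from the selected workspace; a small sketch using the workspace names above:
locals {
  # "MyService-dev" -> "dev", "MyService-prod" -> "prod"
  environment = trimprefix(terraform.workspace, "MyService-")
}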