I have the following provider configured in terraform:
provider "google" {
credentials = "${file("key.json")}"
project = "project-123456"
region = "${var.region}"
}
I was able to move the project name to a variable that I pass in when calling terraform plan and apply. But the credential key file doesn't seem to be configurable.
provider "google" {
credentials = "${var.key}"
project = "${var.project}"
region = "${var.region}"
}
terraform plan -var key='${file("key.json")}' -var project=project-123456
Throws this error:
provider.google: credentials are not valid JSON '${file("key.json")}': invalid character '$' looking for beginning of value
I also tried like this:
provider "google" {
credentials = "${file(${var.key})}"
project = "${var.project}"
region = "${var.region}"
}
terraform plan -var key=key.json -var project=project-123456
But it throws this error:
Error reading config for provider config google: parse error at 1:8: expected expression but found invalid sequence "$"
How can I configure the credentials file for a provider?
I figured it out! I just needed some extra quotes in my last attempt. Instead of:
credentials = "${file(${var.key})}"
it should be:
credentials = "${file("${var.key}")}"
terraform plan -var key=key.json -var project=project-123456
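For completeness, here is a minimal sketch of the matching variable declarations this assumes (the names key, project, and region are the ones used above; the descriptions are placeholders):
variable "key" {
  description = "Path to the GCP service account key file"
}

variable "project" {
  description = "GCP project ID"
}

variable "region" {
  description = "GCP region"
}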
I'm trying to get my Terraform (which manages our MongoDB Atlas infrastructure) to use dynamic secrets from Vault (running on localhost for now) when connecting to Atlas, but I can't seem to get it to work.
I can't find any examples of how to do this so I have put together a sample github repo showing the few things I've tried to do so far.
All the magic is contained in the three provider files. The standard method of connecting to Atlas using static keys works (even with those keys generated as temporary API keys through Vault); see provider1.tf.
The problem comes when I try to use the Atlas Vault secrets engine for Terraform together with the MongoDB Atlas provider. There are just no examples!
My question is: how do I use the Atlas Vault secrets engine to generate and use temporary keys when provisioning infrastructure with Terraform?
I've tried two different ways of wiring up the providers; see provider2.tf and provider3.tf, copied here:
provider2.tf
provider "mongodbatlas" {
public_key = vault_database_secrets_mount.db.mongodbatlas[0].public_key
private_key = vault_database_secrets_mount.db.mongodbatlas[0].private_key
}
provider "vault" {
address = "http://127.0.0.1:8200"
}
resource "vault_database_secrets_mount" "db" {
path = "db"
mongodbatlas {
name = "foo"
public_key = var.mongodbatlas_org_public_key
private_key = var.mongodbatlas_org_private_key
project_id = var.mongodbatlas_project_id
}
}
provider3.tf
provider "mongodbatlas" {
public_key = vault_database_secret_backend_connection.db2.mongodbatlas[0].public_key
private_key = vault_database_secret_backend_connection.db2.mongodbatlas[0].private_key
}
resource "vault_mount" "db1" {
path = "mongodbatlas01"
type = "database"
}
resource "vault_database_secret_backend_connection" "db2" {
backend = vault_mount.db1.path
name = "mongodbatlas02"
allowed_roles = ["dev", "prod"]
mongodbatlas {
public_key = var.mongodbatlas_org_public_key
private_key = var.mongodbatlas_org_private_key
project_id = var.mongodbatlas_project_id
}
}
Both methods give the same sort of error:
mongodbatlas_cluster.cluster-terraform01: Creating...
╷
│ Error: error creating MongoDB Cluster: POST https://cloud.mongodb.com/api/atlas/v1.0/groups/10000000000000000000001/clusters: 401 (request "") You are not authorized for this resource.
│
│ with mongodbatlas_cluster.cluster-terraform01,
│ on main.tf line 1, in resource "mongodbatlas_cluster" "cluster-terraform01":
│ 1: resource "mongodbatlas_cluster" "cluster-terraform01" {
Any pointers or examples would be greatly appreciated, many thanks!
Once you have set up and started Vault, enabled the mongodbatlas secrets engine, and added the config and roles to Vault, it's actually rather easy to connect Terraform to Atlas using dynamic ephemeral keys created by Vault.
Run these commands first to start and configure vault locally:
vault server -dev
export VAULT_ADDR='http://127.0.0.1:8200'
vault secrets enable mongodbatlas
# Write your master API keys into vault
vault write mongodbatlas/config public_key=org-api-public-key private_key=org-api-private-key
vault write mongodbatlas/roles/test project_id=100000000000000000000001 roles=GROUP_OWNER ttl=2h max_ttl=5h cidr_blocks=123.45.67.1/24
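To sanity-check the Vault side before wiring up Terraform, you can generate a set of ephemeral keys by hand; this is a standard Vault CLI read against the role created above:
# Generates a temporary Atlas API key pair from the "test" role
vault read mongodbatlas/creds/test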
Now add the vault provider and data source to your terraform config:
provider "vault" {
address = "http://127.0.0.1:8200"
}
data "vault_generic_secret" "mongodbatlas" {
path = "mongodbatlas/creds/test"
}
Finally, add the mongodbatlas provider, with the keys provided by the Vault data source:
provider "mongodbatlas" {
public_key = data.vault_generic_secret.mongodbatlas.data["public_key"]
private_key = data.vault_generic_secret.mongodbatlas.data["private_key"]
}
A full, working example is shown in this example GitHub repo.
I am doing a small POC with Terraform and I am unable to run terraform plan.
My code:
terraform {
  backend "azurerm" {
    storage_account_name = "appngqastorage"
    container_name       = "terraform"
    key                  = "qa.terraform.tfstate"
    access_key           = "my access key here"
  }

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 2.77"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "qa_resource_group" {
  location = "East US"
  name     = "namehere"
}
My execution:
terraform init = success
terraform validate = configuration is valid
terraform plan = throws exception
Error:
│ Error: Plugin error
│
│ with provider["registry.terraform.io/hashicorp/azurerm"],
│ on main.tf line 15, in provider "azurerm":
│ 15: provider"azurerm"{
│
│ The plugin returned an unexpected error from plugin.(*GRPCProvider).ConfigureProvider: rpc error: code = Internal desc = grpc: error while marshaling: string field contains invalid UTF-8
After digging a little deeper I was able to figure out what the issue was.
The project I am working on is used across multiple regions, so when testing I swap my region to verify the data shown in a specific region. This time, when running terraform apply, my Windows configuration was pointed at another region, not the US.
This thread helped me understand.
Upgrade the Azure CLI to the latest version using az upgrade
Log into your Azure account using az login
Re-run your Terraform commands! (See the sketch below.)
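Roughly, the sequence of commands named above (a sketch; terraform plan stands in for whichever Terraform command failed):
# Update the Azure CLI, refresh the login token, then retry Terraform
az upgrade
az login
terraform plan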
I'm trying to figure out Terraform Cloud. How would I access environment variables I've set in the workspace from within my files?
// main.tf
// Configure the Google Cloud provider
provider "google" {
credentials = "GOOGLE_CREDENTIALS"
project = "my-project"
region = "australia-southeast1"
}
I've set an environment variable GOOGLE_CREDENTIALS on the cloud workspace, with the value being my .json key formatted to work with TFC.
I'm just not sure how to access it within my main.tf file, as shown above.
Add the credentials to the Terraform Cloud environment variable TF_VAR_credentials and set it as sensitive. Then you can use the variable here:
provider "google" {
credentials = var.credentials
project = var.project
region = var.region
}
And declare your variables like this:
variable "credentials" {
description = "credentials"
}
variable "project" {
description = "project"
}
variable "region" {
description = "region"
}
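If your Terraform version supports it (0.14 or later), you can also mark the variable itself as sensitive so its value is redacted from plan output; a small optional addition:
variable "credentials" {
  description = "credentials"
  sensitive   = true # requires Terraform 0.14+
}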
Then, on the apply command, you can pass:
terraform apply \
  -var "region=${REGION_FROM_ENV}" \
  -var "project=${PROJECT_FROM_ENV}" \
  -var "credentials=${GOOGLE_CREDENTIALS}"
Here's a reference: https://www.terraform.io/docs/commands/apply.html#var-39-foo-bar-39-
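When running locally (outside Terraform Cloud), the same TF_VAR_ naming convention works as an alternative to -var flags; a sketch, assuming the key file lives at key.json next to the configuration:
# Terraform reads any environment variable named TF_VAR_<variable name>
export TF_VAR_credentials="$(cat key.json)"
export TF_VAR_project="my-project"
export TF_VAR_region="australia-southeast1"
terraform apply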
I'm working on an AWS multi-account setup with Terraform. I've got a master account that creates several sub-accounts, and in the sub-accounts I'm referencing the master's remote state to retrieve output values.
The terraform plan command is failing for this configuration in a test main.tf:
terraform {
  required_version = ">= 0.12.0"

  backend "s3" {
    bucket = "bucketname"
    key    = "statekey.tfstate"
    region = "us-east-1"
  }
}

provider "aws" {
  region  = "us-east-1"
  version = "~> 2.7"
}

data "aws_region" "current" {}

data "terraform_remote_state" "common" {
  backend = "s3"
  config {
    bucket = "anotherbucket"
    key    = "master.tfstate"
  }
}
With the following error:
➜ test terraform plan
Error: Unsupported block type
on main.tf line 20, in data "terraform_remote_state" "common":
20: config {
Blocks of type "config" are not expected here. Did you mean to define argument
"config"? If so, use the equals sign to assign it a value.
From what I can tell from the documentation, this should be working… what am I doing wrong?
➜ test terraform -v
Terraform v0.12.2
+ provider.aws v2.14.0
It seems the related documentation wasn't updated after the upgrade to 0.12.x. As the error suggests, add = after config:
data "terraform_remote_state" "common" {
backend = "s3"
config = {
bucket = "anotherbucket"
key = "master.tfstate"
}
}
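Note that in 0.12 the remote state's values are read through the outputs attribute; for example, assuming the master state exposes a hypothetical output named vpc_id:
# vpc_id is a placeholder output name defined in the master account's state
output "master_vpc_id" {
  value = data.terraform_remote_state.common.outputs.vpc_id
}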
If this fixes the problem, I'd recommend raising a PR to update the documentation so that others can avoid the same issue.
I am trying to run a Terraform deployment via a Shell script where within the Shell script I first dynamically collect the access key for my Azure storage account and assign it to a variable. I then want to use the variable in a -var assignment on the terraform command line. This method works great when configuring the backend for remote state but it is not working for doing a deployment. The other variables used in the template are being pulled from a terraform.tfvars file. Below is my Shell script and Terraform template:
Shell script:
#!/bin/bash
set -eo pipefail

subscription_name="Visual Studio Enterprise with MSDN"
tfstate_storage_resource_group="terraform-state-rg"
tfstate_storage_account="terraformtfstatesa"

az account set --subscription "$subscription_name"

tfstate_storage_access_key=$(
  az storage account keys list \
    --resource-group "$tfstate_storage_resource_group" \
    --account-name "$tfstate_storage_account" \
    --query '[0].value' -o tsv
)

echo $tfstate_storage_access_key

terraform apply \
  -var "access_key=$tfstate_storage_access_key"
Deployment template:
provider "azurerm" {
subscription_id = "${var.sub_id}"
}
data "terraform_remote_state" "rg" {
backend = "azurerm"
config {
storage_account_name = "terraformtfstatesa"
container_name = "terraform-state"
key = "rg.stage.project.terraform.tfstate"
access_key = "${var.access_key}"
}
}
resource "azurerm_storage_account" "my_table" {
name = "${var.storage_account}"
resource_group_name = "${data.terraform_remote_state.rg.rgname}"
location = "${var.region}"
account_tier = "Standard"
account_replication_type = "LRS"
}
I have tried defining the variable in my terraform.tfvars file:
storage_account = "appastagesa"
les_table_name = "appatable
region = "eastus"
sub_id = "abc12345-099c-1234-1234-998899889988"
access_key = ""
The access_key definition appears to get ignored.
I then tried not using a terraform.tfvars file, and created the variables.tf file below:
variable "storage_account" {
  description = "Name of the storage account to create"
  default     = "appastagesa"
}

variable "les_table_name" {
  description = "Name of the App table to create"
  default     = "appatable"
}

variable "region" {
  description = "The region where resources will be deployed (ex. eastus, eastus2, etc.)"
  default     = "eastus"
}

variable "sub_id" {
  description = "The ID of the subscription to deploy into"
  default     = "abc12345-099c-1234-1234-998899889988"
}

variable "access_key" {}
I then modified my deploy.sh script to use the line below to run my terraform deployment:
terraform apply \
  -var "access_key=$tfstate_storage_access_key" \
  -var-file="variables.tf"
This results in the following error being thrown:
invalid value "variables.tf" for flag -var-file: multiple map declarations not supported for variables
Usage: terraform apply [options] [DIR-OR-PLAN]
After playing with this for hours... I am almost embarrassed by what the problem was, but I am also frustrated with Terraform because of the time I wasted on this issue.
I had all of my variables defined in my variables.tf file, all but one with default values. For the one without a default, I was passing the value on the command line, and that is where the problem was. From the documentation I had read, I thought I had to tell Terraform which variables file to use with the -var-file option. It turns out you don't, and when I did, it threw the error above. All I had to do was pass the -var option for the variable with no default; Terraform automatically picks up the variables.tf file. Frustrating. I am in love with Terraform, but the one negative I would give it is that the documentation is lacking.
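In other words, the working invocation is simply the original command without -var-file; Terraform loads *.tf files (and terraform.tfvars) from the working directory automatically. A sketch:
# variables.tf is picked up automatically; only the variable with no default needs -var
terraform apply \
  -var "access_key=$tfstate_storage_access_key"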