How to reload the terraform provider at runtime to use a different AWS profile

How can I reload the terraform provider at runtime to use a different AWS profile?
Create a new user:
resource "aws_iam_user" "user_lake_admin" {
name = var.lake_admin_user_name
path = "/"
tags = {
tag-key = "data-test"
}
}
provider "aws" {
access_key = aws_iam_access_key.user_lake_admin_AK_SK.id
secret_key = aws_iam_access_key.user_lake_admin_AK_SK.secret
region = "us-west-2"
alias = "lake-admin-profile"
}
This lake_admin user is created in the same file.
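(The aws_iam_access_key referenced above isn't shown in the question; presumably it is created along these lines:)

resource "aws_iam_access_key" "user_lake_admin_AK_SK" {
  # Hypothetical reconstruction: the question references this resource but omits it.
  user = aws_iam_user.user_lake_admin.name
}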
I'm trying to use:
provider "aws" {
access_key = aws_iam_access_key.user_lake_admin_AK_SK.id
secret_key = aws_iam_access_key.user_lake_admin_AK_SK.secret
region = "us-west-2"
alias = "lake-admin-profile"
}
resource "aws_glue_catalog_database" "myDB" {
name = "my-db"
provider = aws.lake-admin-profile
}
As far as I know, terraform resolves provider configurations first, across all terraform files.
But is there any way to reload a provider's configuration in the middle of a terraform run?

You can't do this directly.
You can apply the creation of the user in one root module and state, and then use its credentials in a provider for a second root module.
For the purposes of deploying infrastructure, you are likely better off with IAM roles and assume-role providers to handle this kind of situation.
Generally, you don't need to create infrastructure as a specific user; there's rarely an advantage to doing so. The principal that creates infrastructure doesn't gain any implied special access to the infrastructure it created.
You can use a deployment IAM role or IAM user to deploy everything in the account, and then apply resource-based and IAM policies to implement the restrictions in the deployment.
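A minimal sketch of the assume-role approach (the role name and ARN here are hypothetical):

provider "aws" {
  region = "us-west-2"

  assume_role {
    # Hypothetical deployment role created once, with the permissions needed to deploy.
    role_arn     = "arn:aws:iam::123456789012:role/deployment-role"
    session_name = "terraform"
  }
}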

Related

Why does terraform solve optional attribute value drift in some cases and not others?

Take two cases.
The AWS terraform provider
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.14.0"
    }
  }
}

provider "aws" {
  region     = "us-east-1"
  access_key = "<insert here>"
  secret_key = "<insert here>"
}

resource "aws_instance" "my_ec2" {
  ami           = "ami-0022f774911c1d690"
  instance_type = "t2.micro"
}
After running terraform apply on the above, if one manually creates a custom security group and assigns it to the EC2 instance created above, then a subsequent terraform apply will update the terraform.tfstate file to record the custom security group in its security groups section. However, it will NOT put back the previous "default" security group.
This is what I would expect as well, since my tf code did not explicitly anchor down a certain security group.
The github terraform provider
terraform {
  required_providers {
    github = {
      source  = "integrations/github"
      version = "~> 4.0"
    }
  }
}

# Configure the GitHub Provider
provider "github" {
  token = var.github-token
}

resource "github_repository" "example" {
  name = "tfstate-demo-1"
  //description = "My awesome codebase" <------ NOTE THAT WE DO NOT SPECIFY A DESCRIPTION
  auto_init  = true
  visibility = "public"
}
In this case, the repo is created without a description. Thereafter, if one updates the description via github.com and re-runs terraform apply on the above code, terraform will
a) record the new description in its tf state file during the refresh stage, and
b) then remove the description from the terraform.tfstate file as well as from the repo on github.com.
A message on the terraform command line does allude to this confusing behavior:
Unless you have made equivalent changes to your configuration, or ignored the relevant attributes using ignore_changes, the following plan may include actions to undo or respond to these changes.
May include? Why the ambiguity?
And why does tf enforce the blank description in this case when I have not specified anything about the optional description field in my tf code? Why does this behavior vary across providers? Shouldn't optional arguments be left alone, unenforced, the way the AWS provider does not undo a custom security group attached to an EC2 instance outside of terraform? What is the design thinking behind this?
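For reference, the ignore_changes escape hatch mentioned in that warning would look roughly like this for the repo above (a sketch that suppresses the drift, not an explanation of the differing behavior):

resource "github_repository" "example" {
  name       = "tfstate-demo-1"
  auto_init  = true
  visibility = "public"

  lifecycle {
    # Tell Terraform to leave description alone, even if it changes outside Terraform.
    ignore_changes = [description]
  }
}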

Artifact Registry permissions issue with terraform

I am trying to create a service account using terraform and I also want to apply multiple permissions to that account using terraform.
# create artifact registry
resource "google_artifact_registry_repository" "yacht-away" {
  provider      = google-beta
  location      = "asia-south1"
  repository_id = "yacht-away"
  description   = "yacht-away docker repository with iam"
  format        = "DOCKER"
}

# create service account
resource "google_service_account" "yacht-away-service-acc" {
  provider     = google-beta
  account_id   = "yacht-away-service-ac"
  display_name = "Yacht Away Service Account"
}
However, I constantly see the error below. I have verified the value of location everywhere; it is the same as mentioned above, so that is probably not the issue. The service account used by terraform has project editor access, and I have also tried after granting it owner access.
Error: Error when reading or editing Resource "artifactregistry repository \"projects/dhb-222614/locations/asia-south1/repositories/yacht-away\"" with IAM Member: Role "roles/artifactregistry.reader" Member "serviceAccount:yacht-away-service-ac@dhb-222614.iam.gserviceaccount.com": Error retrieving IAM policy for artifactregistry repository "projects/dhb-222614/locations/asia-south1/repositories/yacht-away": googleapi: Error 403: The caller does not have permission
So I don't understand where I am going wrong.
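The error refers to an IAM member on the repository, which isn't shown in the question; presumably the configuration also contains something like this (a hypothetical reconstruction from the error text):

resource "google_artifact_registry_repository_iam_member" "reader" {
  # Reconstructed from the error message; not shown in the question.
  provider   = google-beta
  location   = google_artifact_registry_repository.yacht-away.location
  repository = google_artifact_registry_repository.yacht-away.name
  role       = "roles/artifactregistry.reader"
  member     = "serviceAccount:${google_service_account.yacht-away-service-acc.email}"
}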

How to share Terraform variables across workspaces/modules?

Terraform Cloud Workspaces allow me to define variables, but I'm unable to find a way to share variables across more than one workspace.
In my example I have, let's say, two workspaces:
Database
Application
In both cases I'll be using the same AzureRM credentials for connectivity. The following are common values used by the workspaces to connect to my Azure subscription:
provider "azurerm" {
subscription_id = "00000000-0000-0000-0000-000000000000"
client_id = "00000000-0000-0000-0000-000000000000"
client_secret = "00000000000000000000000000000000"
tenant_id = "00000000-0000-0000-0000-000000000000"
}
It wouldn't make sense to duplicate these values (in my case I'll probably have 10 workspaces).
Is there a way to do this?
Or the correct approach is to define "database" and "application" as a Module, and then use Workspaces (DEV, QA, PROD) to orchestrate them?
In Terraform Cloud, the Workspace object is currently the least granular location where you can specify variable values directly. There is no built in mechanism to share variable values between workspaces.
However, one way to approach this would be to manage Terraform Cloud with Terraform itself. The tfe provider (named after Terraform Enterprise for historical reasons, since it was built before Terraform Cloud launched) will allow Terraform to manage Terraform Cloud workspaces and their associated variables.
variable "workspaces" {
type = set(string)
}
variable "common_environment_variables" {
type = map(string)
}
provider "tfe" {
hostname = "app.terraform.io" # Terraform Cloud
}
resource "tfe_workspace" "example" {
for_each = var.workspaces
organization = "your-organization-name"
name = each.key
}
resource "tfe_variable" "example" {
# We'll need one tfe_variable instance for each
# combination of workspace and environment variable,
# so this one has a more complicated for_each expression.
for_each = {
for pair in setproduct(var.workspaces, keys(var.common_environment_variables)) : "${pair[0]}/${pair[1]}" => {
workspace_name = pair[0]
workspace_id = tfe_workspace.example[pair[0]].id
name = pair[1]
value = var.common_environment_variables[pair[1]]
}
}
workspace_id = each.value.workspace_id
category = "env"
key = each.value.name
value = each.value.value
sensitive = true
}
With the above configuration, you can set var.workspaces to contain the names of the workspaces you want Terraform to manage and var.common_environment_variables to the environment variables you want to set for all of them.
Note that for setting credentials on a provider the recommended approach is to set them in environment variables rather than Terraform variables, because that then makes the Terraform configuration itself agnostic to how those credentials are obtained. You could potentially apply the same Terraform configuration locally (outside of Terraform Cloud) using the integration with Azure CLI auth, while the Terraform Cloud execution environment would often use a service principal.
Therefore to provide the credentials in the Terraform Cloud environment you'd put the following environment variables in var.common_environment_variables:
ARM_CLIENT_ID
ARM_TENANT_ID
ARM_SUBSCRIPTION_ID
ARM_CLIENT_SECRET
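For example, the management workspace's variables might look like this (placeholder values; the workspace names come from the question):

workspaces = ["database", "application"]

common_environment_variables = {
  ARM_CLIENT_ID       = "00000000-0000-0000-0000-000000000000"
  ARM_TENANT_ID       = "00000000-0000-0000-0000-000000000000"
  ARM_SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
  ARM_CLIENT_SECRET   = "00000000000000000000000000000000"
}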
If you use Terraform Cloud itself to run operations on this workspace managing Terraform Cloud (naturally, you'd need to set this one up manually to bootstrap, rather than having it self-manage) then you can configure var.common_environment_variables as a sensitive variable on that workspace.
If you instead set the credentials via Terraform variables passed into the provider "azurerm" block (as in your example), then you force any person or system running the configuration to populate those variables directly, forcing them to use a service principal instead of one of the other mechanisms and preventing Terraform from automatically picking up credentials set using az login. The Terraform configuration should generally only describe what Terraform is managing, not settings related to who is running Terraform or where Terraform is being run.
Note though that the state for the Terraform Cloud self-management workspace will include a copy of those credentials, as is normal for objects Terraform is managing, so the permissions on this workspace should be set appropriately to restrict access to it.
You can now use variable sets to reuse variables across multiple workspaces.
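With the tfe provider, that can look roughly like this (a sketch; the set name and var.arm_client_id are illustrative, and it builds on the tfe_workspace.example resource above):

resource "tfe_variable_set" "azure_credentials" {
  name         = "Azure credentials"
  organization = "your-organization-name"
}

resource "tfe_variable" "arm_client_id" {
  key             = "ARM_CLIENT_ID"
  value           = var.arm_client_id # hypothetical input variable
  category        = "env"
  sensitive       = true
  variable_set_id = tfe_variable_set.azure_credentials.id
}

# Attach the set to every managed workspace.
resource "tfe_workspace_variable_set" "example" {
  for_each        = tfe_workspace.example
  workspace_id    = each.value.id
  variable_set_id = tfe_variable_set.azure_credentials.id
}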

Terraform profile field usage in AWS provider

I have a $HOME/.aws/credentials file like this:
[config1]
aws_access_key_id=accessKeyId1
aws_secret_access_key=secretAccesskey1
[config2]
aws_access_key_id=accessKeyId2
aws_secret_access_key=secretAccesskey2
So I was expecting that with this configuration, terraform would pick up the second set of credentials:
terraform {
  backend "s3" {
    bucket  = "myBucket"
    region  = "eu-central-1"
    key     = "path/to/terraform.tfstate"
    encrypt = true
  }
}

provider "aws" {
  profile = "config2"
  region  = "eu-central-1"
}
But when I try terraform init it says it hasn't found any valid credentials:
Initializing the backend...
Error: No valid credential sources found for AWS Provider.
Please see https://terraform.io/docs/providers/aws/index.html for more information on
providing credentials for the AWS Provider
As a workaround, I made config2 the default profile in my credentials file and removed the profile field from the provider block, and that works, but I really need to use something like the first approach. What am I missing here?
Unfortunately, you need to provide the IAM credential configuration to the backend configuration as well as to your AWS provider configuration.
The S3 backend configuration takes the same parameters here as the AWS provider, so you can specify the backend configuration like this:
terraform {
  backend "s3" {
    bucket  = "myBucket"
    region  = "eu-central-1"
    key     = "path/to/terraform.tfstate"
    encrypt = true
    profile = "config2"
  }
}

provider "aws" {
  profile = "config2"
  region  = "eu-central-1"
}
There are a few reasons this needs to be done separately. One is that you can use different IAM credentials, accounts and regions for the S3 bucket than for the resources you will be managing with the AWS provider. You might also want to use S3 as a backend even if you are creating resources in another cloud provider, or not using a cloud provider at all; Terraform can manage resources in a lot of places that don't have a way to store Terraform state. The main reason, though, is that backends are managed by the core Terraform binary rather than by the provider binaries, and backend initialisation happens before pretty much anything else.
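As a concrete illustration of that independence, the state bucket and the managed resources can use entirely different profiles (the profile names here are made up):

terraform {
  backend "s3" {
    bucket  = "myBucket"
    region  = "eu-central-1"
    key     = "path/to/terraform.tfstate"
    encrypt = true
    profile = "state-account" # hypothetical profile owning the state bucket
  }
}

provider "aws" {
  profile = "workload-account" # hypothetical profile owning the managed resources
  region  = "us-east-1"
}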

Restrict creation of resources to a particular AWS Provider Profile in Terraform

I am trying to implement logic that restricts creation of AWS resources to one particular AWS profile, so that no one can accidentally create AWS resources under a different AWS profile.
E.g., only if the AWS variables are set for the profile below should the AWS resources be created:
provider "aws" {
profile = "AWS_Horizontal_Dev"
region = "us-east-1"
}
If the user accidentally sets the AWS variables for a different profile, then the AWS resources should not be created.
What's the best way to achieve this logic?
You could add the allowed_account_ids argument here as well to restrict creation to an exact AWS account, assuming your AWS profiles map to AWS accounts:
provider "aws" {
profile = "AWS_Horizontal_Dev"
region = "us-east-1"
allowed_account_ids = ["${var.allowed_account_id}"]
}
Or you could use forbidden_account_ids to exclude the accounts not allowed:
provider "aws" {
profile = "AWS_Horizontal_Dev"
region = "us-east-1"
forbidden_account_ids = ["${var.excluded_account_id}"]
}
