How can I disable public access to an Azure storage account but keep it accessible from Azure Cloud Shell?
What I have, and what works:
An Azure storage account that contains "terraform.tfstate", with public access enabled
A main.tf file in my Azure Cloud Shell with a "backend" config for the remote state file
terraform {
  required_version = ">= 1.2.4"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.98.0"
    }
  }

  # Store the state in a storage account.
  # Benefit: the team can share the state, and it is not lost if the local shell is destroyed.
  backend "azurerm" {
    resource_group_name  = "RG-Telco-tf-statefiles"
    storage_account_name = "telcostatefiles"
    container_name       = "tf-statefile-app-1"
    key                  = "terraform.tfstate"
  }
}
This works perfectly.
But if I restrict public access on the storage account, my Azure Cloud Shell no longer has permission to reach the state file.
How can I make it work, and what are the security best practices in this case?
I think this is what you need: in the storage account's networking settings, restrict public network access to selected virtual networks and IP addresses.
After you set this, you can add a network restriction rule that allows the Cloud Shell virtual network.
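A minimal sketch of that rule using the standalone network-rules resource, assuming a VNet-isolated Cloud Shell whose subnet is managed in the same configuration (the subnet reference is hypothetical):
resource "azurerm_storage_account_network_rules" "state" {
  resource_group_name  = "RG-Telco-tf-statefiles"
  storage_account_name = "telcostatefiles"

  default_action = "Deny"
  bypass         = ["AzureServices"]

  # Hypothetical subnet used by the VNet-isolated Cloud Shell
  virtual_network_subnet_ids = [azurerm_subnet.cloudshell.id]
}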
Some other best practices:
The storage account that stores the state file should be in a separate resource group and have a delete lock on it (see the sketch after this list).
One SAS token per user, renewed every six months, scoped at the folder level; one container per project and per environment.
Storage with redundancy in a paired region, so the state can still be read if the primary region has issues.
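The delete lock can be expressed in Terraform roughly like this (the resource group reference is hypothetical):
resource "azurerm_management_lock" "tfstate" {
  name       = "tfstate-delete-lock"
  scope      = azurerm_resource_group.state.id
  lock_level = "CanNotDelete"
  notes      = "Protects the Terraform state storage from accidental deletion"
}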
We use Terraform to deploy to our Microsoft Azure tenants.
One of our environments connects to a storage account for its state file. For that storage account we need to rotate the keys, but I don't understand where I am meant to set the new key. I have been looking at the documentation and it sounds like the backend setting does this, but I don't understand how it is done exactly.
The code we have looks like:
terraform {
  backend "azurerm" {
    resource_group_name  = "ourresourcegroup"
    storage_account_name = "ourstorageaccount"
    container_name       = "tstate"
    key                  = "terraform.tfstate"
  }
}
Can someone please help me understand where I set the storage account keys?
I have a Terraform script which creates backend address pools and load balancer rules for a load balancer in a resource group. These tasks are included in an Azure pipeline. The first time I run the pipeline, everything is created properly. If I run the pipeline a second time, it does not update the existing resources: it keeps the backend address pools and load balancer rules created by the previous release and adds extra ones for this release, which causes duplicates. Any suggestions on this, please?
resource "azurerm_lb_backend_address_pool" "example" {
resource_group_name = azurerm_resource_group.example.name
loadbalancer_id = azurerm_lb.example.id
name = "BackEndAddressPool"
}
resource "azurerm_lb_rule" "example" {
resource_group_name = azurerm_resource_group.example.name
loadbalancer_id = azurerm_lb.example.id
name = "LBRule"
protocol = "All"
frontend_port = 0
backend_port = 0
frontend_ip_configuration_name = "PublicIPAddress"
enable_floating_ip = true
backend_address_pool_id = azurerm_lb_backend_address_pool.example
}
This is likely happening because the Terraform state file is being lost between pipeline runs.
By default, Terraform stores state locally in a file named terraform.tfstate. When working with Terraform in a team, use of a local file makes Terraform usage complicated because each user must make sure they always have the latest state data before running Terraform and make sure that nobody else runs Terraform at the same time.
With remote state, Terraform writes the state data to a remote data store, which can then be shared between all members of a team. Terraform supports storing state in Terraform Cloud, HashiCorp Consul, Amazon S3, Alibaba Cloud OSS, and more.
Remote state is a feature of backends. Configuring and using remote backends is easy and you can get started with remote state quickly. If you then want to migrate back to using local state, backends make that easy as well.
You will want to configure Remote State storage to keep the state. Here is an example using Azure Blob Storage:
terraform {
  backend "azurerm" {
    resource_group_name  = "StorageAccount-ResourceGroup"
    storage_account_name = "abcd1234"
    container_name       = "tfstate"
    key                  = "prod.terraform.tfstate"
  }
}
This stores the state as a blob with the given key within the blob container in that storage account. The backend also supports state locking and consistency checking via native capabilities of Azure Blob Storage.
This is more completely described in the azurerm Terraform backend docs.
Microsoft also provides a Tutorial: Store Terraform state in Azure Storage, which goes through the setup step by step.
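With the backend configured, each pipeline run should initialize against the shared state before planning and applying, so existing resources are recognized instead of recreated. Roughly (the exact steps depend on your pipeline setup):
terraform init
terraform plan -out=tfplan
terraform apply tfplan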
How can I reload the Terraform provider at runtime to use a different AWS profile?
Create a new user
resource "aws_iam_user" "user_lake_admin" {
name = var.lake_admin_user_name
path = "/"
tags = {
tag-key = "data-test"
}
}
provider "aws" {
access_key = aws_iam_access_key.user_lake_admin_AK_SK.id
secret_key = aws_iam_access_key.user_lake_admin_AK_SK.secret
region = "us-west-2"
alias = "lake-admin-profile"
}
This lake_admin user is created in the same file.
I am trying to use:
provider "aws" {
access_key = aws_iam_access_key.user_lake_admin_AK_SK.id
secret_key = aws_iam_access_key.user_lake_admin_AK_SK.secret
region = "us-west-2"
alias = "lake-admin-profile"
}
resource "aws_glue_catalog_database" "myDB" {
name = "my-db"
provider = aws.lake-admin-profile
}
As far as I know, Terraform providers are configured first, before any resources in the Terraform files are created.
But is there any way to reload a provider's configuration in the middle of a Terraform run?
You can't do this directly.
You can apply the creation of the user in one root module and state, then use its credentials in a provider for the second.
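A rough sketch of that two-module approach, with hypothetical variable names (in practice the secret key would come from a secret store rather than a plain variable):
# Hypothetical second root module that consumes the credentials created by the first
variable "lake_admin_access_key" {}

variable "lake_admin_secret_key" {
  sensitive = true
}

provider "aws" {
  access_key = var.lake_admin_access_key
  secret_key = var.lake_admin_secret_key
  region     = "us-west-2"
  alias      = "lake-admin-profile"
}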
For the purposes of deploying infrastructure, you are likely better off with IAM Roles and assume role providers to handle this kind of situation.
Generally, you don't need to create infrastructure with a specific user; there's rarely an advantage to doing that. I can't think of a case where the principal that creates infrastructure gains any implied special access to the created infrastructure.
You can use a deployment IAM role or IAM user to deploy everything in the account, and then assign resource-based and IAM policies in the deployment to handle the restrictions.
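A minimal sketch of the assume-role approach; the role ARN is hypothetical, and the role itself would be created and trusted separately:
provider "aws" {
  region = "us-west-2"
  alias  = "deployer"

  assume_role {
    # Hypothetical deployment role with the permissions needed to create the Glue resources
    role_arn     = "arn:aws:iam::123456789012:role/lake-admin-deploy"
    session_name = "terraform"
  }
}

resource "aws_glue_catalog_database" "myDB" {
  name     = "my-db"
  provider = aws.deployer
}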
I'm building Terraform scripts to orchestrate Azure deployments. I use Azure Blob Storage to store a tfstate file, and this file is shared by several IaC pipelines.
If, for instance, I create an Azure resource group with Terraform and then, once that is done, try to create a new custom role, terraform plan will mark the resource group for destruction.
This is the script for the role creation:
terraform {
  backend "azurerm" {
    storage_account_name = "saiac"
    container_name       = "tfstate"
    key                  = "dev.terraform.tfstate"
    resource_group_name  = "rg-devops"
  }
}

data "azurerm_subscription" "primary" {
}

resource "azurerm_role_definition" "roles" {
  count = length(var.roles)
  name  = "${var.role_prefix}${var.roles[count.index]["suffix_name"]}${var.role_suffix}"
  scope = data.azurerm_subscription.primary.id

  permissions {
    actions     = split(",", var.roles[count.index]["actions"])
    not_actions = split(",", var.roles[count.index]["not_actions"])
  }

  assignable_scopes = [data.azurerm_subscription.primary.id]
}
And this is the script for the resource group creation:
terraform {
  backend "azurerm" {
    storage_account_name = "saiac"
    container_name       = "tfstate"
    key                  = "dev.terraform.tfstate"
    resource_group_name  = "rg-devops"
  }
}

resource "azurerm_resource_group" "rg" {
  count    = length(var.rg_purposes)
  name     = "${var.rg_prefix}-${var.rg_postfix}-${var.rg_purposes[count.index]}"
  location = var.rg_location
  tags     = var.rg_tags
}
If I remove the backend block, everything works as expected. Does that mean I need the backend block at all?
Terraform uses the .tfstate file to compare your code with the existing cloud infrastructure; it is the backbone of Terraform.
If your code and the existing infrastructure differ, Terraform will destroy the existing resources and apply your code's changes.
To overcome this, Terraform provides an import facility: you can import the existing resource, and Terraform will update its .tfstate file.
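For example, importing an existing resource group into state looks roughly like this (the resource address, subscription ID, and resource group name are placeholders):
terraform import azurerm_resource_group.rg /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-existing-rg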
This .tfstate file must be specified in your backend configuration (for example, a backend.tf file); the best practice is to store the .tfstate file in cloud storage, not in a local directory.
When you run the terraform init command, it will check for the .tfstate file.
Below is a sample backend.tf file (AWS S3 is used):
backend "s3" {
bucket = "backends.terraform.file"
key = "my-terraform.tfstate_key"
region = "my-region-1"
encrypt = "false"
acl = "bucket-owner-full-control"
}
}
A Terraform backend is not required to run Terraform. If you do not use one, however, no one else will be able to pull your code and run your Terraform: the state will ONLY be stored locally (terraform.tfstate in your working directory). This means that if you lose your local files, you're in trouble. It is recommended to use a backend that also supports state locking, which azurerm does. With a backend in place, the state gets pulled on terraform init after pulling the repo.
I have worked with Terraform before, where Terraform can place the tfstate files in S3. Does Terraform also support Azure Blob Storage as a backend? What would be the commands to set the backend to Azure Blob Storage?
As of Terraform 0.7 (not currently released, but you can compile from source), support for Azure Blob Storage has been added.
The question asks for some commands, so I'm adding a little more detail in case anyone needs it. I'm using Terraform v0.12.24 and azurerm provider v2.6.0. You need two things:
Create a storage account (general purpose v2) and a container for storing your states (see the sketch below).
Configure your environment and your main.tf.
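For the first point, the storage account and container are usually created up front (portal, CLI, or a separate Terraform run); a minimal sketch with hypothetical names:
resource "azurerm_resource_group" "state" {
  name     = "rg-tfstate"
  location = "westeurope"
}

resource "azurerm_storage_account" "state" {
  name                     = "abcd1234"
  resource_group_name      = azurerm_resource_group.state.name
  location                 = azurerm_resource_group.state.location
  account_kind             = "StorageV2"
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_storage_container" "state" {
  name                  = "tfstatecontainer"
  storage_account_name  = azurerm_storage_account.state.name
  container_access_type = "private"
}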
As for the second point, your terraform block in main.tf should contain an "azurerm" backend:
terraform {
  required_version = "=0.12.24"

  backend "azurerm" {
    storage_account_name = "abcd1234"
    container_name       = "tfstatecontainer"
    key                  = "example.prod.terraform.tfstate"
  }
}

provider "azurerm" {
  version         = "=2.6.0"
  subscription_id = var.subscription_id

  features {}
}
Before calling plan or apply, set the ARM_ACCESS_KEY environment variable with a bash export:
export ARM_ACCESS_KEY=<storage access key>
Finally, run the init command:
terraform init
Now, if you run terraform plan you will see the tfstate created in the container. Azure Blob Storage has a locking feature built in, in case anyone tries to update the state file at the same time.