Cannot access Azure backend storage using SSL - azure

I am using Azure Blob Storage as a state backend. Due to new security requirements, I now need to access the Azure storage accounts over SSL only. This, however, fails with the following error:
module.core_infra.data.terraform_remote_state.mccp_core_infra:
data.terraform_remote_state.mccp_core_infra: storage: service returned
error: StatusCode=403, ErrorCode=AuthenticationFailed,
ErrorMessage=Server failed to authenticate the request. Make sure the
value of Authorization header is formed correctly including the
signature.
Here’s an example configuration:
resource "azurerm_storage_account" "terraform_state_account" {
name = "${lower(replace(var.azure_tenant_name, "/\\W|_/", ""))}tfstate"
resource_group_name = "${azurerm_resource_group.main.name}"
location = "${var.azure_location}"
account_tier = "Standard"
account_replication_type = "LRS"
enable_https_traffic_only = true
network_rules {
ip_rules = ["masked/24"]
virtual_network_subnet_ids = ["${azurerm_subnet.mccp_vnet_subnet.id}"]
}
tags = {
environment = "${var.azure_tenant_name} terraform state account"
}
}
data "terraform_remote_state" "mccp_core_infra" {
backend = "azurerm"
config = {
storage_account_name = "${lower(replace(var.azure_tenant_name, "/\\W|_/", ""))}tfstate"
container_name = "mccp-core-infra-tf-state"
key = "terraform.tfstate"
access_key = "${var.azure_mccp_storage_account_key}"
}
}
I am using Terraform 0.11.11 with azurerm provider 1.33.0. This works just fine without the enable_https_traffic_only flag. What am I missing here?

The enable_https_traffic_only setting would not cause that error. It works fine with the enable_https_traffic_only flag on my side with Terraform v0.12.9 + provider.azurerm v1.35.0.
It looks like a credential issue. I can reproduce your issue when the access_key in the data source is invalid. Verify that you can access that storage account's blobs with that access key, and that you are referencing the correct storage account name that hosts the .tfstate.
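For example, a quick way to test the key outside Terraform is with the Azure CLI (assuming it is installed; the account name below is a placeholder following your naming pattern):
# List blobs in the state container using the same access key Terraform uses.
# A 403 / AuthenticationFailed here means the key or the account name is wrong.
az storage blob list \
  --account-name <tenantname>tfstate \
  --account-key "<value of var.azure_mccp_storage_account_key>" \
  --container-name mccp-core-infra-tf-state \
  --output table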
You could also try deleting the local .terraform folder and running init again, as mentioned in this post.

Related

Unable to create terraform backend - Variables not allowed

I'm trying to create a Terraform backend in my TF script. The problem is that I'm getting errors saying that variables are not allowed.
Here is my code:
# Configure the Azure provider
provider "azurerm" {
  version = "~> 2.0"
}

# Create an Azure resource group
resource "azurerm_resource_group" "example" {
  name     = "RG-TERRAFORM-BACKEND"
  location = "$var.location"
}

# Create an Azure storage account
resource "azurerm_storage_account" "example" {
  name                     = "$local.backendstoragename"
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
  tags                     = "$var.tags"
}

# Create an Azure storage container
resource "azurerm_storage_container" "example" {
  name                  = "example"
  resource_group_name   = azurerm_resource_group.example.name
  storage_account_name  = azurerm_storage_account.example.name
  container_access_type = "private"
}

# Create a Terraform backend configuration
resource "azurerm_terraform_backend_configuration" "example" {
  resource_group_name  = azurerm_resource_group.example.name
  storage_account_name = azurerm_storage_account.example.name
  container_name       = azurerm_storage_container.example.name
  key                  = "terraform.tfstate"
}

# Use the backend configuration to configure the Terraform backend
terraform {
  backend "azurerm" {
    resource_group_name  = azurerm_terraform_backend_configuration.example.resource_group_name
    storage_account_name = azurerm_terraform_backend_configuration.example.storage_account_name
    container_name       = azurerm_terraform_backend_configuration.example.container_name
    key                  = azurerm_terraform_backend_configuration.example.key
  }
}
What am I doing wrong? All of a sudden Terraform init is giving me the following errors:
Error: Variables not allowed
│
│ on main.tf line 65, in terraform:
│ 65: key = azurerm_terraform_backend_configuration.example.key
│
│ Variables may not be used here.
╵
I get the above error for ALL lines.
What am I doing wrong?
I tried to refactor the
azurerm_terraform_backend_configuration.example.container_name
as an interpolation - i.e. "$.." - but that didn't get accepted.
Has anything changed in Terraform? This wasn't the case a few years ago.
I have not found this resource azurerm_terraform_backend_configuration in any of the terraform-provider-azurerm documentation.
Check this URL for search results.
https://github.com/hashicorp/terraform-provider-azurerm/search?q=azurerm_terraform_backend_configuration
I am not aware of the resource azurerm_terraform_backend_configuration either, but as of now, Terraform does not support variables in the backend configuration.
Official documentation on Azurerm Backend
What you are trying here also creates a chicken-and-egg problem (even if I ignore "azurerm_terraform_backend_configuration"): initializing the Terraform code needs the remote backend, but the remote backend requires not just initialization but also a terraform apply of the resources it points at, which is not possible in a single pass.
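To illustrate the constraint (the values below are placeholders, not from the question): every argument in a backend block must be a literal known at terraform init time, and if you want to keep those values out of the code, Terraform's partial configuration passed via -backend-config is the usual workaround.
terraform {
  backend "azurerm" {
    # Literal values only - variables, locals and resource references are rejected here.
    resource_group_name  = "RG-TERRAFORM-BACKEND"
    storage_account_name = "backendstoragename"
    container_name       = "example"
    key                  = "terraform.tfstate"
  }
}
# Alternatively, leave the block as backend "azurerm" {} and pass the values at init time:
#   terraform init \
#     -backend-config="resource_group_name=RG-TERRAFORM-BACKEND" \
#     -backend-config="storage_account_name=backendstoragename" \
#     -backend-config="container_name=example" \
#     -backend-config="key=terraform.tfstate"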
The following are two possible solutions.
1: Create the resources required by the backend manually in the portal and then use them in your backend config (with literal values rather than any data source or variables).
2: Create the resources with the local backend and then migrate the local backend config to the remote backend.
Step 2.1: Create backend resources with local backend initially.
Provider Config
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.37.0"
    }
  }

  required_version = ">= 1.1.0"
}

provider "azurerm" {
  features {}
}
Backend resources
locals {
  backendstoragename = "stastackoverflow001"
}

# variable definitions
variable "tags" {
  type        = map(string)
  description = "(optional) Tags attached to resources"
  default = {
    used_case = "stastackoverflow"
  }
}

# Create an Azure resource group
resource "azurerm_resource_group" "stackoverflow" {
  name     = "RG-TERRAFORM-BACKEND-STACKOVERFLOW"
  location = "West Europe"
}

# Create an Azure storage account
resource "azurerm_storage_account" "stackoverflow" {
  name                     = local.backendstoragename ## or "${local.backendstoragename}" but better is local.backendstoragename
  location                 = azurerm_resource_group.stackoverflow.location
  resource_group_name      = azurerm_resource_group.stackoverflow.name
  account_tier             = "Standard"
  account_replication_type = "LRS"
  tags                     = var.tags ## or "${var.tags}" but better is var.tags
}

# Create an Azure storage container
resource "azurerm_storage_container" "stackoverflow" {
  name                  = "stackoverflow"
  storage_account_name  = azurerm_storage_account.stackoverflow.name
  container_access_type = "private"
}
Step 2.2: Apply the code with local backend.
terraform init
terraform plan # to view the plan
terraform apply -auto-approve # drop `-auto-approve` if you don't want auto-approval on apply
After applying you will get the message:
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
Step 2.3: Update the backend configuration from local to remote.
Provider Config
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.37.0"
    }
  }

  required_version = ">= 1.1.0"

  ## Add remote backend config.
  backend "azurerm" {
    resource_group_name  = "RG-TERRAFORM-BACKEND-STACKOVERFLOW"
    storage_account_name = "stastackoverflow001"
    container_name       = "stackoverflow"
    key                  = "terraformstate"
  }
}
Re-initialize Terraform.
After adding your remote backend, run the `terraform init -reconfigure` command and then type `yes` to migrate your local backend to the remote backend.
➜ variables_in_azurerm_backend git:(main) ✗ terraform init -reconfigure
Initializing the backend...
Do you want to copy existing state to the new backend?
Pre-existing state was found while migrating the previous "local" backend to the
newly configured "azurerm" backend. No existing state was found in the newly
configured "azurerm" backend. Do you want to copy this state to the new "azurerm"
backend? Enter "yes" to copy and "no" to start with an empty state.
Enter a value: yes
Successfully configured the backend "azurerm"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/azurerm from the dependency lock file
- Using previously-installed hashicorp/azurerm v3.37.0
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Now Terraform should use the remote backend we configured and will also be able to manage the resources created in steps 2.1 and 2.2. You can verify this by running the terraform plan command; it should give a No changes message.
No changes. Your infrastructure matches the configuration.
Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are
needed.
One more side note: version constraints inside provider configuration blocks are deprecated and will be removed in a future version of Terraform.
Special considerations: use a different container key and directory for your other infrastructure Terraform configurations, to avoid accidental destruction of the storage account used for the backend config.
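As a sketch of that consideration (the key paths are illustrative, not from the answer above): keep the backend-bootstrap state and the application state under different keys in the same container, so a destroy in one configuration cannot touch the other's state.
# In the configuration that manages the backend resources themselves:
terraform {
  backend "azurerm" {
    resource_group_name  = "RG-TERRAFORM-BACKEND-STACKOVERFLOW"
    storage_account_name = "stastackoverflow001"
    container_name       = "stackoverflow"
    key                  = "bootstrap/terraform.tfstate"
  }
}
# In each application configuration, reuse the container but pick a different key,
# e.g. key = "apps/my-app/terraform.tfstate", so the two state files never collide.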

Getting a status code 403 on terraform plan for Azure deployment

I'm trying to deploy a web app with a database on Azure but can't seem to get it to work, despite double- and triple-checking the credentials for the tenant in Azure. I tried creating new client secrets, but it doesn't work regardless.
Unable to list provider registration status, it is possible that this is due to invalid credentials or the service principal does not have permission to use the Resource Manager API, Azure error: resources.ProvidersClient#List: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="AuthorizationFailed" Message="The client '########-########-########-########-########' with object id '########-########-########-########-########' does not have authorization to perform action 'Microsoft.Resources/subscriptions/providers/read' over scope '/subscriptions/########-########-########-########-########' or the scope is invalid. If access was recently granted, please refresh your credentials."
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.0.0"
    }
  }
}

provider "azurerm" {
  features {}
  subscription_id = var.subscription_id
  client_id       = var.client_id
  client_secret   = var.client_secret
  tenant_id       = var.tenant_id
}

resource "azurerm_resource_group" "example" {
  name     = "azure-tf-bgapp"
  location = "West Europe"
}

resource "azurerm_container_group" "example" {
  name                = "bgapp-tf"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  ip_address_type     = "Public"
  dns_name_label      = "aci-label"
  os_type             = "Linux"

  container {
    name   = "bgapp-web"
    image  = "shekeriev/bgapp-web"
    cpu    = "0.5"
    memory = "1.5"

    ports {
      port     = 80
      protocol = "TCP"
    }
  }

  container {
    name   = "bgapp-web"
    image  = "shekeriev/bgapp-db"
    cpu    = "0.5"
    memory = "1.5"

    environment_variables = {
      "MYSQL_ROOT_PASSWORD" = "Password1"
    }
  }

  tags = {
    environment = "bgapp"
  }
}
I tried this in my environment and got the following results.
Initially, I tried the same code and got the same error in my environment.
The above error occurs because your service principal doesn't have the required permission (authorization) to perform that operation.
After assigning a role such as Owner to the service principal, the code worked successfully.
Go to Portal -> Subscription -> Access control (IAM) -> Add role assignments -> Owner -> add your service principal -> Review + create.
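The same role assignment can also be done from the Azure CLI; a sketch with placeholder IDs (a less privileged built-in role such as Contributor is often sufficient for creating resource groups and container instances):
# Assign the role to the service principal at subscription scope.
az role assignment create \
  --assignee "<service-principal-client-id>" \
  --role "Owner" \
  --scope "/subscriptions/<subscription-id>"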
After that, the Terraform code executed perfectly.

Unable to create Storage Sync Cloud Endpoint

When I try to create a cloud endpoint from a Terraform script in Azure, I get the following error:
Error: waiting for creation of Storage Sync Cloud Endpoint: (Cloud Endpoint Name “azbackup001zscallerc-file-sync-grp-CE” / Sync Group Name “azbackup001zscallerc-file-sync-grp” / Storage Sync Service Name “azbackup001zscallerc-file-sync” / Resource Group “RG”): Code=“-2134364065” Message=“Unable to read specified storage account. Please check the permissions and try again after some time.”
However, when I create the same thing from the Azure portal, I am able to do it without any issues. I have checked all my permissions, even from a global admin account, and I am still unable to do so. Please assist with a possible solution.
Please assist with checking the permission issue, as I can do the same thing from the az CLI as well as PowerShell.
Since it fails even with a global admin account, check that when the Cloud Endpoint is created, the Storage Sync service it depends on has been granted permission on the storage account.
See Storage Sync service errors: make sure Azure File Sync has access to the storage account.
resource "azurerm_storage_sync" "example" {
name = "kaexample-ss"
resource_group_name = data.azurerm_resource_group.example.name
location = data.azurerm_resource_group.example.location
}
resource "azurerm_storage_sync_group" "example" {
name = "kaexample-ss-group"
storage_sync_id = azurerm_storage_sync.example.id
}
resource "azurerm_storage_account" "example" {
name = "kaaexample"
resource_group_name = data.azurerm_resource_group.example.name
location = data.azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_storage_share" "example" {
name = "kaexample-share"
storage_account_name = azurerm_storage_account.example.name
quota = 50
acl {
id = "GhostedRecall"
access_policy {
permissions = "r"
}
}
}
resource "azurerm_storage_sync_cloud_endpoint" "example" {
name = "example-ss-ce"
storage_sync_group_id = azurerm_storage_sync_group.example.id
file_share_name = azurerm_storage_share.example.name
storage_account_id = azurerm_storage_account.example.id
}
Please check this GitHub issue: Az.StorageSync: Cloud endpoint creation access rights failure.
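If the missing piece is the storage account permission described in that issue, a hedged Terraform sketch of the fix could look like the following. It assumes the azuread provider is configured and that the Azure File Sync service principal appears in your tenant as Microsoft.StorageSync (older tenants may show "Hybrid File Sync Service"), so verify the display name and role before relying on it.
# Look up the Azure File Sync service principal (display name is an assumption - check your tenant).
data "azuread_service_principal" "file_sync" {
  display_name = "Microsoft.StorageSync"
}

# Grant it the built-in "Reader and Data Access" role on the storage account
# before azurerm_storage_sync_cloud_endpoint is created.
resource "azurerm_role_assignment" "file_sync_access" {
  scope                = azurerm_storage_account.example.id
  role_definition_name = "Reader and Data Access"
  principal_id         = data.azuread_service_principal.file_sync.object_id
}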

error deploying resources on azure using terraform cloud

I have deployed resources on Microsoft Azure using Terraform. I'm using an Azure storage account container to save my Terraform states. I tried to configure Terraform Cloud to automate the deployment, but I get this error:
Error: A resource with the ID "/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/msk-stage-keyvault" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_resource_group" for more information.
with module.keyvault.azurerm_resource_group.msk-keyvault
on ../../modules/az-keyvault/main.tf line 2, in resource "azurerm_resource_group" "msk-keyvault":
resource "azurerm_resource_group" "msk-keyvault" {
It seems that Terraform Cloud is not using the backend state in my provider.tf. How do I make Terraform Cloud use the backend state in provider.tf?
My Backend Provider
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.91.0"
    }
  }

  backend "azurerm" {
    resource_group_name  = "msk-configurations"
    storage_account_name = "mskconfigurations"
    container_name       = "key-vault"
    key                  = "stage.tfstate"
  }
}

provider "azurerm" {
  features {}
  subscription_id = var.subscription
  tenant_id       = var.ternant_id
}
It looks like your main.tf already has existing keyvault state.
So initially, please check whether you have already configured the keyvault resource in the main.tf file, or whether you have already imported the state.
If it is already present in main.tf and you are declaring it again for the backend, try removing the one from main.tf and then execute again.
Also note that the Terraform backend needs the Azure storage account credentials beforehand in order to store the tfstate.
So avoid creating the storage account, the container, and the keyvault resource in the same configuration whose state they are meant to hold.
If the storage account is created first, Terraform can refer to it later in the backend.
To preconfigure the storage account and container:
Example:
1. Create the storage account and container first, in a separate configuration rather than in the same file:
provider "azurerm" {
features {}
}
data "azurerm_resource_group" "example" {
name = "resourcegroupname"
}
resource "azurerm_storage_account" "example" {
name = "<yourstorageaccountname>"
resource_group_name = data.azurerm_resource_group.example.name
location = data.azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_storage_container" "example" {
name = "newterraformcont"
storage_account_name = azurerm_storage_account.example.name
container_access_type = "private"
}
Then create the msk-keyvault resource group and store its tfstate in the container.
This is my already created state configuration in Terraform (terraform.tf):
provider "azurerm" {
features {}
}
terraform {
# Configure Terraform State Storage
backend "azurerm" {
resource_group_name = "<resourcegroup>"
storage_account_name = "<storage-earliercreated>"
container_name = " newterraformcont "
key = "terraform.tfstate"
}
}
resource "azurerm_resource_group" " msk-keyvault" {
name = "<msk-keyvault>"
location = "west us"
}
Reference:
azurerm_resource_group | Resources | hashicorp/azurerm | Terraform Registry
https://www.jorgebernhardt.com/terraform-backend

terraform backend state file storage using keys instead of AD account

It appears that Terraform uses Keys for backend state files when persisting to an Azure storage account. I wish to use a single storage account with dedicated folders for different service principals but without cross-folder write access. I am trying to avoid accidental overwrites of the state files by different service principals. But since Terraform is using the keys to update the storage account, every service principal technically has rights to update every file. And the developer would have to take care not to accidentally reference the wrong state file to update. Any thoughts on how to protect against this?
You can use a SAS token generated for a container, to be used by that service principal only and by no other service principals.
I tested with something like below:
data "terraform_remote_state" "foo" {
backend = "azurerm"
config = {
storage_account_name = "cloudshellansuman"
container_name = "test"
key = "prod.terraform.tfstate"
sas_token = "sp=racwdl&st=2021-09-28T05:49:01Z&se=2023-04-01T13:49:01Z&sv=2020-08-04&sr=c&sig=O87nHO01sPxxxxxxxxxxxxxsyQGQGLSYzlp6F8%3D"
}
}
provider "azurerm" {
features {}
use_msi = true
subscription_id = "948d4068-xxxxx-xxxxxx-xxxxxxxxxx"
tenant_id = "72f988bf-xxxx-xxxxx-xxxxxx-xxxxxxx"
}
resource "azurerm_resource_group" "test" {
name="xterraformtest12345"
location ="east us"
}
But if I change the container name to another container, then I can't write; it errors out saying the authentication failed, because the SAS token is for the Test container, not the Test1 container.
For more information on how to generate a SAS token for containers and how to set the azurerm backend for Terraform, please refer to the links below:
Generate shared access signature (SAS) token for containers and blobs with Azure portal. | Microsoft Docs
Use Azure storage for Terraform remote state
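For reference, a container-scoped SAS like the one used above can also be generated from the Azure CLI; a sketch with placeholder values:
# Generate a SAS limited to the "test" container; adjust permissions and expiry as needed.
az storage container generate-sas \
  --account-name cloudshellansuman \
  --name test \
  --permissions racwdl \
  --expiry 2023-04-01T13:49Z \
  --account-key "<storage-account-key>" \
  --output tsv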
OR
You can set the container's authentication method to Azure AD user account, after assigning the Storage Blob Data Contributor/Owner role to the service principal that will use that specific container.
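A Terraform sketch of that role assignment, scoped to a single container (the container resource name and the principal's object ID are placeholders):
# Grant the service principal data-plane access to one container only.
resource "azurerm_role_assignment" "tfstate_container_writer" {
  # resource_manager_id is the ARM ID of the container, so the role applies to this container alone.
  scope                = azurerm_storage_container.test1.resource_manager_id
  role_definition_name = "Storage Blob Data Contributor"
  principal_id         = "<service-principal-object-id>"
}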
Then you can use something like below:
data "terraform_remote_state" "foo" {
backend = "azurerm"
config = {
storage_account_name = "cloudshellansuman"
container_name = "test1"
key = "prod.terraform.tfstate"
subscription_id = "b83c1ed3-xxxx-xxxxxx-xxxxxxx"
tenant_id = "72f988bf-xxx-xxx-xxx-xxx-xxxxxx"
client_id = "f6a2f33d-xxxx-xxxx-xxx-xxxxx"
client_secret = "y5L7Q~oiMOoGCxm7fK~xxxxxxxxxxxxxxxxx"
use_azuread_auth =true
}
}
provider "azurerm"{
subscription_id = "b83c1ed3-xxxx-xxxxxx-xxxxxxx"
tenant_id = "72f988bf-xxx-xxx-xxx-xxx-xxxxxx"
client_id = "f6a2f33d-xxxx-xxxx-xxx-xxxxx"
client_secret = "y5L7Q~oiMOoGCxm7fK~xxxxxxxxxxxxxxxxx"
features {}
}
data "azurerm_resource_group" "test" {
name="resourcegroupname"
}
resource "azurerm_virtual_network" "example" {
name = "example-network"
resource_group_name = data.azurerm_resource_group.test.name
location = data.azurerm_resource_group.test.location
address_space = ["10.254.0.0/16"]
}
If the service principal doesn't have a role assigned to it for the container, the request will fail with an authorization error.
Note: For the first scenario I have used managed system identity, but the same can be achieved for service principal as well.
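A related hardening step, not covered in the answer above: if the goal is to make sure only Azure AD role assignments matter, shared key authorization can be disabled on the storage account entirely, after which the backend must authenticate with Azure AD (use_azuread_auth = true) and account keys or key-signed SAS tokens will no longer work. A sketch with placeholder values:
# Disable access-key authorization so only Azure AD RBAC can read or write state.
resource "azurerm_storage_account" "state" {
  name                      = "cloudshellansuman"
  resource_group_name       = "<resource-group>"
  location                  = "eastus"
  account_tier              = "Standard"
  account_replication_type  = "LRS"
  shared_access_key_enabled = false
}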
