object_id of current client is empty - azure

I am trying to execute the code as defined in this file:
https://github.com/Vizzuality/marxan-cloud/blob/staging/infrastructure/kubernetes/modules/key_vault/main.tf
However, when I try to get the object_id (data.azurerm_client_config.current.object_id) I see that the value is empty. So, I cannot set the access policy further down.
Looking at other people's posts regarding the empty object_id, it seems this is due to a change in the Azure CLI.
Given this, how does one set the access policy in a key vault for the current client?

I had the same issue with an older version of the AzureRM provider; upgrading to v3.9.0 fixed it. Note that Terraform locks the provider version at init, but you can force an upgrade with the command:
terraform init -upgrade
Also you may have version restrictions on providers in your terraform code. You can validate this with the command:
terraform providers
You can read more about provider requirements here.
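If you do need to adjust a constraint, here is a minimal sketch of pinning azurerm to a version with the fix and then using the current client's object_id in an access policy (the key vault resource name "example" is hypothetical and not taken from the linked repository):
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 3.9.0"   # a version where object_id is populated again
    }
  }
}

data "azurerm_client_config" "current" {}

resource "azurerm_key_vault_access_policy" "deployer" {
  key_vault_id = azurerm_key_vault.example.id   # hypothetical key vault resource
  tenant_id    = data.azurerm_client_config.current.tenant_id
  object_id    = data.azurerm_client_config.current.object_id

  secret_permissions = ["Get", "List", "Set"]
}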

Related

how to change terraform provider?

Currently, I am using "Mongey/kafka" provider and now I have to switch to "confluentinc/confluent" provider with my existing terraform pipeline.
How can I do this?
Steps I am currently following to switch the provider:
Changing the provider in the main.tf file and running the following command to replace the provider:
terraform state replace-provider Mongey/kafka confluentinc/confluent
and after that I run
terraform init
to install the new provider.
But after that, when I run
terraform plan
it gives the error: "no schema available for module.iddn_news_cms_kafka_topics.kafka_acl.topic_writer[13] while reading state; this is a bug in terraform and should be reported".
Is there any way to change the terraform provider without disturbing the existing resources created by the terraform pipeline?
The terraform state replace-provider command is intended for switching between providers that are in some way equivalent to one another, such as the hashicorp/google and hashicorp/google-beta providers, or when someone forks a provider into their own namespace but remains compatible with the original provider.
Mongey/kafka and confluentinc/confluent do both have resource types that seem to represent the same concepts in the remote system:
Mongey/kafka     confluentinc/confluent
kafka_acl        confluent_kafka_acl
kafka_quota      confluent_kafka_client_quota
kafka_topic      confluent_kafka_topic
However, despite representing the same concepts in the remote system these resource types have different names and incompatible schemas, so there is no way to migrate directly between them. Terraform has no way to understand which resource types in one provider match with resource types in another, or to understand how to map attributes from one of the resource types onto corresponding attributes of the other.
Instead, I think the best thing to do here would be to ask Terraform to "forget" the objects and then re-import them into the new resource types:
terraform state rm kafka_acl.example to ask Terraform to forget about the remote object associated with kafka_acl.example. There is no undo for this action.
terraform import confluent_kafka_acl.example OBJECT-ID to bind the OBJECT-ID (as described in the documentation) to confluent_kafka_acl.example.
I suggest practicing this in a non-production environment first so that you can be confident about the behavior of each of these commands, and learn how to translate from whatever ID format the Mongey/kafka provider uses into whatever import ID format the confluentinc/confluent provider uses to describe the same objects.
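As a hedged sketch of that sequence for one of the resources from the error message (the import ID is left as the OBJECT-ID placeholder; its real format is defined by the confluentinc/confluent documentation):
# forget the old object in state (no undo)
terraform state rm 'module.iddn_news_cms_kafka_topics.kafka_acl.topic_writer[13]'
# bind the same remote object to the new resource type, which must already exist in the configuration
terraform import 'module.iddn_news_cms_kafka_topics.confluent_kafka_acl.topic_writer[13]' OBJECT-ID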

Terraform Kubernetes persistent storage setup no connection made dial tcp error

I am getting this error whenever I try to create a persistent volume claim and volume according to this kubernetes_persistent_volume_claim documentation:
Error: Post "http://localhost/api/v1/namespaces/default/persistentvolumeclaims": dial tcp [::1]:80: connectex: No connection could
be made because the target machine actively refused it.
I have also tried spinning up an Azure disk and creating a volume through that, as outlined here: Persistent Volume using Azure Managed Disk.
My terraform kubernetes provider looks like this:
provider "kubernetes" {
alias = "provider_kubernetes"
host = module.kubernetes-service.kube_config.0.host
username = module.kubernetes-service.kube_config.0.username
password = module.kubernetes-service.kube_config.0.password
client_certificate = base64decode(module.kubernetes-service.kube_config.0.client_certificate)
client_key = base64decode(module.kubernetes-service.kube_config.0.client_key)
cluster_ca_certificate = base64decode(module.kubernetes-service.kube_config.0.cluster_ca_certificate)
}
I don't believe it's even hitting the K8s cluster in my RG. Is there something I am missing, or am I not understanding how to put this together the right way? The RG and the K8s resource are provisioned in the same Terraform configuration, and that part creates fine, but when it comes to setting up the persistent storage I can't get past the error.
The provider is aliased, so first make sure that all kubernetes resources use the correct provider. You have to specify the aliased provider for each resource.
resource "kubernetes_cluster_role_binding" "current" {
provider = kubernetes.provider_kubernetes
# [...]
}
Another possibility is that the localhost connection error occurs because there is a pending change to the Kubernetes cluster resource, which leaves its returned attributes in a known-after-apply state.
Try terraform plan --target module.kubernetes-service.kube_config to see if that shows any pending changes to the K8s resource it presumably depends on. Better yet, target the Kubernetes cluster resource directly.
If it does, first apply those changes alone: terraform apply --target module.kubernetes-service.kube_config, then run a second apply without --target like this: terraform apply.
If there is no pending change to the cluster resource, check that the module returns correct credentials. Also double-check that the use of base64decode is correct.
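To illustrate the targeted apply described above, a rough sketch, assuming the cluster inside the module is an azurerm_kubernetes_cluster named "main" (adjust the address to your configuration):
terraform plan  -target=module.kubernetes-service.azurerm_kubernetes_cluster.main
terraform apply -target=module.kubernetes-service.azurerm_kubernetes_cluster.main
# second apply without -target picks up the remaining resources, including the PVC
terraform apply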
> Try terraform plan --target module.kubernetes-service.kube_config to see if that shows any pending changes to the K8s resource it presumably depends on. Better yet, target the Kubernetes cluster resource directly.
> If it does, first apply those changes alone: terraform apply --target module.kubernetes-service.kube_config, then run a second apply without --target like this: terraform apply.
In my case it was a conflict in the IAM role definition and assignment which caused the problem. Executing terraform plan --target module.eks (module.eks being the module name used in the terraform code) followed by terraform apply --target module.eks removed the conflicting role definitions. From the terraform output I could see which role policy and role was causing the issue.

Terraform state replace-provider update state with wrong data

We upgraded the Terraform version and now we have a problem with the Terraform remote state. Basically, I ran this command to update the azurerm provider:
terraform state replace-provider 'registry.terraform.io/-/azurerm' 'registry.terraform.io/hashicorp/azurerm'
Right now, when I run the plan command it shows me some errors. All are the same, but the resource is different. For example:
To work with module.name.module.lb_name.azurerm_lb_probe.instance
its original provider configuration at
provider["registry.terraform.io/-/azurerm"] is required, but it has been
removed. This occurs when a provider configuration is removed while objects
created by that provider still exist in the state. Re-add the provider
configuration to destroy
module.name.module.lb_name.azurerm_lb_probe.instance, after which
you can remove the provider configuration again.
Basically the state was updated and the provider looks like this:
"provider": "provider.azurerm"
but it should look like this:
"provider": "provider[\"registry.terraform.io/hashicorp/azurerm\"]"
Is there any way to update it via terraform commands or the only way to fix it is to edit state file manually?
When you replace the providers for the Terraform upgrade with the command:
terraform state replace-provider 'registry.terraform.io/-/azurerm' 'registry.terraform.io/hashicorp/azurerm'
that step itself is fine. You can then check the current providers with:
terraform providers
At this point the providers listed should match the requirement. You then need to run init again so the new provider is pulled and replaces the existing one:
terraform init
This is the step you have missed.
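Putting it together, the whole sequence is roughly (same commands as above, in order):
terraform state replace-provider 'registry.terraform.io/-/azurerm' 'registry.terraform.io/hashicorp/azurerm'
terraform providers   # confirm hashicorp/azurerm is now listed
terraform init        # re-initialize so the new provider is installed and used
terraform plan        # the "original provider configuration ... has been removed" error should be gone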

where to store the azure service principal data when using with terraform from CI or docker

I am reading the Terraform docs about using a service principal with a client secret when running in CI or in a Dockerfile, and I quote:
We recommend using either a Service Principal or Managed Service Identity when running Terraform non-interactively (such as when running Terraform in a CI server) - and authenticating using the Azure CLI when running Terraform locally.
It then goes into great detail about creating a service principal and then gives an awful example at the end where the client id and client secret are hardcoded, either by storing them in environment variables:
export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_SECRET="00000000-0000-0000-0000-000000000000"
export ARM_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"
or in the terraform provider block:
provider "azurerm" {
# Whilst version is optional, we /strongly recommend/ using it to pin the version of the Provider being used
version = "=1.43.0"
subscription_id = "00000000-0000-0000-0000-000000000000"
client_id = "00000000-0000-0000-0000-000000000000"
client_secret = "${var.client_secret}"
tenant_id = "00000000-0000-0000-0000-000000000000"
}
It does put a nice yellow box about it saying do not do this but there is no suggestion of what to do.
I don't think client_secret in an environment variable is a particularly good idea.
Should I be using the client certificate instead? If so, the same question arises about where to keep the configuration.
I want to avoid azure-cli if possible.
Azure-cli will not return the client secret anyway.
How do I go about getting these secrets into environment variables? Should I be putting them into a vault or is there another way?
For your requirements, I think you're a little confused about how to choose a suitable option from the four authentication methods.
Managed Service Identity is only available for services that support the Managed Service Identity feature, so Docker cannot use it, and you would still need to assign it appropriate permissions, just like a service principal. You want to avoid the Azure CLI if possible; I don't know why, but let's set that aside.
The service principal is a good way, I think. The docs recommend that you do not put the secret into a variable inside the Terraform file, so you can only use the environment variable. If you also do not want to set an environment variable, then I don't think there is a way to use the service principal. The certificate option for the service principal only differs in that you need to set the certificate path as well.
One caution about the service principal: you can see its secret only once, when you finish creating it; after that it is not displayed anymore. If you forget it, you can only reset the secret.
So I think the service principal is the most suitable way for you. You can set the environment variables with the --env parameter of the docker run command, or just set them in the Dockerfile with ENV. As for storing the secret in a key vault, I think you can find the answer in my previous answer.
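For example, a minimal sketch of injecting the credentials at docker run time (the image name my-terraform-image is a placeholder; the secret comes from the host environment rather than being baked into the image):
docker run \
  -e ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000" \
  -e ARM_CLIENT_SECRET="$ARM_CLIENT_SECRET" \
  -e ARM_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000" \
  -e ARM_TENANT_ID="00000000-0000-0000-0000-000000000000" \
  my-terraform-image terraform plan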

Terraform back-end to azure blob storage errors

I have been using the below to successfully create a back-end state file for Terraform in Azure storage, but for some reason it's stopped working. I've rotated the storage account keys, trying both, and get the same error every time.
backend.tf
terraform {
  backend "azurerm" {
    storage_account_name = "terraformstorage"
    resource_group_name  = "automation"
    container_name       = "terraform"
    key                  = "testautomation.terraform.tfstate"
    access_key           = "<storage key>"
  }
}
Error returned
terraform init
Initializing the backend...
Successfully configured the backend "azurerm"! Terraform will automatically
use this backend unless the backend configuration changes.
Error refreshing state: storage: service returned error: StatusCode=403, ErrorCode=AuthenticationFailed, ErrorMessage=Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
RequestId:665e0067-b01e-007a-6084-97da67000000
Time:2018-12-19T10:18:18.7148241Z, RequestInitiated=Wed, 19 Dec 2018 10:18:18 GMT, RequestId=665e0067-b01e-007a-6084-97da67000000, API Version=, QueryParameterName=, QueryParameterValue=
Any ideas what I'm doing wrong?
What worked for me is to delete the local .terraform folder and try again.
Another possible problem is clock drift.
I experienced these problems as well and tried all the above-mentioned steps, but nothing helped.
What happened on my system (Windows 10, WSL2) was that WSL lost its time sync and my clock was hours off. This behaviour is described in https://github.com/microsoft/WSL/issues/4245.
For me it helped to:
set the correct time in WSL (sudo hwclock -s), and
reboot WSL
Hope, this will help others too.
Here are a few suggestions:
Run: terraform init -reconfigure.
Confirm your "terraform/backend" credentials.
In case your Terraform contains some "azurerm_storage_account/network_rules" allowing only certain IP addresses, make sure your IP is allowed or that you're connected to the right VPN network.
If above won't work, run TF_LOG=TRACE terraform init to debug further.
Please ensure you've been authenticated properly to Azure Cloud.
If you're running Terraform externally, re-run: az login.
If you're running Terraform on an Azure instance, you can use managed identities by defining the following environment variables:
ARM_USE_MSI=true
ARM_SUBSCRIPTION_ID=xxx-yyy-zzz
ARM_TENANT_ID=xxx-yyy-zzz
or just run az login --identity, then assign the right role (azurerm_role_assignment, e.g. "Contributor") and appropriate policies (azurerm_policy_definition).
See also:
Azure Active Directory Provider: Authenticating using Managed Service Identity.
Unable to programmatically get the keys for Azure Storage Account.
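As a quick way to confirm the backend credentials themselves (and keep the key out of backend.tf), the azurerm backend can also read the access key from the ARM_ACCESS_KEY environment variable:
export ARM_ACCESS_KEY="<storage key>"   # the current key1 or key2 of the storage account
terraform init -reconfigure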
There should be a .terraform directory in the location where you are running the terraform init command from.
Remove .terraform or rename it to something else. The next time terraform init runs, it will recreate that directory with a fresh init.
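For example, from the working directory that contains your configuration (.terraform holds provider binaries, module copies, and local init metadata, not your infrastructure state):
rm -rf .terraform
terraform init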

Resources