I am trying to install Grafana with TimescaleDB using Terraform. Everything with TimescaleDB worked flawlessly; however, the Grafana block seems to be completely ignored by Terraform. This is the code I am using to enable Grafana:
provider "grafana" {
url = "http://localhost:3000/"
auth = "test:test"
}
I am on Terraform 0.13.4 and my required_providers block includes Grafana:
grafana = {
  source = "grafana/grafana"
}
Unlike when I install Grafana through the console, no Grafana files are created, grafana-cli is not installed, and I get errors trying to use Grafana with subsequent resource blocks in Terraform. So it seems to me that the only issue is with Terraform, and that it is simply choosing not to install Grafana at all.
What is going on here? I am pretty new to Terraform so it could be that I am missing something obvious...
I have a set of cloud run services created/maintained via terraform cloud.
When I create a new version, a github actions workflow pushes a new image to gcr.io.
Now in a normal scenario, I'd call:
gcloud run deploy auth-service --image gcr.io/riu-production/auth-service:latest
And a new version would be up. If I do this while the resource is managed by Terraform, the next terraform apply will fail, saying it can't create that Cloud Run service because a service with that name already exists. So the state drifts apart and Terraform no longer recognizes the service.
A simple solution is to connect the pipeline to terraform cloud and run terraform apply -auto-approve for deployment purposes. That should work.
The problem with that is I really, really don't want to run Terraform commands in a pipeline, for now.
And the biggest one is I really would like to keep terraform out of the deployment process altogether.
Is there any way to force cloud run to take that new image for a service without messing up the terraform infrastructure?
Cloud run configs:
resource "google_cloud_run_service" "auth-service" {
name = "auth-service"
location = var.gcp_region
project = var.gcp_project
template {
spec {
service_account_name = module.cloudrun-sa.email
containers {
image = "gcr.io/${var.gcp_project}/auth-service:latest"
}
}
}
traffic {
percent = 100
latest_revision = true
}
}
In theory, yes, it should be possible ...
But I would recommend against it; you should be running terraform apply on every deployment to guarantee the infrastructure is in the expected state.
Here are some things you can try:
Keep track of when the image changes and use terraform import on that resource:
https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/cloud_run_service#import
Look into the lifecycle ignore_changes meta-argument; you can ignore the attribute that triggers the change (see the sketch after the link below):
https://www.terraform.io/language/meta-arguments/lifecycle#ignore_changes
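As a minimal sketch of the second option, applied to the resource from the question. The exact attribute path to ignore is an assumption on my part; verify it against the diff terraform plan shows after a gcloud run deploy:

resource "google_cloud_run_service" "auth-service" {
  name     = "auth-service"
  location = var.gcp_region
  project  = var.gcp_project

  template {
    spec {
      service_account_name = module.cloudrun-sa.email
      containers {
        image = "gcr.io/${var.gcp_project}/auth-service:latest"
      }
    }
  }

  traffic {
    percent         = 100
    latest_revision = true
  }

  lifecycle {
    # Assumption: the drift shows up on the container image attribute.
    # Ignoring it lets images deployed with gcloud outside Terraform
    # pass terraform plan without a diff.
    ignore_changes = [
      template[0].spec[0].containers[0].image,
    ]
  }
}

With this in place, Terraform keeps managing the service itself (name, service account, traffic) while the image tag is left to the deployment pipeline.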
I've recently started using Terraform and I love it. However, in migrating an application to use Terraform, I have encountered an AWS service that doesn't appear to be implemented in Terraform's AWS provider.
What does one do in such a situation? Is there a way I can hack this into my Terraform code to call this API?
https://docs.aws.amazon.com/ses/latest/APIReference/API_CreateCustomVerificationEmailTemplate.html
I'm using the latest aws provider.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.5.0"
    }
  }
}
The only possibility I could imagine is to use local-exec and call the missing API manually.
E.g. you can use null_resource (https://www.terraform.io/language/resources/provisioners/null_resource) and execute a bash script or the AWS CLI directly, for example:
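Here is a minimal sketch of that approach for the API linked above. All template values are placeholders, and the CLI flags are the kebab-case forms of the parameters listed in the API reference, so double-check them against aws ses create-custom-verification-email-template help:

resource "null_resource" "custom_verification_email_template" {
  # Changing any of these triggers replaces the resource and re-runs the command.
  triggers = {
    template_name = "example-template"
    from_email    = "no-reply@example.com"
  }

  provisioner "local-exec" {
    # Assumption: the AWS CLI is installed and authenticated wherever Terraform runs.
    command = <<-EOT
      aws ses create-custom-verification-email-template \
        --template-name "example-template" \
        --from-email-address "no-reply@example.com" \
        --template-subject "Please confirm your email address" \
        --template-content "<html><body><h1>Confirm your email</h1></body></html>" \
        --success-redirection-url "https://example.com/verified" \
        --failure-redirection-url "https://example.com/verification-failed"
    EOT
  }
}

Bear in mind that local-exec only runs on create, so updates and deletes have to be handled by hand (or with additional provisioners).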
As mentioned before, search https://github.com/hashicorp/terraform-provider-aws/issues for your issue, vote for it, or create a new feature request.
I am trying to apply a ConfigMap to an EKS cluster through Terraform, but I don't see how. There is lots of documentation about this, but I don't see anyone succeeding with it, so I am not sure whether this is possible or not.
Currently we control our infrastructure through Terraform. When I create the .kube/config file through the AWS CLI and try to connect to the cluster, I get an Unauthorized error; AWS documents how to solve this. According to the docs, we need to edit the aws-auth ConfigMap and add some lines to it, which configures the API server to accept requests from a VM with a certain role. The problem is that only the cluster creator has access to connect to the cluster and make these changes. The cluster creator in this case is Terraform, so what we do is configure Terraform's credentials on the VM from which we are trying to connect to the cluster, successfully authenticate against it, add the necessary lines to the ConfigMap, and then revoke the credentials from the VM.
From there on, any user can connect to the cluster from that VM, which is our goal. Now we would like to be able to edit the ConfigMap through a Terraform object instead of going through this whole process. There is a kubernetes_config_map resource in Terraform, but that's a different provider (kubernetes), not AWS, so it is not able to find the cluster and fails trying to connect to an API server on localhost.
There is a kubernetes_config_map resource in Terraform, but that's a different provider (kubernetes), not AWS
It is a different provider because Terraform now needs to interact with a different API (the Kubernetes API instead of the AWS API).
There are data sources for aws_eks_cluster and aws_eks_cluster_auth that can be used to authenticate the kubernetes provider.
The aws_eks_cluster_auth documentation has examples for authenticating the kubernetes provider:
data "aws_eks_cluster" "example" {
name = "example"
}
data "aws_eks_cluster_auth" "example" {
name = "example"
}
provider "kubernetes" {
host = data.aws_eks_cluster.example.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.example.certificate_authority[0].data)
token = data.aws_eks_cluster_auth.example.token
load_config_file = false
}
Another example is how the Cloud Posse AWS EKS module authenticates the kubernetes provider and also uses a ConfigMap.
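With the kubernetes provider authenticated as above, the aws-auth ConfigMap can then be managed like any other resource. A minimal sketch, assuming a hypothetical var.node_role_arn holding the IAM role ARN your VM/nodes use (the exact mapRoles entries depend on your setup):

resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    # Assumption: var.node_role_arn is the IAM role that should be allowed in.
    mapRoles = yamlencode([
      {
        rolearn  = var.node_role_arn
        username = "system:node:{{EC2PrivateDNSName}}"
        groups   = ["system:bootstrappers", "system:nodes"]
      },
    ])
  }
}

Note that if EKS (or a managed node group) has already created aws-auth in the cluster, you would need to import it into state first, since Terraform cannot create a ConfigMap that already exists.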
We're using Terraform to deploy AKS clusters to an environment behind a proxy over VPN. Deployment of the cluster works correctly when off-network without the proxy, but errors out on Helm deployment creation on-network.
We are able to connect to the cluster after it's up while on the network using the following command after retrieving the cluster context.
kubectl config set-cluster <cluster name> --certificate-authority=<path to organization's root certificate in PEM format>
The Helm deployments are also created with Terraform after the creation of the cluster. It seems that these require the certificate-authority data to deploy and we haven't been able to find a way to automate this at the right step in the process. Consequently, the apply fails with the error:
x509: certificate signed by unknown authority
Any idea how we can get the certificate-authority data in the right place so the Helm deployments stop failing? Or is there a way to get the cluster to implicitly trust that root certificate? We've tried a few different things:
We researched whether you could automatically have that data in there when retrieving the cluster context (i.e. az aks get-credentials --name <cluster name> --resource-group <cluster RG>), but couldn't find an easy way to accomplish this.
We started to consider adding the root cert info to the kubeconfig that's generated during deployment (rather than the one you create when retrieving the context). The idea is that it could be passed to the kubernetes/helm providers and also leveraged when running kubectl commands via local-exec blocks. We know that would work, but we couldn't find a way to automate it via Terraform.
We've tried providing the root certificate to the different fields of the provider config, shown below. We've specifically tried a few different things with cluster_ca_certificate, namely providing the PEM-style cert of the root CA.
provider "kubernetes" {
host = module.aks.kube_config.0.host
client_certificate = base64decode(module.aks.kube_config.0.client_certificate)
client_key = base64decode(module.aks.kube_config.0.client_key)
cluster_ca_certificate = base64decode(module.aks.kube_config.0.cluster_ca_certificate)
}
provider "helm" {
version = ">= 1.2.4"
kubernetes {
host = module.aks.kube_config.0.host
client_certificate = base64decode(module.aks.kube_config.0.client_certificate)
client_key = base64decode(module.aks.kube_config.0.client_key)
cluster_ca_certificate = base64decode(module.aks.kube_config.0.cluster_ca_certificate)
}
}
Thanks in advance for the help! Let me know if you need any additional info. I'm still new to the project so I may not have explained everything correctly.
In case anyone finds this later, we ultimately ended up just breaking the project up into two parts: cluster creation and bootstrap. This let us add a local-exec block in the middle to run the kubectl config set-cluster... command. So the order of operations is now:
Deploy AKS cluster (which copies Kube config locally as one of the Terraform outputs)
Run the command
Deploy microservices
Because we're using Terragrunt, we can just use its apply-all function to execute both operations, setting the dependencies described here.
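For reference, the "run the command" step can also be expressed as a null_resource at the start of the bootstrap project. A minimal sketch, where the cluster name and certificate path are placeholders for values from your own outputs:

resource "null_resource" "trust_root_ca" {
  provisioner "local-exec" {
    # Assumption: the kubeconfig written by the cluster project is the active
    # context, and org-root-ca.pem is the organization's root cert in PEM format.
    command = "kubectl config set-cluster my-aks-cluster --certificate-authority=./certs/org-root-ca.pem"
  }
}

The microservice/Helm resources in the bootstrap project can then declare depends_on = [null_resource.trust_root_ca] so the command runs before anything talks to the cluster.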
I have been using the below to successfully create a backend state file for Terraform in Azure Storage, but for some reason it has stopped working. I've regenerated the storage account keys, tried both keys, and get the same error every time.
backend.tf
terraform {
  backend "azurerm" {
    storage_account_name = "terraformstorage"
    resource_group_name  = "automation"
    container_name       = "terraform"
    key                  = "testautomation.terraform.tfstate"
    access_key           = "<storage key>"
  }
}
Error returned
terraform init
Initializing the backend...
Successfully configured the backend "azurerm"! Terraform will automatically
use this backend unless the backend configuration changes.
Error refreshing state: storage: service returned error: StatusCode=403, ErrorCode=AuthenticationFailed, ErrorMessage=Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
RequestId:665e0067-b01e-007a-6084-97da67000000
Time:2018-12-19T10:18:18.7148241Z, RequestInitiated=Wed, 19 Dec 2018 10:18:18 GMT, RequestId=665e0067-b01e-007a-6084-97da67000000, API Version=, QueryParameterName=, QueryParameterValue=
Any ideas what I'm doing wrong?
What worked for me is to delete the local .terraform folder and try again.
Another possible cause is time synchronization.
I experienced these problems as well, tried all the steps mentioned above, but nothing helped.
What happened on my system (Windows 10, WSL2) was that WSL lost its time sync and my clock was hours off; Azure Storage rejects shared-key requests whose timestamp deviates too far from the server time, hence the 403. This behaviour is described in https://github.com/microsoft/WSL/issues/4245.
For me it helped to
set the correct time in WSL (sudo hwclock -s) and
reboot WSL.
Hope this helps others too.
Here are a few suggestions:
Run terraform init -reconfigure.
Confirm your Terraform backend credentials.
If your Terraform code contains azurerm_storage_account network_rules that only allow certain IP addresses, make sure you're connecting from one of them, or that you're connected to the right VPN network.
If the above doesn't work, run TF_LOG=TRACE terraform init to debug further.
Please ensure you've been authenticated properly to Azure Cloud.
If you're running Terraform externally, re-run: az login.
If you're running Terraform on an Azure instance, you can use managed identities, e.g. by defining the following environment variables:
ARM_USE_MSI=true
ARM_SUBSCRIPTION_ID=xxx-yyy-zzz
ARM_TENANT_ID=xxx-yyy-zzz
or just run az login --identity, then assign the right role (azurerm_role_assignment, e.g. "Contributor") and appropriate policies (azurerm_policy_definition).
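The same can also be expressed directly in the provider configuration instead of environment variables. A minimal sketch, assuming azurerm provider 2.x+ and that the VM's managed identity has been granted access to the storage account (the IDs are placeholders):

provider "azurerm" {
  features {}

  # Assumption: running on an Azure VM/instance with a managed identity assigned.
  use_msi         = true
  subscription_id = "xxx-yyy-zzz"
  tenant_id       = "xxx-yyy-zzz"
}

The azurerm backend block also supports MSI-based authentication (use_msi) as an alternative to access_key, which avoids storing storage keys in the configuration at all.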
See also:
Azure Active Directory Provider: Authenticating using Managed Service Identity.
Unable to programmatically get the keys for Azure Storage Account.
There should be a .terraform directory in the location where you are running the terraform init command from.
Remove .terraform or rename it to something else. The next time terraform init runs, it will recreate that directory with a fresh init.