Can we change GCP Cloud Build settings using Terraform or a gcloud command? - terraform

I have a use case where I need to enable Cloud Build access on GKE, but I could not find a Terraform resource to do that, nor a gcloud CLI command to do the same.

Yes, you can do this in Terraform by creating a google_project_iam_member for the Cloud Build service account that's created by default when you enable the Cloud Build API. For example:
resource "google_project_iam_member" "cloudbuild_kubernetes_policy" {
project = var.project_id
role = "roles/container.developer"
member = "serviceAccount:${var.project_number}#cloudbuild.gserviceaccount.com"
}
The value declared in the role attribute corresponds to a role in the Cloud Console user interface.
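If you prefer the gcloud CLI, a sketch of the equivalent binding (PROJECT_ID and PROJECT_NUMBER are placeholders for your own values):
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member "serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
  --role "roles/container.developer"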

Related

Can't create google_storage_bucket via Terraform

I'd like to create the following resource via Terraform:
resource "google_storage_bucket" "tf_state_bucket" {
name = var.bucket-name
location = "EUROPE-WEST3"
storage_class = "STANDARD"
versioning {
enabled = true
}
force_destroy = false
public_access_prevention = "enforced"
}
Unfortunately, during the execution of terraform apply, I got the following error:
googleapi: Error 403: X@gmail.com does not have storage.buckets.create access to the Google Cloud project. Permission 'storage.buckets.create' denied on resource (or it may not exist)., forbidden
Here's the list of things I tried and checked:
Verified that Google Cloud Storage (JSON) API is enabled on my project.
Checked the IAM roles and permissions: X@gmail.com has the Owner and the Storage Admin roles.
I can create a bucket manually via the Google Console.
Terraform is generally authorised to create resources; for example, I can create a VM using it.
What else can be done to authenticate Terraform to create Google Storage Buckets?
I think you are running the Terraform code in a shell session on your local machine and using a user identity instead of a service account identity.
In this case, to solve your issue from your local machine:
Create a service account for Terraform in the GCP IAM console with the Storage Admin role.
Download a service account key file from IAM.
Set the GOOGLE_APPLICATION_CREDENTIALS env var in your shell session to the service account key file path.
If you run your Terraform code elsewhere, you need to check that Terraform is correctly authenticated to GCP.
Using a key file is not recommended because it is not the most secure way; that is why it is better to launch Terraform from a CI tool like Cloud Build instead of launching it from your local machine.
From Cloud Build there is no need to download and set a key file.
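As a sketch, the local-machine steps above might look like this with gcloud (the project ID my-project and the service account name terraform-sa are placeholders):
# Create the service account and grant it the Storage Admin role.
gcloud iam service-accounts create terraform-sa --display-name "Terraform"
gcloud projects add-iam-policy-binding my-project \
  --member "serviceAccount:terraform-sa@my-project.iam.gserviceaccount.com" \
  --role "roles/storage.admin"
# Download a key file and point the Google provider at it.
gcloud iam service-accounts keys create ~/terraform-sa-key.json \
  --iam-account terraform-sa@my-project.iam.gserviceaccount.com
export GOOGLE_APPLICATION_CREDENTIALS=~/terraform-sa-key.json
terraform apply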

Cluster Access Issue in Azure Using Terraform

Error: authentication is not configured for provider. Please configure it through one of the following options: 1. DATABRICKS_HOST + DATABRICKS_TOKEN environment variables. 2. host + token provider arguments. 3. azure_databricks_workspace_id + AZ CLI authentication. 4. azure_databricks_workspace_id + azure_client_id + azure_client_secret + azure_tenant_id for Azure Service Principal authentication. 5. Run databricks configure --token that will create ~/.databrickscfg file.
Please check and advise about this error.
You can try the workarounds below, given in this GitHub discussion, to troubleshoot your issue.
The overall recommendation is to separate workspace creation (azurerm provider) from workspace provisioning (databricks provider).
One workaround is to have an empty ~/.databrickscfg file, so the locals block might be avoided. Not ideal, but it will work.
You can also work around it by using locals to pre-configure the related resource names and then referencing those when building the workspace resource ID in the provider config,
with all databricks provider resources having a depends_on reference to the Databricks workspace.
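A minimal sketch of that locals-based workaround (the resource group, workspace name, and subscription ID variable are placeholders; the azure_databricks_workspace_id argument name follows the error message above):
locals {
  resource_group = "my-rg"
  workspace_name = "my-databricks-ws"
}

resource "azurerm_databricks_workspace" "this" {
  name                = local.workspace_name
  resource_group_name = local.resource_group
  location            = "westeurope"
  sku                 = "standard"
}

provider "databricks" {
  # Build the workspace resource ID from locals instead of referencing
  # the azurerm resource, so the provider can be configured up front.
  azure_databricks_workspace_id = "/subscriptions/${var.subscription_id}/resourceGroups/${local.resource_group}/providers/Microsoft.Databricks/workspaces/${local.workspace_name}"
}

resource "databricks_token" "pat" {
  comment = "Terraform"
  # Every databricks resource waits for the workspace to exist.
  depends_on = [azurerm_databricks_workspace.this]
}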

Terraform cloud : Import existing resource

I am using terraform cloud to manage the state of the infrastructure provisioned in AWS.
I am trying to use terraform import to import an existing resource that is currently not managed by terraform.
I understand terraform import is a local-only command. I have set up a workspace reference as follows:
terraform {
  required_version = "~> 0.12.0"

  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "foo"

    workspaces {
      name = "bar"
    }
  }
}
The AWS credentials are configured in the remote cloud workspace, but terraform does not appear to reference the AWS credentials from the workspace; instead it falls back to using the local credentials, which point to a different AWS account. I would like Terraform to use the credentials from the workspace variables when I run terraform import.
When I comment out the locally configured credentials, I get the error:
Error: No valid credential sources found for AWS Provider.
I would have expected terraform to use the credentials configured in the workspace.
Note that terraform is able to use the credentials correctly when I run the plan/apply commands directly from the cloud console.
Per the backends section of the import docs, plan and apply run in Terraform Cloud whereas import runs locally. Therefore, the import command will not have access to workspace credentials set in Terraform Cloud. From the docs:
In order to use Terraform import with a remote state backend, you may need to set local variables equivalent to the remote workspace variables.
So instead of running the following locally (assuming you've provided access keys to Terraform Cloud):
terraform import aws_instance.myserver i-12345
we should run, for example:
export AWS_ACCESS_KEY_ID=abc
export AWS_SECRET_ACCESS_KEY=1234
terraform import aws_instance.myserver i-12345
where the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY have the same permissions as those configured in Terraform Cloud.
Note for AWS SSO users
If you are using AWS SSO and CLI v2, functionality for terraform to use the SSO credential cache was added per this AWS provider issue. The steps for importing with an SSO profile are:
Ensure you've performed a login and have an active session with e.g. aws sso login --profile my-profile
Make the profile name available to terraform as an environment variable with e.g. AWS_PROFILE=my-profile terraform import aws_instance.myserver i-12345
If the following error is displayed, ensure you are using a CLI version greater than 2.1.23:
Error: SSOProviderInvalidToken: the SSO session has expired or is invalid
│ caused by: expected RFC3339 timestamp: parsing time "2021-07-18T23:10:46UTC" as "2006-01-02T15:04:05Z07:00": cannot parse "UTC" as "Z07:00"
Use the terraform_remote_state data source, for example:
data "terraform_remote_state" "test" {
backend = "s3"
config = {
bucket = "BUCKET_NAME"
key = "BUCKET_KEY WHERE YOUR TERRAFORM.TFSTATE FILE IS PRESENT"
region = "CLOUD REGION"
}
}
Now you can reference your provisioned resources.
Example:
For getting the VPC ID:
data.terraform_remote_state.test.outputs.vpc_id
Just make sure the cloud resource property you want to refer to is exported as an output and stored in the terraform.tfstate file.
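For that to work, the source configuration whose state you are reading must export the value; a minimal sketch, assuming a VPC resource named aws_vpc.main:
output "vpc_id" {
  value = aws_vpc.main.id
}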

GCP Service account key management and usage in Terraform

I am creating a CI/CD pipeline for Terraform so that my GCP resource creation is automated. But Terraform needs a service account to do the job; I created the service account, and the key was downloaded to my machine. What is the correct way to store it so that, when the Cloud Build pipeline runs, Terraform will pick it up and execute the scripts?
provider "google" {
credentials = file(var.cred_file)
project = var.project_name
region = var.region
}
Is it okay to store this file in a Cloud Storage bucket? Or are there better alternatives?
On GCP you can use a bucket to keep sensitive information, and you can use access control lists (ACLs) to define who has access to your buckets and objects. GCP offers several storage options, and the best one depends on your needs; just ensure that the option you choose provides the security tools to keep your files safe. Once you have granted permissions to your Cloud Build service account, you can pass the path to the service account key in code.
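As a sketch, a Cloud Build step could copy the key from a bucket before invoking Terraform (the bucket and file names are placeholders, and whether to store a long-lived key at all is the trade-off noted above):
# Fetch the key and pass its path to the provider's cred_file variable.
gsutil cp gs://my-secrets-bucket/terraform-sa-key.json ./terraform-sa-key.json
terraform init
terraform apply -var "cred_file=./terraform-sa-key.json"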

Azure Terraform initial setup

I have worked with Terraform for AWS before successfully. Now I am trying to work with Azure and facing a few challenges. I have successfully authenticated to my Azure account using the Azure CLI. When I run the basic Terraform azurerm provider .tf and do a terraform init, it just works. But when I put in any additional code like container creation or blob creation .tf files, init stops working and gives me the message below:
No available provider "azure" plugins are compatible with this Terraform version.
Error: no available version is compatible with this version of Terraform
Terraform version :
bash-3.2$ terraform -v
Terraform v0.12.19
+ provider.azurerm v1.38.0
I used version 1.38.0 and tried many others, but it still continues to give me the error.
There are two providers for the different Azure models.
The Azure Service Management provider targets the classic model in Azure and is not recommended for use now. It provides resources in the format azure_xxx.
The Azure Resource Manager provider targets the Resource Manager model, which calls ARM; it is recommended and well supported. It provides resources in the format azurerm_xxx.
You can learn more about the ASM and ARM models in the document Azure Resource Manager vs. classic deployment: Understand deployment models and the state of your resources.
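In short, the error appears because any resource with the azure_ prefix pulls in the old classic provider, which has no release compatible with Terraform 0.12. A sketch of container creation using only azurerm_ resources (all names are placeholders; the argument set is the one I'd expect for the 1.x provider from the question):
provider "azurerm" {
  version = "~> 1.38"
}

resource "azurerm_resource_group" "example" {
  name     = "example-rg"
  location = "westeurope"
}

resource "azurerm_storage_account" "example" {
  name                     = "examplestoracct"
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

# azurerm_storage_container, not azure_storage_container: the azure_
# prefix belongs to the deprecated classic (ASM) provider.
resource "azurerm_storage_container" "example" {
  name                  = "content"
  resource_group_name   = azurerm_resource_group.example.name
  storage_account_name  = azurerm_storage_account.example.name
  container_access_type = "private"
}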
