I'd like to create the following resource via Terraform:
resource "google_storage_bucket" "tf_state_bucket" {
name = var.bucket-name
location = "EUROPE-WEST3"
storage_class = "STANDARD"
versioning {
enabled = true
}
force_destroy = false
public_access_prevention = "enforced"
}
Unfortunately, during the execution of terraform apply, I got the following error:
googleapi: Error 403: X@gmail.com does not have storage.buckets.create access to the Google Cloud project. Permission 'storage.buckets.create' denied on resource (or it may not exist)., forbidden
Here's the list of things I tried and checked:
Verified that Google Cloud Storage (JSON) API is enabled on my project.
Checked the IAM roles and permissions: X@gmail.com has the Owner and the Storage Admin roles.
I can create a bucket manually via the Google Console.
Terraform is generally authorised to create resources, for example, I can create a VM using it.
What else can be done to authenticate Terraform to create Google Storage Buckets?
I think you are running the Terraform code in a shell session on your local machine and authenticating with a User identity instead of a Service Account identity.
In that case, to solve the issue from your local machine:
Create a Service Account for Terraform in the GCP IAM console and grant it the Storage Admin role.
Download a Service Account key file from IAM.
Set the GOOGLE_APPLICATION_CREDENTIALS environment variable in your shell session to the path of the Service Account key file, as in the sketch below.
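For example, a minimal sketch; the service account name terraform-sa, the project id YOUR_PROJECT_ID and the key path are placeholders:

# Create the service account, grant it Storage Admin and download a key
gcloud iam service-accounts create terraform-sa --display-name="Terraform"
gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
  --member="serviceAccount:terraform-sa@YOUR_PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/storage.admin"
gcloud iam service-accounts keys create ~/terraform-sa-key.json \
  --iam-account="terraform-sa@YOUR_PROJECT_ID.iam.gserviceaccount.com"

# Point Terraform at the key for this shell session
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/terraform-sa-key.json"
terraform apply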
If you run your Terraform code somewhere else, check that Terraform is correctly authenticated to GCP there.
Using a downloaded key file is not recommended, because it is not the most secure option; that is why it is better to launch Terraform from a CI tool like Cloud Build rather than from your local machine.
From Cloud Build there is no need to download and set a key file.
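Inside Cloud Build, the google provider can authenticate through the build's service account via Application Default Credentials, so a credentials argument is not needed; a minimal sketch, with placeholder project and region values:

provider "google" {
  project = "your-project-id"
  region  = "europe-west3"
}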
Related
I'm working on a Terraform project in which I setup several Azure resources.
One of these Azure resources is a service principal (linked to an app registration) which I use to deploy my Terraform code in a CI/CD pipeline via GitHub Actions.
When developing locally, I use az login to authenticate, but occasionally I receive an error for the Terraform app service principal. Most of the time the error is not raised when I re-run terraform apply; sometimes it persists for several terraform apply calls.
Error: Retrieving Application with object ID "fe2b93b7-e26c-402c-ab4f-87e3695c1f45" with module.app_registrations.azuread_application.terraform_app on ../modules/app_registrations/terraform.tf line 56, in resource "azuread_application" "terraform_app": 56: resource "azuread_application" "terraform_app" { ApplicationsClient.BaseClient.Get(): Get "https://graph.microsoft.com/beta/b859b851-97d8-4dc2-bf56-f2a5bc5c494b/applications/fe2b93b7-e26c-402c-ab4f-87e3695c1f45": http: RoundTripper implementation (*retryablehttp.RoundTripper) returned a nil *Response with a nil error
I'm having a hard time understanding why I can create this service principal using my user credentials via Terraform, but receive this retrieval error when refreshing the state of the same service principal. When I deploy the Terraform code via GitHub Actions, which authenticates with the service principal, this retrieval error is never raised.
Can anyone point me in the right direction?
I've already added the Application Administrator role to my user account and the Application.ReadWrite.All permission to the Terraform service principal. The above error when refreshing the state with user credentials via az login persists.
I understand there is a difference between a service account and a service agent for different services such as composer.
How do you enable a service agent via terraform?
What I'm trying to do is this:
# TODO: Maybe enable this service agent somehow via gcloud? It got enabled when trying to manually create the composer env from the console
# Step 4 (Src2) - Host project GKE service account: service-<some-numeric-id>@container-engine-robot.iam.gserviceaccount.com
# Need 'container.serviceAgent' in the host project
resource "google_project_iam_member" "dev-omni-orch-gke-project-lvl-roles-t2" {
  provider = google.as_super_admin
  for_each = toset([
    "roles/container.serviceAgent",
  ])
  role   = each.value
  member = "serviceAccount:service-<some-numeric-id>@container-engine-robot.iam.gserviceaccount.com"
  # member = "serviceAccount:service-${google_project.main-shared-vpc-host.number}@container-engine-robot.iam.gserviceaccount.com"
  # project = google_project.dev-main-code-base.project_id
  project = google_project.main-shared-vpc-host.project_id
}
I get
Request `Create IAM Members roles/container.serviceAgent serviceAccount:service-<some-numeric-id>@container-engine-robot.iam.gserviceaccount.com for project "<shared-vpc-host-project-id>"` returned error: Batch request and retried single request "Create IAM Members roles/container.serviceAgent serviceAccount:service-<some-numeric-id>@container-engine-robot.iam.gserviceaccount.com for project \"<shared-vpc-host-project-id>\"" both failed. Final error: Error applying IAM policy for project "<shared-vpc-host-project-id>": Error setting IAM policy for project "<shared-vpc-host-project-id>": googleapi: Error 400: Service account service-<some-numeric-id>@container-engine-robot.iam.gserviceaccount.com does not exist., badRequest
But when I try to do it manually via the console, there is a prompt that asks me if I want to enable this service agent, which I do; I want to be able to do the same in Terraform.
The said prompt:
The service-[PROJECT_ID]@cloudcomposer-accounts.iam.gserviceaccount.com service agent will only exist after the Cloud Composer API has been enabled.
This can be done in Terraform using the google_project_service resource, for example:
resource "google_project_service" "project" {
project = "your-project-id"
service = "composer.googleapis.com"
}
Once the API has been enabled, the service agent should exist and you should be able to grant it the required permissions.
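Applied to the question's GKE service agent, which is created when the Kubernetes Engine API is enabled on the host project, a sketch could look like the following; the resource names here are illustrative, not taken from the original configuration:

resource "google_project_service" "gke" {
  project = google_project.main-shared-vpc-host.project_id
  service = "container.googleapis.com"
}

resource "google_project_iam_member" "gke_service_agent" {
  provider = google.as_super_admin
  project  = google_project.main-shared-vpc-host.project_id
  role     = "roles/container.serviceAgent"
  member   = "serviceAccount:service-${google_project.main-shared-vpc-host.number}@container-engine-robot.iam.gserviceaccount.com"

  # Make sure the API (and therefore the service agent) exists before granting the role
  depends_on = [google_project_service.gke]
}

If the agent still does not exist after the API is enabled, the google-beta provider's google_project_service_identity resource can be used to force its creation.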
I have a use case where I need to grant Cloud Build access to GKE, but I could not find a Terraform resource to do that, nor a gcloud CLI command to do the same.
Yes, you can do this in Terraform by creating a google_project_iam_member for the Cloud Build service account that's created by default when you enable the Cloud Build API. For example:
resource "google_project_iam_member" "cloudbuild_kubernetes_policy" {
project = var.project_id
role = "roles/container.developer"
member = "serviceAccount:${var.project_number}#cloudbuild.gserviceaccount.com"
}
The value declared in the role attribute/key corresponds to a role in the console user interface (an image of which you have included above).
I am creating a CI/CD pipeline for Terraform so that my GCP resource creation is automated. Terraform needs a service account to do the job; I created the service account and downloaded its key to my machine, but what is the correct way to store the key so that, when the Cloud Build pipeline runs, Terraform picks it up and executes the scripts?
provider "google" {
credentials = file(var.cred_file)
project = var.project_name
region = var.region
}
Is it okay to store this file in Cloud storage bucket ? Or there are some better alternatives ?
On GCP you have the option of keeping sensitive information in a bucket, and you can use access control lists (ACLs) to define who has access to your buckets and objects. GCP offers several storage options; the best one depends on your needs, just make sure the option you choose provides the security controls to keep your files safe. Once you have granted the appropriate permissions to your Cloud Build service account, you can pass the path to the service account key in your code, as sketched below.
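As a minimal sketch, assuming the key file is stored in a bucket named terraform-secrets (the bucket and resource names are placeholders), the Cloud Build service account can be given read access like this:

resource "google_storage_bucket_iam_member" "cloudbuild_key_reader" {
  bucket = "terraform-secrets"
  role   = "roles/storage.objectViewer"
  member = "serviceAccount:${var.project_number}@cloudbuild.gserviceaccount.com"
}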
I am reading the Terraform docs about using a service principal with a client secret when running in CI, a Dockerfile, or similar, and I quote:
We recommend using either a Service Principal or Managed Service Identity when running Terraform non-interactively (such as when running Terraform in a CI server) - and authenticating using the Azure CLI when running Terraform locally.
It then goes into great detail about creating a service principal and then gives an awful example at the end where the client ID and client secret are either stored in environment variables:
export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_SECRET="00000000-0000-0000-0000-000000000000"
export ARM_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"
or in the terraform provider block:
provider "azurerm" {
# Whilst version is optional, we /strongly recommend/ using it to pin the version of the Provider being used
version = "=1.43.0"
subscription_id = "00000000-0000-0000-0000-000000000000"
client_id = "00000000-0000-0000-0000-000000000000"
client_secret = "${var.client_secret}"
tenant_id = "00000000-0000-0000-0000-000000000000"
}
It does put a nice yellow warning box saying not to do this, but there is no suggestion of what to do instead.
I don't think keeping the client_secret in an environment variable is a particularly good idea.
Should I be using the client certificate instead? If so, the same question arises about where to keep the configuration.
I want to avoid the Azure CLI if possible.
The Azure CLI will not return the client secret anyway.
How do I go about getting these secrets into environment variables? Should I be putting them into a vault or is there another way?
For your requirements, I think you're a little confused about how to choose a suitable option from the four authentication methods.
Managed Service Identity is only available for services that support the Managed Service Identity feature, so Docker cannot use it, and you would still need to assign it appropriate permissions, just like a service principal. You don't want to use the Azure CLI if possible; I don't know why, but let's skip it for now.
A service principal is a good way, I think. The docs recommend that you do not put the secret into a variable inside the Terraform file, so you can only use environment variables. And if you also do not want to set environment variables, then I don't think there is a way to use a service principal. The certificate option for the service principal only requires setting the certificate path in addition to the other values.
One caution for the service principal: you can see its secret only once, when you finish creating it; after that it is not displayed anymore. If you forget it, you can only reset the secret.
So I think the service principal is the most suitable way for you. You can set the environment variables with the --env parameter of the docker run command, or set them in the Dockerfile with ENV, as sketched below. As for storing the secret in a key vault, I think you can find the answer in my previous answer.
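A minimal sketch, assuming the Terraform code is baked into an image called terraform-runner (the image name and the credential values are placeholders):

# Pass the service principal credentials into the container at run time
docker run \
  --env ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000" \
  --env ARM_CLIENT_SECRET="00000000-0000-0000-0000-000000000000" \
  --env ARM_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000" \
  --env ARM_TENANT_ID="00000000-0000-0000-0000-000000000000" \
  terraform-runner terraform apply

The azurerm provider reads the ARM_* variables automatically, so the provider block itself does not need to contain any credentials.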