Artifact Registry permissions issue with Terraform - terraform

I am trying to create a service account using Terraform, and I also want to apply multiple permissions to that account using Terraform.
# create artifact registry
resource "google_artifact_registry_repository" "yacht-away" {
  provider      = google-beta
  location      = "asia-south1"
  repository_id = "yacht-away"
  description   = "yacht-away docker repository with iam"
  format        = "DOCKER"
}
# create service account
resource "google_service_account" "yacht-away-service-acc" {
  provider     = google-beta
  account_id   = "yacht-away-service-ac"
  display_name = "Yacht Away Service Account"
}
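For context, the IAM grant that triggers the error presumably looks something like the sketch below; it is reconstructed from the error message and is not shown in the original question, so treat the exact arguments as assumptions.
# grant the reader role on the repository to the new service account (reconstructed)
resource "google_artifact_registry_repository_iam_member" "yacht-away-reader" {
  provider   = google-beta
  project    = "dhb-222614"
  location   = google_artifact_registry_repository.yacht-away.location
  repository = google_artifact_registry_repository.yacht-away.name
  role       = "roles/artifactregistry.reader"
  member     = "serviceAccount:${google_service_account.yacht-away-service-acc.email}"
}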
However, I constantly see the error below. I have verified the value of location everywhere, and it is the same as shown above, so that is probably not the issue. The service account used by Terraform has project editor access, and I have also tried giving it owner access.
Error: Error when reading or editing Resource "artifactregistry repository \"projects/dhb-222614/locations/asia-south1/repositories/yacht-away\"" with IAM Member: Role "roles/artifactregistry.reader" Member "serviceAccount:yacht-away-service-ac@dhb-222614.iam.gserviceaccount.com": Error retrieving IAM policy for artifactregistry repository "projects/dhb-222614/locations/asia-south1/repositories/yacht-away": googleapi: Error 403: The caller does not have permission
So I don't understand where I am going wrong.

Can't create google_storage_bucket via Terraform

I'd like to create the following resource via Terraform:
resource "google_storage_bucket" "tf_state_bucket" {
name = var.bucket-name
location = "EUROPE-WEST3"
storage_class = "STANDARD"
versioning {
enabled = true
}
force_destroy = false
public_access_prevention = "enforced"
}
Unfortunately, during the execution of terraform apply, I got the following error:
googleapi: Error 403: X@gmail.com does not have storage.buckets.create access to the Google Cloud project. Permission 'storage.buckets.create' denied on resource (or it may not exist)., forbidden
Here's the list of things I tried and checked:
- Verified that the Google Cloud Storage (JSON) API is enabled on my project.
- Checked the IAM roles and permissions: X@gmail.com has the Owner and the Storage Admin roles.
- I can create a bucket manually via the Google Console.
- Terraform is generally authorised to create resources; for example, I can create a VM with it.
What else can be done to authenticate Terraform to create Google Storage Buckets?
I think you run the Terraform code in a Shell session from your local machine and use a User identity instead of a Service Account identity.
In this case, to solve your issue from your local machine:
- Create a Service Account for Terraform in the GCP IAM console, with the Storage Admin role.
- Download a Service Account key file from IAM.
- Set the GOOGLE_APPLICATION_CREDENTIALS env var in your Shell session to the Service Account key file path.
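Alternatively, the provider can point at the key file directly; a minimal sketch, assuming the key was saved as terraform-sa-key.json and using placeholder project and region values:
provider "google" {
  # path to the Service Account key file downloaded from IAM (placeholder name)
  credentials = file("terraform-sa-key.json")
  project     = "your-project-id"
  region      = "europe-west3"
}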
If you run your Terraform code somewhere else, you need to check that Terraform is correctly authenticated to GCP.
The use of a key file is not recommended because it is not the most secure way; it is better to launch Terraform from a CI tool like Cloud Build instead of from your local machine.
From Cloud Build there is no need to download and set a key file.

How to enable GCP service agent account via Terraform?

I understand there is a difference between a service account and a service agent for different services such as Composer.
How do you enable a service agent via Terraform?
What I'm trying to do is this :
# TODO : Maybe enable this service agent somehow via gcloud? It got enabled when trying to manually create the composer env from the console
# Step 4 (Src2) - Host project GKE service account: service-<some-numeric-id>@container-engine-robot.iam.gserviceaccount.com
# Need 'container.serviceAgent' in the host project
resource "google_project_iam_member" "dev-omni-orch-gke-project-lvl-roles-t2" {
  provider = google.as_super_admin
  for_each = toset([
    "roles/container.serviceAgent",
  ])
  role   = each.value
  member = "serviceAccount:service-<some-numeric-id>@container-engine-robot.iam.gserviceaccount.com"
  # member = "serviceAccount:service-${google_project.main-shared-vpc-host.number}@container-engine-robot.iam.gserviceaccount.com"
  # project = google_project.dev-main-code-base.project_id
  project = google_project.main-shared-vpc-host.project_id
}
I get
Request `Create IAM Members roles/container.serviceAgent serviceAccount:service-<some-numeric-id>@container-engine-robot.iam.gserviceaccount.com for project "<shared-vpc-host-project-id>"` returned error: Batch request and retried single request "Create IAM Members roles/container.serviceAgent serviceAccount:service-<some-numeric-id>@container-engine-robot.iam.gserviceaccount.com for project \"<shared-vpc-host-project-id>\"" both failed. Final error: Error applying IAM policy for project "<shared-vpc-host-project-id>": Error setting IAM policy for project "<shared-vpc-host-project-id>": googleapi: Error 400: Service account service-<some-numeric-id>@container-engine-robot.iam.gserviceaccount.com does not exist., badRequest
But when I try to do it manually via the console, there is a prompt that asks me if I want to enable this service agent, which I do. I want to be able to do this in Terraform.
The said prompt:
The service-[PROJECT_ID]@cloudcomposer-accounts.iam.gserviceaccount.com service agent will only exist after the Cloud Composer API has been enabled.
This can be done in Terraform using the google_project_service resource, for example:
resource "google_project_service" "project" {
project = "your-project-id"
service = "composer.googleapis.com"
}
Once the API has been enabled, the service agent should exist and you should be able to grant it the required permissions.
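Likewise, the GKE service agent from the question only exists once the Kubernetes Engine API is enabled on the host project. A sketch of how the grant could then be wired up, reusing the google_project.main-shared-vpc-host resource from the question (container.googleapis.com is the Kubernetes Engine API):
resource "google_project_service" "container" {
  project = google_project.main-shared-vpc-host.project_id
  service = "container.googleapis.com"
}

resource "google_project_iam_member" "gke-service-agent" {
  project = google_project.main-shared-vpc-host.project_id
  role    = "roles/container.serviceAgent"
  member  = "serviceAccount:service-${google_project.main-shared-vpc-host.number}@container-engine-robot.iam.gserviceaccount.com"

  # ensure the API (and therefore the service agent) exists before granting the role
  depends_on = [google_project_service.container]
}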

Terraform apply can't be ran because of azurerm_management_lock

I have two resources, azurerm_storage_account and azurerm_cosmosdb_account, created in a resource group my-rg.
I also have an azurerm_management_lock set to ReadOnly at the my-rg level.
resource "azurerm_storage_account" "main" {
name = "my-storage"
resource_group_name = azurerm_resource_group.main.name
...
}
resource "azurerm_cosmosdb_account" "main" {
name = "my-cosmosdb"
resource_group_name = azurerm_resource_group.main.name
...
}
resource "azurerm_resource_group" "main" {
name = "my-rg"
...
}
resource "azurerm_management_lock" "resource-group-level" {
name = "terraform-managed-resources"
scope = azurerm_resource_group.main.id
lock_level = "ReadOnly"
}
When I run terraform apply I run into these errors:
Error: [ERROR] Unable to List Write keys for CosmosDB Account
"my-cosmosdb":
documentdb.DatabaseAccountsClient#ListKeys: Failure sending request:
StatusCode=409 -- Original Error: autorest/azure: Service returned an
error. Status= Code="ScopeLocked" Message="The scope
'/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/my-rg/providers/Microsoft.DocumentDB/databaseAccounts/my-cosmosdb'
cannot perform write operation because following scope(s) are locked:
'/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/my-rg'.
Please remove the lock and try again."
Error: building Queues Client: retrieving Account Key: Listing Keys
for Storage Account "my-storage" (Resource Group
"my-rg"): storage.AccountsClient#ListKeys: Failure
sending request: StatusCode=409 -- Original Error: autorest/azure:
Service returned an error. Status= Code="ScopeLocked"
Message="The scope
'/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/my-rg/providers/Microsoft.Storage/storageAccounts/my-storage'
cannot perform write operation because following scope(s) are locked:
'/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/my-rg'.
Please remove the lock and try again."
What should I do to allow terraform apply to run without removing the lock manually?
Note that this is a simplified example; I have many more resources that aren't impacted by this lock. I have only listed the resources involved in the error log.
Please check the considerations before applying your locks.
When a ReadOnly lock is applied to a resource group that is the parent of a storage account, it applies to the storage account too.
For a storage account locked with read-only access, the List Keys operation is blocked for that account. List Keys is an HTTPS POST operation, and all POST operations are prevented when a ReadOnly lock is configured for the account; in other words, the lock prevents the POST method from sending data to the (ARM) API.
When a read-only lock is configured for a storage account, users and clients who already have the account keys can continue to access data, but users who don't have the account keys need to use Azure AD credentials to access blob or queue data.
See "Authorizing data operations when a ReadOnly lock is in effect" (Azure Storage, Microsoft Docs) for the minimum roles required.
The same may apply to Cosmos DB; check whether listing keys there requires assigning the Cosmos DB Account Reader role, which includes the Microsoft.DocumentDB/databaseAccounts/readonlykeys/action permission:
"permissions": [
{
"actions": [
…
"Microsoft.DocumentDB/databaseAccounts/readonlykeys/action",
…..
],
You can also customize the actions in a custom role to include the Microsoft.DocumentDB/databaseAccounts/listKeys/* action.
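If the intent of the lock is only to protect the resources from deletion, a CanNotDelete lock instead of a ReadOnly one would avoid the issue entirely, since it does not block POST operations such as List Keys. A minimal sketch, based on the lock resource from the question:
resource "azurerm_management_lock" "resource-group-level" {
  name       = "terraform-managed-resources"
  scope      = azurerm_resource_group.main.id
  # CanNotDelete still allows reads and modifications, including List Keys
  lock_level = "CanNotDelete"
}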
References:
- Manage locks - Azure Resource Manager | Microsoft Docs
- Azure Cosmos DB read data using role based access control - Stack Overflow
- Read-only access to Cosmos DB - Issue - GitHub

Terraform azurerm_data_factory vsts_configuration failing with Error: Error configuring Repository for Data Factory

I'm trying to set up the code repository in Azure Data Factory using Terraform, deployed with Azure Cloud Shell with Contributor access, following this: https://www.terraform.io/docs/providers/azurerm/r/data_factory.html#vsts_configuration
I'm getting the error message:
Error: Error configuring Repository for Data Factory "adf-name"
(Resource Group "rg-name"):
datafactory.FactoriesClient#ConfigureFactoryRepo: Failure responding
to request: StatusCode=403 -- Original Error: autorest/azure: Service
returned an error. Status=403 Code="AuthorizationFailed" Message="The
client 'xxx@xxx.com' with object id 'xxxxx' does not have
authorization to perform action
'Microsoft.DataFactory/locations/configureFactoryRepo/action' over
scope '/subscriptions/xxxxxx' or the scope is invalid. If access was recently granted,
please refresh your credentials."
I've de-sensitised the client, object id, and scope.
I am able to set up the code repository in the portal, but it fails when I run the Terraform code in Azure Cloud Shell. Has anyone seen this error message before, or know how to get past it?
Code snippet:
`provider "azurerm" {
version = "=2.3.0"
features {}
}
resource "azurerm_data_factory" "example" {
name = var.adf_name
location = var.location
resource_group_name = var.rg_name
vsts_configuration {
account_name = var.account_name
branch_name = var.branch_name
project_name = var.project_name
repository_name = var.repo_name
root_folder = var.root_folder
tenant_id = var.tenant_id
}
}`
A custom role had to be added for the action 'Microsoft.DataFactory/locations/configureFactoryRepo/action' and assigned to the service principal. The Contributor role by itself was not enough to set up the code repository for Azure Data Factory using Terraform azurerm.
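A sketch of what that custom role and its assignment could look like in Terraform; the role name and the service_principal_object_id variable are placeholders, and scoping it to the subscription is an assumption:
data "azurerm_subscription" "primary" {}

resource "azurerm_role_definition" "configure_factory_repo" {
  name  = "ConfigureFactoryRepo"
  scope = data.azurerm_subscription.primary.id

  permissions {
    actions = ["Microsoft.DataFactory/locations/configureFactoryRepo/action"]
  }

  assignable_scopes = [data.azurerm_subscription.primary.id]
}

# assign the custom role to the service principal running Terraform
resource "azurerm_role_assignment" "terraform_sp" {
  scope              = data.azurerm_subscription.primary.id
  role_definition_id = azurerm_role_definition.configure_factory_repo.role_definition_resource_id
  principal_id       = var.service_principal_object_id
}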

How to reload the Terraform provider at runtime to use a different AWS profile

How can I reload the Terraform provider at runtime to use a different AWS profile?
Create a new user:
resource "aws_iam_user" "user_lake_admin" {
name = var.lake_admin_user_name
path = "/"
tags = {
tag-key = "data-test"
}
}
provider "aws" {
access_key = aws_iam_access_key.user_lake_admin_AK_SK.id
secret_key = aws_iam_access_key.user_lake_admin_AK_SK.secret
region = "us-west-2"
alias = "lake-admin-profile"
}
This lake_admin user is created in the same file.
Then I'm trying to use:
provider "aws" {
access_key = aws_iam_access_key.user_lake_admin_AK_SK.id
secret_key = aws_iam_access_key.user_lake_admin_AK_SK.secret
region = "us-west-2"
alias = "lake-admin-profile"
}
resource "aws_glue_catalog_database" "myDB" {
name = "my-db"
provider = aws.lake-admin-profile
}
As far as I know, Terraform providers are configured before any resources are created, across all Terraform files.
But is there any way to reload a provider's configuration in the middle of a Terraform run?
You can't do this directly.
You can apply the creation of the user in one root module and state, and then use its credentials in a provider for the second.
For the purposes of deploying infrastructure, you are likely better off with IAM Roles and assume-role providers to handle this kind of situation.
Generally, you don't need to create infrastructure with a specific user; there's rarely an advantage to doing that. I can't think of a case where the principal creating infrastructure has any implied special access to the created infrastructure.
You can use a deployment IAM Role or IAM User to deploy everything in the account, and then use resource-based and IAM policies to apply the restrictions in the deployment.
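A minimal sketch of the assume-role approach (the role ARN is a placeholder for a pre-created deployment role):
provider "aws" {
  region = "us-west-2"
  alias  = "lake-admin-profile"

  # assume a pre-created deployment role instead of injecting freshly created user keys
  assume_role {
    role_arn = "arn:aws:iam::123456789012:role/lake-admin-deploy"
  }
}

resource "aws_glue_catalog_database" "myDB" {
  name     = "my-db"
  provider = aws.lake-admin-profile
}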
