Organization security policy error in terraform - terraform-provider-gcp

I am getting the error below during terraform apply in a Cloud Build pipeline. The Cloud Build service account has these roles: Compute Organization Firewall Policy Admin, Owner, and Compute Admin:
for Creating OrganizationSecurityPolicy: error while retrieving operation: googleapi: Error 403: Required 'compute.globalOperations.get' permission for 'locations/global/operations/org-66596309756-1634926613476-5cef50407b412-cf45ce60-0943c3bd', forbidden

A 403 from GCP is typically a permissions error: check IAM and make sure the Cloud Build service account has the roles it needs.
Also, note that it is bad practice for the Cloud Build service account to have the Owner role. I would recommend creating a separate service account and granting it only the specific roles it needs for what you want to do.
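For illustration, here is a minimal Terraform sketch of that approach: a dedicated service account granted narrowly scoped roles at the organization level instead of Owner. The project, account ID, and exact role list are assumptions to adjust for your setup; the organization ID is taken from the operation name in the error, and the role IDs are worth double-checking against your org.

# Hypothetical dedicated service account used by the Cloud Build pipeline.
resource "google_service_account" "tf_pipeline" {
  project      = "my-seed-project"  # assumption: project that hosts the pipeline's service account
  account_id   = "tf-org-fw-policy"
  display_name = "Terraform org firewall policy pipeline"
}

# Org-level role for managing organization firewall/security policies.
resource "google_organization_iam_member" "fw_policy_admin" {
  org_id = "66596309756"  # organization ID from the error message
  role   = "roles/compute.orgFirewallPolicyAdmin"
  member = "serviceAccount:${google_service_account.tf_pipeline.email}"
}

# Read access at the org level so the provider can poll the org-level operation
# (compute.viewer includes compute.globalOperations.get).
resource "google_organization_iam_member" "compute_viewer" {
  org_id = "66596309756"
  role   = "roles/compute.viewer"
  member = "serviceAccount:${google_service_account.tf_pipeline.email}"
}

Whether the firewall policy admin role alone carries compute.globalOperations.get at the organization scope is worth verifying; the 403 above suggests it did not for this pipeline.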

Related

Unable to connect to Azure DevOps from Azure Logic Apps

I'm trying to queue an Azure DevOps pipeline from an Azure Logic App. When I create the workflow, the connection is configured correctly without any issue. However, the project dropdown list fails to populate the team projects, and the same goes for the build definition ID dropdown. The organization dropdown is populated correctly. I am a Team Project Administrator on the team project and also have Logic App Contributor. I'm also able to get the list of team projects from this organization using the REST API. Here is the error I got:
Could not retrieve values. Error code: ‘Unauthorized’, Message: ‘TF400813: The user ‘573f1013-71ca-6a2f-ac35-ba1bef678b59’ is not authorized to access this resource.
Azure DevOps ActivityId: 0ba5ef8c-4ac4-4810-bf92-7835ca5bf444
Details: TF400813: The user ‘573f1013-71ca-6a2f-ac35-ba1bef678b59’ is not authorized to access this resource.
clientRequestId: eae306a3-f638-424b-96e5-579a70c9dcf7’. More diagnostic information: x-ms-client-request-id is ‘F6A975D5-74AA-41E3-9DCA-70A508139387’.
According to the error message, the account you signed in with in the Queue a build action may have selected the wrong domain (AAD directory).
You can try the following steps to sign in with the account again in the Queue a build action:
Step 1: Navigate to this user profile URL: https://aex.dev.azure.com/me?mkt=zh-CN&campaign=o~msft~old~vsts~profile
Then you can select the correct AAD directory.
Step 2: Sign in to Azure DevOps with your account in the Azure Logic App again.
You need to check that the domain is correct.

Azure terraform storage account permission

I want to learn more about Azure OpenVPN configurations and how they work. Looking around, I found an open-source project on GitHub at the following link:
https://github.com/terraform-azurerm-examples/example-hub.git (Thank you for your code)
I set all the variables I wanted and removed the version constraint from the azurerm provider, but when I run terraform apply I get an error on the Azure storage account.
The error is this one:
Error: reading queue properties for AzureRM Storage Account "examplehubw6sr1wyncn": queues.Client#GetServiceProperties: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="AuthorizationPermissionMismatch" Message="This request is not authorized to perform this operation using this permission.\nRequestId:cce5a313-b003-005c-2bb2-9d8a2f000000\nTime:2021-08-30T15:19:07.9036073Z"
As far as I understand, the error is due to secret permissions, which I did update, granting Get, List and Set, but the error keeps showing up.
I am using Terraform version 0.14.5 and azurerm provider version 2.74.0.
I have never had this type of error; on my subscription I have an administrator role.
Has anyone had this error and knows how to solve it? I would really appreciate your help.
The error is probably because your user does not have data plane permissions on your storage account - which is where Terraform wants to put the state file. Give your user the Storage Blob Data Contributor role: https://learn.microsoft.com/en-us/azure/storage/blobs/assign-azure-role-data-access?tabs=portal
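If you manage role assignments with Terraform as well, a minimal sketch of that assignment follows. The storage account name comes from the error, while the resource group name is an assumption; since the 403 above is raised on queue properties, a queue data-plane role is sketched too.

data "azurerm_client_config" "current" {}

# Storage account named in the error; the resource group is a placeholder.
data "azurerm_storage_account" "hub" {
  name                = "examplehubw6sr1wyncn"
  resource_group_name = "example-hub-rg"  # assumption
}

# Blob data-plane access for the identity running Terraform.
resource "azurerm_role_assignment" "blob_data" {
  scope                = data.azurerm_storage_account.hub.id
  role_definition_name = "Storage Blob Data Contributor"
  principal_id         = data.azurerm_client_config.current.object_id
}

# The error concerns queue properties, so queue data-plane access may be needed as well.
resource "azurerm_role_assignment" "queue_data" {
  scope                = data.azurerm_storage_account.hub.id
  role_definition_name = "Storage Queue Data Contributor"
  principal_id         = data.azurerm_client_config.current.object_id
}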

Azure RBAC application-insights-component-contributor vs monitoring-contributor

I am trying to understand the overlap between two of these roles in Azure RBAC. It looks like Monitoring Contributor completely covers Application Insights Component Contributor except for "Microsoft.Resources/deployments/*". Consider the following situation: I am deploying web availability tests into an Application Insights resource, and the deployment identity is a service principal that has already been granted Monitoring Contributor. Should I also grant this identity Application Insights Component Contributor to be able to create those resources, or is Monitoring Contributor good enough?
Edit: I am also deploying alert rules along with the tests, and those rules are implemented as an ARM template. If the SP was granted Monitoring Contributor only, it fails with:
Error: requesting Validation for Template Deployment "app508-dfpg-dev3-diag-eastus2-backoffice-ai-test-dep" (Resource Group "app508-dfpg-ne-diag-eastus2"): resources.DeploymentsClient#Validate: Failure sending request: StatusCode=403 -- Original Error: Code="AuthorizationFailed" Message="The client '2c20abbf-e825-495c-9d06-90c5f04f9c60' with object id '2c20abbf-0000-0000-0000-90c5f04f9c60' does not have authorization to perform action 'Microsoft.Resources/deployments/validate/action' over scope '/subscriptions/s/resourcegroups/app508-dfpg-ne-diag-eastus2/providers/Microsoft.Resources/deployments/app508-dfpg-dev3-diag-eastus2-backoffice-ai-test-dep' or the scope is invalid. If access was recently granted, please refresh your credentials."
There is no need to grant the Application Insights Component Contributor role; the Monitoring Contributor role is enough. When you deploy the web availability tests, you just need the Microsoft.Insights/webtests/* action permission, which is already included in Monitoring Contributor.
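As a minimal sketch, assigning Monitoring Contributor to the service principal at the resource-group scope in Terraform could look like the following; the resource group name comes from the error, and the principal ID is a placeholder for the SP's object ID.

data "azurerm_resource_group" "diag" {
  name = "app508-dfpg-ne-diag-eastus2"  # resource group from the error message
}

resource "azurerm_role_assignment" "monitoring_contributor" {
  scope                = data.azurerm_resource_group.diag.id
  role_definition_name = "Monitoring Contributor"
  principal_id         = "00000000-0000-0000-0000-000000000000"  # placeholder: object ID of the deployment SP
}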

CICD Authentication using SPN

I am creating a CI/CD pipeline to move code between dev and test instances of Databricks. I am able to achieve this using my personal token. Now I am trying to do the same thing using an SPN, and when I do, I get the following error.
HTTP ERROR 403
Problem accessing /api/2.0/workspace/mkdirs. Reason:
User not authorized.
Can any of you help me resolve this error or point me to any links that talk about how to use an SPN to authenticate from DevOps to Databricks?
Does your SPN have the Contributor role on either the Databricks resources or the Azure resource groups? It can throw a very similar error if it does not.
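If it does not, a minimal Terraform sketch of granting that role on the Databricks workspace is below; the workspace name, resource group, and SPN object ID are placeholders.

# Placeholder names for the target Databricks workspace.
data "azurerm_databricks_workspace" "dev" {
  name                = "dbw-dev"
  resource_group_name = "rg-databricks-dev"
}

resource "azurerm_role_assignment" "spn_contributor" {
  scope                = data.azurerm_databricks_workspace.dev.id
  role_definition_name = "Contributor"
  principal_id         = "00000000-0000-0000-0000-000000000000"  # placeholder: object ID of the SPN
}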

"insufficient authentication scopes" from Google API when calling from K8S cluster

I'm trying to report Node.js errors to Google Error Reporting from one of our Kubernetes deployments running on a GCP/GKE cluster with RBAC (i.e. permissions defined in a service account associated with the cluster):
// Report a test error to Google Cloud Error Reporting.
const googleCloud = require('@google-cloud/error-reporting');
const googleCloudErrorReporting = new googleCloud.ErrorReporting();
googleCloudErrorReporting.report('[test] dummy error message');
This works only in certain environments:
it works when run on my laptop, using a service account that has the "Errors Writer" role
it works when running in my cluster as a K8S job, after having added the "Errors Writer" role to that cluster's service account
it causes the following error when called from my Node.js application running in one of my K8S deployments:
ERROR:@google-cloud/error-reporting: Encountered an error while attempting to transmit an error to the Stackdriver Error Reporting API.
Error: Request had insufficient authentication scopes.
It feels like the job did pick up the permission changes of the cluster's service account, whereas my deployment did not.
I did try to re-create the deployment to make it refresh its auth token, but the error is still happening...
Any ideas?
UPDATE: I ended up following Jérémie Girault's suggestion: create a service account and bind it to my deployment. It works!
The error message has to do with the access scopes set on the cluster when using the default service account. You must enable access to the appropriate API.
As you mentioned, creating a separate service account, providing it the appropriate IAM permissions and linking it to your cluster or workload will bypass this error as well.
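For reference, a minimal Terraform sketch of that approach with GKE Workload Identity follows, assuming Workload Identity is enabled on the cluster; the project, namespace, and Kubernetes service account names are placeholders.

# Dedicated Google service account that only writes error reports.
resource "google_service_account" "error_reporter" {
  project      = "my-project"  # assumption
  account_id   = "error-reporter"
  display_name = "Error Reporting writer for the Node.js deployment"
}

# Grant the "Errors Writer" role on the project.
resource "google_project_iam_member" "errors_writer" {
  project = "my-project"
  role    = "roles/errorreporting.writer"
  member  = "serviceAccount:${google_service_account.error_reporter.email}"
}

# Let the deployment's Kubernetes service account impersonate the Google service account.
resource "google_service_account_iam_member" "workload_identity" {
  service_account_id = google_service_account.error_reporter.name
  role               = "roles/iam.workloadIdentityUser"
  member             = "serviceAccount:my-project.svc.id.goog[my-namespace/my-ksa]"  # placeholders
}

The Kubernetes service account referenced in the binding also needs the iam.gke.io/gcp-service-account annotation pointing at the Google service account's email.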
