PubSub from GKE: PERMISSION_DENIED - node.js

I'm trying to connect to PubSub from within one of my GKE clusters using the Node.js client, but I get permission issues no matter which IAM service account or list of OAuth scopes I use in the cluster configuration.
While testing locally, everything works fine. I've tried various things, such as using in my remote GKE the same account I used to successfully connect to PubSub locally, using a service account with the project owner role, and setting the scopes manually, but nothing seems to do the trick, and I always face the same permission denied error:
Error: 7 PERMISSION_DENIED: User not authorized to perform this action.
at Object.callErrorFromStatus (/node_modules/@grpc/grpc-js/build/src/call.js:31:26)
at Object.onReceiveStatus (/node_modules/@grpc/grpc-js/build/src/client.js:176:52)
at Object.onReceiveStatus (/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:342:141)
at Object.onReceiveStatus (/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:305:181)
at /node_modules/@grpc/grpc-js/build/src/call-stream.js:124:78
at processTicksAndRejections (internal/process/task_queues.js:75:11)
I am quite lost given that I can connect to my other services (Redis, SQL, BigTable, ...) without any issue from this GKE instance.
Any help would be greatly appreciated.

The issue was actually on my end. I was overriding the GOOGLE_APPLICATION_CREDENTIALS env variable in my deployment YAML, causing libraries that read this variable directly to use the wrong service account file. The cluster would still show the correct file, but another one was being used in the background.
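For anyone debugging the same thing, here is a minimal diagnostic sketch (nothing here is from the original post) that prints which credentials the Node.js client library actually resolves inside the pod:
const {PubSub} = require('@google-cloud/pubsub');
// Print the env var the YAML may be overriding, then ask the auth layer
// which service account it actually resolved.
console.log('GOOGLE_APPLICATION_CREDENTIALS =', process.env.GOOGLE_APPLICATION_CREDENTIALS);
const pubsub = new PubSub();
pubsub.auth.getCredentials()
  .then((creds) => console.log('Resolved service account:', creds.client_email))
  .catch((err) => console.error('Credential lookup failed:', err));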

Related

I want to publish a signed URL from cloud run

When I try to issue a signed URL with Cloud Run, I get a response of 'Permission 'iam.serviceAccounts.signBlob' denied on resource (or it may not exist).'
I granted iam.serviceAccounts.signBlob to the Cloud Run service account, but the error remains the same.
Environment:
Node.js 16
However, it worked once I placed the service_account.json locally. Please let me know if there is a way to make it work without placing it locally.
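For reference, a minimal sketch of the kind of call involved, assuming @google-cloud/storage and placeholder bucket/object names. Without a local key file, the client delegates signing to the IAM signBlob API, so the runtime service account typically needs the Service Account Token Creator role granted on itself:
const {Storage} = require('@google-cloud/storage');
const storage = new Storage();
// Generate a V4 read URL valid for 15 minutes; signing goes through the
// IAM credentials API when no private key is available locally.
async function getSignedUrl() {
  const [url] = await storage
    .bucket('my-bucket')        // placeholder
    .file('my-object.txt')      // placeholder
    .getSignedUrl({
      version: 'v4',
      action: 'read',
      expires: Date.now() + 15 * 60 * 1000,
    });
  return url;
}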

GKE: Impossible to delete a cluster

I have a weird issue with GKE: the cluster was created by Terraform, and I tried to make a change requiring a deletion and re-creation.
It failed at the re-creation because I was missing an API, so I enabled it and retried.
The thing is that I now have a cluster that exists, empty, but with a "failed to delete cluster" message on it.
I never had this issue before, and I have already destroyed and re-created this very resource. I tried to destroy all the resources created by Terraform on this project, but I still get the "failed to delete cluster" error.
I also tried to do it by hand in the UI, but I still get the same error.
I tried to do it using
gcloud container clusters delete <cluster_name>
and got "Failed to delete cluster, name: operation-xxx-xxx..." along with a link to the failed operation.
It's a JSON response with a 401 code and the following message:
Request is missing required authentication credential. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.
I tried to re-authenticate, but it doesn't help; I get the same error.
I'm running out of ideas. Can you help me here?
A 401 (unauthorized) suggests that you have insufficient permissions to delete the cluster.
Either get a role that permits your user account to delete clusters,
or ask someone who has an account with sufficient powers to delete it for you,
or authenticate gcloud with the service account you used to create the cluster (assuming it can also delete clusters) via gcloud auth activate-service-account, then run gcloud container clusters delete ..., optionally including --account=${SERVICE_ACCOUNT_EMAIL}, or just ensure the service account is ACTIVE with gcloud auth list.
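Put together, a hedged sketch of that sequence (key file, account email, cluster name, and zone are placeholders):
gcloud auth activate-service-account --key-file=key.json
gcloud auth list   # confirm the service account shows as ACTIVE
gcloud container clusters delete my-cluster --zone us-central1-a --account=my-sa@my-project.iam.gserviceaccount.com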
I did not find a proper solution, but what did work was to delete the whole project and start over.
Luckily for me it was a lab, not a production project...

Azure Storage Explorer : Unable to retrieve child resources

I am getting this error ONLY while accessing Blob storage.
There are no issues with Queues, File Shares, or Tables.
Any idea?
Unable to retrieve child resources.
Details:
["FetchError:request to https://fssaicessunsetsbxv1sa.blob.core.windows.net/?include=metadata&comp=list failed, reason: unable to get local issuer certificate"]
Error: Self-Signed Certificate in Certificate Chain, Unable to retrieve child resources.
Issue for me: I am behind an office proxy server, but Azure Storage Explorer was not using that proxy.
Solution:
Azure Storage Explorer -> Edit -> Configure Proxy:
change Source from "No proxy" to "Use system proxy (preview)".
After making this change, I was able to access the resources.
Also, verify the permissions you have on the connection string.
You can generate your connection string through the Azure Portal or certain apps. When you generate it, you need to grant the "Allowed permissions": besides Read/Write, you also need the List permission so Storage Explorer can list the blobs. (In the Azure portal, these permissions can be checked or unchecked when generating the shared access signature.)
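As a quick way to test the List permission outside Storage Explorer, a small Node.js sketch (the connection string is a placeholder; requires @azure/storage-blob):
const { BlobServiceClient } = require('@azure/storage-blob');
// Attempt the same List operation Storage Explorer performs; a SAS or key
// without the List permission will fail here too.
async function checkList(connStr) {
  const service = BlobServiceClient.fromConnectionString(connStr);
  for await (const container of service.listContainers()) {
    console.log('container:', container.name);
  }
}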
Have you set any RBAC policies?
If you are connected to Azure through a proxy, verify that your proxy settings are correct. If you were granted access to a resource by the owner of the subscription or account, verify that you have read or list permissions for that resource.
If possible, try to uninstall and reinstall the latest version and check whether the issue persists.
See the Azure Storage Explorer troubleshooting guide for "unable to retrieve child resources" or "The request action could not be completed".
If the issue still persists after trying the above steps, I would like to work on this more closely. Let me know the status.
Warning: for the noobs!
If you're lucky, you can also fix it by closing and re-opening Visual Studio.
Reason: authorization is tightly coupled with Azure.
Motivation: to err is human! Even software developers working at Microsoft are human.

"insufficient authentication scopes" from Google API when calling from K8S cluster

I'm trying to report Node.js errors to Google Error Reporting from one of our Kubernetes deployments running on a GCP/GKE cluster with RBAC (i.e., permissions defined in a service account associated with the cluster).
const googleCloud = require('@google-cloud/error-reporting');
const googleCloudErrorReporting = new googleCloud.ErrorReporting();
googleCloudErrorReporting.report('[test] dummy error message');
This works only in certain environments:
it works when run on my laptop, using a service account that has the "Errors Writer" role
it works when running in my cluster as a K8S job, after having added the "Errors Writer" role to that cluster's service account
it causes the following error when called from my Node.js application running in one of my K8S deployments:
ERROR:@google-cloud/error-reporting: Encountered an error while attempting to transmit an error to the Stackdriver Error Reporting API.
Error: Request had insufficient authentication scopes.
It feels like the job did pick up the permission changes of the cluster's service account, whereas my deployment did not.
I did try to re-create the deployment to make it refresh its auth token, but the error is still happening...
Any ideas?
UPDATE: I ended up following Jérémie Girault's suggestion: create a service account and bind it to my deployment. It works!
The error message has to do with the access scopes set on the cluster when using the default service account; you must enable access to the appropriate API there.
As you mentioned, creating a separate service account, granting it the appropriate IAM permissions, and linking it to your cluster or workload will bypass this error as well.
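A hedged sketch of what the bound service account can look like from the application side, assuming its key is mounted into the pod at a placeholder path:
const {ErrorReporting} = require('@google-cloud/error-reporting');
// Use a dedicated service account's key instead of the node's default
// credentials, sidestepping the node-level access scopes.
const errors = new ErrorReporting({
  keyFilename: '/var/secrets/google/key.json', // hypothetical secret mount
});
errors.report('[test] dummy error message');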

Unable to get access to Key Vault using Azure MSI on App Service

I have enabled Managed Service Identities on an App Service. However, my WebJobs seem unable to access the keys.
They report:
Tried the following 3 methods to get an access token, but none of them worked.
Parameters: Connectionstring: [No connection string specified], Resource: https://vault.azure.net, Authority: . Exception Message: Tried to get token using Managed Service Identity. Unable to connect to the Managed Service Identity (MSI) endpoint. Please check that you are running on an Azure resource that has MSI setup.
Parameters: Connectionstring: [No connection string specified], Resource: https://vault.azure.net, Authority: https://login.microsoftonline.com/common. Exception Message: Tried to get token using Active Directory Integrated Authentication. Access token could not be acquired. password_required_for_managed_user: Password is required for managed user
Parameters: Connectionstring: [No connection string specified], Resource: https://vault.azure.net, Authority: . Exception Message: Tried to get token using Azure CLI. Access token could not be acquired. 'az' is not recognized as an internal or external command,
Kudu does not show any MSI_ environment variables.
How is this supposed to work? This is an existing App Service Plan.
The AppAuthentication library leverages an internal endpoint in App Service that obtains tokens on your site's behalf. This endpoint is non-static and is therefore exposed through environment variables. After activating MSI for your site through ARM, your site needs to be restarted to get two new environment variables set:
MSI_ENDPOINT and MSI_SECRET
The presence of these variables is essential for the MSI feature to work properly at runtime, as the AppAuthentication library uses them to get the authorization token. The error message reflects this:
Exception Message: Tried to get token using Managed Service Identity. Unable to connect to the Managed Service Identity (MSI) endpoint. Please check that you are running on an Azure resource that has MSI setup.
If these variables are absent, you might need to restart the site.
https://learn.microsoft.com/en-us/azure/app-service/app-service-managed-service-identity
If the environment variables are set and you still see the same error, the article above has a code sample showing how to send requests to that endpoint manually.
public static async Task<HttpResponseMessage> GetToken(string resource, string apiversion)
{
    var client = new HttpClient();
    // MSI_SECRET authenticates the request to the local MSI endpoint; both
    // variables are injected by App Service once MSI is enabled.
    client.DefaultRequestHeaders.Add("Secret", Environment.GetEnvironmentVariable("MSI_SECRET"));
    // e.g. GetToken("https://vault.azure.net", "2017-09-01")
    return await client.GetAsync(String.Format("{0}/?resource={1}&api-version={2}",
        Environment.GetEnvironmentVariable("MSI_ENDPOINT"), resource, apiversion));
}
I would try that and see what kind of response I get back.
I just solved this issue when trying to use MSI with a Function app, though I already had the environment variables set. I tried restarting multiple times to no success. What I ended up doing was manually turning off MSI for the Function, then re-enabling it. This wasn't ideal, but it worked.
Hope it helps!
I've found out that if you enable MSI and then swap the slot, the functionality leaves with the slot swap. You can re-enable it by switching it off and on again, but that will create a new identity in AD and will require you to reset permissions on the Key Vault for it to work.
Enable the identity and give your Azure Function app access to the Key Vault via an access policy.
You can find the identity in the Platform features tab.
These two steps worked for me.
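Once both steps are done, a minimal Node.js sketch (vault URL and secret name are placeholders; requires @azure/identity and @azure/keyvault-secrets) of reading a secret with the managed identity:
const { DefaultAzureCredential } = require('@azure/identity');
const { SecretClient } = require('@azure/keyvault-secrets');
// DefaultAzureCredential picks up the managed identity when running in Azure.
const client = new SecretClient('https://my-vault.vault.azure.net', new DefaultAzureCredential());
client.getSecret('my-secret').then((s) => console.log('got secret:', s.name));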
In my case, I had forgotten to add an access policy for the application in the Key Vault.
I just switched the Status to ON as Sebastian Inones showed, then added an access policy for the Key Vault.
This resolved the issue!
For the ones, like myself, wondering how to enable MSI:
My scenario:
I have an App Service that was deployed and has been running for a long time. In addition, on Azure DevOps I have my pipeline configured to auto-swap my deployment slots (staging/production). Suddenly, after a normal push, production started failing because of the described issue.
So, in order to enable MSI again (I don't know why it had to be re-enabled; I believe this is only a workaround, not a solution, as it should have stayed enabled in the first place):
Go to your App Service, then under Settings --> Identity, check the status. In my case, it was off.
For the folks who come across these answers, I would like to share my experience.
I got this problem with an Azure Synapse pipeline run. Essentially, I had added the access policies properly to the Key Vault, and I had also added a linked service in Azure Synapse pointing to my Key Vault.
If I triggered the notebook manually it worked, but in the pipeline it failed.
Initially, I used the following statement:
url = TokenLibrary.getSecret("mykeyvault", "ConnectionString")
Then I added the name of the linked service as a third parameter, and the pipeline was able to leverage that linked service to obtain the MSI token for the vault:
url = TokenLibrary.getSecret("mykeyvault", "ConnectionString", "AzureKeyVaultLinkedServiceName")
Might be unrelated to your issue, but I was getting the same error message.
For me, the issue was using pip3's azure-cli. I was able to fix it by using the brew packages for both azure-cli and azure-functions-core-tools.
Uninstall the pip3 azure-cli:
pip3 uninstall azure-cli
Install the brew azure-cli:
brew update
brew install azure-cli
Double check if the error message ends with:
Please go to Tools->Options->Azure Services Authentication, and re-authenticate the account you want to use.
