When I try to issue a signed URL from Cloud Run, I get the response 'Permission 'iam.serviceAccounts.signBlob' denied on resource (or it may not exist).'
I granted iam.serviceAccounts.signBlob to the Cloud Run service account, but the error remains the same.
Environment: Node.js 16
However, it worked once I placed service_account.json locally. Is there a way to make this work without storing the key file locally?
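For context, a minimal sketch of the pattern in question (bucket and object names are placeholders, and the role mentioned in the comments is an assumption about the usual fix): with Application Default Credentials and no private key on disk, @google-cloud/storage signs by calling the IAM Credentials signBlob API, which typically requires the runtime service account to hold a role carrying iam.serviceAccounts.signBlob (e.g. roles/iam.serviceAccountTokenCreator) on itself.

```javascript
// Sketch: issuing a V4 signed URL from Cloud Run without a local
// key file. Bucket/object names are placeholders. With ADC and no
// private key, the library signs via the IAM signBlob API, so the
// runtime service account likely needs
// roles/iam.serviceAccountTokenCreator granted on itself.

// Options for a short-lived, read-only signed URL.
function signedUrlOptions(minutes) {
  return {
    version: 'v4',
    action: 'read',
    expires: Date.now() + minutes * 60 * 1000,
  };
}

async function getDownloadUrl(bucketName, objectName) {
  // Lazy require so this file also loads where the library is absent.
  const { Storage } = require('@google-cloud/storage');
  const storage = new Storage(); // uses ADC; no keyFilename needed
  const [url] = await storage
    .bucket(bucketName)
    .file(objectName)
    .getSignedUrl(signedUrlOptions(15));
  return url;
}
```

The point of the sketch is that no `keyFilename` is passed to the `Storage` constructor; signing authority comes entirely from the runtime identity.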
Good day. I am new to web development and want to ask how to fix this error in the terminal for an Azure Web App Service. git push azure main is the command I keep running, but the response is always Password for <webapp url>, and I don't know what password to enter.
I browsed the internet and am still stuck. The fixes I tried were removing some credentials in Windows Credential Manager, switching from HTTPS to SSH, configuring a global password, and installing the GCM from GitHub. Thank you very much.
In the Azure Portal, first create an Azure App Service with the required runtime stack.
You will get this option if you deploy your app using Local Git.
You need to provide credentials while pushing the code from your local Git repository.
You can get the credentials from the Azure Portal => App Service.
Navigate to Azure Portal => your App Service (the one created in the first step) => Deployment Center => Local Git/FTPS credentials.
You can use the existing application-scope username and password, or create a new user-scope credential and use that.
I'm following this article to deploy a web app on Azure with a private endpoint: https://azure.github.io/AppService/2021/03/01/deploying-to-network-secured-sites-2.html. My understanding is that access to the scm service is not required, but is that the case? If I'm wrong, is there any other way to deploy in this scenario without using a VM?
Unfortunately, I keep getting a forbidden error:
An error occured during deployment. Status Code: 403,
I've tested it from a VM with access to scm and it works fine there, but I need to make it work from a machine without scm access.
There is no change in the permissions or account used for gcloud app deploy for my Node.js application; it last worked properly on 19th July. I tried it again after a couple of months and now gcloud app deploy throws this error:
ERROR: failed to initialize cache: failed to create image cache: accessing cache image "asia.gcr.io//app-engine-tmp/build-cache/ttl-7d/users/buildpack-cache:latest": connect to repo store 'asia.gcr.io//app-engine-tmp/build-cache/ttl-7d/users/buildpack-cache:latest': GET https://asia.gcr.io/v2//app-engine-tmp/build-cache/ttl-7d/users/buildpack-cache/manifests/latest: DENIED: Permission denied for "latest" from request "/v2//app-engine-tmp/build-cache/ttl-7d/users/buildpack-cache/manifests/latest".
It was related to billing: the payment could not be processed, and GCP started showing this cryptic message. However, it took a few hours after the previous payment was successfully cleared for Google to start allowing builds again.
In my case, when disabling and re-enabling the Container Registry service (https://console.cloud.google.com/apis/library/containerregistry.googleapis.com), GCP showed me an error indicating that I should check my account's billing. After reviewing it, I was able to upload as usual.
I'm trying to connect to Pub/Sub from within one of my GKE clusters using the Node.js client, but I get permission issues no matter which IAM service account or list of OAuth scopes I use in the cluster configuration.
When testing locally, everything works fine. I've tried various things, such as using in my remote GKE cluster the same account I used to connect to Pub/Sub successfully while testing locally, using a service account with the project owner role, and setting the scopes manually, but nothing seems to do the trick, and I always face the same permission-denied error:
Error: 7 PERMISSION_DENIED: User not authorized to perform this action.
at Object.callErrorFromStatus (/node_modules/@grpc/grpc-js/build/src/call.js:31:26)
at Object.onReceiveStatus (/node_modules/@grpc/grpc-js/build/src/client.js:176:52)
at Object.onReceiveStatus (/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:342:141)
at Object.onReceiveStatus (/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:305:181)
at /node_modules/@grpc/grpc-js/build/src/call-stream.js:124:78
at processTicksAndRejections (internal/process/task_queues.js:75:11)
I am quite lost, given that I can connect to my other services (Redis, SQL, Bigtable, ...) without any issue from this GKE instance.
Any help would be greatly appreciated.
The issue was actually on my end. I was overriding the GOOGLE_APPLICATION_CREDENTIALS env variable in my YAML file, causing libraries that directly used this variable to use the wrong service account file. The cluster would still show the correct file, but another one would be used in the background.
I'm trying to report Node.js errors to Google Error Reporting from one of our Kubernetes deployments running on a GCP/GKE cluster with RBAC (i.e., permissions defined in a service account associated with the cluster):
const googleCloud = require('@google-cloud/error-reporting');
const googleCloudErrorReporting = new googleCloud.ErrorReporting();
googleCloudErrorReporting.report('[test] dummy error message');
This works only in certain environments:
- it works when run on my laptop, using a service account that has the "Errors Writer" role
- it works when running in my cluster as a K8s job, after adding the "Errors Writer" role to that cluster's service account
- it causes the following error when called from my Node.js application running in one of my K8s deployments:
ERROR:@google-cloud/error-reporting: Encountered an error while attempting to transmit an error to the Stackdriver Error Reporting API.
Error: Request had insufficient authentication scopes.
It feels like the job did pick up the permission changes of the cluster's service account, whereas my deployment did not.
I did try to re-create the deployment to make it refresh its auth token, but the error is still happening...
Any ideas?
UPDATE: I ended up following Jérémie Girault's suggestion: create a service account and bind it to my deployment. It works!
The error message has to do with the access scopes set on the cluster when it uses the default service account: you must enable access to the appropriate API in the cluster's access scopes.
As you mentioned, creating a separate service account, granting it the appropriate IAM permissions, and linking it to your cluster or workload will bypass this error as well.
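As a rough sketch of that second option (all names below are hypothetical placeholders), the deployment's pods can run under a dedicated Kubernetes service account; with Workload Identity enabled, that account is annotated to map onto an IAM service account holding the Errors Writer role, so the pods stop depending on the node's access scopes:

```yaml
# Hypothetical names throughout; adjust project and app names.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: error-reporter
  annotations:
    # Workload Identity: map this K8s service account to an IAM
    # service account that has roles/errorreporting.writer.
    iam.gke.io/gcp-service-account: error-reporter@my-project.iam.gserviceaccount.com
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels: { app: my-app }
  template:
    metadata:
      labels: { app: my-app }
    spec:
      serviceAccountName: error-reporter  # pods use this identity
      containers:
        - name: my-app
          image: gcr.io/my-project/my-app:latest
```

The key line is `serviceAccountName`: without it, pods run as the namespace's default service account and inherit the node-scope limitation described above.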