When I try to call file.getSignedUrl({ action: 'read', expires: '03-01-2500' }), I get the error Failure from metadata server.
This code was working a few days ago, so my gut says something funky happened with the permissions. Is there something I'm missing here?
I found the real cause of the problem.
In the IAM & Admin portal within Google Cloud, the member representing my Firebase project (i.e. myproject-memberId@myproject.iam.gserviceaccount.com) only had the Owner role.
I had a flawed understanding and assumed that Owner was the highest level of access, which it evidently is not. At some point I must have switched the account to this "higher" role without realizing that I was removing the Editor role.
To solve my problem, I simply had to add the Editor role back to my member in the IAM & Admin portal in Google Cloud for my project.
Another instance of human error.
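Once the Editor role is back, the original call should work again. As a small defensive measure, the sketch below wraps the options in a validation helper so that a malformed expires value fails with a clear message rather than deep inside the library. Note that buildSignedUrlOptions is a hypothetical name of my own, not part of @google-cloud/storage:

```javascript
// Hypothetical helper: sanity-check the options before calling
// file.getSignedUrl(), so a bad expiry fails loudly up front.
function buildSignedUrlOptions(expires) {
  const t = new Date(expires).getTime();
  if (Number.isNaN(t) || t <= Date.now()) {
    throw new Error('expires must parse to a future date, got: ' + expires);
  }
  return { action: 'read', expires };
}

console.log(buildSignedUrlOptions('2500-03-01'));
// → { action: 'read', expires: '2500-03-01' }

// With the Editor role restored, the actual call would then be:
//   const [url] = await file.getSignedUrl(buildSignedUrlOptions('2500-03-01'));
```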
If I enter both an Application ID and a key into the Advanced Settings of the dnn.azureadb2cprovider, I get a generic error with no explanation. I've gone through the setup documentation (which seems to be outdated) numerous times, and the error gives no clue as to what the issue is.
If I enter only the Application ID or only the key by itself, there is no error. Obviously this won't allow Graph to work, but I am noting it anyway.
I went through the setup process located at https://github.com/intelequia/dnn.azureadb2cprovider#requirements. I can get users to sign in successfully through B2C, so it's partially working; just the advanced features are having trouble.
You can check the log4net log files under the /Portals/_default/Logs folder for more details on the issue. This is probably caused by the permissions of the App registration on the Graph API. Ensure that you have set permissions on these Application scopes and have granted consent to them (the documentation will be updated soon):
Application.Read.All
Group.Read.All
GroupMember.Read.All
User.Read.All
PS: in the future, please create this type of issue on the GitHub repository so that all the help and documentation stays in one place.
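To make the required set above concrete, here is a small Node.js sketch that compares the scopes actually granted (and consented) on the App registration against the four required ones. missingGraphScopes is a hypothetical helper of my own, not part of any Microsoft library:

```javascript
// The four Application scopes listed in the answer above.
const REQUIRED_SCOPES = [
  'Application.Read.All',
  'Group.Read.All',
  'GroupMember.Read.All',
  'User.Read.All',
];

// Hypothetical helper: given the scopes actually granted on the
// App registration, report which required ones are still missing.
function missingGraphScopes(grantedScopes) {
  const granted = new Set(grantedScopes);
  return REQUIRED_SCOPES.filter((scope) => !granted.has(scope));
}

console.log(missingGraphScopes(['User.Read.All', 'Group.Read.All']));
// → [ 'Application.Read.All', 'GroupMember.Read.All' ]
```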
I just created a new bot and am trying to integrate it with Dialogflow. Yesterday, and 2-3 days ago, everything was OK.
Today I get the error below and can't find anything similar on the internet.
IAM permission 'dialogflow.agents.get' on 'projects/xxxxxxx-06xx-49xx-xx31-2313x47x2xxx1' (not my real ID, but an example) denied.
I have the exact same problem.
It's not limited to one account; it's happening on all my Dialogflow accounts.
There is no solution yet, and it's a very annoying problem.
But based on the error message, you can try giving the Project Owner role to your Dialogflow service account. It may behave differently in your case.
This issue should now be resolved. You can start your integrations again to verify.
I have several client apps registered in the Azure portal. Each app has different scopes that are enabled/disabled. I used to be able to modify the scopes and save the updates for each of the registered apps. Now I get the following error from the Azure portal:
Failed to update {my app} application. Error detail: Property identifierUris is invalid. [mURNc]
I also get this same error even if all I try to do is rename the client app. If I create a brand-new app, there are no issues. This appears to be a bug in the Azure portal, but I'm looking for a workaround, as I don't want to redefine all the scopes again; there are quite a few!
I've tried renaming things, changing the client app ID, etc., but nothing seems to fix the issue; I get the same error. Again, this all used to work fine, and now, suddenly and with no changes, I get this issue.
The error says identifierUris is invalid, but it isn't descriptive at all about which URI it is referring to. Any suggestions on how to correct this?
As junnas said, click "Try out the new experience" in the Authentication tab of the App registration and try again.
Also, when you see the above error, we recommend the following:
1. Edit the attributes individually in the manifest editor instead of uploading a previously downloaded manifest. Use the manifest reference table to understand the syntax and semantics of old and new attributes so that you can successfully edit the attributes you're interested in.
2. If your workflow requires you to save the manifests in your source repository for use later, we suggest rebasing the saved manifests in your repository with the one you see in the App registrations experience.
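The "rebasing" in step 2 can be sketched as a plain JSON merge: start from the manifest currently shown in the App registrations experience, and carry over only the attributes you deliberately maintain in source control. rebaseManifest and the sample attribute values below are illustrative only, not part of any Azure tooling:

```javascript
// Hypothetical sketch of "rebasing" a saved manifest: take the fresh
// manifest as the base, then copy over only the keys you maintain.
function rebaseManifest(freshManifest, savedManifest, keysToKeep) {
  const result = { ...freshManifest };
  for (const key of keysToKeep) {
    if (key in savedManifest) {
      result[key] = savedManifest[key];
    }
  }
  return result;
}

// Illustrative attribute names only, not a complete manifest.
const fresh = { id: 'abc', identifierUris: ['api://abc'], tags: [] };
const saved = { id: 'abc', identifierUris: ['https://old-uri'], tags: ['x'] };
console.log(rebaseManifest(fresh, saved, ['tags']));
// → { id: 'abc', identifierUris: [ 'api://abc' ], tags: [ 'x' ] }
```

The point of the merge direction is that a stale identifierUris from the saved copy never overwrites the portal's current value, which is exactly the attribute the error complains about.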
Hope this helps.
The Google Cloud Error Reporting documentation states that the service account permission should be 'project > owner'. This does work; however, it seems like a possible security issue and bad practice. I could not find any other permission level that allows reporting errors with the Error Reporting library. If somebody has a more secure solution, I would love to hear it.
You are right, it is bad practice to use the Project Owner role on a service account. If you want to use your service account to report errors with the Error Reporting library, you can update the service account in the console by going to "IAM & admin" -> "ROLES", clicking to edit your service account, and adding the "Error Reporting Admin" role. This should work for you.
The issue seems similar to another post, but it's different for me, because I checked with testIamPermissions, and the results showed that I have all the permissions I need, yet I still receive 'insufficient permission'.
This is what I received:
{'storage.buckets.get' : true}
{'storage.buckets.getIamPolicy' : true}
{'storage.objects.create' : true}
{'storage.objects.delete' : true}
{'storage.objects.get' : true}
{'storage.objects.getIamPolicy' : true}
{'storage.objects.list' : true}
{'storage.objects.setIamPolicy' : true}
{'storage.objects.update' : true}
The code I used to test:
// testPermissions is already an array of permission strings,
// so pass it directly instead of wrapping it in another array.
googleBucket.iam.testPermissions(testPermissions, function (err, permissions) {
  if (!err) {
    console.log(permissions);
  }
});
Permissions I'm missing:
'storage.buckets.create',
'storage.buckets.delete',
'storage.buckets.list',
'storage.buckets.setIamPolicy',
'storage.buckets.update',
It's really confusing that I have all the permissions for creating objects, but the API still throws 'insufficient permission'. All I use the API for is uploading a file to a bucket. Is there any permission I'm missing? (The server is located on Google Compute Engine, in the same project as the Google Cloud Storage bucket.)
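For what it's worth, the comparison being done by eye above can be automated. The sketch below diffs the map returned by testPermissions against the list that was requested; findMissingPermissions is a hypothetical helper of my own, not part of the @google-cloud/storage client:

```javascript
// Hypothetical helper: diff the map returned by testPermissions
// (e.g. { 'storage.objects.create': true }) against the list of
// permissions that were requested, and report the ones not granted.
function findMissingPermissions(requested, granted) {
  return requested.filter((permission) => !granted[permission]);
}

const requested = [
  'storage.buckets.get',
  'storage.buckets.create',
  'storage.objects.create',
];
const granted = {
  'storage.buckets.get': true,
  'storage.objects.create': true,
};
console.log(findMissingPermissions(requested, granted));
// → [ 'storage.buckets.create' ]
```

Note that for a plain file upload only the storage.objects.* permissions matter; the missing storage.buckets.* ones listed above are about creating and managing buckets themselves.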
It would be rather interesting to know which user runs the script.
It seems that the user/service account which runs the script only has the Viewer role, but not the Editor role. Check in IAM whether the proper roles are assigned to the proper service account. You might also need to log in to that GCE instance with Cloud Shell and add the service account credentials there. In Cloud Shell there is a tiny "upload" button at the top right, which can be used to upload the credentials JSON file into the VM. The documentation also explains this, step by step.
I have found the answer. There is an option called "Identity and API access" on the "Create a new instance" page. Just switch it from the default to whichever access option you need (do configure it properly, though), and the problem is solved!
Regarding the answer provided by Martin Zeitler: that's not how GCE runs the script. GCE automatically connects its service account email to the other APIs with Editor permission, and there is no need to hook any JSON file to an instance created on GCE. As I mentioned, the server is located on Google Compute Engine, in the same project as Google Cloud Storage.
However, the documentation link is fairly helpful. Thanks, Martin Zeitler; I've given you an upvote for the quick answer :)
The service account of your Google Compute Engine instance should match the one being used to access the Google Cloud Storage bucket.
If it doesn't match, stop the instance and change the service account by selecting the correct one from the dropdown (only service accounts linked to the current project are visible in the dropdown list).
Also, make sure that the selected service account has the correct Google Cloud Storage access.