Google Cloud Storage - insufficient permission - node.js

The issue seems similar to another post, but it's different for me: I checked with testIamPermissions, and the result showed that I have all the permissions I need, yet I still receive "insufficient permission".
This is what I received:
{'storage.buckets.get' : true}
{'storage.buckets.getIamPolicy' : true}
{'storage.objects.create' : true}
{'storage.objects.delete' : true}
{'storage.objects.get' : true}
{'storage.objects.getIamPolicy' : true}
{'storage.objects.list' : true}
{'storage.objects.setIamPolicy' : true}
{'storage.objects.update' : true}
The code I used to test:
// testPermissions is an array of permission strings, e.g. ['storage.objects.create']
googleBucket.iam.testPermissions(testPermissions, function(err, permissions) {
  if (!err)
    console.log(permissions);
});
Permissions I'm missing:
'storage.buckets.create',
'storage.buckets.delete',
'storage.buckets.list',
'storage.buckets.setIamPolicy',
'storage.buckets.update',
It's really confusing that I have every permission needed to create objects, yet the API still throws an insufficient permission error. All I use the API for is uploading a file to a bucket. Is there any permission I'm missing? (The server runs on Google Compute Engine, in the same project as the Cloud Storage bucket.)
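As a sanity check, the map returned by a testIamPermissions-style call can be diffed against the full list of permissions the upload path needs. A minimal sketch (the helper name is hypothetical):

```javascript
// Hypothetical helper: given the permissions we asked for and the
// { permission: true } map returned by testIamPermissions, list the
// ones that were not granted.
function missingPermissions(requested, granted) {
  return requested.filter((p) => !granted[p]);
}

const granted = {
  'storage.objects.create': true,
  'storage.objects.get': true,
};

console.log(missingPermissions(
  ['storage.objects.create', 'storage.buckets.create'],
  granted
)); // → [ 'storage.buckets.create' ]
```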

It would be rather interesting to know which user runs the script.
It seems that the user/service account running the script only has the Viewer role, not the Editor role. Check in IAM whether the proper roles are assigned to the proper service account. You might also need to log in to that GCE instance with Cloud Shell and add the service-account credentials there. In Cloud Shell there is a tiny "upload" button at the top right, which can be used to upload the credentials JSON file into the VM. The documentation also explains this step by step.

I have found the answer. There is an option called Identity and API access on the Create a new instance page. Just switch it from the default to whichever access option you need (and configure it properly), and the problem is solved!
Regarding the answer provided by Martin Zeitler: it's not about which user runs the script. GCE automatically connects its service account to the other APIs with Editor permission, and there is no need to hook any JSON key to an instance created on GCE. As I mentioned, the server is located on Google Compute Engine, in the same project as Google Cloud Storage.
However, the documentation link is fairly helpful. Thanks Martin Zeitler, I'll give you an upvote for the quick answer :)

The Service account of your Google Cloud Compute Engine instance should match the one being used to access the Google Cloud Storage Bucket.
If it doesn't match, stop the instance and change the service account by selecting the correct one from the dropdown (only service accounts linked to the current project are visible in the dropdown list).
Also, make sure that the selected Service account has correct Google Cloud Storage access.

Related

Is it safe if I turn all my security rules of firebase storage to read and write by all?

I have been trying for quite some time to develop an authentication system using firebase/auth, but with unsatisfactory results: whenever I log a user in on the website and then access the website from another device, I can see all the data of the previous user without needing to log in at all. I have researched how to solve this, but even after doing everything the Firebase docs say, I still encounter the same problem. I can't leave it like this, since it is of course a HUGE security risk. So I took matters into my own hands and created an authentication system with JSON Web Tokens; this works very well. However, due to my Firebase Storage security rules, it is now impossible for me to access the data, since I am not logged in with firebase/auth.
I have done my best not to expose any links or configuration to my Firebase account: all images are fetched and converted to base64 on the server side, then rendered on the page. So if I keep my Firebase configuration hidden and don't show any Firebase Storage links on my website, is it safe to allow read and write without checking that the user has logged in using firebase/auth?
rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    match /{allPaths=**} {
      allow read, write: if request.auth != null;
    }
  }
}
If all access to the files in Cloud Storage comes from the Node.js SDK, it bypasses the security rules that you set anyway. So in that case, you might as well disallow all access:
allow read, write: if false;
Yes, this can be very dangerous.
Finding your configuration is easy, since it needs to be public for your connection to even work on your frontend.
If someone wants to find your config, they can. This is why security rules are very important and you should not allow everyone to access your database.
Based on your current security rules:
If someone gains access to your config, they can simply register an account and then delete your entire database with a few lines of code.
If you only communicate with Firebase via a service account on a server, i.e. you are not using the web SDK, you can disable it like this:
Go to Firebase console > Project settings > General
At the bottom of the page you will see your active web apps.
Select the one you want to disable and choose Remove this app.
Now no one can access your app via the web SDK.

AWS nodejs SDK check if can access DynamoDB table

Using the AWS SDK I can make a get request and fetch a document; I will then know whether I have IAM access to the database.
Is there a way with the Node.js AWS SDK to test whether I am allowed to perform the action dynamodb:GetItem? Of course I could just write a query, but is there a way without having to spend time writing a meaningless query?
The easiest way I can think of at the moment is to try a simple getItem with a primary key, like you mentioned, but do it with the low-level API. Then you are not writing a "meaningless query." If I find another way, I'll add it here.
You can check the user's permissions through the console, or via the CLI with the get-user-policy command.
CLI Approach:
aws iam get-user-policy --user-name Bob --policy-name ExamplePolicy
With the help of this command, you can check the rights that user has. For details, look into this DOC.
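To interpret the policy document that get-user-policy returns, a simplified check could look like the following sketch (the helper is hypothetical and deliberately incomplete: it ignores resources, conditions, and Deny statements, which real IAM evaluation does not):

```javascript
// Simplified, hypothetical check: does an IAM policy document contain an
// Allow statement matching a given action? Supports a trailing '*' wildcard.
function policyAllows(policyDoc, action) {
  const statements = [].concat(policyDoc.Statement);
  return statements.some((s) => {
    if (s.Effect !== 'Allow') return false;
    return [].concat(s.Action).some((a) =>
      a === action ||
      (a.endsWith('*') && action.startsWith(a.slice(0, -1))));
  });
}

const policy = {
  Statement: [{ Effect: 'Allow', Action: ['dynamodb:GetItem'] }],
};

console.log(policyAllows(policy, 'dynamodb:GetItem')); // → true
console.log(policyAllows(policy, 'dynamodb:PutItem')); // → false
```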
Console Approach:
Login with AWS Console and Search IAM service
Under the Users section, search for the user whose permissions need to be checked.
Then in the Permissions section, you can view all the permissions.

Google Cloud Scheduler Access

I need to schedule two cloud functions to run at a predefined time using Cloud Scheduler. However, when I click on the Cloud Scheduler tab it shows the below error message.
You don't have permission to enable Cloud Scheduler (appengine.applications.create, serviceusage.services.enable)
So I asked the project owner to grant me access to the below roles:
Cloud Scheduler Admin
App Engine Admin
Service Usage Admin
However, even after this I'm still getting the same message as before.
Below are the current roles that I have access to:
App Engine Admin
BigQuery Data Viewer
BigQuery User
Cloud Scheduler Admin
Cloud SQL Admin
Editor
Service Usage Admin
Storage Admin
Kindly let me know if I'm missing something here.
You don't need to be the project Owner.
You need these permission:
appengine.applications.create
serviceusage.services.enable
Predefined roles for first permission:
roles/owner
roles/appengine.appCreator
Predefined roles for second permission:
roles/owner
roles/editor
roles/serviceusage.serviceUsageAdmin
Since you are already an Editor, you only need to request the App Engine Creator role for the first permission.
For you to be able to configure Cloud Scheduler, you need to be the Project Owner.
Could you try asking your administrator to make you the Project Owner?
Understanding roles
This should fix your issue; if it doesn't, let me know whether you are still facing the same error. Please let me know if it worked!
If you are using an HTTP target in your Cloud Scheduler job, you can add an auth header (Add OAuth token) with a specific service account.
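For reference, in the Cloud Scheduler REST API this corresponds to the oauthToken field of the job's httpTarget; for non-Google HTTPS endpoints the analogous field is oidcToken. A hypothetical job body (names, schedule, and URL are placeholders):

```javascript
// Hypothetical Cloud Scheduler job body with an HTTP target that
// authenticates using an OAuth token from a chosen service account.
const job = {
  name: 'projects/my-project/locations/us-central1/jobs/trigger-function',
  schedule: '0 9 * * 1', // every Monday at 09:00
  httpTarget: {
    uri: 'https://example.com/run-task',
    httpMethod: 'POST',
    oauthToken: {
      serviceAccountEmail: 'scheduler@my-project.iam.gserviceaccount.com',
    },
  },
};

console.log(job.httpTarget.oauthToken.serviceAccountEmail);
```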

Signing error in Google Cloud via Firebase

When I try to call file.getSignedUrl({ action: 'read', expires: '03-01 2500' }), I am returned the error Failure from metadata server.
This is code that was previously working (a few days ago) so my gut says something funky happened in the permissions? Is there something I'm missing here?
I found the real cause of the problem.
In the IAM & Admin portal within Google Cloud, the member representing my Firebase project (i.e. myproject-memberId@myproject.iam.gserviceaccount.com) only had the Owner permission.
I had a flawed understanding and assumed that this was the highest level of authorization, which it evidently is not. At some point I must have changed it to this "higher" permission without realizing I was removing the Editor permission.
To SOLVE my problem, I simply had to add the Editor permission back to my member in the IAM & Admin portal in Google Cloud for my project.
Another instance of human error.

error on command "gsutil notification watchbucket... "

I am trying to set up Object Change Notification. I am running the command below:
gsutil notification watchbucket https://<project_id>.appspot.com/ gs://bucket_name
and getting this error:
ServiceException: 401 Unauthorized WebHook callback channel: https://<project_id>.appspot.com
I have created a service account and gave it full control (FC) permission:
gsutil acl ch -u <project_id>@appspot.gserviceaccount.com:FC gs://bucket_name
I successfully verified the domain as an owner of the URL in Webmaster (Search Console), but I am still getting the same problem.
I have seen similar questions on Stack Overflow but could not find an answer. Can anyone please help?
It sounds like you haven't correctly verified the domain. There are a couple of easy mistakes you may have made:
It is possible you verified http://project.appspot.com instead of https://project.appspot.com. You can check and fix this on the search console.
It is possible you verified the domain using your own Google account but then attempted to use gsutil with a different account, such as a service account. This mistake is particularly easy to make on a GCE instance, as the default credentials will be the instance's service account. I recommend running gsutil in Cloud Shell, as this uses your real credentials.