I want to upload a file from my website (built with Node.js) to my Google Cloud Storage bucket.
But I get this error:
starting-account-67n988tuygj7@sonproje-1533259273248.iam.gserviceaccount.com does not have storage.objects.create access to yiginlabilgi/sefer.jpg.
You have to set up the right permissions on the bucket. Add the service account starting-account-67n988tuygj7@sonproje-1533259273248.iam.gserviceaccount.com as a member of the bucket yiginlabilgi with the Storage Object Creator role.
Follow the Adding a member to a bucket-level policy guide in order to achieve that.
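If you prefer to do this in code rather than in the Console, the same binding can be added with the Python client library. This is just a sketch (it assumes the credentials it runs under are allowed to change the bucket's IAM policy), using the bucket and service account from the error above:

from google.cloud import storage

client = storage.Client()
bucket = client.bucket('yiginlabilgi')

# Fetch the current IAM policy and append a Storage Object Creator binding
# for the service account from the error message.
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append({
    'role': 'roles/storage.objectCreator',
    'members': {'serviceAccount:starting-account-67n988tuygj7@sonproje-1533259273248.iam.gserviceaccount.com'},
})
bucket.set_iam_policy(policy)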
I am trying to create a read-only Blob Container with Azure AD Authentication. My application will upload files to this Blob container and an email will be sent to users inside my organization with a link to a file that they will download with a browser.
I have created a simple storage account and created a blob container inside it.
Access level is currently set to: Private (no anonymous access)
I have given my test user the Reader role on the Storage Account and Storage Blob Data Reader on the blob container.
These are the permissions given to my demouser account, which should only have read access to blob files.
I've uploaded a test file with my admin account.
With my demouser logged into my Azure organization via Azure Storage Explorer, I can download the file just fine.
However when I try to download the file with a direct link https://sareadonly01.blob.core.windows.net/myreadonlycontainer01/TestFile01.txt from a browser, which will be the way the users will be downloading these files with a link in an email, I get this error: "The specified resource does not exist."
If I change the Access Level of the blob container to: Blob (anonymous read only access for blobs only), then my demouser can download the file in a browser, but so can everyone else outside my organization and this doesn't use AAD Authentication.
So how do I create a Read-Only Blob container with AAD Authentication and the ability to download files with a direct URL from a browser?
When I set up the bucket, a key file was downloaded and it said to keep this file safe.
Now I cannot use a .env file to protect it, because in the following code you have to link the JSON file directly to gain access to the GCS bucket:
const path = require('path');
const {Storage} = require('@google-cloud/storage');

const storage = new Storage({
  keyFilename: path.join(__dirname, '/<keyfilename>.json'),
  projectId: '<project ID>'
});
Now I am concerned that when I deploy my app on App Engine this file may be accessed by someone somehow.
That would be a serious threat, because it gives direct access to my GCS bucket.
Should I be concerned about that file being accessed by anyone?
Instead of using the service account JSON file in App Engine, you can use the App Engine default service account to access GCS buckets or any other service in GCP. By default, the App Engine default service account has the Editor role in the project, so any user account with sufficient permissions to deploy changes to the Cloud project can also run code with read/write access to all resources within that project. However, you can change the service account's permissions through the Console:
- Open the Cloud Console and go to the IAM page.
- In the Members list, locate the ID of the App Engine default service account. It uses the member ID: YOUR_PROJECT_ID@appspot.gserviceaccount.com
- Use the dropdown menu to modify the roles assigned to the service account.
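To illustrate the point, here is a minimal sketch in Python (the Node.js Storage client behaves the same way if you omit keyFilename): constructed with no key file, the client falls back to Application Default Credentials, which on App Engine resolve to the default service account. The bucket and object names are placeholders.

from google.cloud import storage

# No keyFilename / credentials argument: Application Default Credentials are
# used, i.e. the App Engine default service account when running on App Engine.
client = storage.Client()

# 'my-bucket' and 'uploads/photo.jpg' are placeholder names for illustration.
bucket = client.bucket('my-bucket')
bucket.blob('uploads/photo.jpg').upload_from_filename('photo.jpg')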
I have a file of around 16 MB in size and am using the Python boto3 upload_file API to upload this file into the S3 bucket. However, I believe the API is internally choosing a multipart upload, and it gives me an "Anonymous users cannot initiate multipart upload" error.
In some of the runs of the application, the file generated may be smaller (few KBs) in size.
What's the best way to handle this scenario in general or fix the error I mentioned above?
I currently have a Django application that generates a file when run and uploads this file directly into an S3 bucket.
OK, so unless you've opened your S3 bucket up for the world to upload to (which is very much NOT recommended), it sounds like you need to set up the permissions for access to your S3 bucket correctly.
How to do that will vary a little depending on how you're running this application, so let's cover a few options. In all cases you will need to do two things:
- Associate your script with an IAM principal (an IAM user or an IAM role, depending on where / how this script is being run).
- Add permissions for that principal to access the bucket (this can be accomplished either through an IAM policy or via the S3 bucket policy).

Where that principal comes from depends on where the code runs:

- Lambda function - You'll need to create an IAM role for your application and associate it with your Lambda function. Boto3 should be able to assume this role transparently for you once configured.
- EC2 instance or ECS task - You'll need to create an IAM role for your application and associate it with your EC2 instance/ECS task. Boto3 will be able to access the credentials for the role via instance metadata and should automatically assume the role.
- Local workstation script - If you're running this script from your local workstation, then boto3 should be able to find and use the credentials you've set up for the AWS CLI. If those aren't the credentials you want to use, you'll need to generate an access key and secret access key (be careful how you secure these if you go this route, and definitely follow least privilege).
Now, once you've got your principal, you can either attach an IAM policy to the IAM user or role that allows uploads to the bucket, or add a statement to the bucket policy that grants that IAM user or role access. You only need to do one of these.
Multipart uploads require the same s3:PutObject permission as single-part uploads (though if your files are small I'd be surprised it was using multipart for them). If you're using KMS, one small trick to be aware of is that you need both Encrypt and Decrypt permissions on the KMS key when encrypting a multipart upload.
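Once the principal and permissions are in place, the upload code itself stays simple and needs no explicit keys. A minimal sketch follows (the bucket and key names are placeholders; the TransferConfig part is optional and only useful if you'd rather keep a ~16 MB file below the multipart threshold):

import boto3
from boto3.s3.transfer import TransferConfig

# Credentials are resolved from the environment: an IAM role on Lambda/EC2/ECS,
# or the AWS CLI profile on a workstation - nothing is hard-coded here.
s3 = boto3.client('s3')

# Optional: raise the multipart threshold (default 8 MB) so a ~16 MB file is
# sent as a single PutObject call instead of a multipart upload.
config = TransferConfig(multipart_threshold=64 * 1024 * 1024)

s3.upload_file('output.dat', 'my-bucket', 'uploads/output.dat', Config=config)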
I am trying to write a cloud function in python that would read a collection in Google Cloud Firestore (Native) [not the Realtime Database or Datastore].
I have created a Service Account that has below Roles for the project:
- Project Owner
- Firebase Admin
- Service Account User
- Cloud Functions Developer
- Project Editor
When running locally, I am setting the service account credential in my environment via GOOGLE_APPLICATION_CREDENTIALS.
My cloud function is able to access Cloud Storage. I am only having issues with Cloud Firestore.
I have tried using both the Client Python SDK and the Admin SDK (Python). The Admin SDK seems to only be available for the realtime database as it requires a Database URL to connect.
I have tried running both from my dev machine and as a cloud function.
I also changed the Firestore access rules to below for unrestricted access:
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      allow read, write: if true;
    }
  }
}
I am trying to run the same code as in the Google documentation:
from google.cloud import firestore

def process_storage_file(data, context):
    # Add a new document
    db = firestore.Client()
    doc_ref = db.collection(u'users').document(u'alovelace')
    doc_ref.set({
        u'first': u'Ada',
        u'last': u'Lovelace',
        u'born': 1815
    })

    # Then query for documents
    users_ref = db.collection(u'users')
    docs = users_ref.get()
    for doc in docs:
        print(u'{} => {}'.format(doc.id, doc.to_dict()))
I am not able to get the Cloud Function to connect to Google Cloud Firestore. I get the error:
line 3, in raise_from google.api_core.exceptions.PermissionDenied: 403 Missing or insufficient permissions.
Both the cloud function and Firestore are in the same GCP Project.
The service account you specified in the Cloud Function's configuration needs to have the Cloud Datastore User role.
First, check if you uploaded the service account's credential JSON file along with your code, and that the GOOGLE_APPLICATION_CREDENTIALS environment variable is also set in your Cloud Function's configuration page. (I know uploading credentials is a bad idea, but you need to put the JSON file somewhere, if you don't want to use the Compute Engine default service account.)
Second, you might want to grant the Cloud Datastore User role (or a similar one) to your service account, instead of Firebase Admin. It seems that the new Firestore can be accessed with the Cloud Datastore roles rather than the Firebase ones.
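If you do keep using a key file rather than a default service account, the credentials can also be passed to the Firestore client explicitly instead of relying on the environment variable. A minimal sketch (the file name and project ID below are placeholders):

from google.cloud import firestore
from google.oauth2 import service_account

# Load the service account key explicitly instead of relying on
# GOOGLE_APPLICATION_CREDENTIALS being set.
credentials = service_account.Credentials.from_service_account_file('service-account.json')
db = firestore.Client(project='my-project', credentials=credentials)

users_ref = db.collection(u'users')
for doc in users_ref.stream():
    print(u'{} => {}'.format(doc.id, doc.to_dict()))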
I am looking at using Azure containers and blobs to store images and videos for my website. I found http://msdn.microsoft.com/en-us/library/windowsazure/dd179354.aspx which talks about the different ACL settings, but it did not answer one of my questions. If a container/blob is set to "No public read access", the site says that only the account owner can read the data. Would this mean that people could not access it by the URL, but my MVC web app hosted on an Azure VM would be able to access it via URL?
Please bear with me if the answer sounds a bit preachy & unnecessarily lengthy :)
Essentially each resource (blob container, blob) in Windows Azure has a unique URL and is accessible via the REST API (thus accessible over the http/https protocol). With the ACL, you basically tell the storage service whether or not to honor the request sent to serve the resource. To read more about the authentication mechanism, you may find this link useful: http://msdn.microsoft.com/en-us/library/windowsazure/dd179428.aspx.
When you set the ACL to No public read access, you're instructing the storage service not to honor any anonymous requests. Only authenticated requests will be honored. To create an authenticated request, you would need your account name and key and create an authorization header which gets passed along with the request to access the resource. If this authorization header is not present in your request, the request will be rejected.
So, long story short, to answer your question: even your MVC application won't be able to access the blob via URL unless that authorization header is included in the request. One possibility would be to explore the Shared Access Signature (SAS) functionality in blob storage. This gives time-bound, restricted permissions to blobs in your storage. So what you would do is create a SAS URL for your blob in your MVC app using your account name and key, and use that SAS URL in the application.
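For illustration, generating such a time-bound SAS URL with the current Python SDK might look like the sketch below (the .NET SDK exposes an equivalent API); the account name, container, and blob are the sample names used in the next paragraph, and the account key is a placeholder:

from datetime import datetime, timedelta
from azure.storage.blob import BlobSasPermissions, generate_blob_sas

# Read-only SAS token valid for one hour; '<storage-account-key>' is a placeholder.
sas_token = generate_blob_sas(
    account_name='myaccount',
    container_name='mycontainer',
    blob_name='myblob.txt',
    account_key='<storage-account-key>',
    permission=BlobSasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(hours=1),
)

url = 'https://myaccount.blob.core.windows.net/mycontainer/myblob.txt?' + sas_token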
To further explain the concept of the ACL, let's say you have a blob container called mycontainer and it has a blob called myblob.txt in a storage account named myaccount. For listing blobs in the container, the container URL would be http://myaccount.blob.core.windows.net/mycontainer?restype=container&comp=list and the blob URL would be http://myaccount.blob.core.windows.net/mycontainer/myblob.txt. The following will be the behavior when you try to access these URLs directly through the browser with the different ACLs:
No public read access
- Container URL - Error
- Blob URL - Error

Public read access for blobs only
- Container URL - Error
- Blob URL - Success (will download the blob)

Full public read access
- Container URL - Success (will show an XML document containing information about all blobs in the container)
- Blob URL - Success (will download the blob)