We are using Node.js for our REST APIs and React for the app. We fetch the AWS S3 images from Node.js using the aws-sdk and plan to render them in React. The problem is that the S3 bucket does not have public access, and it must stay private. How can we solve this?
From Node.js we get the s3 listObjects result; can we access the image from React using the object below?
We have read a few more docs that suggest using a signed URL, but will that work in the browser to display images to clients?
{
  "Key": "public/5db0476246e0fb0004r4rbff5/s3-c0c79f542f3c.jpg",
  "LastModified": "2019-10-23T12:30:32.000Z",
  "ETag": "\"269b2c5455h220bccc374f4f4rfee\"",
  "Size": 510811,
  "StorageClass": "STANDARD",
  "Owner": {
    "ID": "dad9f9dfk39dfijir93irjfiejfidjfjdfdfdfr3r3r3r3fef3"
  }
}
You can put the bucket behind a CloudFront distribution and distribute your content using signed URLs, limit access to certain origins, or anything else that fits your use case.
My place of work uses CloudFront with signed URLs for the same use case.
I think this AWS help doc will help more:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html
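As a rough illustration of that approach, below is a minimal sketch of generating a CloudFront signed URL from Node.js with the aws-sdk. The key pair ID, private key path, and distribution domain are placeholders you would replace with your own values; the React app can then use the returned URL directly as an image source until it expires.

const AWS = require('aws-sdk');
const fs = require('fs');

// Placeholder values: use your own CloudFront key pair and distribution domain.
const keyPairId = '<CloudFront Key Pair ID>';
const privateKey = fs.readFileSync('./cloudfront-private-key.pem', 'utf8');

const signer = new AWS.CloudFront.Signer(keyPairId, privateKey);

// Returns a time-limited URL the browser can drop straight into an <img> tag.
function getSignedImageUrl(objectKey, expiresInSeconds = 3600) {
  return signer.getSignedUrl({
    url: `https://<your-distribution>.cloudfront.net/${objectKey}`,
    expires: Math.floor(Date.now() / 1000) + expiresInSeconds,
  });
}

Your Node.js API can return these URLs alongside the listObjects metadata, and React renders them as ordinary image sources.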
// You can use this code to make an object public while uploading it.
const AWS = require('aws-sdk');
const fs = require('fs');
const path = require('path');

// Configuring the AWS environment
AWS.config.update({
  accessKeyId: "<Access Key Here>",
  secretAccessKey: "<Secret Access Key Here>"
});

var s3 = new AWS.S3();
var filePath = "";

// Configuring parameters
var params = {
  Bucket: '<Bucket Name Here>',
  Body: fs.createReadStream(filePath),
  Key: "folder/" + Date.now() + "_" + path.basename(filePath),
  ACL: 'public-read'
};

s3.upload(params, function (err, data) {
  // Handle error
  if (err) {
    console.log("Error", err);
  }
  // Success
  if (data) {
    console.log("Uploaded in:", data.Location);
  }
});
Just add ACL: 'public-read' to the call when you upload the image from your code (ignore this if you don't have any upload facility).
Unfortunately, for objects that are already uploaded you cannot change the permission to public this way. For that, please refer to this documentation: https://aws.amazon.com/premiumsupport/knowledge-center/read-access-objects-s3-bucket/.
Highlighting the best possible approach for you (you can still refer to the document):
Use a bucket policy that grants public read access to a specific prefix
To grant public read access to a specific object prefix, add a bucket policy similar to the following:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::awsexamplebucket/publicprefix/*"]
    }
  ]
}
Then, copy the objects into the prefix with public read access. You can copy an object into the prefix by running a command similar to the following:
aws s3 cp s3://awsexamplebucket/exampleobject s3://awsexamplebucket/publicprefix/exampleobject
I have two image files uploaded to firebase storage:
capsule house.jpg was uploaded through the UI (clicking the Upload file button).
upload_64e8fd... was uploaded from my backend server (node.js) using this:
const bucket = fbAdmin.storage().bucket('gs://assertivesolutions2.appspot.com');
const result = await bucket.upload(files.image.path);
capsule house.jpg is recognized as a JPEG and a link to it is supplied in the right-hand margin. If I click on it, I see my image in a new tab. You can see for yourself:
https://firebasestorage.googleapis.com/v0/b/assertivesolutions2.appspot.com/o/capsule%20house.jpg?alt=media&token=f5e0ccc4-7916-4245-b813-dbdf1838556f
upload_64e8fd... is not recognized as any kind of image file and no link is provided.
The result returned on the backend is a huge json object with the following fields:
"selfLink": "https://www.googleapis.com/storage/v1/b/assertivesolutions2.appspot.com/o/upload_64e8fd09f787acfe2728ae73158e20ab"
"mediaLink": "https://storage.googleapis.com/download/storage/v1/b/assertivesolutions2.appspot.com/o/upload_64e8fd09f787acfe2728ae73158e20ab?generation=1590547279565389&alt=media"
The first one sends me to a page that says this:
{
  "error": {
    "code": 401,
    "message": "Anonymous caller does not have storage.objects.get access to the Google Cloud Storage object.",
    "errors": [
      {
        "message": "Anonymous caller does not have storage.objects.get access to the Google Cloud Storage object.",
        "domain": "global",
        "reason": "required",
        "locationType": "header",
        "location": "Authorization"
      }
    ]
  }
}
The second one gives me something similar:
Anonymous caller does not have storage.objects.get access to the Google Cloud Storage object.
The rules for my storage bucket are as follows:
rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    match /{allPaths=**} {
      allow read, write: if true;
    }
  }
}
I'm allowing all reads and writes.
So why does it say I don't have access to see my image when it's uploaded through my backend server?
I'd also like to know why it isn't recognized as a JPEG when it's uploaded through my backend server but is when uploaded through the UI; however, I'd like to focus on the access issue for this question.
Thanks.
By default, files are uploaded as private unless you change your bucket settings, as mentioned here. The code below is an example of how to change the visibility of your documents.
/**
 * {@inheritdoc}
 */
public function setVisibility($path, $visibility)
{
    $object = $this->getObject($path);

    if ($visibility === AdapterInterface::VISIBILITY_PRIVATE) {
        $object->acl()->delete('allUsers');
    } elseif ($visibility === AdapterInterface::VISIBILITY_PUBLIC) {
        $object->acl()->add('allUsers', Acl::ROLE_READER);
    }

    $normalised = $this->normaliseObject($object);
    $normalised['visibility'] = $visibility;

    return $normalised;
}
You can check how to set that via console, following the tutorial in the official documentation: Making data public
Besides that, as indicated in the comment by @FrankvanPuffelen, you won't have a generated URL for the file to be accessed. You can find more information about it here.
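For reference in the question's language, here is a minimal Node.js sketch of the same idea using the @google-cloud/storage API that firebase-admin wraps: it makes an already-uploaded object public, or alternatively hands out a time-limited signed URL while keeping the object private. The object path is a placeholder.

const fbAdmin = require('firebase-admin');

// Assumes fbAdmin.initializeApp(...) has already been called elsewhere.
const bucket = fbAdmin.storage().bucket('assertivesolutions2.appspot.com');

async function exposeFile(objectPath) {
  const file = bucket.file(objectPath);

  // Option 1: make the object itself publicly readable.
  await file.makePublic();

  // Option 2: keep it private and hand out a time-limited signed URL instead.
  const [signedUrl] = await file.getSignedUrl({
    action: 'read',
    expires: Date.now() + 60 * 60 * 1000, // valid for one hour
  });

  return signedUrl;
}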
Let me know if the information helped you!
The other answer helped me! I have no idea why the Console had me make those security rules if they won't apply...
Based on the Node.js docs (and probably other languages), there is a simple way to make the file public during upload:
const result = await bucket.upload(files.image.path, {public: true});
This same option works for bucket.file().save() and similar APIs.
Premise: I'm new to cloud computing in general, AWS specifically, and REST APIs, and am trying to cobble together a "big-picture" understanding.
I am working with LocalStack, which, as I understand it, simulates the real AWS by responding identically to (a subset of) the AWS API when you specify the endpoint address/port that LocalStack listens on.
Lastly, I've been working from this tutorial: https://dev.to/goodidea/how-to-fake-aws-locally-with-localstack-27me
Using the noted tutorial, and per its guidance, I successfully created an S3 bucket using the AWS CLI.
To demonstrate uploading a local file to the S3 bucket, though, the tutorial switches to Node.js, which I think demonstrates the AWS Node.js SDK:
// aws.js
// This code segment comes from https://dev.to/goodidea/how-to-fake-aws-locally-with-localstack-27me

const AWS = require('aws-sdk')
require('dotenv').config()

const credentials = {
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_KEY,
}

const useLocal = process.env.NODE_ENV !== 'production'

const bucketName = process.env.AWS_BUCKET_NAME

const s3client = new AWS.S3({
  credentials,
  /**
   * When working locally, we'll use the Localstack endpoints. This is the one for S3.
   * A full list of endpoints for each service can be found in the Localstack docs.
   */
  endpoint: useLocal ? 'http://localhost:4572' : undefined,
  /**
   * Including this option gets localstack to more closely match the defaults for
   * live S3. If you omit this, you will need to add the bucketName to the `Key`
   * property in the upload function below.
   *
   * see: https://github.com/localstack/localstack/issues/1180
   */
  s3ForcePathStyle: true,
})

const uploadFile = async (data, fileName) =>
  new Promise((resolve) => {
    s3client.upload(
      {
        Bucket: bucketName,
        Key: fileName,
        Body: data,
      },
      (err, response) => {
        if (err) throw err
        resolve(response)
      },
    )
  })

module.exports = uploadFile
// test-upload.js
// This code segment comes from https://dev.to/goodidea/how-to-fake-aws-locally-with-localstack-27me

const fs = require('fs')
const path = require('path')
const uploadFile = require('./aws')

const testUpload = () => {
  const filePath = path.resolve(__dirname, 'test-image.jpg')
  const fileStream = fs.createReadStream(filePath)
  const now = new Date()
  const fileName = `test-image-${now.toISOString()}.jpg`
  uploadFile(fileStream, fileName).then((response) => {
    console.log(":)")
    console.log(response)
  }).catch((err) => {
    console.log(":|")
    console.log(err)
  })
}

testUpload()
Invocation:

$ node test-upload.js
:)
{ ETag: '"c6b9e5b1863cd01d3962c9385a9281d"',
  Location: 'http://demo-bucket.localhost:4572/demo-bucket/test-image-2019-03-11T21%3A22%3A43.511Z.jpg',
  key: 'demo-bucket/test-image-2019-03-11T21:22:43.511Z.jpg',
  Key: 'demo-bucket/test-image-2019-03-11T21:22:43.511Z.jpg',
  Bucket: 'demo-bucket' }
I do not have prior experience with Node.js, but my understanding of the above code is that it uses the AWS.S3.upload() method of the AWS Node.js SDK to copy a local file to an S3 bucket, and prints the HTTP response (is that correct?).
Question: I observe that the HTTP response includes a "Location" key whose value looks like a URL I can copy/paste into a browser to view the image directly from the S3 bucket; is there a way to get this location using the AWS CLI?
Am I correct to assume that AWS CLI commands are analogues of the AWS SDK?
I tried uploading a file to my S3 bucket using the aws s3 cp CLI command, which I thought would be analogous to the AWS.S3.upload() method above, but it didn't generate any output, and I'm not sure what I should have done - or should do - to get a Location the way the HTTP response to the AWS.S3.upload() AWS node SDK method did.
$ aws --endpoint-url=http://localhost:4572 s3 cp ./myFile.json s3://myBucket/myFile.json
upload: ./myFile.json to s3://myBucket/myFile.json
Update: Continued study makes me wonder whether it is implicit that a file uploaded to an S3 bucket by any means, whether by the CLI command aws s3 cp or the Node.js SDK method AWS.S3.upload(), etc., can be accessed at http://<bucket_name>.<endpoint_without_http_prefix>/<bucket_name>/<key>, e.g. http://myBucket.localhost:4572/myBucket/myFile.json?
If this is implicit, I suppose you could argue it's unnecessary to ever be given the "Location" as in that example node.js HTTP response.
Grateful for guidance - I hope it's obvious how painfully under-educated I am on all the involved technologies.
Update 2: It looks like the correct url is <endpoint>/<bucket_name>/<key>, e.g. http://localhost:4572/myBucket/myFile.json.
AWS CLI and the different SDKs offer similar functionality but some add extra features and some format the data differently. It's safe to assume that you can do what the CLI does with the SDK and vice-versa. You might just have to work for it a little bit sometimes.
As you said in your update, not every file that is uploaded to S3 is publicly available. Buckets have policies and files have permissions. Files are only publicly available if the policies and permissions allow it.
If the file is public then you can just construct the URL as you described. If you have the bucket setup for website hosting, you can also use the domain you setup.
But if the file is not public, or you just want a temporary URL, you can use aws s3 presign s3://myBucket/myFile.json. This will give you a URL that anyone can use to download the file with the permissions of whoever executed the command. The URL will be valid for one hour unless you choose a different duration with --expires-in. The SDK has similar functionality as well, but you have to work a tiny bit harder to use it.
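For reference, here is a minimal sketch of the SDK equivalent of aws s3 presign, using getSignedUrl from the aws-sdk already used in the question. The credentials, endpoint, bucket, and key are placeholders matching the LocalStack setup above.

const AWS = require('aws-sdk')

const s3client = new AWS.S3({
  // Placeholder credentials and endpoint for a LocalStack-style setup.
  accessKeyId: 'test',
  secretAccessKey: 'test',
  endpoint: 'http://localhost:4572',
  s3ForcePathStyle: true,
})

// Returns a temporary URL for GETting a private object, valid for one hour.
const presignedUrl = s3client.getSignedUrl('getObject', {
  Bucket: 'myBucket',
  Key: 'myFile.json',
  Expires: 3600, // seconds
})

console.log(presignedUrl)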
Note: Starting with version 0.11.0, all APIs are exposed via a single edge service, which is accessible on http://localhost:4566 by default.
Considering that you've added some files to your bucket
aws --endpoint-url http://localhost:4566 s3api list-objects-v2 --bucket mybucket
{
  "Contents": [
    {
      "Key": "blog-logo.png",
      "LastModified": "2020-12-28T12:47:04.000Z",
      "ETag": "\"136f0e6acf81d2d836043930827d1cc0\"",
      "Size": 37774,
      "StorageClass": "STANDARD"
    }
  ]
}
you should be able to access your file with
http://localhost:4566/mybucket/blog-logo.png
I have a use case where the AWS S3 bucket should stay private by default, but certain objects must be made public while uploading them to AWS S3.
I am using the following code to sign the AWS S3 URL with an ACL setting of public-read:
const AWS = require('aws-sdk');
const util = require('util');

module.exports.generateS3PostSignedUrl = async (bucketName, bucketKey, objectExpiry) => {
  let s3Client = new AWS.S3({
    region: 'some-region'
  });

  let signingParams = {
    Expires: objectExpiry,
    Bucket: bucketName,
    Fields: {
      key: bucketKey,
    },
    Conditions: [
      ['acl', 'public-read']
    ],
    ACL: 'public-read'
  };

  let s3createPresignedPost = util.promisify(s3Client.createPresignedPost).bind(s3Client);
  let signedUrl = await s3createPresignedPost(signingParams);

  return signedUrl;
};
Request while uploading -
I am able to upload the file to AWS S3 if I remove the Conditions array in the signing params, but the file is still not public when I open its URL.
I believe I have done something wrong in the signingParams part.
Ref -
https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#createPresignedPost-property
Upload file to s3 with POST
The order of parameters matters here. Put acl parameter before the file and it should work; otherwise S3 just ignores the value you provided.
Below are example screenshots with different placements of the parameters in the form data.
Also, be sure to execute createPresignedPost as a user with the s3:PutObjectAcl and s3:PutObject permissions.
The correct order of form-data parameters
The same request but with acl parameter being placed after file (Ignored by S3)
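As a rough sketch of what that ordering looks like in code (the helper below is hypothetical; signedPost is the { url, fields } object returned by generateS3PostSignedUrl above), a browser-side upload might look like this:

async function uploadWithPresignedPost(signedPost, file) {
  const formData = new FormData();

  // The acl field (and every other signed field) must be appended before the file;
  // fields appearing after the file are ignored by S3.
  formData.append('acl', 'public-read');
  Object.entries(signedPost.fields).forEach(([name, value]) => {
    formData.append(name, value);
  });

  // The file always goes last.
  formData.append('file', file);

  const response = await fetch(signedPost.url, { method: 'POST', body: formData });
  if (!response.ok) {
    throw new Error(`Upload failed with status ${response.status}`);
  }
}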
I'm trying to create an S3 bucket and immediately assign a lambda notification event to it.
Here's the node test script I wrote:
const aws = require('aws-sdk');
const uuidv4 = require('uuid/v4');

aws.config.update({
  accessKeyId: 'key',
  secretAccessKey: 'secret',
  region: 'us-west-1'
});

const s3 = new aws.S3();

const params = {
  Bucket: `bucket-${uuidv4()}`,
  ACL: "private",
  CreateBucketConfiguration: {
    LocationConstraint: 'us-west-1'
  }
};

s3.createBucket(params, function (err, data) {
  if (err) {
    throw err;
  } else {
    const bucketUrl = data.Location;
    const bucketNameRegex = /bucket-[a-z0-9\-]+/;
    const bucketName = bucketNameRegex.exec(bucketUrl)[0];

    const params = {
      Bucket: bucketName,
      NotificationConfiguration: {
        LambdaFunctionConfigurations: [
          {
            Id: `lambda-upload-notification-${bucketName}`,
            LambdaFunctionArn: 'arn:aws:lambda:us-west-1:xxxxxxxxxx:function:respondS3Upload',
            Events: ['s3:ObjectCreated:CompleteMultipartUpload']
          },
        ]
      }
    };

    // Throws "Unable to validate the following destination configurations" until an event
    // is manually added and deleted from the bucket in the AWS UI Console
    s3.putBucketNotificationConfiguration(params, function (err, data) {
      if (err) {
        console.error(err);
        console.error(this.httpResponse.body.toString());
      } else {
        console.log(data);
      }
    });
  }
});
The creation works fine but calling s3.putBucketNotificationConfiguration from the aws-sdk throws:
{ InvalidArgument: Unable to validate the following destination configurations
at Request.extractError ([...]/node_modules/aws-sdk/lib/services/s3.js:577:35)
at Request.callListeners ([...]/node_modules/aws-sdk/lib/sequential_executor.js:105:20)
at Request.emit ([...]/node_modules/aws-sdk/lib/sequential_executor.js:77:10)
at Request.emit ([...]/node_modules/aws-sdk/lib/request.js:683:14)
at Request.transition ([...]/node_modules/aws-sdk/lib/request.js:22:10)
at AcceptorStateMachine.runTo ([...]/node_modules/aws-sdk/lib/state_machine.js:14:12)
at [...]/node_modules/aws-sdk/lib/state_machine.js:26:10
at Request.<anonymous> ([...]/node_modules/aws-sdk/lib/request.js:38:9)
at Request.<anonymous> ([...]/node_modules/aws-sdk/lib/request.js:685:12)
at Request.callListeners ([...]/node_modules/aws-sdk/lib/sequential_executor.js:115:18)
message: 'Unable to validate the following destination configurations',
code: 'InvalidArgument',
region: null,
time: 2017-11-10T02:55:43.004Z,
requestId: '9E1CB35811ED5828',
extendedRequestId: 'tWcmPfrAu3As74M/0sJL5uv+pLmaD4oBJXwjzlcoOBsTBh99iRAtzAloSY/LzinSQYmj46cwyfQ=',
cfId: undefined,
statusCode: 400,
retryable: false,
retryDelay: 4.3270874729153475 }
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>InvalidArgument</Code>
  <Message>Unable to validate the following destination configurations</Message>
  <ArgumentName1>arn:aws:lambda:us-west-1:xxxxxxxxxx:function:respondS3Upload, null</ArgumentName1>
  <ArgumentValue1>Not authorized to invoke function [arn:aws:lambda:us-west-1:xxxxxxxxxx:function:respondS3Upload]</ArgumentValue1>
  <RequestId>9E1CB35811ED5828</RequestId>
  <HostId>tWcmPfrAu3As74M/0sJL5uv+pLmaD4oBJXwjzlcoOBsTBh99iRAtzAloSY/LzinSQYmj46cwyfQ=</HostId>
</Error>
I've run it with a role assigned to the Lambda function with what I think are all the policies it needs; I could be missing something. I'm using my root access keys to run this script.
I thought it might be a timing issue where S3 needs time to create the bucket before adding the event, but I've waited a while, hardcoded the bucket name, and run my script again, and it throws the same error.
The weird thing is that if I create the event hook in the S3 UI and immediately delete it, my script works if I hardcode that bucket name into it. It seems like creating the event in the UI adds some needed permission, but I'm not sure what that would be in the SDK or in the console UI.
Any thoughts or things to try? Thanks for your help.
You are getting this message because your S3 bucket is missing permission to invoke your Lambda function.
According to the AWS documentation, there are two types of permissions required:
Permissions for your Lambda function to invoke services
Permissions for Amazon S3 to invoke your Lambda function
You should create an object of type 'AWS::Lambda::Permission' and it should look similar to this:
{
  "Version": "2012-10-17",
  "Id": "default",
  "Statement": [
    {
      "Sid": "<optional>",
      "Effect": "Allow",
      "Principal": {
        "Service": "s3.amazonaws.com"
      },
      "Action": "lambda:InvokeFunction",
      "Resource": "<ArnToYourFunction>",
      "Condition": {
        "StringEquals": {
          "AWS:SourceAccount": "<YourAccountId>"
        },
        "ArnLike": {
          "AWS:SourceArn": "arn:aws:s3:::<YourBucketName>"
        }
      }
    }
  ]
}
Finally looked at this again after a year. This was a hackathon project from last year that we revisited. @davor.obilinovic's answer was very helpful in pointing me to the Lambda permission I needed to add. It still took me a little while to figure out exactly what it needed to look like.
Here are the AWS JavaScript SDK and Lambda API docs
https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Lambda.html#addPermission-property
https://docs.aws.amazon.com/lambda/latest/dg/API_AddPermission.html
The JS SDK docs have this line:
SourceArn: "arn:aws:s3:::examplebucket/*",
I couldn't get it working for the longest time and was still getting the Unable to validate the following destination configurations error.
Changing it to
SourceArn: "arn:aws:s3:::examplebucket",
fixed that issue. The /* was apparently wrong, and I should have looked more closely at the answer I got here, but I was trying to follow the AWS docs.
After developing for a while and creating lots of buckets, Lambda permissions, and S3 Lambda notifications, calling addPermission started throwing a The final policy size (...) is bigger than the limit (20480) error. Adding new, individual permissions for each bucket appends them to the bottom of the Lambda function policy, and apparently that policy has a maximum size.
The policy doesn't seem to be editable in the AWS Management Console, so I had fun deleting each entry with the SDK. I copied the policy JSON, pulled out the Sids, and called removePermission in a loop (which threw rate-limit errors, so I had to run it many times).
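A rough sketch of that cleanup loop, using getPolicy and removePermission from the aws-sdk (a minimal reconstruction under stated assumptions, not the exact script):

const aws = require('aws-sdk');
const lambda = new aws.Lambda({ region: process.env.AWS_REGION });

// Fetch the function policy, pull out every statement id, and remove the statements
// one by one. Sequential awaits keep the rate-limit errors somewhat manageable.
async function clearLambdaPolicy(functionName) {
  const { Policy } = await lambda.getPolicy({ FunctionName: functionName }).promise();
  const statements = JSON.parse(Policy).Statement;

  for (const { Sid } of statements) {
    await lambda.removePermission({ FunctionName: functionName, StatementId: Sid }).promise();
    console.log('Removed permission', Sid);
  }
}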
Finally I discovered that omitting the SourceArn key will give Lambda permission to all S3 buckets.
Here's my final code using the SDK to add the permission I needed. I just ran this once for my function.
const aws = require('aws-sdk');

aws.config.update({
  accessKeyId: process.env.AWS_ACCESS,
  secretAccessKey: process.env.AWS_SECRET,
  region: process.env.AWS_REGION,
});

// Creates the Lambda Function Policy, which must be created once for each Lambda function.
// Must be done before calling s3.putBucketNotificationConfiguration(...)
function createLambdaPermission() {
  const lambda = new aws.Lambda();

  const params = {
    Action: 'lambda:InvokeFunction',
    FunctionName: process.env.AWS_LAMBDA_ARN,
    Principal: 's3.amazonaws.com',
    SourceAccount: process.env.AWS_ACCOUNT_ID,
    StatementId: `example-S3-permission`,
  };

  lambda.addPermission(params, function (err, data) {
    if (err) {
      console.log(err);
    } else {
      console.log(data);
    }
  });
}
If it is still useful for someone, this is how I add the permission to the Lambda function using Java:
AWSLambda client = AWSLambdaClientBuilder.standard().withRegion(clientRegion).build();

AddPermissionRequest requestLambda = new AddPermissionRequest()
    .withFunctionName("XXXXX")
    .withStatementId("XXXXX")
    .withAction("lambda:InvokeFunction")
    .withPrincipal("s3.amazonaws.com")
    .withSourceArn("arn:aws:s3:::XXXXX")
    .withSourceAccount("XXXXXX");

client.addPermission(requestLambda);
Please check
https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/lambda/AWSLambda.html#addPermission-com.amazonaws.services.lambda.model.AddPermissionRequest-
The Console is another way to allow S3 to invoke a Lambda function:
Note: When you add a trigger to your function with the Lambda console, the console updates the function's resource-based policy to allow the service to invoke it. To grant permissions to other accounts or services that aren't available in the Lambda console, use the AWS CLI.
So you just need to add and configure an S3 trigger for your Lambda function from the AWS console:
https://docs.aws.amazon.com/lambda/latest/dg/access-control-resource-based.html
For me, the issue was that Lambda expected permission over the entire bucket, not the bucket and keys.
It may be helpful to look here: AWS Lambda : creating the trigger
There's a rather unclear error when you have conflicting events in a bucket. You need to purge the other events to create a new one.
You have to add an S3 resource-based policy on the Lambda function too. From your Lambda, go to Configuration > Permissions > Resource-based policy.
This is explained in more detail here.
If the accepted answer by @davor.obilinovic (https://stackoverflow.com/a/47674337) still doesn't fix it, note that you'll get this error even if an existing event notification is now invalid.
For example, let's say you configured bucketA/prefix1 to trigger lambda1 and bucketA/prefix2 to trigger lambda2. After a while you decide to delete lambda2 but don't remove the event notification for bucketA/prefix2.
Now if you try to configure bucketA/prefix3 to trigger lambda3, you'll see the "Unable to validate the following destination configurations" error even though you are only trying to add lambda3 and lambda3 is configured correctly, as @davor.obilinovic answered.
Additional context:
The reason for this behavior is that AWS does not have an "add_event_notification" API. There is only a "put_bucket_notification" call, which takes in the complete list of all the old event notifications plus the new one we want to add. So each time we want to add an event notification, we have to send the entire list, and AWS validates the entire list, as the sketch below illustrates. It would have been easier and clearer had they specified which "following destination" they were referring to in their error message.
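To make that concrete, here is a minimal sketch of the read-modify-write pattern this implies, using getBucketNotificationConfiguration and putBucketNotificationConfiguration from the aws-sdk (the function ARN and prefix are placeholders continuing the example above):

const aws = require('aws-sdk');
const s3 = new aws.S3();

// There is no "append one notification" API, so we fetch the existing configuration,
// add the new Lambda configuration to it, and write the whole list back. If any entry
// in the list is invalid (e.g. it points to a deleted Lambda), the entire request
// fails validation with "Unable to validate the following destination configurations".
async function addLambdaNotification(bucket) {
  const existing = await s3.getBucketNotificationConfiguration({ Bucket: bucket }).promise();

  existing.LambdaFunctionConfigurations = [
    ...(existing.LambdaFunctionConfigurations || []),
    {
      Id: 'lambda-upload-notification-prefix3',
      LambdaFunctionArn: 'arn:aws:lambda:us-west-1:xxxxxxxxxx:function:lambda3',
      Events: ['s3:ObjectCreated:*'],
      Filter: {
        Key: { FilterRules: [{ Name: 'prefix', Value: 'prefix3/' }] },
      },
    },
  ];

  return s3.putBucketNotificationConfiguration({
    Bucket: bucket,
    NotificationConfiguration: existing,
  }).promise();
}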
For me it was a totally different issue causing the "Unable to validate the following destination configurations" error.
Apparently, there was an old Lambda function on the same bucket that had been dropped a while before. However, AWS does not always remove the event notification from the S3 bucket, even if the old Lambda function and its trigger are long gone.
This causes the conflict and the strange error message.
Resolution -
Navigate to the S3 bucket => Properties => Event notifications,
and drop any old remaining events still defined.
After this, everything went back to normal and worked like a charm.
Good luck!
I faced the same com.amazonaws.services.s3.model.AmazonS3Exception: Unable to validate the following destination configurations error when I tried to execute putBucketNotificationConfiguration.
Upon checking around, I found that every time you update the bucket notification configuration, AWS runs a test notification check on all the existing notification configurations. If any of the tests fail, for example because you removed the destination Lambda function or SNS topic of an older configuration, AWS fails the entire bucket notification configuration request with the above exception.
To address this, either identify and fix the configuration that is failing the test, or remove all the existing configurations (if feasible) in the bucket using aws s3api put-bucket-notification-configuration --bucket=myBucketName --notification-configuration="{}", and then try updating the bucket configuration again.
There is no Node.js Firebase Storage client at the moment (too bad...), so I'm turning to gcloud-node with the parameters found in Firebase's console.
I'm trying:
var firebase = require('firebase');

var gcloud = require('gcloud')({
  keyFilename: process.env.FB_JSON_PATH,
  projectId: process.env.FB_PROJECT_ID
});

firebase.initializeApp({
  serviceAccount: process.env.FB_JSON_PATH,
  databaseURL: process.env.FB_DATABASE_URL
});

var fb = firebase.database().ref();
var gcs = gcloud.storage();
var bucket = gcs.bucket(process.env.FB_PROJECT_ID);

bucket.exists(function(err, exists) {
  console.log('err', err);
  console.log('exists', exists);
});
Where :
FB_JSON_PATH is the path to the JSON file generated in order to use the Firebase Server SDK
FB_DATABASE_URL is something like https://app-a36e5.firebaseio.com/
FB_PROJECT_ID is the name of the firebase project in Google's console : "app-a36e5"
The id of the bucket is FB_PROJECT_ID (in Firebase's console the storage tab displays gs://app-a36e5.appspot.com)
When I run this code I get :
err null
exists false
But no other errors.
I'm expecting exists true at least.
Some additional info: I can query the database (so I imagine the JSON file is correct), and I have set the storage rules as follows:
service firebase.storage {
  match /b/app-a36e5.appspot.com/o {
    match /{allPaths=**} {
      allow read: if true;
      allow write: if request.auth != null;
    }
  }
}
So that everything on the storage is readable.
Any ideas on how to get this to work? Thank you.
The issue here is that you aren't naming your storage bucket correctly. The bucket initialization should be:
var bucket = gcs.bucket('app-a36e5.appspot.com'); // full name of the bucket includes the .appspot.com
I would assume that process.env.FB_PROJECT_ID is just the project-id part; you'd need to use the full bucket name, not just the project ID (though the bucket name may simply be process.env.FB_PROJECT_ID + '.appspot.com').
Also, sorry about not providing Storage integrated with Firebase. GCS has a high-quality library that you've already found (gcloud-node), and we figured this provides the best story for developers (Firebase for mobile, Google Cloud Platform for server-side development) and didn't want to muddy the waters further.