Amazon S3 bucket image access issue: Access Denied - node.js

I am getting the error above (Access Denied) when I use the uploaded image's URL as an image src.
I am using the following modules to upload the image in Node:
aws = require('aws-sdk'),
multer = require('multer'),
multerS3 = require('multer-s3'),
The image uploads to the bucket successfully, but when I put the same URL in <img src="https://meditationimg.s3.us-east-2.amazonaws.com/profilepic/1507187706799Penguins.jpg" /> it returns the error above.
Does anyone know the solution?

No Such Key is S3's way of saying "404 Not Found."
The request was authorized and syntactically valid, but there's no file in the bucket at the specified path.
You may want to inspect the contents of your bucket from the AWS console.
Make sure you access the image using the same case as the key it was uploaded with and is stored under on S3. Object keys are case sensitive (and generally should be lower case).
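For reference, here is a minimal sketch of a multer-s3 setup that logs the exact key and public URL of each uploaded object, so you can compare them with the URL you put in the img tag. The bucket name and key prefix are taken from the question; the Express route, form field name and public-read ACL are assumptions, not your actual code:
var express = require('express');
var aws = require('aws-sdk');
var multer = require('multer');
var multerS3 = require('multer-s3');

var app = express();
var s3 = new aws.S3();

var upload = multer({
  storage: multerS3({
    s3: s3,
    bucket: 'meditationimg',
    acl: 'public-read',             // assumption: objects are meant to be publicly readable
    key: function (req, file, cb) {
      cb(null, 'profilepic/' + Date.now() + file.originalname);
    }
  })
});

// 'photo' is a placeholder field name; adjust to your form
app.post('/upload', upload.single('photo'), function (req, res) {
  // multer-s3 reports the exact key and URL it stored the object under
  console.log(req.file.key);        // e.g. profilepic/1507187706799Penguins.jpg
  console.log(req.file.location);   // full URL to use in <img src="...">
  res.json({ url: req.file.location });
});

app.listen(3000);
Using req.file.location directly avoids building the URL by hand, which is where a case or path mismatch (and the resulting NoSuchKey/404) usually creeps in.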

Related

How to upload downloaded file to s3 bucket using Lambda function

I have looked at various questions and answers but could not find one that worked for me. Since I am really new to AWS, I need your help. I am trying to download a gzip file, load it into a JSON file, and then upload it to an S3 bucket using a Lambda function. I wrote the code to download the file and convert it to JSON, but I am having trouble uploading it to the S3 bucket. Assume that the file is ready as x.json. What should I do then?
I know it is a really basic question, but help is still needed :)
This code will upload a local file to Amazon S3:
import boto3
s3_client = boto3.client('s3', region_name='us-west-2')  # Change as appropriate
s3_client.upload_file('/tmp/foo.json', 'my-bucket', 'folder/foo.json')
Some tips:
In Lambda functions you can only write to /tmp/
There is a 512 MB limit on /tmp/ storage
At the end of your function, delete the files (zip, json, etc) because the container can be reused and you don't want to run out of disk space
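Putting the download, conversion and upload steps together, a minimal handler along these lines might look like the following; the source URL, bucket name and object keys are placeholders, not values from your setup:
import gzip
import json
import os
import urllib.request

import boto3

s3_client = boto3.client('s3', region_name='us-west-2')  # Change as appropriate

def lambda_handler(event, context):
    gz_path = '/tmp/source.json.gz'   # /tmp/ is the only writable path in Lambda
    json_path = '/tmp/x.json'

    # Download the gzip file (placeholder URL)
    urllib.request.urlretrieve('https://example.com/source.json.gz', gz_path)

    # Decompress and re-save as plain JSON
    with gzip.open(gz_path, 'rt') as gz_file:
        data = json.load(gz_file)
    with open(json_path, 'w') as json_file:
        json.dump(data, json_file)

    # Upload the result to S3 (placeholder bucket/key)
    s3_client.upload_file(json_path, 'my-bucket', 'folder/x.json')

    # Clean up /tmp so a reused container does not run out of space
    os.remove(gz_path)
    os.remove(json_path)

    return {'status': 'uploaded'}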
If your Lambda function has the proper permissions to write a file to S3, then simply use the boto3 package, which is the AWS SDK for Python.
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html
Be aware that if the Lambda function runs inside a VPC, it cannot access the public internet, and therefore cannot reach the S3 API endpoints that boto3 uses. In that case you may need a NAT gateway to route the Lambda's traffic out to the public internet.

How to make TensorFlow read a file from an S3 byte stream

I have built a deep learning model in TensorFlow for image recognition. It works when reading an image file from a local directory with the tf.read_file() method, but now I need TensorFlow to read the file from a variable holding a byte stream extracted from an image file in an Amazon S3 bucket, without storing the stream in a local directory.
You should be able to pass in the fully formed s3 path to tf.read_file(), like:
s3://bucket-name/path/to/file.jpeg, where bucket-name is the name of your S3 bucket and path/to/file.jpeg is where the file is stored in your bucket. It seems possible you might be running into an access-permissions issue, depending on whether your bucket is private. You can follow https://github.com/tensorflow/examples/blob/master/community/en/docs/deploy/s3.md to set up your credentials.
Is there an error you ran into when doing this?
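As a rough sketch (assuming a TensorFlow build with the s3:// filesystem available; on newer versions this may require the tensorflow-io package), reading the object directly from S3 looks like this, with the bucket, key and credentials as placeholders:
import os
import tensorflow as tf
# import tensorflow_io  # uncomment on TF versions where S3 support lives in tensorflow-io

# Credentials for the S3 filesystem are picked up from the environment
os.environ['AWS_ACCESS_KEY_ID'] = 'YOUR_ACCESS_KEY'
os.environ['AWS_SECRET_ACCESS_KEY'] = 'YOUR_SECRET_KEY'
os.environ['AWS_REGION'] = 'us-east-1'

# Read the bytes straight from S3, no local copy needed
image_bytes = tf.io.read_file('s3://bucket-name/path/to/file.jpeg')
image = tf.io.decode_jpeg(image_bytes)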

Node.js: multi-part file upload via REST API

I would like to upload a file by invoking a REST endpoint with a multipart request.
In particular, I am looking at this API: Google Cloud Storage: Objects: insert
I did read about using multer; however, I did not find any complete example showing me how to perform this operation.
Could someone help me with that?
https://cloud.google.com/nodejs/getting-started/using-cloud-storage#uploading_to_cloud_storage
This is a good example of how to use multer to upload a single image to Google Cloud Storage. Use multer to create a file stream for each file (storage: multer.memoryStorage()) and handle the file stream by sending it to your GCS bucket in your callback, as sketched below.
However, the link only shows an example for one image. If you want to handle an array of images, create a for loop that creates a stream for each file in your request, but only call next() after the for loop ends. If you keep next() inside each loop iteration you will get the error: Error: Can't set headers after they are sent.
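Here is a minimal sketch of that pattern for a single file, assuming the @google-cloud/storage client, an Express app, and a placeholder bucket name:
const { Storage } = require('@google-cloud/storage');
const express = require('express');
const multer = require('multer');

const app = express();
const storage = new Storage();
const bucket = storage.bucket('YOUR_BUCKET_NAME');  // placeholder

// Keep the upload in memory so the buffer can be streamed straight to GCS
const upload = multer({ storage: multer.memoryStorage() });

app.post('/upload', upload.single('file'), (req, res, next) => {
  const blob = bucket.file(req.file.originalname);
  const blobStream = blob.createWriteStream({
    resumable: false,
    contentType: req.file.mimetype
  });

  blobStream.on('error', next);
  blobStream.on('finish', () => {
    res.status(200).send('Uploaded ' + req.file.originalname);
  });

  // Write the in-memory buffer to the bucket
  blobStream.end(req.file.buffer);
});

app.listen(8080);
For multiple files, switch to upload.array(...) and wrap the streaming block in a loop over req.files, sending the response or calling next() only once, after the loop, as described above.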
There is an example of uploading files with the Node.js client library and multer. You can modify this example and set the multipart option:
Download the sample code and cd into the folder:
git clone https://github.com/GoogleCloudPlatform/nodejs-docs-samples/
cd nodejs-docs-samples/appengine/storage
Edit the app.yaml file and include your bucket name:
GCLOUD_STORAGE_BUCKET: YOUR_BUCKET_NAME
Then in the source code, you can modify the publicUrl variable according to Objects: insert example:
const publicUrl = format(`https://www.googleapis.com/upload/storage/v1/b/${bucket.name}/o?uploadType=multipart`);
Download a key file for your service account and set the environment variable:
Go to the Create service account key page in the GCP Console.
From the Service account drop-down list, select New service account.
Input a name into the Service account name field.
From the Role drop-down list, select Project > Owner.
Click Create. A JSON file that contains your key is downloaded to your computer. Finally, export the environment variable:
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/your/key/file
After that, you're ready to run npm start, go to the app's frontend, and upload your file.

Amazon S3 403 Forbidden Error for KML files but not JPG files

I am able to successfully upload (put object) JPG files to S3 with a particular code path, but I receive a 403 Forbidden error when using the same code path to upload a KML file. I am not explicitly restricting file types with a bucket policy, but I feel this must somehow be tied to the bucket policy or CORS configuration.
I was using code based off the Heroku tutorial for uploading images to Amazon S3. The issue ended up being the '+' symbol in the appropriate MIME type, "application/vnd.google-earth.kml+xml": the '+' was being replaced with a space when our own S3 endpoint for generating signed requests read the file-type query parameter. We were able to quickly fix this by forcing the ContentType to be "application/vnd.google-earth.kml+xml" for all KML files going to our endpoint for generating signed S3 requests.
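For reference, the same problem can also be avoided on the client side by URL-encoding the MIME type before it goes into the query string of the signed-request call. The endpoint path and parameter names below follow the Heroku tutorial's pattern and are assumptions, not the exact code from that project:
// In a raw query string, '+' is decoded as a space on the server, so
// "application/vnd.google-earth.kml+xml" arrives as "application/vnd.google-earth.kml xml".
// The signed URL is then generated for the wrong Content-Type and the actual upload gets a 403.
var fileName = 'doc.kml';  // placeholder
var fileType = 'application/vnd.google-earth.kml+xml';
var signUrl = '/sign-s3' +
  '?file-name=' + encodeURIComponent(fileName) +
  '&file-type=' + encodeURIComponent(fileType);  // '+' survives as %2B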

getting ENOENT Error when extracting the path of an image in S3 Bucket

My requirement is to resize an image present in an S3 bucket by 50%. I found an npm package named lwip which resizes local images.
My code is as follows
var lwip = require('lwip');
// Format of the path is as follows: "https://s3bucketName.s3.amazonaws.com/filename.jpg"
lwip.open(imagePath, function (err, image) {
  if (err) {
    // error handling
  } else {
    // some logic for resizing the image
  }
});
I am getting the following error
ENOENT, open 'https://s3bucketName.s3.amazonaws.com/filename.jpg'
Can somebody help with this issue? I am unable to understand why I am getting this error.
I have also made my S3 bucket public so that anybody can use the image paths.
You are trying to pass the S3 URL of an image, i.e. a remote file. lwip works with local files.
Instead, you need to:
copy the file from S3 on to your local disk
perform any resizing operations
upload the resized file to S3
... or use a module that supports resizing directly on S3.
ENOENT just means "no such file or directory", which is correct here: the URL is not a path on your local filesystem.
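A rough sketch of the download-resize-upload flow, using the aws-sdk already shown in the other questions; the bucket name, object key and output key are placeholders:
var aws = require('aws-sdk');
var fs = require('fs');
var lwip = require('lwip');

var s3 = new aws.S3();
var bucket = 's3bucketName';       // placeholder
var key = 'filename.jpg';          // placeholder
var localPath = '/tmp/' + key;

// 1. Copy the object from S3 to local disk
s3.getObject({ Bucket: bucket, Key: key }, function (err, data) {
  if (err) return console.error(err);
  fs.writeFileSync(localPath, data.Body);

  // 2. Resize the local copy with lwip
  lwip.open(localPath, function (err, image) {
    if (err) return console.error(err);
    image.scale(0.5, function (err, resized) {          // 50% of the original size
      if (err) return console.error(err);
      resized.toBuffer('jpg', function (err, buffer) {
        if (err) return console.error(err);

        // 3. Upload the resized image back to S3
        s3.putObject({
          Bucket: bucket,
          Key: 'resized/' + key,                        // placeholder output key
          Body: buffer,
          ContentType: 'image/jpeg'
        }, function (err) {
          if (err) return console.error(err);
          console.log('Resized image uploaded');
        });
      });
    });
  });
});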
