How to display s3 image in browser - node.js

How do I display an S3 image in the browser? At the moment the image is downloaded every time I try to open it in a browser. I have set the content type, but I am still facing the same issue.
Here is my code:
var params = {
  Key: 'upload/' + req.file.originalname,
  Body: data,
  ContentType: 'image/jpeg',
  ACL: 'public-read'
};
s3bucket.upload(params, function(err, aws_images) {
  if (err) return console.error(err);
  console.log(aws_images);
});

It appears that you want images served from Amazon S3 to be cached in browsers.
To do this, you can set Cache-Control metadata on the objects (a sketch follows the links below).
See:
Amazon S3 images cache-control not being applied
How to add cache control in AWS S3?
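As a concrete illustration, here is the question's upload call with Cache-Control added. This is a minimal sketch, assuming the same s3bucket client and data variable as in the question; adjust the max-age value to taste:

// Hedged sketch: CacheControl is stored with the object and returned
// on every GET, so browsers can cache the image.
var params = {
  Key: 'upload/' + req.file.originalname,
  Body: data,
  ContentType: 'image/jpeg',
  CacheControl: 'max-age=86400', // let browsers cache for one day
  ACL: 'public-read'
};
s3bucket.upload(params, function(err, aws_images) {
  if (err) return console.error(err);
  console.log(aws_images);
});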

Related

aws presigned url request signature mismatch while upload

I am trying to upload an image using a presigned URL.
const s3Params = {
  Bucket: config.MAIN_BUCKET,
  Key: S3_BUCKET + '/' + fileName,
  ContentType: fileType,
  Expires: 900,
  ACL: 'public-read'
};
const s3 = new AWS.S3({
  accessKeyId: config.accessKeyId,
  secretAccessKey: config.secretAccessKey,
  region: config.region
});
const url = await s3.getSignedUrlPromise('putObject', s3Params);
return url;
I get a URL something like:
https://s3.eu-west-1.amazonaws.com/bucket/folder/access.JPG?AWSAccessKeyId=xxxx&Content-Type=multipart%2Fform-data&Expires=1580890085&Signature=xxxx&x-amz-acl=public-read
I have tried uploading the file with content type image/jpg and with multipart/form-data, tried generating the URL without a file type, and tried both the PUT and POST methods, but nothing seems to work.
The error is always:
The request signature we calculated does not match the signature you provided. Check your key and signing method.
The access credentials have the appropriate permissions: the same files upload fine through s3.putObject (i.e. through the API instead of a presigned URL).
Edit:
It seems that Postman is sending the content type as multipart/form-data; boundary=--------------------------336459561795502380899802, i.e. with an extra boundary appended. How do I fix this?
Per the AWS S3 documentation Signing and Authenticating REST Requests, S3 uses Signature Version 4 by default, but the Node.js AWS SDK defaults to Signature Version 2, so you have to explicitly specify Signature Version 4 in the S3 client config:
s3 = new AWS.S3({
  signatureVersion: 'v4'
});
I was testing with form-data in Postman, but the getSignedUrl() function does not support that. Using binary instead worked fine. For multipart uploads there seems to be a different function in the AWS SDK.
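To illustrate why binary worked where form-data failed: the request sent to the presigned URL must match what was signed, so the body has to be the raw bytes and the Content-Type header has to equal the fileType used when signing. A minimal sketch (fileBuffer holding the raw file contents is an assumption):

// Hedged sketch: PUT the raw bytes so no multipart boundary is
// appended to the Content-Type header (which would break the signature).
const response = await fetch(url, {
  method: 'PUT',
  headers: { 'Content-Type': fileType }, // must equal the signed ContentType
  body: fileBuffer                       // raw file contents, not FormData
});
if (!response.ok) throw new Error('Upload failed: ' + response.status);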

Upload base64 file into GCS using signedURL

I'm trying to upload a base64 file/image into Google Cloud Storage using a signed URL. My server-side code (Node.js) is something like this:
let {Storage} = require('@google-cloud/storage');
const storage = new Storage({
  projectId,
  keyFilename: gcloudServiceAccountFilePath,
});

async function generateSignedUrl() {
  const options = {
    version: 'v4',
    action: 'write',
    expires: Date.now() + 15 * 60 * 1000, // 15 minutes
    //contentType: 'application/octet-stream'
  };
  const [url] = await storage.bucket(gcloudBucket)
    .file(`${fileRelativePath}`).getSignedUrl(options);
  return url;
}
Now when I try Postman with the below configuration:
Request: PUT
URL: https://storage.googleapis.com/my-signed-url.. //generated from above code
Headers:
x-goog-acl: 'public-read'
Content-Type: 'image/jpeg'
Body:
raw: 'base64-file-content'
My uploaded file in GCS stays as base64, and the file size also differs, as you can see in the storage console: the first image was uploaded directly into GCS with drag & drop, the second was uploaded with Postman.
Not sure if I'm missing something while generating the signed URL or in the headers while uploading the file through Postman.
Thanks :)
The reason for the difference in the sizes of the objects uploaded to Google Cloud Storage is actually a difference in the objects' metadata. When you upload the image object with Postman via the REST API, the API headers are added as part of the image's metadata. The Google documentation clearly states that "Cloud Storage stores these headers as part of the object's metadata".
The first line of the introduction to object metadata also confirms that objects stored in Cloud Storage have metadata associated with them. Hence, the API headers are added as metadata of your image object and consequently increase the size of the object.
Image objects uploaded via the console have no object metadata unless it is explicitly set.
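If you want to confirm what was stored, the Node client can read an object's metadata back. A small sketch, reusing the storage client, gcloudBucket, and fileRelativePath from the question:

// Hedged sketch: print the metadata GCS recorded for the uploaded object,
// including the content type and size stored with it.
const [metadata] = await storage.bucket(gcloudBucket)
  .file(fileRelativePath)
  .getMetadata();
console.log(metadata.contentType, metadata.size);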

Google Cloud Storage creating content links with inconsistent behavior

I'm working on a project using Google Cloud Storage to allow users to upload media files into a predefined bucket using Node.js. I've been testing with small .jpg files. I also used gsutil to set bucket permissions to public.
At first, all files generated links that downloaded the file. Upon investigation of the docs, I learned that I could explicitly set the Content-Type of each file after upload using the gsutil CLI. When I used this procedure to set the file type to 'image/jpeg', the link behavior changed to display the image in the browser. But this only worked if the link had not been clicked prior to updating the metadata with gsutil. I thought this might be due to browser caching, but the behavior was duplicated in an incognito browser.
Using gsutil to set the mime type would be impractical at any rate, so I modified the code in my node server POST function to set the metadata at upload time using an npm module called mime. Here is the code:
app.post('/api/assets', multer.single('qqfile'), function (req, res, next) {
  console.log(req.file);
  if (!req.file) {
    return res.status(400).send('No file uploaded.');
  }
  // Create a new blob in the bucket and upload the file data.
  var blob = bucket.file(req.file.originalname);
  var blobStream = blob.createWriteStream();
  var metadata = {
    contentType: mime.lookup(req.file.originalname)
  };
  blobStream.on('error', function (err) {
    return next(err);
  });
  blobStream.on('finish', function () {
    blob.setMetadata(metadata, function (err, response) {
      console.log(response);
      // The public URL can be used to directly access the file via HTTP.
      var publicUrl = format(
        'https://storage.googleapis.com/%s/%s',
        bucket.name, blob.name);
      res.status(200).send({
        'success': true,
        'publicUrl': publicUrl,
        'mediaLink': response.mediaLink
      });
    });
  });
  blobStream.end(req.file.buffer);
});
This seems to work, from the standpoint that it does actually set the Content-Type on upload, and that is correctly reflected in the response object as well as the Cloud Storage console. The issue is that some of the links returned as publicUrl cause a file download, and others cause a browser load of the image. Ideally I would like to have both options available, but I am unable to see any difference in the stored files or their metadata.
What am I missing here?
Google Cloud Storage makes no assumptions about the content-type of uploaded objects. If you don't specify, GCS will simply assign a type of "application/octet-stream".
The command-line tool gsutil, however, is smarter, and will attach the right Content-Type to files being uploaded in most cases, JPEGs included.
Now, there are two reasons why your browser is likely to download images rather than display them. First, if the Content-Type is set to "application/octet-stream", most browsers will download the results as a file rather than display them. This was likely happening in your case.
The second reason is if the server responds with a 'Content-Disposition: attachment' header. This doesn't generally happen when you fetch GCS objects from the host "storage.googleapis.com" as you are doing above, but it can if you, for instance, explicitly specified a contentDisposition for the object that you've uploaded.
For this reason I suspect that some of your objects don't have an "image/jpeg" content type. You could go through and set them all with gsutil like so:
gsutil -m setmeta 'Content-Type:image/jpeg' gs://myBucketName/**
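As an alternative to fixing types after the fact, the Node client also accepts the content type when the write stream is created, so the object never exists as application/octet-stream. A sketch against the POST handler from the question:

// Hedged variant: declare the content type at write time instead of
// calling setMetadata() after the stream finishes.
var blobStream = blob.createWriteStream({
  metadata: { contentType: mime.lookup(req.file.originalname) }
});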

How to upload a file with the same name to Amazon S3 and overwrite existing file?

s3.putObject({
  Bucket: bucketName,
  Key: fileName,
  Body: file,
  ACL: 'bucket-owner-full-control'
}, function (err, data) {
  if (err) {
    console.log(err);
  }
  console.log(data);
});
I use this code to upload images to my Amazon S3 storage, but I can't upload a file whose name already exists on S3.
How can I upload a file with the same name and overwrite the one already in S3?
Thanks for any help :)
By default, when you upload a file with the same name, it will overwrite the existing file. If you want to keep the previous version available, you need to enable versioning on the bucket.
I guess you've encountered the default caching mechanism, which is 24 hours; this results in not receiving the latest stored object.
To override this, add a parameter to putObject():
CacheControl: "no-cache"
or Expires: new Date()
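Putting it together, a minimal sketch of the overwrite with the caching parameter added, reusing the names from the question's snippet:

// Overwrites any existing object at Key; CacheControl asks browsers
// and proxies not to serve a stale copy after the overwrite.
s3.putObject({
  Bucket: bucketName,
  Key: fileName,
  Body: file,
  ACL: 'bucket-owner-full-control',
  CacheControl: 'no-cache'
}, function (err, data) {
  if (err) return console.log(err);
  console.log(data);
});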

How to add ACL and Content-Type parameters when using skipper-s3?

I am using skipper-s3 to upload files. I found out that all files uploaded to S3 are set to ACL: private and Content-Type: binary/octet-stream by default. I would like to know if it is possible to set these parameters before uploading to S3.
Maybe something like this:
req.file('image').upload({
  adapter: require('skipper-s3'),
  key: KEY,
  secret: SECRET,
  bucket: BUCKET_NAME,
  headers: {
    ContentType: 'image/png',
    ACL: 'public-read'
  }
});
I have read the issue, but there is still no answer. In addition, is there any way to get the Content-Type of files sent from the client?
UPDATE: The issue was closed. It seems to be a knox-mpu issue.
Thanks to a pull request, this is now possible. If you don't specify a content-type header, it will now be guessed based on the filename. Also note that you should use the header names specified by the S3 docs; for example, to set an ACL you would set x-amz-acl to public-read.
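So the snippet from the question would become something like this (a sketch; the Sails-style res.serverError/res.ok responses are assumptions about the surrounding controller):

req.file('image').upload({
  adapter: require('skipper-s3'),
  key: KEY,
  secret: SECRET,
  bucket: BUCKET_NAME,
  headers: {
    'x-amz-acl': 'public-read' // S3 REST header form, per the answer above
    // Content-Type can be omitted; it is guessed from the filename
  }
}, function (err, uploadedFiles) {
  if (err) return res.serverError(err);
  return res.ok(uploadedFiles);
});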
