Large PDF files are not opening in browser with AWS signed URL - node.js

I am using Node.js to upload a file to S3, and I am setting the proper ContentType for PDFs while uploading. It looks something like this:
const params = {
  Bucket: "noob_bucket",
  Key: newFileName,
  Body: fs.createReadStream(path),
  ContentType: 'application/pdf',
  ACL: 'private',
};
Now the problem is that when I try to show the PDF in the browser using a signed URL, it opens in the browser only if the file size is less than roughly 25 MB; otherwise it is simply downloaded.
Can anyone help me fix this issue? I have files that are larger than 50 MB as well. Thanks in advance.

The header that controls whether a PDF file is displayed or downloaded as an attachment is the Content-Disposition header. It should be inline if you want the content to be displayed in the browser.
You can set it explicitly in the parameters when you upload a file:
const params = {
  Bucket: "noob_bucket",
  Key: newFileName,
  Body: fs.createReadStream(path),
  ContentType: 'application/pdf',
  ContentDisposition: 'inline',
  ACL: 'private',
};
Also, you would want to set it when you request a presigned URL:
const { S3Client, GetObjectCommand } = require('@aws-sdk/client-s3');
const { getSignedUrl } = require('@aws-sdk/s3-request-presigner');

// Region and credentials are resolved from your environment.
const s3Client = new S3Client({});

const command = new GetObjectCommand({
  Bucket: 'noob_bucket',
  Key: newFileName,
  ResponseContentDisposition: 'inline',
  ResponseContentType: 'application/pdf',
});
const url = await getSignedUrl(s3Client, command, { expiresIn: 3600 });
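If you are on the v3 SDK, which the presign snippet above implies, a minimal sketch of the upload side with ContentDisposition set at upload time could look like the following. It reuses the s3Client from above and the bucket, key, and path values from the question, and reads the file into a buffer just to keep the example simple:
const fs = require('fs');
const { PutObjectCommand } = require('@aws-sdk/client-s3');

// Reading into a buffer lets the SDK know the content length up front.
const pdfBuffer = fs.readFileSync(path);

await s3Client.send(new PutObjectCommand({
  Bucket: 'noob_bucket',
  Key: newFileName,
  Body: pdfBuffer,
  ContentType: 'application/pdf',
  ContentDisposition: 'inline',
  ACL: 'private',
}));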

Related

Broken image from image upload to Amazon s3 via base64 string

I'm having issues getting the full image back from Amazon S3 after sending a base64 string (about 2.43 MB when converted to an image).
If I compress this image via https://compressnow.com/ and then upload it, it works fine and I get the full image.
Is it possible for me to compress the base64 string before sending it to Amazon S3?
Here is the logic to upload to Amazon S3:
await bucket
  .upload({
    Bucket: "test",
    Key: "test",
    Body: "test",
    ContentEncoding: 'base64',
    Metadata: { MimeType: "png" },
  })
A similar issue is described here: Node base64 upload to AWS S3 bucket makes image broken
The ContentEncoding parameter specifies the Content-Encoding header that S3 should send along with the HTTP response; it does not describe the encoding of the body you pass to the AWS SDK. According to the documentation, the Body parameter is simply the "Object data". In other words, you should probably just drop the ContentEncoding parameter unless you have a specific need for it, and pass along raw bytes:
const fs = require('fs');
var AWS = require('aws-sdk');
const s3 = new AWS.S3({apiVersion: '2006-03-01'});

// Read the contents of a local file
const buf = fs.readFileSync('source_image.jpg')
// Or, if the contents are base64 encoded, then decode them into a buffer of raw data:
// const buf = Buffer.from(fs.readFileSync('source_image.b64', 'utf-8'), 'base64')

var params = {
  Bucket: '-example-bucket-',
  Key: "path/to/example.jpg",
  ContentType: 'image/jpeg',
  ACL: 'public-read',
  Body: buf,
  ContentLength: buf.length,
};

s3.putObject(params, function(err, data) {
  if (err) {
    console.log(err);
    console.log('Error uploading data: ', data);
  } else {
    console.log('successfully uploaded the image!');
  }
});
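If what arrives from the client is a base64 data URL (as is common when the image comes from a canvas or a file reader), a minimal sketch of the decoding step, assuming a hypothetical dataUrl variable and the same placeholder bucket and key style as above, might look like this:
// Hypothetical input, e.g. "data:image/png;base64,iVBORw0KGgo..."
const base64Data = dataUrl.replace(/^data:image\/\w+;base64,/, '');
const buf = Buffer.from(base64Data, 'base64');

s3.putObject({
  Bucket: '-example-bucket-',
  Key: 'path/to/example.png',
  ContentType: 'image/png',
  Body: buf,
  ContentLength: buf.length,
}, function (err, data) {
  if (err) console.log('Error uploading data: ', err);
  else console.log('successfully uploaded the decoded image!');
});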

How to upload a file from an export in Google Drive API without saving the file

I'm exporting a sheet file from Drive and uploading it back to Drive in PDF format. The problem is that in order to upload it, I need to save it to a file on a server first.
I've tried passing the response from drive.files.export to fs.createReadStream, but it didn't work. Is there another way?
const res = await drive.files.export(
  { fileId, mimeType: "application/pdf" }
);
var media = {
  mimeType: 'application/pdf',
  body: fs.createReadStream(res) // TypeError [ERR_INVALID_ARG_TYPE]: The "path" argument must be of type string or an instance of Buffer or URL. Received an instance of Object
};
const resCreate = await drive.files.create({
  uploadType: "media",
  media: media,
  resource: fileMetadata,
  fields: "id"
}, function (err, file) {...});
I believe your goal is as follows:
- You want to export a Google Spreadsheet as PDF data and upload it to Google Drive.
- You want to achieve this without creating a file.
- You want to achieve this using googleapis for Node.js.
In this case, how about the following modification?
Modified script:
const res = await drive.files.export(
  { fileId, mimeType: "application/pdf" },
  { responseType: "stream" }
);
var media = {
  mimeType: "application/pdf",
  body: res.data,
};
const resCreate = await drive.files.create({
  uploadType: "media",
  media: media,
  resource: fileMetadata,
  fields: "id",
});
console.log(resCreate.data.id);
Before you use this modified script, please set fileId and fileMetadata.
With responseType: "stream", the exported file is retrieved as stream data, so the returned res.data can be used directly as the media body.
Reference:
google-api-nodejs-client
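For context, a minimal sketch of the client setup that the modified script assumes might look like the following; the key file path, fileId value, file name, and scopes are placeholders and depend on your own credentials and Drive setup:
const { google } = require('googleapis');

// Hypothetical service-account credentials; adjust to your own auth setup.
const auth = new google.auth.GoogleAuth({
  keyFile: './credentials.json',
  scopes: ['https://www.googleapis.com/auth/drive'],
});
const drive = google.drive({ version: 'v3', auth });

const fileId = 'your-spreadsheet-file-id';       // ID of the Spreadsheet to export
const fileMetadata = { name: 'exported.pdf' };   // metadata for the new PDF file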

createReadStream without path

I'm new to programming and web development, and I'm not a native English speaker, so my explanation might be hard to understand.
I'm using the AWS SDK, AWS S3, Apollo Server, Apollo Client, React, and Node.
When a file is sent to the Apollo server from a client, the server destructures the file to create a readable stream so I can upload the file to S3.
In the Node filesystem module docs, the fs.createReadStream method needs a path, but my code works without a path.
I just called createReadStream() without any argument, and it works fine, so I can upload the file to S3.
let { createReadStream, filename, mimetype, encoding } = await file;
let stream = createReadStream();
// don't mind Bucket field
s3.upload({
  Bucket: 'myBucket',
  Key: 'images/' + filename,
  Body: stream,
  ContentType: mimetype
});
Why does this work without the path argument?
Am I missing something?
Try this:
let { createReadStream, filename, mimetype, encoding } = await file;
// don't mind Bucket field
s3.upload({
  Bucket: 'myBucket',
  Key: 'images/' + filename,
  Body: createReadStream(),
  ContentType: mimetype
});
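As for why no path is needed: the createReadStream you destructure here is not Node's fs.createReadStream. Assuming the server relies on graphql-upload (the mechanism Apollo Server has used for file uploads), await file resolves to an object whose own createReadStream() method streams the uploaded file's contents from the server's temporary storage, so no filesystem path is involved. A sketch of a resolver that also awaits the upload and returns the object URL, assuming an aws-sdk v2 s3 client, might look like this:
// Hypothetical resolver; names are illustrative.
async function uploadImage(parent, { file }) {
  const { createReadStream, filename, mimetype } = await file;

  // s3.upload returns a managed upload; .promise() lets us await the result.
  const result = await s3.upload({
    Bucket: 'myBucket',
    Key: 'images/' + filename,
    Body: createReadStream(),
    ContentType: mimetype,
  }).promise();

  return result.Location; // URL of the uploaded object
}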

Node image upload works well on local but returns 408 timeout error on aws-ec2

Image upload from React to S3 through Hapi.js works perfectly on localhost but fails on AWS EC2 with status code 408 (timeout).
I've tried disabling the AWS timeout and increasing the API timeout. It works for small images but not for images larger than about 5 MB.
React code:
const data = new FormData(); // the multipart form data holding the file
data.append('file', imagesToUpload[0]);
await axios.post('/hall/images', data, {
  headers: {
    'content-type': 'multipart/form-data'
  }
})
Hapi API code:
const uploadImages = {
  payload: {
    allow: 'multipart/form-data',
    maxBytes: 1048576 * 120
  },
  validate: {
    payload: {
      file: joi.any().required(),
    },
  },
  handler: async (req, h) => {
    const { file } = req.payload;
    const options = { queueSize: 1 };
    const params = {
      ACL: 'public-read',
      Body: file,
      ContentType: 'multipart/form-data',
      Bucket: `***`,
      Key: Date.now().toString()
    };
    return s3.upload(params, options).promise();
  }
};
If smaller images upload fine, then your ports and security groups are likely fine. If it's failing on larger images, have you tried multipart uploads?
https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html
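For example, with the aws-sdk v2 client already used in the handler, s3.upload() is a managed upload that switches to the multipart API for large bodies, and its part size and concurrency can be tuned through the options argument. The values below are illustrative (S3 requires parts of at least 5 MB), reusing the params object from the handler above:
// Inside the async handler, using the same params object as above:
const options = {
  partSize: 10 * 1024 * 1024, // upload in 10 MB parts
  queueSize: 4                // upload up to 4 parts concurrently
};
return s3.upload(params, options).promise();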

Why is my S3 upload not uploading correctly?

I upload an image file using the following format:
var body = fs.createReadStream(tempPath).pipe(zlib.createGzip());
var s3obj = new AWS.S3({params: {Bucket: myBucket, Key: myKey}});
var params = {
  Body: body,
  ACL: 'public-read',
  ContentType: 'image/png'
};
s3obj.upload(params, function(err, data) {
  if (err) console.log("An error occurred with S3 fig upload: ", err);
  console.log("Uploaded the image file at: ", data.Location);
});
The image successfully uploads to my S3 bucket (there are no error messages and I can see it in the S3 console), but when I try to display it on my website it shows up as a broken image icon. When I download the image using the S3 console's file downloader, I am unable to open it; the error says the file is "damaged or corrupted".
If I upload a file manually using the S3 console, I can display it on my website correctly, so I'm pretty sure there's something wrong with how I'm uploading.
What is going wrong?
I eventually found the answer to my question. I needed to add one more parameter because the file is gzipped (from using var body = ...zlib.createGzip()). This fixed my problem:
var params = {
  Body: body,
  ACL: 'public-read',
  ContentType: 'image/png',
  ContentEncoding: 'gzip'
};
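Put together with the upload code from the question, the corrected version would look roughly like this (tempPath, myBucket, and myKey are the same placeholders as above):
var body = fs.createReadStream(tempPath).pipe(zlib.createGzip());
var s3obj = new AWS.S3({params: {Bucket: myBucket, Key: myKey}});

var params = {
  Body: body,
  ACL: 'public-read',
  ContentType: 'image/png',
  ContentEncoding: 'gzip'  // tells S3 (and browsers) that the body is gzip-compressed
};

s3obj.upload(params, function(err, data) {
  if (err) return console.log("An error occurred with the S3 upload: ", err);
  console.log("Uploaded the image file at: ", data.Location);
});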
There's a very nice node module, s3-upload-stream, for uploading (and first compressing) images to S3. Here's their example code, which is very well documented:
var AWS = require('aws-sdk'),
    zlib = require('zlib'),
    fs = require('fs');

// Set the client to be used for the upload.
AWS.config.loadFromPath('./config.json');
// or do AWS.config.update({accessKeyId: 'akid', secretAccessKey: 'secret'});

var s3Stream = require('s3-upload-stream')(new AWS.S3());

// Create the streams
var read = fs.createReadStream('/path/to/a/file');
var compress = zlib.createGzip();
var upload = s3Stream.upload({
  "Bucket": "bucket-name",
  "Key": "key-name"
});

// Optional configuration
upload.maxPartSize(20971520); // 20 MB
upload.concurrentParts(5);

// Handle errors.
upload.on('error', function (error) {
  console.log(error);
});

/* Handle progress. Example details object:
   { ETag: '"f9ef956c83756a80ad62f54ae5e7d34b"',
     PartNumber: 5,
     receivedSize: 29671068,
     uploadedSize: 29671068 }
*/
upload.on('part', function (details) {
  console.log(details);
});

/* Handle upload completion. Example details object:
   { Location: 'https://bucketName.s3.amazonaws.com/filename.ext',
     Bucket: 'bucketName',
     Key: 'filename.ext',
     ETag: '"bf2acbedf84207d696c8da7dbb205b9f-5"' }
*/
upload.on('uploaded', function (details) {
  console.log(details);
});

// Pipe the incoming filestream through compression, and up to S3.
read.pipe(compress).pipe(upload);
