I ran into a problem while trying to upload a file to my S3 bucket. Everything works except that my file parameters do not seem appropriate. I am using the AWS S3 SDK to upload from Node.js to S3.
These are my route settings:
var multiparty = require('connect-multiparty'),
multipartyMiddleware = multiparty();
app.route('/api/items/upload').post(multipartyMiddleware, items.upload);
This is the items.upload() function:
exports.upload = function(req, res) {
var file = req.files.file;
var s3bucket = new AWS.S3({params: {Bucket: 'mybucketname'}});
s3bucket.createBucket(function() {
var params = {
Key: file.name,
Body: file
};
s3bucket.upload(params, function(err, data) {
console.log("PRINT FILE:", file);
if (err) {
console.log('ERROR MSG: ', err);
} else {
console.log('Successfully uploaded data');
}
});
});
};
Setting the Body param to a string like "hello" works fine. According to the docs, the Body param can take (Buffer, Typed Array, Blob, String, ReadableStream) object data. However, uploading a file object fails with the following error message:
[Error: Unsupported body payload object]
This is the file object:
{ fieldName: 'file',
originalFilename: 'second_fnp.png',
path: '/var/folders/ps/l8lvygws0w93trqz7yj1t5sr0000gn/T/26374-7ttwvc.png',
headers:
{ 'content-disposition': 'form-data; name="file"; filename="second_fnp.png"',
'content-type': 'image/png' },
ws:
{ _writableState:
{ highWaterMark: 16384,
objectMode: false,
needDrain: true,
ending: true,
ended: true,
finished: true,
decodeStrings: true,
defaultEncoding: 'utf8',
length: 0,
writing: false,
sync: false,
bufferProcessing: false,
onwrite: [Function],
writecb: null,
writelen: 0,
buffer: [],
errorEmitted: false },
writable: true,
domain: null,
_events: { error: [Object], close: [Object] },
_maxListeners: 10,
path: '/var/folders/ps/l8lvygws0w93trqz7yj1t5sr0000gn/T/26374-7ttwvc.png',
fd: null,
flags: 'w',
mode: 438,
start: undefined,
pos: undefined,
bytesWritten: 261937,
closed: true },
size: 261937,
name: 'second_fnp.png',
type: 'image/png' }
Any help will be greatly appreciated!
So it looks like there are a few things going wrong here. Based on your post, it looks like you are attempting to support file uploads using the connect-multiparty middleware. What this middleware does is take the uploaded file, write it to the local filesystem, and then set req.files to the uploaded file(s).
The configuration of your route looks fine; the problem appears to be with your items.upload() function, in particular this part:
var params = {
Key: file.name,
Body: file
};
As I mentioned at the beginning of my answer, connect-multiparty writes the file to the local filesystem, so you'll need to read the file, upload it, and then delete it from the local filesystem.
That said, you could update your method to something like the following:
var fs = require('fs');
exports.upload = function (req, res) {
var file = req.files.file;
fs.readFile(file.path, function (err, data) {
if (err) throw err; // Something went wrong!
var s3bucket = new AWS.S3({params: {Bucket: 'mybucketname'}});
s3bucket.createBucket(function () {
var params = {
Key: file.originalFilename, //file.name doesn't exist as a property
Body: data
};
s3bucket.upload(params, function (err, data) {
// Whether there is an error or not, delete the temp file
fs.unlink(file.path, function (err) {
if (err) {
console.error(err);
}
console.log('Temp File Delete');
});
console.log("PRINT FILE:", file);
if (err) {
console.log('ERROR MSG: ', err);
res.status(500).send(err);
} else {
console.log('Successfully uploaded data');
res.status(200).end();
}
});
});
});
};
What this does is read the uploaded file from the local filesystem, upload it to S3, delete the temporary file, and send a response.
There are a few problems with this approach. First off, it's not as efficient as it could be: for large files you will load the entire file into memory before you write it. Secondly, this process doesn't support multi-part uploads for large files (I think the cut-off is 5 MB before you have to do a multi-part upload).
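For illustration only, here is a minimal sketch (an assumption-laden variant of the code above, not a definitive implementation) of how you could stream the temp file with the plain aws-sdk v2 client instead of buffering it; s3.upload() accepts a readable stream as the Body and handles the multi-part pieces internally:
var fs = require('fs');
var AWS = require('aws-sdk');
exports.upload = function (req, res) {
    var file = req.files.file;
    var s3bucket = new AWS.S3({params: {Bucket: 'mybucketname'}});
    // Stream the temp file instead of reading it fully into memory;
    // upload() switches to a multi-part upload automatically for large bodies.
    var stream = fs.createReadStream(file.path);
    s3bucket.upload({Key: file.originalFilename, Body: stream}, function (err, data) {
        // Delete the temp file whether or not the upload succeeded
        fs.unlink(file.path, function () {});
        if (err) {
            return res.status(500).send(err);
        }
        res.status(200).end();
    });
};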
What I would suggest instead is that you use a module I've been working on called S3FS, which provides a similar interface to the native fs module in Node.js but abstracts away some of the details such as the multi-part upload and the S3 API (as well as adding some additional functionality like recursive methods).
If you were to pull in the S3FS library your code would look something like this:
var fs = require('fs'),
S3FS = require('s3fs'),
s3fsImpl = new S3FS('mybucketname', {
accessKeyId: 'XXXXXXXXXXX',
secretAccessKey: 'XXXXXXXXXXXXXXXXX'
});
// Create our bucket if it doesn't exist
s3fsImpl.create();
exports.upload = function (req, res) {
var file = req.files.file;
var stream = fs.createReadStream(file.path);
return s3fsImpl.writeFile(file.originalFilename, stream).then(function () {
fs.unlink(file.path, function (err) {
if (err) {
console.error(err);
}
});
res.status(200).end();
});
};
What this will do is instantiate the module for the provided bucket and AWS credentials and then create the bucket if it doesn't exist. Then, when a request comes through to upload a file, we'll open up a stream to the file and use it to write the file to S3 at the specified path. This handles the multi-part upload piece behind the scenes (if needed) and has the benefit of being done through a stream, so you don't have to wait to read the whole file before you start uploading it.
If you prefer, you could change the code from Promises to callbacks, or use the pipe() method with event listeners to determine the end/errors.
If you're looking for additional methods, check out the documentation for s3fs, and feel free to open up an issue if something is missing or you run into problems.
I found the following to be a working solution:
npm install aws-sdk
Once you've installed the aws-sdk, use the following code, replacing the values with your own where needed.
var AWS = require('aws-sdk');
var fs = require('fs');
var s3 = new AWS.S3();
// Bucket names must be unique across all S3 users
var myBucket = 'njera';
var myKey = 'jpeg';
//for text file
//fs.readFile('demo.txt', function (err, data) {
//for Video file
//fs.readFile('demo.avi', function (err, data) {
//for image file
fs.readFile('demo.jpg', function (err, data) {
if (err) { throw err; }
var params = {Bucket: myBucket, Key: myKey, Body: data };
s3.putObject(params, function(err, data) {
if (err) {
console.log(err)
} else {
console.log("Successfully uploaded data to myBucket/myKey");
}
});
});
I found the complete tutorial on the subject here, in case you're looking for a reference:
How to upload files (text/image/video) in amazon s3 using node.js
Or using promises:
const AWS = require('aws-sdk');
AWS.config.update({
accessKeyId: 'accessKeyId',
secretAccessKey: 'secretAccessKey',
region: 'region'
});
let params = {
Bucket: "yourBucketName",
Key: 'someUniqueKey',
Body: 'someFile'
};
// note: this must run inside an async function, since it uses await
try {
let uploadPromise = await new AWS.S3().putObject(params).promise();
console.log("Successfully uploaded data to bucket");
} catch (e) {
console.log("Error uploading data: ", e);
}
Using AWS SDK v3
npm install @aws-sdk/client-s3
Upload code
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
/**
* advisable to save your AWS credentials and configuration in an environment file, not inside the code
* AWS lib will automatically load the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY if available in your environment
*/
const s3Client = new S3Client({ region: process.env.AWS_S3_REGION });
/**
* upload a file
* @param file the file object to be uploaded
* @param fileKey the fileKey. could be separated with '/' to nest the file into a folder structure. eg. members/user1/profile.png
*/
export function uploadFile(file, fileKey){
s3Client.send(new PutObjectCommand({
Bucket: process.env.MY_AWS_S3_BUCKET,
Key: fileKey,
Body: file
}));
}
And if you want to download
import { GetObjectCommand } from "@aws-sdk/client-s3";
/**
* download a file from AWS and send to your rest client
*/
app.get('/download', async function(req, res, next){
var fileKey = req.query['fileKey'];
var bucketParams = {
Bucket: 'my-bucket-name',
Key: fileKey,
};
res.attachment(fileKey);
var fileStream = await s3Client.send(new GetObjectCommand(bucketParams));
// for TS you can add: if (fileStream.Body instanceof Readable)
fileStream.Body.pipe(res)
});
Uploading a file to AWS S3 and sending the URL in the response for accessing the file.
Multer is a Node.js middleware for handling multipart/form-data, which is primarily used for uploading files. It is written on top of busboy for maximum efficiency. Check out the npm module here.
When you are sending the request, make sure the Content-Type header is multipart/form-data.
We are sending the file location in the response, which gives you the URL, but if you want to access that URL, make the bucket public; otherwise you will not be able to access it.
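For reference, the client-side request might look like the sketch below (hedged: fileInput is a hypothetical input element of type "file", and the field name 'file' must match upload.single("file") in the router below; the browser sets the multipart/form-data Content-Type and boundary automatically when the body is a FormData object):
// client-side sketch using the browser fetch API
const formData = new FormData();
formData.append('file', fileInput.files[0]); // field name must match upload.single("file")
fetch('/api/file/upload', {
    method: 'POST',
    body: formData // Content-Type: multipart/form-data is added by the browser
})
    .then(res => res.json())
    .then(data => console.log('Uploaded to:', data.location));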
upload.router.js
const express = require('express');
const router = express.Router();
const AWS = require('aws-sdk');
const multer = require('multer');
const storage = multer.memoryStorage()
const upload = multer({storage: storage});
const s3Client = new AWS.S3({
accessKeyId: 'your_access_key_id',
secretAccessKey: 'your_secret_access_id',
region :'ur region'
});
const uploadParams = {
Bucket: 'ur_bucket_name',
Key: '', // pass key
Body: null, // pass file body
};
router.post('/api/file/upload', upload.single("file"),(req,res) => {
const params = uploadParams;
uploadParams.Key = req.file.originalname;
uploadParams.Body = req.file.buffer;
s3Client.upload(params, (err, data) => {
if (err) {
return res.status(500).json({error:"Error -> " + err});
}
res.json({message: 'File uploaded successfully','filename':
req.file.originalname, 'location': data.Location});
});
});
module.exports = router;
app.js
const express = require('express');
const app = express();
const router = require('./app/routers/upload.router.js');
app.use('/', router);
// Create a Server
const server = app.listen(8080, () => {
console.log("App listening at 8080");
})
Upload CSV/Excel
const fs = require('fs');
const AWS = require('aws-sdk');
const s3 = new AWS.S3({
accessKeyId: 'XXXXXXXXX',
secretAccessKey: 'XXXXXXXXX'
});
const absoluteFilePath = "C:\\Project\\test.xlsx";
const uploadFile = () => {
fs.readFile(absoluteFilePath, (err, data) => {
if (err) throw err;
const params = {
Bucket: 'testBucket', // pass your bucket name
Key: 'folderName/key.xlsx', // file will be saved in <folderName> folder
Body: data
};
s3.upload(params, function (s3Err, data) {
if (s3Err) throw s3Err
console.log(`File uploaded successfully at ${data.Location}`);
debugger;
});
});
};
uploadFile();
Works for me :)
// assumes an AWS.S3 client named `s3` is already configured
const fs = require('fs');
const fileContent = fs.createReadStream(`${fileName}`);
return new Promise(function (resolve, reject) {
fileContent.once('error', reject);
s3.upload(
{
Bucket: 'test-bucket',
Key: `${fileName + '_' + Date.now().toString()}`,
ContentType: 'application/pdf',
ACL: 'public-read',
Body: fileContent
},
function (err, result) {
if (err) {
reject(err);
return;
}
resolve(result.Location);
}
);
});
var express = require('express')
app = module.exports = express();
var secureServer = require('http').createServer(app);
secureServer.listen(3001);
var aws = require('aws-sdk')
var multer = require('multer')
var multerS3 = require('multer-s3')
aws.config.update({
secretAccessKey: "XXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
accessKeyId: "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
region: 'us-east-1'
});
s3 = new aws.S3();
var upload = multer({
storage: multerS3({
s3: s3,
dirname: "uploads",
bucket: "Your bucket name",
key: function (req, file, cb) {
console.log(file);
cb(null, "uploads/profile_images/u_" + Date.now() + ".jpg"); //use
Date.now() for unique file keys
}
})
});
app.post('/upload', upload.single('photos'), function(req, res, next) {
console.log('Successfully uploaded ', req.file)
res.send('Successfully uploaded ' + req.file.length + ' files!')
})
Thanks to David, as his solution helped me come up with my own solution for uploading multi-part files from my Heroku-hosted site to an S3 bucket. I did it using formidable to handle the incoming form and fs to get the file content. Hopefully it may help you.
api.service.ts
public upload(files): Observable<any> {
const formData: FormData = new FormData();
files.forEach(file => {
// create a new multipart-form for every file
formData.append('file', file, file.name);
});
return this.http.post(uploadUrl, formData).pipe(
map(this.extractData),
catchError(this.handleError));
}
}
server.js
app.post('/api/upload', upload);
app.use('/api/upload', router);
upload.js
const IncomingForm = require('formidable').IncomingForm;
const fs = require('fs');
const AWS = require('aws-sdk');
module.exports = function upload(req, res) {
var form = new IncomingForm();
const bucket = new AWS.S3(
{
signatureVersion: 'v4',
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
region: 'us-east-1'
}
);
form.on('file', (field, file) => {
const fileContent = fs.readFileSync(file.path);
const s3Params = {
Bucket: process.env.AWS_S3_BUCKET,
Key: 'folder/' + file.name,
Expires: 60,
Body: fileContent,
ACL: 'public-read'
};
bucket.upload(s3Params, function(err, data) {
if (err) {
throw err;
}
console.log('File uploaded to: ' + data.Location);
fs.unlink(file.path, function (err) {
if (err) {
console.error(err);
}
console.log('Temp File Delete');
});
});
});
// The second callback is called when the form is completely parsed.
// In this case, we want to send back a success status code.
form.on('end', () => {
res.status(200).json('upload ok');
});
form.parse(req);
}
upload-image.component.ts
import { Component, OnInit, ViewChild, Output, EventEmitter, Input } from '@angular/core';
import { ApiService } from '../api.service';
import { MatSnackBar } from '@angular/material/snack-bar';
@Component({
selector: 'app-upload-image',
templateUrl: './upload-image.component.html',
styleUrls: ['./upload-image.component.css']
})
export class UploadImageComponent implements OnInit {
public files: Set<File> = new Set();
@ViewChild('file', { static: false }) file;
public uploadedFiles: Array<string> = new Array<string>();
public uploadedFileNames: Array<string> = new Array<string>();
@Output() filesOutput = new EventEmitter<Array<string>>();
@Input() CurrentImage: string;
@Input() IsPublic: boolean;
@Output() valueUpdate = new EventEmitter();
strUploadedFiles:string = '';
filesUploaded: boolean = false;
constructor(private api: ApiService, public snackBar: MatSnackBar,) { }
ngOnInit() {
}
updateValue(val) {
this.valueUpdate.emit(val);
}
reset()
{
this.files = new Set();
this.uploadedFiles = new Array<string>();
this.uploadedFileNames = new Array<string>();
this.filesUploaded = false;
}
upload() {
this.api.upload(this.files).subscribe(res => {
this.filesOutput.emit(this.uploadedFiles);
if (res == 'upload ok')
{
this.reset();
}
}, err => {
console.log(err);
});
}
onFilesAdded() {
var txt = '';
const files: { [key: string]: File } = this.file.nativeElement.files;
for (let key in files) {
if (!isNaN(parseInt(key))) {
var currentFile = files[key];
var sFileExtension = currentFile.name.split('.')[currentFile.name.split('.').length - 1].toLowerCase();
var iFileSize = currentFile.size;
if (!(sFileExtension === "jpg"
|| sFileExtension === "png")
|| iFileSize > 671329) {
txt = "File type : " + sFileExtension + "\n\n";
txt += "Size: " + iFileSize + "\n\n";
txt += "Please make sure your file is in jpg or png format and less than 655 KB.\n\n";
alert(txt);
return false;
}
this.files.add(files[key]);
this.uploadedFiles.push('https://gourmet-philatelist-assets.s3.amazonaws.com/folder/' + files[key].name);
this.uploadedFileNames.push(files[key].name);
if (this.IsPublic && this.uploadedFileNames.length == 1)
{
this.filesUploaded = true;
this.updateValue(files[key].name);
break;
}
else if (!this.IsPublic && this.uploadedFileNames.length == 3)
{
this.strUploadedFiles += files[key].name;
this.updateValue(this.strUploadedFiles);
this.filesUploaded = true;
break;
}
else
{
this.strUploadedFiles += files[key].name + ",";
this.updateValue(this.strUploadedFiles);
}
}
}
}
addFiles() {
this.file.nativeElement.click();
}
openSnackBar(message: string, action: string) {
this.snackBar.open(message, action, {
duration: 2000,
verticalPosition: 'top'
});
}
}
upload-image.component.html
<input type="file" #file style="display: none" (change)="onFilesAdded()" multiple />
<button mat-raised-button color="primary"
[disabled]="filesUploaded" (click)="$event.preventDefault(); addFiles()">
Add Files
</button>
<button class="btn btn-success" [disabled]="uploadedFileNames.length == 0" (click)="$event.preventDefault(); upload()">
Upload
</button>
Related
I'm trying to create a Lambda function to read a zip file from S3 and serve it. But after downloading this file in the browser I can't unzip it; I get the error "Unable to extract, it is in an unsupported format". What could be the problem?
const file = await s3.getObject({
Bucket: 'mybucket',
Key: `file.zip`
}).promise();
return {
statusCode: 200,
isBase64Encoded: true,
body: Buffer.from(file.Body).toString('base64'),
headers: {
'Content-Type': 'application/zip',
'Content-Disposition': `attachment; filename="file.zip"`,
},
}
Your file.Body should already be a Buffer, so Buffer.from(file.Body) should be unnecessary but harmless.
I think your problem is that you're doing toString('base64') there. The documentation says:
If body is a binary blob, you can encode it as a Base64-encoded string by setting isBase64Encoded to true and configuring / as a Binary Media Type.
This makes me believe that it actually means that AWS will automatically convert your (non-base64) body into base64 in the response body. If that's the case, due to you doing .toString('base64'), your body is being base64'd twice. You could un-base64 your resulting file.zip and see what it gives.
The solution for me was to set 'Content-Encoding': 'base64' response header.
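For clarity, a hedged sketch of what that return object might look like with the header added to the questioner's code (assuming API Gateway proxy integration and that file.Body is the Buffer returned by getObject above):
return {
    statusCode: 200,
    isBase64Encoded: true,
    body: file.Body.toString('base64'),
    headers: {
        'Content-Type': 'application/zip',
        'Content-Encoding': 'base64', // the header that resolved it in my case
        'Content-Disposition': 'attachment; filename="file.zip"',
    },
};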
You can follow the code below:
"use strict";
const AWS = require("aws-sdk");
const awsOptions = {
region: "us-east-1",
httpOptions: {
timeout: 300000 // Matching Lambda function timeout
}
};
const s3 = new AWS.S3(awsOptions);
const archiver = require("archiver");
const stream = require("stream");
const request = require("request");
const streamTo = (bucket, key) => {
var passthrough = new stream.PassThrough();
s3.upload(
{
Bucket: bucket,
Key: key,
Body: passthrough,
ContentType: "application/zip",
ServerSideEncryption: "AES256"
},
(err, data) => {
if (err) throw err;
}
);
return passthrough;
};
// Kudos to this person on GitHub for this getStream solution
// https://github.com/aws/aws-sdk-js/issues/2087#issuecomment-474722151
const getStream = (bucket, key) => {
let streamCreated = false;
const passThroughStream = new stream.PassThrough();
passThroughStream.on("newListener", event => {
if (!streamCreated && event == "data") {
const s3Stream = s3
.getObject({ Bucket: bucket, Key: key })
.createReadStream();
s3Stream
.on("error", err => passThroughStream.emit("error", err))
.pipe(passThroughStream);
streamCreated = true;
}
});
return passThroughStream;
};
exports.handler = async (event, context, callback) => {
var bucket = event["bucket"];
var destinationKey = event["destination_key"];
var files = event["files"];
await new Promise(async (resolve, reject) => {
var zipStream = streamTo(bucket, destinationKey);
zipStream.on("close", resolve);
zipStream.on("end", resolve);
zipStream.on("error", reject);
var archive = archiver("zip");
archive.on("error", err => {
throw new Error(err);
});
archive.pipe(zipStream);
for (const file of files) {
if (file["type"] == "file") {
archive.append(getStream(bucket, file["uri"]), {
name: file["filename"]
});
} else if (file["type"] == "url") {
archive.append(request(file["uri"]), { name: file["filename"] });
}
}
archive.finalize();
}).catch(err => {
throw new Error(err);
});
callback(null, {
statusCode: 200,
body: { final_destination: destinationKey }
});
};
If you're not restricted to using the same URI as the URI that is serving your API, you could also create a pre-signed URL and return it as a redirection result. However, this will redirect to a different domain (S3 domain) so won't work out-of-the-box if you have to serve from the same domain name (e.g., because of firewall restrictions).
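A minimal sketch of that pre-signed-URL approach, assuming the aws-sdk v2 s3 client and the bucket/destinationKey variables from the handler above; you would pass something like this to the callback instead of streaming the zip back yourself:
// hand the client a short-lived S3 URL and redirect to it
const url = s3.getSignedUrl('getObject', {
    Bucket: bucket,
    Key: destinationKey,
    Expires: 300 // seconds
});
callback(null, {
    statusCode: 303, // SEE OTHER
    headers: { Location: url },
    body: ''
});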
I am hitting the following error: Error [ERR_HTTP_HEADERS_SENT]: Cannot set headers after they are sent to the client
The following server.js contains a route which pulls some data from MySQL; if the data size exceeds a certain limit (currently 1 byte for testing purposes), then instead of returning the data it needs to upload it to S3 and create a signed URL. In order to do this, it calls my s3DataMitigation.js file. Once it has the signed URL, it should redirect with a 303 SEE OTHER header to the signed URL on S3 (I'm actually using res.writeHead currently), and it does redirect. However, in my console log I am still seeing a build fail because of this error.
On a side note, I may have included too much code. Feel free to edit it down if so.
const express = require("express");
const cors = require("cors");
const mysql = require("mysql");
//Setup paths to database connection pools
const nawfprojectsDB = require("../lib/naWfProjectsDb.js");
const queries = require("./queries.js");
//Setup a timestamp for logging
const timestamp = new Date().toString();
// create the server and setup routes
const app = express();
// S3 Data Mitigation is needed when a data set exceeds 5 MB in size.
// This is a restriction of Lambda itself (they say 6 MB but we want to ensure we don't ever hit the limit)
const s3DataMitigation = require("../lib/s3DataMitigation.js");
//Here we config CORS and enable for all routes
var whitelist = [
"localhost",
"url1", //In my code I use actual URLs here
"url2",
"url3",
];
var corsOptions = {
origin: whitelist,
credentials: true,
};
// Enable CORS for all routes
app.use(cors(corsOptions));
// Size conversion
const dataSizeLimit = 1;
//
// Setup routes
//
app.get("/", (req, res) => res.send("Nothing avail at root"));
//brabbit data table for workgroup data
app.get("/wg_data", (req, res, callback) => {
const dataSet = "wg_data";
nawfprojectsDB.query(queries.wg_data, (err, result) => {
if (err) {
console.log(err);
}
//Stringify our results for S3 to understand
const data = JSON.stringify(result);
if (Buffer.byteLength(data, "utf-8") > dataSizeLimit) {
console.log(timestamp, "Running s3DataMitigation...");
s3DataMitigation({ dataSet, data, res, callback });
} else {
res.send(result);
}
console.log(
timestamp,
"Returned " + result.length + " rows from " + dataSet
);
});
// const user = req.query.user;
// usageLog({ dataSet, user });
});
And here is s3DataMitigation.js
const aws = require("aws-sdk");
const { v4: uuidv4 } = require("uuid");
const s3DataMitigation = ({ dataSet, data, res, callback }) => {
//Setup a timestamp for logging
const timestamp = new Date().toString();
aws.config = {
accessKeyId: "accessKey",
secretAccessKey: "secretKey",
region: "us-east-1",
};
// Setup S3
const s3 = new aws.S3();
// Build the file name using UUID to make the file name unique
const fileName = dataSet + uuidv4() + ".json";
const bucket = "data-mitigation";
// Setup S3 parameters for upload
const s3UploadParams = {
Bucket: bucket,
Key: fileName,
Body: data,
ContentType: "application/json",
};
// Using aws-sdk we programmatically create the file in our S3
s3.putObject(s3UploadParams)
.promise()
.then((data) => {
console.log(timestamp, "complete:PUT Object", data);
// We want to wait until we can confirm the file exists in S3 before proceeding, thus we continue code within this block
var signedUrlParams = {
Bucket: bucket,
Key: fileName,
Expires: 60 * 5,
ResponseContentType: "application/json",
};
s3.getSignedUrl("getObject", signedUrlParams, function (err, url) {
if (err) {
console.log(err);
}
console.log(url);
res.writeHead(302, {
Location: url,
//add other headers here...
});
res.end();
});
callback(null, data);
})
.catch((err) => {
console.log(timestamp, "failure:PUT Object", err);
callback(err);
});
};
module.exports = s3DataMitigation;
In s3DataMitigation, there is a statement where you're calling callback; try my code below. I have included some explanatory comments.
const aws = require("aws-sdk");
const { v4: uuidv4 } = require("uuid");
const s3DataMitigation = ({ dataSet, data, res, callback }) => {
//Setup a timestamp for logging
const timestamp = new Date().toString();
aws.config = {
accessKeyId: "accessKey",
secretAccessKey: "secretKey",
region: "us-east-1",
};
// Setup S3
const s3 = new aws.S3();
// Build the file name using UUID to make the file name unique
const fileName = dataSet + uuidv4() + ".json";
const bucket = "data-mitigation";
// Setup S3 parameters for upload
const s3UploadParams = {
Bucket: bucket,
Key: fileName,
Body: data,
ContentType: "application/json",
};
// Using aws-sdk we programmatically create the file in our S3
s3.putObject(s3UploadParams)
.promise()
.then((data) => {
console.log(timestamp, "complete:PUT Object", data);
// We want to wait until we can confirm the file exists in S3 before proceeding, thus we continue code within this block
var signedUrlParams = {
Bucket: bucket,
Key: fileName,
Expires: 60 * 5,
ResponseContentType: "application/json",
};
s3.getSignedUrl("getObject", signedUrlParams, function (err, url) {
if (err) {
console.log(err);
}
console.log(url);
res.writeHead(302, {
Location: url,
//add other headers here...
});
res.end();
});
// callback(null, data); // this line is causing you all the trouble
// what it does is bubble the request down to the default handlers (error handlers etc)
// but at this stage the request has already been resolved by the redirect above
// hence the error: it will try to send a response for a request which has already been resolved
})
.catch((err) => {
console.log(timestamp, "failure:PUT Object", err);
callback(err);
});
};
Hope it helps!
I am having trouble adjusting my code to allow ajax uploads. I have the direct upload to S3 working but would like to use ajax to show upload progress.
js
$('#upload-input').on('change', function(){
var files = $(this).get(0).files;
if (files.length > 0){
var formData = new FormData();
for (var i = 0; i < files.length; i++) {
var file = files[i];
formData.append('uploads[]', file, file.name);
}
$.ajax({
type: 'POST',
processData: false,
contentType: false,
data: formData,
url: '/upload',
success : function(data){
getDetail();
console.log('picture uploaded ', data);
},
xhr: function() {
var xhr = new XMLHttpRequest();
xhr.upload.addEventListener('progress', function(evt) {
if (evt.lengthComputable) {
var percentComplete = evt.loaded / evt.total;
percentComplete = parseInt(percentComplete * 100);
$('.progress-bar').text(percentComplete + '%');
$('.progress-bar').width(percentComplete + '%');
if (percentComplete === 100) {
$('.progress-bar').html('Done');
}
}
}, false);
return xhr;
}
});
}
});
backend
setting up the variables
var aws = require('aws-sdk');
aws.config.loadFromPath('./data/s3config.json');
var multer = require('multer');
var multerS3 = require('multer-s3');
var s3 = new aws.S3({});
var upload = multer({
storage: multerS3({
s3: s3,
bucket: 'roe.pictures',
acl: 'public-read',
metadata: function (req, file, cb) {
cb(null, {fieldName: file.fieldname});
},
key: function (req, file, cb) {
cb(null, Date.now().toString())
}
})
});
Setting up post call
The issue is that I don't know how to add the files to the 'upload.array' parameter. If I load the files directly from the HTML form, I can just use the name of the input field. Since there is no 'req' object, I can't use req.body. Do I have to wrap this in another function?
Thanks!!
app.post('/upload', upload.array(this.body, 3), function(req, res, next) {
var newPics = [];
if(req.files) {
req.files.forEach(function(p) {
newPics.push({key : p.key, originalName : p.originalname, mimeType : p.mimetype, size : p.size})
});
Room.update({_id : req.signedCookies.prop},
{ $push: { pPics : { $each : newPics }}})
.exec(function(err, r) {
if(err) return err;
console.log('Successfully uploaded ' + req.files.length + ' files! with result ' + r);
res.redirect(req.get('referer'));
});
} else {
console.log('nothing to upload');
res.redirect(req.get('referer'));
}
});
I need to create a Zip file that consists of a selection of files (videos and images) located in my s3 bucket.
The problem at the moment using my code below is that I quickly hit the memory limit on Lambda.
async.eachLimit(files, 10, function(file, next) {
var params = {
Bucket: bucket, // bucket name
Key: file.key
};
s3.getObject(params, function(err, data) {
if (err) {
console.log('file', file.key);
console.log('get image files err',err, err.stack); // an error occurred
} else {
console.log('file', file.key);
zip.file(file.key, data.Body);
next();
}
});
},
function(err) {
if (err) {
console.log('err', err);
} else {
console.log('zip', zip);
content = zip.generateNodeStream({
type: 'nodebuffer',
streamFiles:true
});
var params = {
Bucket: bucket, // name of dest bucket
Key: 'zipped/images.zip',
Body: content
};
s3.upload(params, function(err, data) {
if (err) {
console.log('upload zip to s3 err',err, err.stack); // an error occurred
} else {
console.log(data); // successful response
}
});
}
});
Is this possible using Lambda, or should I look at a different approach?
Is it possible to write to a compressed zip file on the fly, therefore eliminating the memory issue somewhat, or do I need to have the files collected before compression?
Any help would be much appreciated.
Okay, I got to do this today and it works. Direct Buffer to Stream, no disk involved, so memory or disk limitations won't be an issue here:
'use strict';
const AWS = require("aws-sdk");
AWS.config.update( { region: "eu-west-1" } );
const s3 = new AWS.S3( { apiVersion: '2006-03-01'} );
const _archiver = require('archiver');
//This returns us a stream.. consider it as a real pipe sending fluid to S3 bucket.. Don't forget it
const streamTo = (_bucket, _key) => {
var stream = require('stream');
var _pass = new stream.PassThrough();
s3.upload( { Bucket: _bucket, Key: _key, Body: _pass }, (_err, _data) => { /*...Handle Errors Here*/ } );
return _pass;
};
exports.handler = async (_req, _ctx, _cb) => {
var _keys = ['list of your file keys in s3'];
var _list = await Promise.all(_keys.map(_key => new Promise((_resolve, _reject) => {
s3.getObject({Bucket:'bucket-name', Key:_key})
.then(_data => _resolve( { data: _data.Body, name: `${_key.split('/').pop()}` } ));
}
))).catch(_err => { throw new Error(_err) } );
await new Promise((_resolve, _reject) => {
var _myStream = streamTo('bucket-name', 'fileName.zip'); //Now we instantiate that pipe...
var _archive = _archiver('zip');
_archive.on('error', err => { throw new Error(err); } );
//Your promise gets resolved when the fluid stops running... so that's when you get to close and resolve
_myStream.on('close', _resolve);
_myStream.on('end', _resolve);
_myStream.on('error', _reject);
_archive.pipe(_myStream); //Pass that pipe to _archive so it can push the fluid straight down to S3 bucket
_list.forEach(_itm => _archive.append(_itm.data, { name: _itm.name } ) ); //And then we start adding files to it
_archive.finalize(); //Tell it, that's all we want to add. Then when it finishes, the promise will resolve in one of those events up there
}).catch(_err => { throw new Error(_err) } );
_cb(null, { } ); //Handle response back to server
};
I formatted the code according to @iocoker.
main entry
// index.js
'use strict';
const S3Zip = require('./s3-zip')
const params = {
files: [
{
fileName: '1.jpg',
key: 'key1.JPG'
},
{
fileName: '2.jpg',
key: 'key2.JPG'
}
],
zippedFileKey: 'zipped-file-key.zip'
}
exports.handler = async event => {
const s3Zip = new S3Zip(params);
await s3Zip.process();
return {
statusCode: 200,
body: JSON.stringify(
{
message: 'Zip file created successfully!'
}
)
};
}
Zip file util
// s3-zip.js
'use strict';
const fs = require('fs');
const AWS = require("aws-sdk");
const Archiver = require('archiver');
const Stream = require('stream');
const https = require('https');
const sslAgent = new https.Agent({
keepAlive: true,
rejectUnauthorized: true
});
sslAgent.setMaxListeners(0);
AWS.config.update({
httpOptions: {
agent: sslAgent,
},
region: 'us-east-1'
});
module.exports = class S3Zip {
constructor(params, bucketName = 'default-bucket') {
this.params = params;
this.BucketName = bucketName;
}
async process() {
const { params, BucketName } = this;
const s3 = new AWS.S3({ apiVersion: '2006-03-01', params: { Bucket: BucketName } });
// create readstreams for all the output files and store them
const createReadStream = fs.createReadStream;
const s3FileDwnldStreams = params.files.map(item => {
const stream = s3.getObject({ Key: item.key }).createReadStream();
return {
stream,
fileName: item.fileName
}
});
const streamPassThrough = new Stream.PassThrough();
// Build the upload params, using streamPassThrough as the Body so the archive streams straight into the S3 bucket
const uploadParams = {
ACL: 'private',
Body: streamPassThrough,
ContentType: 'application/zip',
Key: params.zippedFileKey
};
const s3Upload = s3.upload(uploadParams, (err, data) => {
if (err) {
console.error('upload err', err)
} else {
console.log('upload data', data);
}
});
s3Upload.on('httpUploadProgress', progress => {
// console.log(progress); // { loaded: 4915, total: 192915, part: 1, key: 'foo.jpg' }
});
// create the archiver
const archive = Archiver('zip', {
zlib: { level: 0 }
});
archive.on('error', (error) => {
throw new Error(`${error.name} ${error.code} ${error.message} ${error.path} ${error.stack}`);
});
// connect the archiver to upload streamPassThrough and pipe all the download streams to it
await new Promise((resolve, reject) => {
console.log("Starting upload of the output Files Zip Archive");
streamPassThrough.on('close', resolve);
streamPassThrough.on('end', resolve);
streamPassThrough.on('error', reject);
archive.pipe(streamPassThrough);
s3FileDwnldStreams.forEach((s3FileDwnldStream) => {
archive.append(s3FileDwnldStream.stream, { name: s3FileDwnldStream.fileName })
});
archive.finalize();
}).catch((error) => {
throw new Error(`${error.code} ${error.message} ${error.data}`);
});
// Finally wait for the uploader to finish
await s3Upload.promise();
}
}
The other solutions are great when there are not too many files (fewer than ~60). If they have to handle more files, they just quit silently with no errors. This is because they open too many streams.
This solution is inspired by https://gist.github.com/amiantos/16bacc9ed742c91151fcf1a41012445e
It is a working solution, which works well even with many files (300+) and returns a presigned URL to the zip containing the files.
Main Lambda:
const AWS = require('aws-sdk');
const S3 = new AWS.S3({
apiVersion: '2006-03-01',
signatureVersion: 'v4',
httpOptions: {
timeout: 300000 // 5min Should Match Lambda function timeout
}
});
const archiver = require('archiver');
import stream from 'stream';
const UPLOAD_BUCKET_NAME = "my-s3-bucket";
const URL_EXPIRE_TIME = 5*60;
export async function getZipSignedUrl(event) {
const prefix = `uploads/id123123`; //replace this with your S3 prefix
let files = ["12314123.png", "56787567.png"] //replace this with your files
if (files.length == 0) {
console.log("No files to zip");
return result(404, "No pictures to download");
}
console.log("Files to zip: ", files);
try {
files = files.map(file => {
return {
fileName: file,
key: prefix + '/' + file,
type: "file"
};
});
const destinationKey = prefix + '/' + 'uploads.zip'
console.log("files: ", files);
console.log("destinationKey: ", destinationKey);
await streamToZipInS3(files, destinationKey);
const presignedUrl = await getSignedUrl(UPLOAD_BUCKET_NAME, destinationKey, URL_EXPIRE_TIME, "uploads.zip");
console.log("presignedUrl: ", presignedUrl);
if (!presignedUrl) {
return result(500, null);
}
return result(200, presignedUrl);
}
catch(error) {
console.error(`Error: ${error}`);
return result(500, null);
}
}
Helper functions:
export function result(code, message) {
return {
statusCode: code,
body: JSON.stringify(
{
message: message
}
)
}
}
export async function streamToZipInS3(files, destinationKey) {
await new Promise(async (resolve, reject) => {
var zipStream = streamTo(UPLOAD_BUCKET_NAME, destinationKey, resolve);
zipStream.on("error", reject);
var archive = archiver("zip");
archive.on("error", err => {
throw new Error(err);
});
archive.pipe(zipStream);
for (const file of files) {
if (file["type"] == "file") {
archive.append(getStream(UPLOAD_BUCKET_NAME, file["key"]), {
name: file["fileName"]
});
}
}
archive.finalize();
})
.catch(err => {
console.log(err);
throw new Error(err);
});
}
function streamTo(bucket, key, resolve) {
var passthrough = new stream.PassThrough();
S3.upload(
{
Bucket: bucket,
Key: key,
Body: passthrough,
ContentType: "application/zip",
ServerSideEncryption: "AES256"
},
(err, data) => {
if (err) {
console.error('Error while uploading zip')
throw new Error(err);
}
console.log('Zip uploaded')
resolve()
}
).on("httpUploadProgress", progress => {
console.log(progress)
});
return passthrough;
}
function getStream(bucket, key) {
let streamCreated = false;
const passThroughStream = new stream.PassThrough();
passThroughStream.on("newListener", event => {
if (!streamCreated && event == "data") {
const s3Stream = S3
.getObject({ Bucket: bucket, Key: key })
.createReadStream();
s3Stream
.on("error", err => passThroughStream.emit("error", err))
.pipe(passThroughStream);
streamCreated = true;
}
});
return passThroughStream;
}
export async function getSignedUrl(bucket: string, key: string, expires: number, downloadFilename?: string): Promise<string> {
const exists = await objectExists(bucket, key);
if (!exists) {
console.info(`Object ${bucket}/${key} does not exist`);
return null
}
let params = {
Bucket: bucket,
Key: key,
Expires: expires,
};
if (downloadFilename) {
params['ResponseContentDisposition'] = `inline; filename="${encodeURIComponent(downloadFilename)}"`;
}
try {
const url = S3.getSignedUrl('getObject', params);
return url;
} catch (err) {
console.error(`Unable to get URL for ${bucket}/${key}`, err);
return null;
}
};
Using streams may be tricky, as I'm not sure how you could pipe multiple streams into an object. I've done this several times using standard file objects. It's a multi-step process and it's quite fast. Remember that Lambda operates on Linux, so you have all Linux resources at hand, including the system /tmp directory. A rough sketch of these steps follows the list below.
Create a sub-directory in /tmp call "transient" or whatever works for you
Use s3.getObject() and write file objects to /tmp/transient
Use the GLOB package to generate an array[] of paths from /tmp/transient
Loop the array and zip.addLocalFile(array[i]);
zip.writeZip('/tmp/files.zip');
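A rough sketch of those steps, under the assumption that the adm-zip and glob packages are used; event.bucket and event.keys are hypothetical inputs, not part of the original description:
const AWS = require('aws-sdk');
const AdmZip = require('adm-zip');
const glob = require('glob');
const fs = require('fs');
const path = require('path');
const s3 = new AWS.S3();
exports.handler = async (event) => {
    const workDir = '/tmp/transient';
    fs.mkdirSync(workDir, { recursive: true });
    // steps 1-2: download each object into the transient directory
    for (const key of event.keys) {
        const obj = await s3.getObject({ Bucket: event.bucket, Key: key }).promise();
        fs.writeFileSync(path.join(workDir, path.basename(key)), obj.Body);
    }
    // steps 3-4: glob the downloaded paths and add each one to the archive
    const zip = new AdmZip();
    glob.sync(workDir + '/*').forEach((p) => zip.addLocalFile(p));
    // step 5: write the zip to /tmp, then push it back to S3
    zip.writeZip('/tmp/files.zip');
    await s3.upload({
        Bucket: event.bucket,
        Key: 'zipped/files.zip',
        Body: fs.createReadStream('/tmp/files.zip')
    }).promise();
};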
I've used a similar approach, but I'm facing the issue that some of the files in the generated ZIP file don't have the correct size (and corresponding data). Is there any limitation on the size of the files this code can manage? In my case I'm zipping large files (a few larger than 1 GB) and the overall amount of data may reach 10 GB.
I do not get any error/warning message, so it seems it all works fine.
Any idea what may be happening?
My goal:
Display a dialog box prompting the user to save a file being downloaded from aws.
My problem:
I am currently using awssum-amazon-s3 to create a download stream. However, I've only managed to save the file to my server or stream it to the command line... As you can see from my code, my last attempt was to try and manually set the Content-Disposition headers, which failed. I cannot use res.download() as the headers have already been set.
How can I achieve my goal?
My code for node:
app.post('/dls/:dlKey', function(req, res, next){
// download the file via aws s3 here
var dlKey = req.param('dlKey');
Dl.findOne({key:dlKey}, function(err, dl){
if (err) return next(err);
var files = dl.dlFile;
var options = {
BucketName : 'xxxx',
ObjectName : files,
};
s3.GetObject(options, { stream : true }, function(err, data) {
// stream this file to stdout
fmt.sep();
data.Headers['Content-Disposition'] = 'attachment';
console.log(data.Headers);
data.Stream.pipe(fs.createWriteStream('test.pdf'));
data.Stream.on('end', function() {
console.log('File Downloaded!');
});
});
});
res.end('Successful Download Post!');
});
My code for angular:
$scope.dlComplete = function (dl) {
$scope.procDownload = true;
$http({
method: 'POST',
url: '/dls/' + dl.dlKey
}).success(function(data/*, status, headers, config*/) {
console.log(data);
$location.path('/#!/success');
}).error(function(/*data, status, headers, config*/) {
console.log('File download failed!');
});
};
The purpose of this code is to let users use a generated key to download a file once.
This is the entire code using streaming on the latest version of aws-sdk
var express = require('express');
var app = express();
var fs = require('fs');
app.get('/', function(req, res, next){
res.send('You did not say the magic word');
});
app.get('/s3Proxy', function(req, res, next){
// download the file via aws s3 here
var fileKey = req.query['fileKey'];
console.log('Trying to download file', fileKey);
var AWS = require('aws-sdk');
AWS.config.update(
{
accessKeyId: "....",
secretAccessKey: "...",
region: 'ap-southeast-1'
}
);
var s3 = new AWS.S3();
var options = {
Bucket : '/bucket-url',
Key : fileKey,
};
res.attachment(fileKey);
var fileStream = s3.getObject(options).createReadStream();
fileStream.pipe(res);
});
var server = app.listen(3000, function () {
var host = server.address().address;
var port = server.address().port;
console.log('S3 Proxy app listening at http://%s:%s', host, port);
});
This code worked for me with the most recent library:
var s3 = new AWS.S3();
var s3Params = {
Bucket: 'your bucket',
Key: 'path/to/the/file.ext'
};
s3.getObject(s3Params, function(err, data) { // name the callback param 'data' so it doesn't shadow the Express res
if (err === null) {
res.attachment('file.ext'); // or whatever your logic needs
res.send(data.Body);
} else {
res.status(500).send(err);
}
});
Simply create a ReadStream from S3 and a WriteStream to the location where you want to download the file. Find the code below; it works perfectly for me:
var AWS = require('aws-sdk');
var path = require('path');
var fs = require('fs');
AWS.config.loadFromPath(path.resolve(__dirname, 'config.json'));
AWS.config.update({
accessKeyId: AWS.config.credentials.accessKeyId,
secretAccessKey: AWS.config.credentials.secretAccessKey,
region: AWS.config.region
});
var s3 = new AWS.S3();
var params = {
Bucket: '<your-bucket>',
Key: '<path-to-your-file>'
};
let readStream = s3.getObject(params).createReadStream();
let writeStream = fs.createWriteStream(path.join(__dirname, 's3data.txt'));
readStream.pipe(writeStream);
You've already figured out what's most important to solve your issue: you can pipe the file stream coming from S3 to any writable stream, be it a filestream… or the response stream that will be sent to the client!
s3.GetObject(options, { stream : true }, function(err, data) {
res.attachment('test.pdf');
data.Stream.pipe(res);
});
Note the use of res.attachment that will set the correct headers. You can also check out this answer regarding streams and S3.
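For reference, res.attachment('test.pdf') is roughly equivalent to setting the headers yourself; a quick sketch:
// roughly what res.attachment('test.pdf') does
res.set('Content-Disposition', 'attachment; filename="test.pdf"');
res.type('pdf'); // sets Content-Type based on the extension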
Using AWS SDK v3
npm install @aws-sdk/client-s3
Download code
import { GetObjectCommand } from "@aws-sdk/client-s3";
/**
* download a file from AWS and send to your rest client
*/
app.get('/download', async function(req, res, next){
var fileKey = req.query['fileKey'];
var bucketParams = {
Bucket: 'my-bucket-name',
Key: fileKey,
};
res.attachment(fileKey);
var fileStream = await s3Client.send(new GetObjectCommand(bucketParams));
// for TS you can add: if (fileStream.Body instanceof Readable)
fileStream.Body.pipe(res)
});
For this I use a React frontend and a Node.js backend. On the frontend I use Axios. I used this to download the file when the button is clicked.
==== Node js backend code (AWS S3) ======
//inside GET method I called this function
public download = (req: Request, res: Response) => {
const keyName = req.query.keyName as string;
if (!keyName) {
throw new Error('key is undefined');
}
const downloadParams: AWS.S3.GetObjectRequest = {
Bucket: this.BUCKET_NAME,
Key: keyName
};
this.s3.getObject(downloadParams, (error, data) => {
if (error) {
return error;
}
res.send(data.Body);
res.end();
});
};
====== React js frontend code ========
//this function handle download button onClick
const downloadHandler = async (keyName: string) => {
const response = await axiosInstance.get( //here use axios interceptors
`papers/paper/download?keyName=${keyName}`,{
responseType:'blob', // very important, don't miss this (otherwise the downloaded file can't be viewed)
}
);
const url = window.URL.createObjectURL(new Blob([response.data]));
const link = document.createElement("a");
link.href = url;
link.setAttribute("download", "file.pdf"); //change "file.pdf" according to saved name you want, give extension according to filetype
document.body.appendChild(link);
link.click();
link.remove();
};
------ OR (if you are using normal axios and not axios interceptors) -----
axios({
url: 'http://localhost:5000/static/example.pdf',
method: 'GET',
responseType: 'blob', // very very important
}).then((response) => {
const url = window.URL.createObjectURL(new Blob([response.data]));
const link = document.createElement('a');
link.href = url;
link.setAttribute('download', 'file.pdf');
document.body.appendChild(link);
link.click();
});
For more, refer to the articles below:
1. article 1
2. article 2
Using express, based on Jushua's answer and https://docs.aws.amazon.com/AmazonS3/latest/userguide/example_s3_GetObject_section.html
public downloadFeedFile = (req: IFeedUrlRequest, res: Response) => {
const downloadParams: GetObjectCommandInput = parseS3Url(req.s3FileUrl.replace(/\s/g, ''));
logger.info("requesting S3 file " + JSON.stringify(downloadParams));
const run = async () => {
try {
const fileStream = await this.s3Client.send(new GetObjectCommand(downloadParams));
if (fileStream.Body instanceof Readable){
fileStream.Body.once('error', err => {
console.error("Error downloading s3 file")
console.error(err);
});
fileStream.Body.pipe(res);
}
} catch (err) {
logger.error("Error", err);
}
};
run();
};