I need to upload a file from an SFTP server to an S3 bucket.
I'm using aws-sdk, creating an S3 client and then uploading a stream.
The problem is that the file in the bucket is only 128 KB of the original 8 MB.
Am I missing something?
I think the problem is that the upload method doesn't consume the whole stream and stops after the first 128 KB "step".
Maybe an await is in the wrong place?
export const saveFileToS3 = (stream: any, filePath: string) => {
  return new Promise<string>(async (resolve: Function, reject: Function) => {
    try {
      const fileName = filePath.split('/')[3];
      const s3 = new AWS.S3({
        endpoint: 's3-eu-central-1.amazonaws.com',
        signatureVersion: 'v4',
        accessKeyId: process.env.AWS_ACCESS_KEY_ID,
        secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
      });
      let params = {
        Bucket: process.env.S3_BUCKET_NAME,
        Key: process.env.S3_PATH + fileName,
        Body: stream
      };
      var options = {
        partSize: 1500 * 1024 * 1024,
        queueSize: 100
      };
      await s3.upload(params, options, function(s3Err, data) {
        if (s3Err) throw s3Err
        console.log(`File uploaded successfully at ${data.Location}`)
      });
      resolve(params);
    } catch (error) {
      reject(error);
    }
  });
};
export const getFileFromFtp = (filePath: string, ftpConf: Appp.FTPConf) => {
  return new Promise<Stream>(async (resolve, reject) => {
    logger.info(`getFileFromFtp - file path: ${filePath}`);
    let client: SFTP;
    try {
      client = new SFTP();
      const msg = await client.connect(ftpConf);
      logger.info(`ftp client connected to ${ftpConf.host}`, msg);
      const stream: Stream = await client.get(filePath);
      client.end();
      resolve(stream);
    } catch (error) {
      logger.error("error: " + error.message);
      client && client.end();
      reject(error);
    }
  });
};
I expect s3.upload to upload the whole stream.
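For reference, here is a minimal sketch (not a drop-in fix) of how the two helpers could be combined so that the upload promise is actually awaited and the SFTP connection is only closed after S3 has stored the whole object. It reuses the imports from the snippets above (AWS, SFTP, Appp.FTPConf, logger), assumes that client.get() resolves to a readable stream as in getFileFromFtp, and the helper name transferFileToS3 is made up for illustration:

// Sketch only: stream an SFTP file into S3 and wait for the upload to finish
// before closing the SFTP connection.
export const transferFileToS3 = async (filePath: string, ftpConf: Appp.FTPConf): Promise<string> => {
  const client = new SFTP();
  const s3 = new AWS.S3({
    endpoint: 's3-eu-central-1.amazonaws.com',
    signatureVersion: 'v4'
  });
  await client.connect(ftpConf);
  try {
    const stream = await client.get(filePath);   // assumed to resolve to a readable stream, as above
    const fileName = filePath.split('/')[3];
    const result = await s3.upload({
      Bucket: process.env.S3_BUCKET_NAME!,
      Key: process.env.S3_PATH + fileName,
      Body: stream,
    }).promise();                                // resolves only once the whole object is stored
    logger.info(`File uploaded successfully at ${result.Location}`);
    return result.Location;
  } finally {
    await client.end();                          // safe to close now: the stream has been fully consumed
  }
};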
Related
I am trying to upload images to an AWS S3 bucket, but I get an unsupportedMediaType 415 error in Postman. I can't see where it's going wrong. Could you help me out?
Postman form-data request => sends file, fileName, and id.
In the service.js file:
const aws = require('aws-sdk');
const fs = require('fs');
const bucketName = process.env.bucketName;

// Initiating S3 instance
const s3 = new aws.S3({
  secretAccessKey: process.env.bucketSecretAccessKey,
  accessKeyId: process.env.bucketAccessKeyId,
  region: process.env.region
});

// Options you can choose to set to accept files upto certain size limit
// const options = {partSize: 10 * 1024 * 1024, queueSize: 1};

module.exports = {
  upload: async function(payload) {
    let fileContent = fs.readFileSync(payload.file)
    const params = {
      Bucket: bucketName,
      ContentType: 'image/jpeg',
      Key: payload.client_id/payload.familyName/payload.fileName,
      Body: fileContent
    };
    let fileResp = null;
    await s3.upload(params, function(err, data) {
      if (err) {
        console.log(err)
        throw err;
      }
      fileResp = data;
      console.log(`File uploaded successfully. ${data.Location}`);
    });
    return fileResp;
  },
};
The API route handler:
uploadImage: async function(req, h) {
  try {
    let responseFile = null;
    const uploadImage = await s3.upload(req.payload).then(async (resp) => {
      responseFile = { fileUrl: resp.Location };
      if (!!responseFile) {
        const updateFileName = await Family.updateOne({ _id: req.payload.id }, {
          image: req.payload.fileName,
          fileLocation: resp.Location
        });
        console.log(updateFileName)
      }
    }).catch((err) => {
      console.log(err)
      responseFile = err.message;
    });
  } catch (error) {
    console.error(error.message);
    return h.response({ error: responseMessages.internal_server_error }).code(500);
  }
},
})
postman response:
{
  "statusCode": 415,
  "error": "Unsupported Media Type",
  "message": "Unsupported Media Type"
}
(Screenshot: https://i.stack.imgur.com/ggHCj.png)
Use the following:
const params = {
  Bucket: bucketName,
  Key: `${payload.client_id}/${payload.familyName}/${payload.fileName}`,
  Body: fileContent
};
There are two issues:
1. The Key path is not correct.
2. ContentType is not required.
Follow the snippet above and try again.
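As a side note, in the service above s3.upload is given a callback and then awaited, so fileResp is still null when upload returns. The promise form of upload avoids that mismatch. A minimal sketch, assuming aws-sdk v2 and the same s3, fs, and bucketName from service.js:

// Sketch: the promise form of upload, so the result can actually be awaited.
module.exports = {
  upload: async function (payload) {
    const fileContent = fs.readFileSync(payload.file);
    const params = {
      Bucket: bucketName,
      Key: `${payload.client_id}/${payload.familyName}/${payload.fileName}`,
      Body: fileContent,
    };
    // upload(params).promise() resolves with { Location, Key, Bucket, ETag, ... }
    const data = await s3.upload(params).promise();
    console.log(`File uploaded successfully. ${data.Location}`);
    return data;
  },
};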
payload: {
  output: 'stream',
  parse: true,
  allow: 'multipart/form-data',
  multipart: true,
  timeout: false,
},
I added this payload configuration in my route.js file, and then it worked for me.
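For context, here is a sketch of where that payload block sits in a hapi route definition; the method, the path ('/families/{id}/image'), and the handler reference are placeholders, not from the original code:

// Sketch: the payload options above go under options.payload in the route definition.
server.route({
  method: 'POST',
  path: '/families/{id}/image',          // placeholder path
  options: {
    payload: {
      output: 'stream',
      parse: true,
      allow: 'multipart/form-data',
      multipart: true,
      timeout: false,
    },
  },
  handler: controller.uploadImage,       // the uploadImage handler shown earlier
});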
My lambda is triggered by a request from the browser. The browser sends an image as multipart/form-data.
The lambda uses busboy to parse the request:
function parseForm(event: IHttpEvent) {
  return new Promise(
    (resolve, reject) => {
      const busboy = new Busboy({
        headers: event.headers,
        limits: { files: 10 },
      });
      const imageResponse = new Map<string, IImageParseResponse>();
      busboy.on("file", (id, file, filename, encoding, mimeType) => {
        imageResponse.set(id, { file, filename, mimeType });
      });
      busboy.on("error", (error) => reject(`Parse error: ${error}`));
      busboy.on("finish", () => resolve(imageResponse));
      busboy.write(event.body, event.isBase64Encoded ? "base64" : "binary");
      busboy.end();
    }
  );
}
Once the request is parsed, I want to upload the file to AWS S3.
export async function handler(event: IHttpEvent) {
  var res = await parseForm(event);
  const s3 = new S3Client({ region: "eu-central-1" });
  for (const [k, v] of res) {
    console.log(`File ${v.filename} ${v.mimeType} streaming`);
    const stream = new Readable().wrap(v.file);
    const upload = new Upload({
      client: s3,
      params: {
        Key: v.filename,
        Bucket: "my-image-bucket",
        Body: stream,
        ContentType: v.mimeType,
      },
    });
    upload.on("httpUploadProgress", (p) => console.log(p));
    const result = await upload.done();
    console.log(result);
    return result;
  }
}
This does not work; the browser receives a 200 OK with a null body. What confuses me even more is that console.log(result) does not log anything to the console.
Where is my mistake? I don't fully understand the mechanics of streams, but as far as I understand, streaming should be more memory-efficient. In the future I plan to upload multiple images at once, and in order to save cost I want my method to be as efficient as possible.
In general I made two mistakes:
1. I tried to upload the stream after busboy had already read it to the end.
2. I did not properly wait for the upload to S3 to complete before the function terminated.
In the end I ended up with the following:
const s3 = new S3Client({ region: "eu-central-1" });
const { BUCKET_NAME, MAX_IMAGE_SIZE } = process.env;

export async function handler(event: IHttpEvent) {
  const results = await parseForm(event);
  const response = [];
  for (const r of results) {
    if (r.status === "fulfilled") {
      const value: any = r.value.result;
      response.push({
        id: r.value.id,
        key: value.Key,
        url: value.Location,
      });
    }
    if (r.status === "rejected")
      response.push({ id: r.reason.id, reason: r.reason.error });
  }
  return response;
}

async function doneHandler(
  id: string,
  uploadMap: Map<string, Upload>
): Promise<{ id: string; result: ServiceOutputTypes }> {
  try {
    var result = await uploadMap.get(id).done();
  } catch (e: any) {
    var error = e;
  } finally {
    uploadMap.delete(id);
    if (error) throw { id, error };
    return { id, result };
  }
}

function parseForm(event: IHttpEvent) {
  return new Promise((resolve, reject) => {
    const busboy = new Busboy({
      headers: event.headers,
      limits: { files: 1, fileSize: parseInt(MAX_IMAGE_SIZE) },
    });
    const responses: Promise<{
      id: string;
      result: ServiceOutputTypes;
    }>[] = [];
    const uploads = new Map<string, Upload>();
    busboy.on("file", (id, file, filename, encoding, mimeType) => {
      uploads.set(
        id,
        new Upload({
          client: s3,
          params: {
            Bucket: BUCKET_NAME,
            Body: new Readable().wrap(file),
            Key: filename,
            ContentType: mimeType,
            ContentEncoding: encoding,
          },
        })
      );
      responses.push(doneHandler(id, uploads));
      file.on("limit", async () => {
        const aborts = [];
        for (const [k, upload] of uploads) {
          aborts.push(upload.abort());
        }
        await Promise.all(aborts);
        return reject(new Error("File is too big."));
      });
    });
    busboy.on("error", (error: any) => {
      reject(new Error(`Parse error: ${error}`));
    });
    busboy.on("finish", async () => {
      const res = await Promise.allSettled(responses);
      resolve(res);
    });
    busboy.write(event.body, event.isBase64Encoded ? "base64" : "binary");
    busboy.end();
  });
}
This solution also handles file size limits and tries to abort all pending uploads to S3.
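For completeness, the snippets above assume roughly these imports; IHttpEvent and IImageParseResponse are the asker's own interfaces, and the default Busboy import matches the pre-1.x `new Busboy(...)` constructor style used here:

// Imports assumed by the snippets above (AWS SDK v3 + busboy pre-1.x).
import { S3Client, ServiceOutputTypes } from "@aws-sdk/client-s3";
import { Upload } from "@aws-sdk/lib-storage";
import { Readable } from "stream";
import Busboy from "busboy";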
I'm struggling to find a solution to upload two files to S3. I can upload one file with multer, but when I map over all the files in the form data and upload each one, I push each location URL into an array, which is what I save in my database. Then I try to print each URL; to my surprise they are printed inside the if statement, but not when I save them to the database outside the if. Could it be an asynchronicity problem?
Thanks.
tournamentsCtrl.createTournament = async (req, res) => {
  var files_upload = []
  if (req.files) {
    aws.config.setPromisesDependency();
    aws.config.update({
      accessKeyId: process.env.AWS_ACCESS_KEY_ID,
      secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
      //region: process.env.REGION
    });
    const s3 = new aws.S3();
    req.files.map((item) => {
      var params = {
        ACL: 'public-read',
        Bucket: process.env.AWS_BUCKET_NAME,
        Body: fs.createReadStream(item.path),
        Key: `tournament_img/${uuidv4()/* +req.file.originalname */}`
      };
      await s3.upload(params, (err, data) => {
        if (err) {
          console.log('Error occured while trying to upload to S3 bucket', err);
        }
        if (data) {
          fs.unlinkSync(item.path); // Empty temp folder
          const locationUrl = data.Location;
          files_upload.push(locationUrl);
          console.log(files_upload)
        }
      });
    });
  }
  console.log(files_upload)
  const new_data = { ...JSON.parse(req.body.values), img_source: files_upload[0], info_url: files_upload[1] }
  console.log(new_data)
  const newUser = new Tournaments(new_data);
  newUser
    .save()
    .then(user => {
      res.json({ message: 'User created successfully', user });
    })
    .catch(err => {
      console.log('Error occured while trying to save to DB');
    });
};
If you look at the docs for upload, it does not return a promise, so you should not call await on it. The default map method is not compatible with async code in this form. You need to either use async.map or wrap the async code in a promise, like:
return await new Promise((resolve, reject) => {
  ...
  if (data) {
    fs.unlinkSync(item.path);
    resolve(data.Location);
  }
});
Your other code has some issues as well. A map callback should return a value; if you don't want to return anything, use forEach instead.
This is a bad place to ask for code advice, but something like the following should work:
function uploadFile(s3, element) {
  return new Promise((resolve, reject) => {
    let folder;
    if (element.fieldname.includes('img')) {
      folder = 'club_images'
    } else if (element.fieldname.includes('poster')) {
      folder = 'poster_tournament'
    } else {
      folder = 'info_tournament'
    }
    const params = {
      ACL: 'public-read',
      Bucket: process.env.AWS_BUCKET_NAME,
      Body: fs.createReadStream(element.path),
      Key: `${folder + '/' + uuidv4() + element.fieldname}`
    };
    s3.upload(params, (err, data) => {
      if (err) {
        return reject(err);
      }
      if (data) {
        return fs.unlink(element.path, err => {
          if (err) {
            console.error("Failed to unlink file", element.path);
          }
          return resolve({ [element.fieldname]: data.Location });
        }); // Empty temp folder
      }
      return resolve();
    });
  })
}
tournamentsCtrl.createTournament = async (req, res) => {
  aws.config.setPromisesDependency();
  aws.config.update({
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
    //region: process.env.REGION
  });
  const s3 = new aws.S3();
  let returnData; // merged { fieldname: Location } pairs from the uploads
  try {
    const uploadData = await Promise.all(req.files.map(element => uploadFile(s3, element)));
    returnData = Object.assign({}, ...uploadData);
    console.log(Object.assign(JSON.parse(req.body.values), returnData));
  } catch (e) {
    console.error('Failed to upload file', e);
    return res.sendStatus(500);
  }
  const newUser = new Tournaments(Object.assign(JSON.parse(req.body.values), returnData));
  console.log(newUser)
  try {
    const user = await newUser.save()
    res.json({ message: 'User created successfully', user });
  } catch (e) {
    console.error('Error occured while trying to save to DB');
    return res.sendStatus(500);
  }
};
I'm trying to read a file from an SFTP server and stream it into an S3 bucket. I'm not able to stream the file into the S3 bucket. Yes, the file path is exactly correct. I'm not sure what I am doing wrong. When I run the code, it doesn't even try to upload the stream into the bucket, because I don't get any upload console logs.
const aws = require('aws-sdk');
const s3 = new aws.S3();
const Client = require('ssh2').Client;
const conn = new Client();

const connSettings = {
  host: event.serverHost,
  port: event.port,
  username: event.username,
  password: event.password
};

exports.handler = function(event) {
  conn.on('ready', function() {
    conn.sftp(function(err, sftp) {
      if (err) {
        console.log("Errror in connection", err);
        conn.end()
      } else {
        console.log("Connection established");
        let readStream = sftp.createReadStream(remoteFilePath);
        console.log(`Read Stream ${readStream}`)
        // readStream outputs [object Object] to the console
        const uploadParams = {
          Bucket: s3Bucket,
          Key: 'fileName',
          Body: readStream
        }
        s3.upload(uploadParams, function (err, data) {
          if (err) {
            console.log("Error", err);
          } if (data) {
            console.log("Upload Success", data.Location);
          }
        });
        conn.end()
      }
    });
  }).connect(connSettings);
}
I want to be able to stream the readStream from the SFTP server into the S3 bucket.
conn.end() ends the connection immediately. Move that to inside your s3.upload() callback so that your data actually gets transferred before the connection is closed.
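Applied to the handler in the question, that single change might look like this (a sketch; everything else stays as it was):

// Sketch: close the SSH connection only after S3 has confirmed the upload.
s3.upload(uploadParams, function (err, data) {
  if (err) {
    console.log("Error", err);
  }
  if (data) {
    console.log("Upload Success", data.Location);
  }
  conn.end(); // moved here so the SFTP read stream is not cut off mid-transfer
});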
This is a working Node 12 example of what I believe you are trying to accomplish:
const aws = require('aws-sdk');
const s3 = new aws.S3();
const Client = require('ssh2').Client;
const conn = new Client();
const { PassThrough } = require('stream');

conn.on('ready', () => {
  conn.sftp((err, sftp) => {
    const transferStream = new PassThrough();

    s3.upload({
      Bucket: s3Bucket,
      Key: 'test_file.txt',
      Body: transferStream
    }, (err, data) => {
      if (err) {
        console.log(`Upload error: ${err}`);
      }
      if (data) {
        console.log(`Uploaded to [${data.Location}].`);
      }
    });

    sftp.createReadStream(remoteFilePath)
      .pipe(transferStream)
      .on('end', () => {
        transferStream.end();
        conn.end();
      });
  });
}).connect(connectionSettings);
I am trying to download a file from S3 and write it directly to a file on the filesystem using a writeStream in Node.js. This is my code:
downloadFile = function(bucketName, fileName, localFileName) {
  // Download the file
  var bucket = new AWS.S3({
    params: { Bucket: bucketName },
    signatureVersion: 'v4'
  });
  var file = require('fs').createWriteStream(localFileName);
  var request = bucket.getObject({ Key: fileName });
  request.createReadStream().pipe(file);
  request.send();
  return request.promise();
}
Running this function I get this error:
Uncaught Error: write after end
What is happening? Is the file closed before the write is finished? Why?
var s3 = new AWS.S3({
    accessKeyId: accessKeyId,
    secretAccessKey: secretAccessKey
  }),
  file = fs.createWriteStream(localFileName);

s3
  .getObject({
    Bucket: bucketName,
    Key: fileName
  })
  .on('error', function (err) {
    console.log(err);
  })
  .on('httpData', function (chunk) {
    file.write(chunk);
  })
  .on('httpDone', function () {
    file.end();
  })
  .send();
AWS also documents an example using promises, like this:
const s3 = new aws.S3({ apiVersion: '2006-03-01' });
const params = { Bucket: 'yourBucket', Key: 'yourKey' };
const file = require('fs').createWriteStream('./local_file_path');

const s3Promise = s3.getObject(params).promise();
s3Promise.then((data) => {
  file.write(data.Body, () => {
    file.end();
    fooCallbackFunction();
  });
}).catch((err) => {
  console.log(err);
});
This works perfectly for me.
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/using-promises.html
EDIT: (15 Feb 2018) Updated the code, as you have to end the write stream (file.end()).
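As an aside, the plain streaming approach from the question also works, as long as the request is only started once; piping the read stream replaces the separate send() and promise() calls. A sketch, assuming aws-sdk v2 and an s3 client like the one above (the function name downloadToFile is made up):

// Sketch: stream the object straight to disk; no send()/promise() on the same request.
const downloadToFile = (bucketName, fileName, localFileName) =>
  new Promise((resolve, reject) => {
    const file = require('fs').createWriteStream(localFileName);
    s3.getObject({ Bucket: bucketName, Key: fileName })
      .createReadStream()        // the stream drives the request; nothing else to call
      .on('error', reject)       // S3/network errors
      .pipe(file)
      .on('error', reject)       // filesystem errors
      .on('close', resolve);     // the file has been fully written and closed
  });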
I have combined the above response with a typical gunzip operation in a pipe:
var s3DestFile = "./archive.gz";
var s3UnzippedFile = './deflated.csv';
var gunzip = zlib.createGunzip();
var file = fs.createWriteStream(s3DestFile);

s3.getObject({ Bucket: Bucket, Key: Key })
  .on('error', function (err) {
    console.log(err);
  })
  .on('httpData', function (chunk) {
    file.write(chunk);
  })
  .on('httpDone', function () {
    file.end();
    console.log("downloaded file to" + s3DestFile);
    fs.createReadStream(s3DestFile)
      .on('error', console.error)
      .on('end', () => {
        console.log("deflated to " + s3UnzippedFile)
      })
      .pipe(gunzip)
      .pipe(fs.createWriteStream(s3UnzippedFile))
  })
  .send();
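If the intermediate archive.gz on disk is not needed, the download and the gunzip step can also be chained into a single pipeline. A sketch, assuming aws-sdk v2 and reusing the same s3, zlib, fs, Bucket, Key, and s3UnzippedFile variables as above:

// Sketch: decompress while downloading, without writing the .gz file to disk first.
s3.getObject({ Bucket: Bucket, Key: Key })
  .createReadStream()
  .on('error', console.error)                     // S3/network errors
  .pipe(zlib.createGunzip())                      // decompress on the fly
  .pipe(fs.createWriteStream(s3UnzippedFile))     // e.g. './deflated.csv'
  .on('finish', () => console.log("deflated to " + s3UnzippedFile));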