I have a readable stream with an encrypted file and I would like to decrypt it. So far I was able to create two readable streams - with one I extracted the IV, and the second was used to decrypt the rest of the data - but I would like to pipe everything into a single stream that does both the IV extraction AND the decryption. Something like this:
function getDecryptionStream(password, initVectLength = 16) {
  const cipherKey = getCipherKey(password);
  const unzip = zlib.createUnzip();
  return new Transform({
    transform(chunk, encoding, callback) {
      if (!this.initVect) {
        this.initVect = chunk.subarray(0, initVectLength);
        chunk = chunk.subarray(initVectLength, chunk.length);
      }
      const decipher = crypto.createDecipheriv(algorithm, cipherKey, this.initVect);
      callback(null, chunk.pipe(decipher).pipe(unzip));
    }
  });
}

function decrypt({ file, newFile, passphrase }) {
  const readStream = fs.createReadStream(file);
  const writeStream = fs.createWriteStream(newFile);
  const decryptStream = getDecryptionStream(passphrase, IV_LENGTH);

  readStream
    .pipe(decryptStream)
    .pipe(writeStream);
}
However, I cannot figure out how to process the chunk, as chunk.pipe(decipher) throws an error: TypeError: chunk.pipe is not a function, since chunk is a Buffer.
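For reference, a minimal sketch of one way to do this in a single Transform, assuming the algorithm, getCipherKey and IV_LENGTH from the code above: create the decipher lazily from the first chunk and push decipher.update(chunk) instead of trying to pipe a Buffer (a Buffer is not a stream), keeping the unzip as a separate pipe stage:
const { Transform, pipeline } = require('stream');
const crypto = require('crypto');
const zlib = require('zlib');
const fs = require('fs');

// Sketch only: algorithm, getCipherKey and IV_LENGTH are assumed from the question.
function getDecryptionStream(password, initVectLength = 16) {
  const cipherKey = getCipherKey(password);
  let decipher = null;

  return new Transform({
    transform(chunk, encoding, callback) {
      if (!decipher) {
        // First chunk: peel off the IV and create the decipher lazily.
        // (Assumes the first chunk is at least initVectLength bytes, which holds for fs read streams.)
        const initVect = chunk.subarray(0, initVectLength);
        decipher = crypto.createDecipheriv(algorithm, cipherKey, initVect);
        chunk = chunk.subarray(initVectLength);
      }
      // Buffers go through decipher.update(), not pipe().
      callback(null, decipher.update(chunk));
    },
    flush(callback) {
      try {
        callback(null, decipher ? decipher.final() : null);
      } catch (err) {
        callback(err);
      }
    }
  });
}

function decrypt({ file, newFile, passphrase }) {
  pipeline(
    fs.createReadStream(file),
    getDecryptionStream(passphrase, IV_LENGTH),
    zlib.createUnzip(),
    fs.createWriteStream(newFile),
    (err) => { if (err) console.error(err); }
  );
}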
I am able to download the file from the S3 bucket like so:
const fileStream = s3.getObject(options).createReadStream();
const writableStream = createWriteStream(
  "./files/master_driver_profile_pic/image.jpeg"
);
fileStream.pipe(fileStream).pipe(writableStream);
But the image is not getting written properly. Only a little bit of the image is visible and the rest is blank.
I think you should first createWriteStream and then createReadStream. (Check the docs)
var s3 = new AWS.S3();
var params = {Bucket: 'myBucket', Key: 'myImageFile.jpg'};
var file = require('fs').createWriteStream('/path/to/file.jpg');
s3.getObject(params).createReadStream().pipe(file);
OR
you can go without streams:
// Download file
let content = (await s3.getObject(params).promise()).Body;
// Write file
fs.writeFile(downloadPath, content, (err) => {
  if (err) { console.log(err); }
});
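For what it's worth, if you are on the v3 SDK, the GetObjectCommand response Body is already a readable stream in Node, so it can be piped straight to disk. A sketch using the same bucket and key (the client setup is an assumption):
const { S3Client, GetObjectCommand } = require('@aws-sdk/client-s3');
const { createWriteStream } = require('fs');
const { pipeline } = require('stream/promises');

const s3 = new S3Client();

async function download() {
  // Body is a readable stream in Node.js, so pipe it straight to the file.
  const { Body } = await s3.send(
    new GetObjectCommand({ Bucket: 'myBucket', Key: 'myImageFile.jpg' })
  );
  await pipeline(Body, createWriteStream('/path/to/file.jpg'));
}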
I have put together the below code that creates a CSV called example.csv, using the json2csv library.
I would prefer not to have to save and store the CSV file before it is passed to the front end to be downloaded.
I can't seem to figure out how to stream or pipe the file to the front end, without saving it first.
How can I take the output CSV from the json2csv library and send it straight to the front end?
Some of my code:
const input = new Readable({ objectMode: true });
input._read = () => {};
input.push(JSON.stringify(parsed));
input.push(null);
var outputPath = "example.csv";
const output = createWriteStream(outputPath, { encoding: "utf8" });
const json2csv = new Transform(opts, transformOpts);
// Pipe to output path
input.pipe(json2csv).pipe(output);
res.setHeader("Content-Type", "text/csv");
res.download(outputPath);
You can simply pipe the json2csv stream to the res object, e.g.:
const ReadableStream = require('stream').Readable;
const Json2csvTransform = require('json2csv').Transform;
app.get('/csv', (req, res) => {
  const stream = new ReadableStream();
  stream.push(JSON.stringify(data));
  stream.push(null);
  const json2csv = new Json2csvTransform({}, transformOpts);
  res.setHeader('Content-disposition', 'attachment; filename=data.csv');
  res.writeHead(200, { 'Content-Type': 'text/csv' });
  stream.pipe(json2csv).pipe(res);
})
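If the data set is small enough to sit in memory anyway (it gets JSON.stringify'd into the readable stream above), a non-streaming variant using json2csv's parse function is arguably simpler. A sketch assuming the same data object:
const { parse } = require('json2csv');

app.get('/csv', (req, res) => {
  const csv = parse(data); // synchronous conversion; fine for small payloads
  res.setHeader('Content-disposition', 'attachment; filename=data.csv');
  res.setHeader('Content-Type', 'text/csv');
  res.send(csv);
});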
I'm running Node.js code on a read-only file system and I would like to download a file and convert it directly to a base64 string, without writing the file to disk.
Now I have the following:
let file = fs.createWriteStream(`file.jpg`);
request({
  uri: fileUrl
})
  .pipe(file).on('finish', () => {
    let buff = fs.readFileSync('file.jpg');
    let base64data = buff.toString('base64');
  });
But this solution writes to the disk, so it's not possible for me.
I would like to do the same, but without the need for a temp file on disk. Is it possible?
You don't pipe() into a variable. You collect the data off the stream into a variable as the data arrives. I think you can do something like this:
const Base64Encode = require('base64-stream').Base64Encode;
const request = require('request');
let base64Data = "";
request({
  uri: fileUrl
}).pipe(new Base64Encode()).on('data', data => {
  base64Data += data;
}).on('finish', () => {
  console.log(base64Data);
}).on('error', err => {
  console.log(err);
});
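If you'd rather not pull in base64-stream, a variant of the same idea is to collect the raw chunks and encode once at the end (encoding chunk by chunk yourself would risk splitting base64 groups across chunk boundaries). A sketch assuming the same fileUrl:
const request = require('request');

const chunks = [];
request({ uri: fileUrl })
  .on('data', chunk => chunks.push(chunk)) // collect raw bytes as they arrive
  .on('end', () => {
    // encode once so base64 padding isn't split across chunks
    const base64Data = Buffer.concat(chunks).toString('base64');
    console.log(base64Data);
  })
  .on('error', err => console.log(err));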
I am using pdfkit on my node server, typically creating pdf files, and then uploading them to s3.
The problem is that the pdfkit examples pipe the pdf doc into a Node write stream, which writes the file to disk. I followed the example and it worked correctly; however, my requirement now is to pipe the pdf doc to a memory stream rather than saving it to disk (I am uploading to S3 anyway).
I've followed some Node memory stream approaches, but none of them seem to work with the pdf pipe for me; I could only write strings to memory streams.
So my question is: How to pipe the pdf kit output to a memory stream (or something alike) and then read it as an object to upload to s3?
var fsStream = fs.createWriteStream(outputPath + fileName);
doc.pipe(fsStream);
An updated answer for 2020. There is no need to introduce a new memory stream because "PDFDocument instances are readable Node streams".
You can use the get-stream package to make it easy to wait for the document to finish before passing the result back to your caller.
https://www.npmjs.com/package/get-stream
const PDFDocument = require('pdfkit')
const getStream = require('get-stream')
const pdf = async () => {
  const doc = new PDFDocument()
  doc.text('Hello, World!')
  doc.end()
  return await getStream.buffer(doc)
}
// Caller could do this:
const pdfBuffer = await pdf()
const pdfBase64string = pdfBuffer.toString('base64')
You don't have to return a buffer if your needs are different. The get-stream readme offers other examples.
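Note that newer major versions of get-stream are ESM-only and renamed the helpers, so depending on the version you install the buffer call may look like this instead (check the readme for your version):
import { getStreamAsBuffer } from 'get-stream'
const pdfBuffer = await getStreamAsBuffer(doc)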
There's no need to use an intermediate memory stream [1] – just pipe the pdfkit output stream directly into an HTTP upload stream.
In my experience, the AWS SDK is garbage when it comes to working with streams, so I usually use request.
var upload = request({
  method: 'PUT',
  url: 'https://bucket.s3.amazonaws.com/doc.pdf',
  aws: { bucket: 'bucket', key: ..., secret: ... }
});
doc.pipe(upload);
[1] In fact, it is usually undesirable to use a memory stream because that means buffering the entire thing in RAM, which is exactly what streams are supposed to avoid!
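Since request has since been deprecated, a rough present-day equivalent is the Upload helper from @aws-sdk/lib-storage, which also accepts a readable stream as the Body, so the document still never has to be buffered in full. A sketch (bucket and key are placeholders):
const { S3Client } = require('@aws-sdk/client-s3');
const { Upload } = require('@aws-sdk/lib-storage');

async function uploadPdf(doc) {
  const upload = new Upload({
    client: new S3Client(),
    params: { Bucket: 'bucket', Key: 'doc.pdf', Body: doc, ContentType: 'application/pdf' },
  });
  doc.end();            // finish the PDF; Upload consumes the stream as it is produced
  await upload.done();  // resolves when the upload completes
}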
You could try something like this, and upload it to S3 inside the end event.
var doc = new pdfkit();
var MemoryStream = require('memorystream');
var memStream = new MemoryStream(null, {
  readable: false
});
doc.pipe(memStream);
doc.on('end', function () {
  var buffer = Buffer.concat(memStream.queue);
  awsservice.putS3Object(buffer, fileName, fileType, folder).then(function () { }, reject);
})
A tweak of @bolav's answer worked for me when working with pdfmake rather than pdfkit. First you need to add memorystream to your project using npm or yarn.
const MemoryStream = require('memorystream');
const PdfPrinter = require('pdfmake');
const pdfPrinter = new PdfPrinter();
const docDef = {};
const pdfDoc = pdfPrinter.createPdfKitDocument(docDef);
const memStream = new MemoryStream(null, {readable: false});
const pdfDocStream = pdfDoc.pipe(memStream);
pdfDoc.end();
pdfDocStream.on('finish', () => {
  console.log(Buffer.concat(memStream.queue));
});
My code to return a base64 string for pdfkit:
import * as PDFDocument from 'pdfkit'
import getStream from 'get-stream'
const pdf = {
  createPdf: async (text: string) => {
    const doc = new PDFDocument()
    doc.fontSize(10).text(text, 50, 50)
    doc.end()
    const data = await getStream.buffer(doc)
    let b64 = Buffer.from(data).toString('base64')
    return b64
  }
}
export default pdf
Thanks to Troy's answer, mine worked with get-stream as well. The difference is that I did not convert it to a base64 string, but rather uploaded it to AWS S3 as a buffer.
Here is my code:
import PDFDocument from 'pdfkit'
import getStream from 'get-stream';
import { PutObjectCommand } from '@aws-sdk/client-s3';
import s3Client from 'your s3 config file';
const pdfGenerator = () => {
  const doc = new PDFDocument();
  doc.text('Hello, World!');
  doc.end();
  return doc;
}

const uploadFile = async () => {
  const pdf = pdfGenerator();
  const pdfBuffer = await getStream.buffer(pdf)
  await s3Client.send(
    new PutObjectCommand({
      Bucket: 'bucket-name',
      Key: 'filename.pdf',
      Body: pdfBuffer,
      ContentType: 'application/pdf',
    })
  );
}

uploadFile()