HTML to PDF with Node.js

I'm looking to create a printable PDF version of my website's webpages. Something like express.render(), only rendering the page as a PDF.
Does anyone know a node module that does that?
If not, how would you go about implementing one? I've seen some methods talk about using a headless browser like phantom.js, but I'm not sure what the flow is.

Extending upon Mustafa's answer.
A) Install PhantomJS (http://phantomjs.org/), then
B) install the phantom node module: https://github.com/amir20/phantomjs-node
C) Here is an example of rendering a PDF:
var phantom = require('phantom');

phantom.create().then(function(ph) {
    ph.createPage().then(function(page) {
        page.open("http://www.google.com").then(function(status) {
            page.render('google.pdf').then(function() {
                console.log('Page Rendered');
                ph.exit();
            });
        });
    });
});
EDIT: Silent printing that PDF
java -jar pdfbox-app-2.0.2.jar PrintPDF -silentPrint C:\print_mypdf.pdf
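If you want to trigger that print from Node itself instead of a shell, here is a minimal sketch using child_process (it assumes java is on your PATH and the jar/file paths below are yours):
const { execFile } = require('child_process');

// Spawn PDFBox's PrintPDF tool to silently print the file.
execFile('java', ['-jar', 'pdfbox-app-2.0.2.jar', 'PrintPDF', '-silentPrint', 'C:\\print_mypdf.pdf'], (err) => {
    if (err) return console.error(err);
    console.log('Sent to printer');
});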

Phantom.js is a headless WebKit server that will load any web page and render it in memory. Although you might not be able to see it, there is a Screen Capture feature with which you can export the current view as PNG, PDF, JPEG or GIF. Have a look at this example from the phantom.js documentation.

If you want to export HTML to PDF, you have many options, even without Node.
Option 1: Have a button on your HTML page that calls the window.print() function, and use the browser's native HTML-to-PDF printing. Use media queries to make your HTML page look good in a PDF, and you also have the beforeprint and afterprint events that you can use to make changes to your page before printing (see the sketch after this list).
Option 2: htmltocanvas or rasterizeHTML. Convert your HTML to a canvas, then call toDataURL() on the canvas object to get the image, and use a JavaScript library like jsPDF to add that image to a PDF file. The disadvantage of this approach is that the PDF is not editable. If you want data extracted from the PDF, there are different ways to do that.
Option 3: Jozzhart's answer.
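For Option 1, a minimal sketch of the print-button approach (the selectors and class names are examples):
<style>
    /* Hide page chrome that shouldn't appear in the printed PDF */
    @media print {
        nav, footer { display: none; }
    }
</style>
<button onclick="window.print()">Save as PDF</button>
<script>
    // Adjust the page right before the print dialog renders it, and restore it after.
    window.addEventListener('beforeprint', function () {
        document.body.classList.add('printing');
    });
    window.addEventListener('afterprint', function () {
        document.body.classList.remove('printing');
    });
</script>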

Try using Puppeteer to create a PDF from HTML.
Example from here: https://github.com/chuongtrh/html_to_pdf
Or https://github.com/GoogleChrome/puppeteer
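For reference, a minimal sketch of that flow (the URL and output path are placeholders):
const puppeteer = require('puppeteer');

(async () => {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    // networkidle0 waits until the page has finished loading its resources
    await page.goto('https://example.com', { waitUntil: 'networkidle0' });
    await page.pdf({ path: 'page.pdf', format: 'A4', printBackground: true });
    await browser.close();
})();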

The best solution I found is html-pdf. It's simple and works with large HTML.
https://www.npmjs.com/package/html-pdf
It's as simple as this:
var pdf = require('html-pdf');

pdf.create(html, options).toFile('./pdfname.pdf', function(err, res) {
    if (err) {
        console.log(err);
    }
});
NOTE:
This package has been deprecated
Author message: Please migrate your projects to a newer library like puppeteer

Package
I used html-pdf.
Easy to use, and it allows you not only to save the PDF as a file, but also to pipe the PDF content to a WriteStream (so I could stream it directly to Google Storage to save my reports there).
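For instance, a sketch of the streaming variant using html-pdf's toStream (the destination here is a local file, but any WriteStream works):
var pdf = require('html-pdf');
var fs = require('fs');

pdf.create('<h1>Report</h1>').toStream(function (err, stream) {
    if (err) return console.error(err);
    // Pipe anywhere a WriteStream is accepted, e.g. a cloud storage upload stream.
    stream.pipe(fs.createWriteStream('./report.pdf'));
});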
Using css + images
It takes CSS into account. The only problem I faced was that it ignored my images. The solution I found was to replace the URL in the src attribute value with base64, e.g.
<img src="data:image/png;base64,iVBOR...kSuQmCC">
You can do this in your own code or use one of the online converters, e.g. https://www.base64-image.de/
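If you'd rather do the conversion in code, a small sketch (the file path and MIME type are examples):
var fs = require('fs');

// Read the image and build a data URI to inline into the <img> src attribute.
var b64 = fs.readFileSync('./logo.png').toString('base64');
var src = 'data:image/png;base64,' + b64;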
Compile valid html code from html fragment + css
I had to get a fragment of my HTML document (I just applied the .html() method on a jQuery selector).
Then I read the content of the relevant CSS file.
Using these two values (stored in the variables html and css accordingly) I compiled valid HTML using a template string:
var htmlContent = `
<!DOCTYPE html>
<html>
    <head>
        <style>
            ${css}
        </style>
    </head>
    <body id=direct-sellers-bill>
        ${html}
    </body>
</html>`
and passed it to the create method of html-pdf.
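That last step looks roughly like this (the output filename and options are examples):
var pdf = require('html-pdf');

pdf.create(htmlContent, { format: 'A4' }).toFile('./bill.pdf', function (err, res) {
    if (err) return console.error(err);
    console.log(res.filename);
});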

Create PDF from External URL
Here's an adaptation of the previous answers which utilizes html-pdf, but also combines it with requestify so it works with an external URL:
Install your dependencies
npm i -S html-pdf requestify
Then, create the script:
//MakePDF.js
var pdf = require('html-pdf');
var requestify = require('requestify');

var externalURL = 'http://www.google.com';

requestify.get(externalURL).then(function (response) {
    // Get the raw HTML response body
    var html = response.body;
    var config = { format: 'A4' }; // or format: 'letter' - see https://github.com/marcbachmann/node-html-pdf#options
    // Create the PDF
    pdf.create(html, config).toFile('pathtooutput/generated.pdf', function (err, res) {
        if (err) return console.log(err);
        console.log(res); // { filename: '/pathtooutput/generated.pdf' }
    });
});
Then you just run from the command line:
node MakePDF.js
Watch your beautiful, pixel-perfect PDF be created for you (for free!)

For those who don't want to install PhantomJS along with an instance of Chrome/Firefox on their server - or because the PhantomJS project is currently suspended - here's an alternative.
You can externalize the conversion to an API. Many exist and vary, but what you'll get is a reliable service with up-to-date features (I'm thinking CSS3, web fonts, SVG, canvas compatibility).
For instance, with PDFShift (disclaimer, I'm the founder), you can do this simply by using the request package:
const request = require('request')

request.post(
    'https://api.pdfshift.io/v2/convert/',
    {
        'auth': {'user': 'your_api_key'},
        'json': {'source': 'https://www.google.com'},
        'encoding': null
    },
    (error, response, body) => {
        if (response === undefined) {
            return console.error({'message': 'Invalid response from the server.', 'code': 0})
        }
        if (response.statusCode == 200) {
            // Do what you want with `body`, which contains the binary PDF
            // Like returning it to the client - or saving it as a file locally or on AWS S3
            return true
        }
        // Handle any errors that might have occurred
    }
);

Use html-pdf
var fs = require('fs');
var pdf = require('html-pdf');

var html = fs.readFileSync('./test/businesscard.html', 'utf8');
var options = { format: 'Letter' };

pdf.create(html, options).toFile('./businesscard.pdf', function(err, res) {
    if (err) return console.log(err);
    console.log(res); // { filename: '/app/businesscard.pdf' }
});

const fs = require('fs')
const path = require('path')
const utils = require('util')
const puppeteer = require('puppeteer')
const hb = require('handlebars')
const readFile = utils.promisify(fs.readFile)
async function getTemplateHtml() {
    console.log("Loading template file in memory")
    try {
        const invoicePath = path.resolve("./invoice.html");
        return await readFile(invoicePath, 'utf8');
    } catch (err) {
        return Promise.reject("Could not load html template");
    }
}

async function generatePdf() {
    let data = {};
    getTemplateHtml()
        .then(async (res) => {
            // Now we have the html code of our template in the res object
            // you can check by logging it on the console
            // console.log(res)
            console.log("Compiling the template with handlebars")
            const template = hb.compile(res, { strict: true });
            // we have compiled our code with handlebars
            const result = template(data);
            // We can use this to add dynamic data to our handlebars template at run time from a database or API as needed. You can read the official docs to learn more: https://handlebarsjs.com/
            const html = result;
            // we are using headless mode
            const browser = await puppeteer.launch();
            const page = await browser.newPage()
            // We set the page content to the generated html from handlebars
            await page.setContent(html)
            // we use the pdf function to generate the pdf in the same folder as this file.
            await page.pdf({ path: 'invoice.pdf', format: 'A4' })
            await browser.close();
            console.log("PDF Generated")
        })
        .catch(err => {
            console.error(err)
        });
}

generatePdf();

In case you arrived here looking for a way to make PDFs from view templates in Express, a colleague and I made express-template-to-pdf,
which allows you to generate PDFs from whatever templates you're using in Express - Pug, Nunjucks, whatever.
It depends on html-pdf and is written to be used in your routes just like you use res.render:
const pdfRenderer = require('#ministryofjustice/express-template-to-pdf')
app.set('views', path.join(__dirname, 'views'))
app.set('view engine', 'pug')
app.use(pdfRenderer())
If you've used res.render then using it should look obvious:
app.use('/pdf', (req, res) => {
    res.renderPDF('helloWorld', { message: 'Hello World!' });
})
You can pass options through to html-pdf to control the PDF document page size, etc.
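A sketch of what that might look like; the shape of the third argument is an assumption here, so check the package README for the exact key names:
app.use('/pdf', (req, res) => {
    // Hypothetical options pass-through: the keys mirror html-pdf's options.
    res.renderPDF('helloWorld', { message: 'Hello World!' }, { options: { format: 'A4' } });
})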
Merely building on the excellent work of others.

In my view, the best way to do this is via an API, so that you don't add a large and complex dependency into your app that runs unmanaged code and needs to be frequently updated.
Here is a simple way to do this, which is free for 800 requests/month:
var CloudmersiveConvertApiClient = require('cloudmersive-convert-api-client');

var defaultClient = CloudmersiveConvertApiClient.ApiClient.instance;
// Configure API key authorization: Apikey
var Apikey = defaultClient.authentications['Apikey'];
Apikey.apiKey = 'YOUR API KEY';

var apiInstance = new CloudmersiveConvertApiClient.ConvertWebApi();
var input = new CloudmersiveConvertApiClient.HtmlToPdfRequest(); // HtmlToPdfRequest | HTML to PDF request parameters
input.Html = "<b>Hello, world!</b>";

var callback = function(error, data, response) {
    if (error) {
        console.error(error);
    } else {
        console.log('API called successfully. Returned data: ' + data);
    }
};

apiInstance.convertWebHtmlToPdf(input, callback);
With the above approach you can also install the API on-premises or on your own infrastructure if you prefer.

In addition to Jozzhart's answer, you can make a local HTML file, serve it with Express, and use phantom to make a PDF from it; something like this:
const exp = require('express');
const app = exp();
const pth = require("path");
const phantom = require('phantom');
const ip = require("ip");

const PORT = 3000;
const PDF_SOURCE = "index"; //index.html
const PDF_OUTPUT = "out"; //out.pdf

const source = pth.join(__dirname, "", `${PDF_SOURCE}.html`);
const output = pth.join(__dirname, "", `${PDF_OUTPUT}.pdf`);

app.use("/" + PDF_SOURCE, exp.static(source));
app.use("/" + PDF_OUTPUT, exp.static(output));
app.listen(PORT);

let makePDF = async (fn) => {
    let local = `http://${ip.address()}:${PORT}/${PDF_SOURCE}`;
    phantom.create().then((ph) => {
        ph.createPage().then((page) => {
            page.open(local).then(() =>
                page.render(output).then(() => { ph.exit(); fn() })
            );
        });
    });
}

makePDF(() => {
    console.log("PDF Created From Local File");
    console.log("PDF is downloadable from link:");
    console.log(`http://${ip.address()}:${PORT}/${PDF_OUTPUT}`);
});
and index.html can be anything:
<h1>PDF HEAD</h1>
LINK

https://www.npmjs.com/package/dynamic-html-pdf
I use dynamic-html-pdf; it is simple and can also pass dynamic variables to the HTML:
var pdf = require('dynamic-html-pdf');
var fs = require('fs');

var html = fs.readFileSync('./uploads/your-html-tpl.html', 'utf8');

var options = {
    format: "A4",
    orientation: "portrait"
    // border: "10mm"
};

var document = {
    type: 'file', // 'file' or 'buffer'
    template: html,
    context: {
        'your_key': 'your_values'
    },
    path: '/pdf/1.pdf' // pdf save path
};

pdf.create(document, options)
    .then(res => {
        console.log(res)
    }).catch(error => {
        console.error(error)
    });
In the HTML you can use {{your_key}}.

I've written the hpdf lib for generating PDFs from HTML or a URL.
It supports a configurable pool of headless browsers (as resources) in the background.
import fs from 'fs';
import { PdfGenerator } from 'hpdf'; // the published package name

const start = async () => {
    const generator = new PdfGenerator({
        min: 3,
        max: 10,
    });

    const helloWorld = await generator.generatePDF('<html lang="html">Hello World!</html>');
    const github = await generator.generatePDF(new URL('https://github.com/frimuchkov/hpdf'));

    await fs.promises.writeFile('./helloWorld.pdf', helloWorld);
    await fs.promises.writeFile('./github.pdf', github);
    await generator.stop();
}

start();

I wanted to add to this since I did not see an option to create PDFs from Liquid templates yet, but the solution also works with normal HTML or URLs as well.
Let's say this is our HTML template. It could be anything really, but notice that the code includes double curly braces. The key inside the braces will be looked up in the liquid_data parameter of the request and replaced by its value.
<html>
    <body>
        <h1>{{heading}}</h1>
        <img src="{{img_url}}"/>
    </body>
</html>
The corresponding liquid_data object looks like this:
{
    "heading": "Hi Stackoverflow!",
    "img_url": "https://stackoverflow.design/assets/img/logos/so/logo-stackoverflow.svg"
}
This is the example I want to create a PDF for. Using pdfEndpoint and the Playground, creating a PDF from the template above is very simple.
const axios = require("axios");

const options = {
    method: "POST",
    url: "https://api.pdfendpoint.com/v1/convert",
    headers: {
        "Content-Type": "application/json",
        "Authorization": "Bearer SIGN-UP-FOR-KEY"
    },
    data: {
        "delivery_mode": "json",
        "page_size": "A4",
        "margin_top": "1cm",
        "margin_bottom": "1cm",
        "margin_left": "1cm",
        "margin_right": "1cm",
        "orientation": "vertical",
        "html": "<html><body> <h1>{{heading}}</h1> <img src=\"{{img_url}}\"/> </body></html>",
        "parse_liquid": true,
        "liquid_data": "{ \"heading\":\"Hi Stackoverflow!\", \"img_url\":\"https://stackoverflow.design/assets/img/logos/so/logo-stackoverflow.svg\"}"
    }
};

axios.request(options).then(function (response) {
    console.log(response.data);
}).catch(function (error) {
    console.error(error);
});
The service will then return the rendered PDF.

You can also use the pdf-creator-node package.
Package URL:
https://www.npmjs.com/package/pdf-creator-node
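A rough sketch of its usage, assuming a template file and data of your own (the API closely mirrors dynamic-html-pdf):
var pdf = require('pdf-creator-node');
var fs = require('fs');

var html = fs.readFileSync('./template.html', 'utf8');

var document = {
    html: html,
    data: { user: 'Jane' }, // available in the template as {{user}}
    path: './output.pdf'
};

pdf.create(document, { format: 'A4', orientation: 'portrait' })
    .then(function (res) { console.log(res); })
    .catch(function (err) { console.error(err); });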

Related

Open Puppeteer with specific configuration (download PDF instead of PDF viewer)

I would like to open Chromium with a specific configuration.
I am looking for the configuration to activate the following option:
Settings => Site Settings => Permissions => PDF documents => "Download PDF files instead of automatically opening them in Chrome"
I searched the tags on this command line switch page, but the only parameter that deals with PDF is --print-to-pdf, which does not correspond to my need.
Do you have any ideas?
There is no option you can pass into Puppeteer to force PDF downloads. However, you can use the Chrome DevTools Protocol to add a content-disposition: attachment response header to force downloads.
The flow: intercept each paused response, inspect its content-type header, inject the attachment header where needed, then fulfill the request with the original body.
I'll include the full example code below. In the example below, PDF files and XML files will be downloaded in headful mode.
const puppeteer = require('puppeteer');

(async () => {
    const browser = await puppeteer.launch({
        headless: false,
        defaultViewport: null,
    });
    const page = await browser.newPage();
    const client = await page.target().createCDPSession();

    await client.send('Fetch.enable', {
        patterns: [
            {
                urlPattern: '*',
                requestStage: 'Response',
            },
        ],
    });

    await client.on('Fetch.requestPaused', async (reqEvent) => {
        const { requestId } = reqEvent;
        let responseHeaders = reqEvent.responseHeaders || [];
        let contentType = '';

        for (let elements of responseHeaders) {
            if (elements.name.toLowerCase() === 'content-type') {
                contentType = elements.value;
            }
        }

        if (contentType.endsWith('pdf') || contentType.endsWith('xml')) {
            responseHeaders.push({
                name: 'content-disposition',
                value: 'attachment',
            });

            const responseObj = await client.send('Fetch.getResponseBody', {
                requestId,
            });

            await client.send('Fetch.fulfillRequest', {
                requestId,
                responseCode: 200,
                responseHeaders,
                body: responseObj.body,
            });
        } else {
            await client.send('Fetch.continueRequest', { requestId });
        }
    });

    await page.goto('https://pdf-xml-download-test.vercel.app/');
    await page.waitFor(100000);

    await client.send('Fetch.disable');
    await browser.close();
})();
For a more detailed explanation, please refer to the Git repo I've set up with comments. It also includes example code for Playwright.
Puppeteer currently does not support navigating to (or downloading) PDFs in headless mode that easily. Quote from the docs for the page.goto function:
NOTE Headless mode doesn't support navigation to a PDF document. See the upstream issue.
What you can do, though, is detect whether the browser is navigating to the PDF file and then download it yourself via Node.js.
Code sample
const puppeteer = require('puppeteer');
const http = require('http');
const fs = require('fs');

(async () => {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();

    page.on('request', req => {
        if (req.url() === '...') {
            const file = fs.createWriteStream('./file.pdf');
            http.get(req.url(), response => response.pipe(file));
        }
    });

    await page.goto('...');
    await browser.close();
})();
This navigates to a URL and monitors the ongoing requests. If the "matched request" is found, Node.js will manually download the file via http.get and pipe it into file.pdf. Please be aware that this is a minimal working example. You will want to catch errors when downloading and might also want to use something more sophisticated than http.get depending on the situation.
Future note
In the future, there might be an easier way to do it. When Puppeteer supports response interception, you will be able to simply force the browser to download a document, but right now this is not supported (May 2019).

MySQL data to Excel file

I am working with Node.js and React, and I have data in MySQL storage.
Ultimately I need to let the user download the data in Excel format. We can do this either in Node.js or in React.
I tried to create a file in Node using the excel4node package. The file gets created successfully, but when I send the file, it is not in Excel format (some XML files and folders). I used downloadjs on the frontend to trigger the auto-download.
router.get('/:year/:month', async (req, res, next) => {
    res.setHeader('Content-Type', 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet');
    res.setHeader('Content-Disposition', 'attachment; filename=' + 'Report.xlsx');
    res.sendFile(path.resolve('downloads/excel.xlsx'));
});
import downloadjs from 'downloadjs';

export const getReport = async (year, month) => {
    let res = await fetch(`${url}/get-report/${year}/${month}`, {
        method: 'GET',
        mode: 'cors',
    })
    let blob = await res.blob();
    await downloadjs(blob);
};
This downloads a zip folder which has a list of XML files.
I tried to create the file in React (client side) by sending JSON from the backend; for this I used the react-excel-workbook package, but it needs predefined data: when you click, the file gets downloaded immediately with dummy data, and it doesn't wait for the async action to resolve.
Any help will be appreciated.
Or should I send the JSON from the backend and, on the client side, convert it into CSV and trigger the download?
Write the file directly to the Response object, instead of going through an intermediate file
var xl = require('excel4node');
var wb = new xl.Workbook();

// sends Excel file to web client requesting the / route
// server will respond with 500 error if excel workbook cannot be generated
var express = require('express');
var app = express();

app.get('/', function(req, res) {
    wb.write('ExcelFile.xlsx', res);
});

app.listen(3000, function() {
    console.log('Example app listening on port 3000!');
});
Late answer, but you should specify the content type when you are creating the blob in your frontend, then create a link in your DOM and tell the browser that the file must be downloaded:
axios.get(`${yourBackendUrl}/path/to/export`, {
    responseType: 'blob',
    headers: {
        'Authorization': `Bearer ${token}` // Or any auth method
    }
}).then(res => {
    // Specify the content type on the Blob itself
    const url = window.URL.createObjectURL(new Blob([res.data], { type: 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet' }));
    const link = document.createElement('a'); // attach link to DOM
    link.href = url;
    link.setAttribute('download', 'File.xlsx');
    document.body.appendChild(link);
    link.click(); // Auto-download the file
    link.remove(); // Remove the link
}).catch(err => {
    console.log(err);
})

Efficient way to read a file in Node.js

I am receiving an image file sent from an Ajax request:
var data = canvas.toDataURL('image/jpg', 1.0);

$.post({
    url: "/upload-image",
    data: {
        file: data
    }
}).done(function(response) {
    // ....
});
And on the server side, I want to transmit the image file to an API:
function getOptions(buffer) {
    return {
        url: '.../face_detection',
        headers: headers,
        method: 'POST',
        formData: {
            filename: buffer
        }
    }
}

router.post('/upload-image', function(req, res, next) {
    console.log('LOG 0' + Date.now());
    var data_url = req.body.file;
    var matches = data_url.match(/^data:.+\/(.+);base64,(.*)$/);
    var ext = matches[1];
    var base64_data = matches[2];
    var buffer = new Buffer(base64_data, 'base64');
    console.log('LOG 1' + Date.now());

    request(getOptions(buffer), function(error, response, body) {
        res.json(body);
        console.log(Date.now());
    });
});
The problem I have is that the lines between LOG 0 and LOG 1 are very slow, a few seconds, yet the image is only 650 kB. Is there a way to accelerate this?
Using another method to read the header, avoiding the buffer, changing the upload process - I don't know, but I'd like it to be faster.
Thank you very much.
I would suggest using a library to handle some of this logic. If you would prefer to keep a lean dependency list, you can take a look at the source of some of these modules and base your own solution off of them.
For converting a data URI to a buffer: data-uri-to-buffer
For figuring out a file type: file-type
I would especially recommend the file-type solution. A safer (I can't say safest) way to ensure what kind of file a Buffer contains is to inspect aspects of the file itself. file-type seems to at least look at the magic number of the file to check its type. Not foolproof, but if you are accepting files from users, you have to accept the risks involved.
Also have a look at these Security Stack Exchange questions for good practices. Although the following mention PHP, all server software runs the risk of being vulnerable to user input:
Hacker used picture upload to get PHP code into my site
Can simply decompressing a JPEG image trigger an exploit?
Risks of a PHP image upload form
"use strict";
const dataUriToBuffer = require('data-uri-to-buffer'),
fileType = require("file-type"),
express = require("express"),
router = express.Router(),
util = require("util"),
fs = require("fs"),
path = require("path");
const writeFileAsync = util.promisify(fs.writeFile);
// Keep track of file types you support
const supportedTypes = [
"png",
"jpg",
"gif"
];
// Handle POSTs to upload-image
router.post("/upload-image", function (req, res, next) {
// Did they send us a file?
if (!req.body.file) {
// Unprocessable entity error
return res.sendStatus(422);
}
// Get the file to a buffer
const buff = dataUriToBuffer(req.body.file);
// Get the file type
const bufferMime = fileType(buff); // {ext: 'png', mime: 'image/png'}
// Is it a supported file type?
if (!supportedTypes.contains(bufferMime.ext)) {
// Unsupported media type
return res.sendStatus(415);
}
// Save or do whatever with the file
writeFileAsync(path.join("imageDir", `userimage.${bufferMime.ext}`), buff)
// Tell the user that it's all done
.then(() => res.sendStatus(200))
// Log the error and tell the user the save failed
.catch((err) => {
console.error(err);
res.sendStatus(500);
});
});

node.js - streaming upload to cloud storage (busboy, request)

I'm new to node.js. What I'm trying to do is stream the upload of a file from the web browser to cloud storage through my node.js server.
I'm using the 'express', 'request' and 'busboy' modules.
var express = require("express");
var request = require("request");
var BusBoy = require("busboy");

var router = express.Router();

router.post("/upload", function(req, res, next) {
    var busboy = new BusBoy({ headers: req.headers });
    var json = {};

    busboy.on("file", function (fieldname, file, filename, encoding, mimetype) {
        file.on("data", function(data) {
            console.log(`streamed ${data.length}`);
        });

        file.on("end", function() {
            console.log(`finished streaming ${filename}`);
        });

        var r = request({
            url: "http://<my_cloud_storage_api_url>",
            method: "POST",
            headers: {
                "CUSTOM-HEADER": "Hello",
            },
            formData: {
                "upload": file
            }
        }, function(err, httpResponse, body) {
            console.log("uploaded");
            json.response = body;
        });
    });

    busboy.on("field", function(name, val) {
        console.log(`name: ${name}, value: ${val}`);
    });

    busboy.on("finish", function() {
        res.send(json);
    });

    req.pipe(busboy);
});

module.exports = router;
But I keep getting the following error on the server. What am I doing wrong here? Any help is appreciated.
Error: Part terminated early due to unexpected end of multipart data
    at node_modules\busboy\node_modules\dicer\lib\Dicer.js:65:36
    at nextTickCallbackWith0Args (node.js:420:9)
    at process._tickCallback (node.js:349:13)
I realize this question is some 7 months old, but I shall answer it here in an attempt to help anyone else currently banging their head against this.
You have two options, really: add the file size, or use something other than Request.
Note: I edited this shortly after first posting it to hopefully provide a bit more context.
Using Something Else
There are some alternatives you can use instead of Request if you don't need all the baked-in features it has.
form-data can be used by itself in simple cases, or it can be used with, say, got. request uses this internally.
bhttp advertises Streams2+ support, although in my experience Streams2+ support has not been an issue for me. There is no built-in https support; you have to specify a custom agent.
got is another slimmed-down one. It doesn't have any special handling of form data like request does, but it is trivially used with form-data or form-data2. I had trouble getting it working over a corporate proxy, though, but that's likely because I'm a networking newb.
needle seems pretty lightweight, but I haven't actually tried it.
Using Request: Add the File Size
Request does not (as of writing) have any support for using transfer-encoding: chunked, so to upload files with it you need to add the file's size along with the file. If you're uploading from a web client, that means the client needs to send the file size to your server in addition to the file itself.
The way I came up with to do this is to send the file metadata in its own field, before the file field.
I modified your example with comments describing what I did. Note that I did not include any validation of the data received, but I recommend you do add that.
var express = require("express");
var request = require("request");
var BusBoy = require("busboy");

var router = express.Router();

router.post("/upload", function(req, res, next) {
    var busboy = new BusBoy({ headers: req.headers });
    var json = {};

    // Use this to cache any fields which are file metadata.
    var fileMetas = {};

    busboy.on("file", function (fieldname, file, filename, encoding, mimetype) {
        // Be sure to match this prop name here with the pattern you use to detect meta fields.
        var meta = fileMetas[fieldname + '.meta'];

        if (!meta) {
            // Make sure to dump the file.
            file.resume();
            // Then, do some sort of error handling here, because you cannot upload a file
            // without knowing its length.
            return;
        }

        file.on("data", function(data) {
            console.log(`streamed ${data.length}`);
        });

        file.on("end", function() {
            console.log(`finished streaming ${filename}`);
        });

        var r = request({
            url: "http://<my_cloud_storage_api_url>",
            method: "POST",
            headers: {
                "CUSTOM-HEADER": "Hello",
            },
            formData: {
                // value + options form of a formData field.
                "upload": {
                    value: file,
                    options: {
                        filename: meta.name,
                        knownLength: meta.size
                    }
                }
            }
        }, function(err, httpResponse, body) {
            console.log("uploaded");
            json.response = body;
        });
    });

    busboy.on("field", function(name, val) {
        // Use whatever pattern you want. I used (fileFieldName + ".meta").
        // Another good one might be ("meta:" + fileFieldName).
        if (/\.meta$/.test(name)) {
            // I send an object with { name, size, type, lastModified },
            // which are just the public props pulled off a File object.
            // Note: Should probably add error handling if val is somehow not parsable.
            fileMetas[name] = JSON.parse(val);
            console.log(`file metadata: name: ${name}, value: ${val}`);
            return;
        }

        // Otherwise, process field as normal.
        console.log(`name: ${name}, value: ${val}`);
    });

    busboy.on("finish", function() {
        res.send(json);
    });

    req.pipe(busboy);
});

module.exports = router;
On the client, you need to then send the metadata on the so-named field before the file itself. This can be done by ordering an <input type="hidden"> control before the file and updating its value onchange. The order of values sent is guaranteed to follow the order of inputs in appearance. If you're building the request body yourself using FormData, you can do this by appending the appropriate metadata before appending the File.
Example with <form>
<script>
    function extractFileMeta(file) {
        return JSON.stringify({
            size: file.size,
            name: file.name,
            type: file.type,
            lastModified: file.lastModified
        });
    }

    function onFileUploadChange(event) {
        // change this to use arrays if using the multiple attribute on the file input.
        var file = event.target.files[0];
        var fileMetaInput = document.querySelector('input[name="fileUpload.meta"]');
        if (fileMetaInput) {
            fileMetaInput.value = extractFileMeta(file);
        }
    }
</script>
<form action="/upload-to-cloud">
    <input type="hidden" name="fileUpload.meta">
    <input type="file" name="fileUpload" onchange="onFileUploadChange(event)">
</form>
Example with FormData:
function onSubmit(event) {
    event.preventDefault();
    var form = document.getElementById('my-upload-form');
    var formData = new FormData();
    // form.elements['fileUpload'] is the input element; grab the actual File from it.
    var fileUpload = form.elements['fileUpload'].files[0];
    var fileUploadMeta = JSON.stringify({
        size: fileUpload.size,
        name: fileUpload.name,
        type: fileUpload.type,
        lastModified: fileUpload.lastModified
    });

    // Append fileUploadMeta BEFORE fileUpload.
    formData.append('fileUpload.meta', fileUploadMeta);
    formData.append('fileUpload', fileUpload);

    // Do whatever you do to POST here.
}

image resize using multiparty stream

How can I use GraphicsMagick or transloadit in my scenario?
I am using expressjs multiparty to upload files to Azure storage:
app.post('/upload', function (req, res) {
    var blobService = azure.createBlobService();
    var form = new multiparty.Form();

    form.on('part', function(part) {
        if (part.filename) {
            var filename = part.filename;
            var size = part.byteCount;

            var onError = function(error) {
                if (error) {
                    res.send({ grrr: error });
                }
            };

            blobService.createBlockBlobFromStream('container', filename, part, size, onError);
        } else {
            form.handlePart(part);
        }
    });

    form.parse(req);
    res.send("SWEET");
});
Is there any service that I can use to resize the image and create a thumbnail before uploading to storage? I don't want to save the file to a temp folder, because I am using Azure websites.
Disclosure: I work at Transloadit, so I'll be diving into that direction.
Option 1, keep handling the upload yourself:
Since you are handling the upload yourself and using Node.js, you could send the files to Transloadit using the Node.js SDK:
// npm install transloadit --save
const TransloaditClient = require('transloadit')

const transloadit = new TransloaditClient({
    authKey: 'YOUR_TRANSLOADIT_KEY',
    authSecret: 'YOUR_TRANSLOADIT_SECRET'
})

// Likely you'll want to use .addStream(part) here instead!
transloadit.addFile('myfile_1', './chameleon.jpg')

const options = {
    params: {
        steps: {
            thumbed: {
                use: ':original',
                robot: '/image/resize',
                width: 75,
                height: 75,
                resize_strategy: 'fit',
            },
        }
    }
}

transloadit.createAssembly(options, (err, result) => {
    if (err) {
        throw err
    }
    console.log({ result })
})
The file is now resized by Transloadit, and you could use our /azure/store Robot to send the file to Azure Storage:
"exported": {
"use": ["thumbed"],
"robot": "/azure/store",
"credentials": "YOUR_CREDENTIALS"
}
Option 2, let Transloadit also handle the upload
Alternatively, you could drop our new open source file uploader Uppy as a plugin into your website and use its Transloadit plugin to send files directly to Transloadit. The encoding and Azure export instructions would be saved inside your Transloadit account in a Template. You'd refer to that template_id in your Uppy integration, so you don't have to write any server-side code yourself or deal with multipart uploads. Your uploads will also become resumable. There's a live example on the Uppy website. Here's how I'd adapt it for your use case (untested):
<!-- Basic Uppy styles. You can use Transloadit's CDN, Edgly:
https://transloadit.edgly.net/releases/uppy/v0.27.4/dist/uppy.min.css -->
<link rel="stylesheet" href="/uppy/uppy.min.css">

<div class="UppyDragDrop"></div>

<!-- Load Uppy pre-built bundled version. You can use Transloadit's CDN, Edgly:
https://transloadit.edgly.net/releases/uppy/v0.27.4/dist/uppy.min.js -->
<script src="/uppy/uppy.min.js"></script>

<script>
    var uppy = Uppy.Core();
    uppy.use(Uppy.DragDrop, {
        target: '.UppyDragDrop',
    });
    uppy.use(Transloadit, {
        params: {
            auth: {
                key: YOUR_TRANSLOADIT_API_KEY
            },
            template_id: YOUR_TEMPLATE_ID
        },
        waitForEncoding: true
    });
    console.log('--> Uppy pre-built version with Tus, DragDrop & Russian language pack has loaded');
</script>
