I am creating a pdf using JSPDF on server-side, in NodeJS. Once done, I want to create a new folder for the user in Google Drive, upload the pdf to said folder, and also send it to the client-side (browser) for the user to view.
There are two problems that I'm encountering. First, if I send the pdf in the response (via pdf.output()), the images don't display correctly. They are distorted, as though each row of pixels is offset by some amount. A vertical line "|" instead renders as a diagonal "\". An example is shown below.
[Image pair: "Before" shows the intended image with a straight vertical line; "After" shows the distorted output where the line renders diagonally.]
My workaround for this was to instead save it to the filesystem using pdf.save() and then send it to the browser using fs.readFileSync(filepath).
However, I've discovered that when running remotely I don't have file permissions to save and read the pdf, and after some research and tinkering I believe these permissions cannot be changed. This is the error I get:
Error: EROFS: read-only file system, open './temp/output.pdf'
at Object.openSync (fs.js:443:3)
at Object.writeFileSync (fs.js:1194:35)
at Object.v.save (/workspace/node_modules/jspdf/dist/jspdf.node.min.js:86:50626)
etc...
So I have this jsPDF object, and I believe I need to either alter the permissions to allow writing/reading, or convert the jsPDF object into a format accepted by Google Drive, such as a stream or buffer.
The link below leads me to think these permissions can't be altered since it states: "These files are available in a read-only directory".
https://cloud.google.com/functions/docs/concepts/exec#file_system
I also have no idea 'where' the server filesystem is, or how to access it. Thus, I think the best course of action is to look at sending the pdf in different formats.
I've checked the jsPDF documentation for the types that pdf.output() can return. These include string, arraybuffer, window, blob, jsPDF.
https://rawgit.com/MrRio/jsPDF/master/docs/jsPDF.html#output
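In principle, one of those output formats could bypass the filesystem entirely. Here is an untested sketch of what I have in mind, assuming pdf.output('arraybuffer') behaves the same under Node as in the browser: the output is wrapped in a Buffer, which could feed both the Drive upload and the HTTP response.
var stream = require('stream');

// Untested sketch: convert the jsPDF output to a Buffer...
var pdfBuffer = Buffer.from(pdf.output('arraybuffer'));

// ...wrap it in a stream for the Drive upload...
var bodyStream = new stream.PassThrough();
bodyStream.end(pdfBuffer);
var media = {
  mimeType: 'application/pdf',
  body: bodyStream,
};

// ...and send the same bytes to the browser.
res.setHeader('Content-Type', 'application/pdf');
res.send(pdfBuffer);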
My simplified code is as follows:
const express = require('express');
const fs = require('fs');
const { google } = require('googleapis'); // needed for google.auth.JWT and google.drive below
const { jsPDF } = require('jspdf');
const app = express();
const credentials = require(credentialsFilepath);
const scopes = [scopes in here];
const auth = new google.auth.JWT(
  credentials.client_email, null,
  credentials.private_key, scopes
);
const drive = google.drive({version: 'v3', auth});
//=========================================================================
app.post('/submit', (req, res) => {
  var pdf = new jsPDF();
  // Set font, font size; add some text, etc.
  pdf.text('blah blah', 10, 10);
  // Add image (signature) from canvas, which is passed as a dataURL
  pdf.addImage(img, 'JPEG', 10, 10, 50, 20);
  pdf.save('./temp/output.pdf');
  drive.files.create({
    resource: folderMetaData,
    fields: 'id'
  })
  .then(response => {
    // Store pdf in newly created folder
    var fileMetaData = {
      'name': 'filename.pdf',
      'parents': [response.data.id],
    };
    var media = {
      mimeType: 'application/pdf',
      body: fs.createReadStream('./temp/output.pdf'),
    };
    drive.files.create({
      resource: fileMetaData,
      media: media,
      fields: 'id'
    }, function(err, file) {
      if (err) {
        console.error('Error:', err);
      } else {
        // I have considered piping 'file' back in the response here but can't figure out how
        console.log('File uploaded');
      }
    });
  })
  .catch(error => {
    console.error('Error:', error);
  });
  // Finally, I attempt to send the pdf to client/browser
  res.setHeader('Content-Type', 'application/pdf');
  res.send(fs.readFileSync('./temp/output.pdf'));
});
Edit: After some more searching, I've found a similar question which explains that the fs module reads from and writes to the local filesystem.
EROFS error when executing a File Write function in Firebase
I eventually came to a solution after some further reading. I'm not sure who this will be useful for, but...
Turns out the Firebase (Cloud Functions) filesystem has only one directory that allows writes, /tmp (the rest are read-only). Since /tmp is an absolute path at the filesystem root, manually referencing it with pdf.save('./tmp/output.pdf') didn't work; I accessed it instead using the tmp node module [installed with: npm i tmp], which generates paths inside the OS temp directory.
So the only changes to my code were to add in the lines:
var tmp = require('tmp');
var tmpPath = tmp.tmpNameSync();
and then replacing every instance of './temp/output.pdf' with tmpPath, as shown in the sketch below.
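Put together, the relevant parts of the code from above end up looking like this (tmp.tmpNameSync() returns a unique path inside the writable temp directory):
var tmp = require('tmp');
var tmpPath = tmp.tmpNameSync(); // e.g. '/tmp/tmp-12345abcde'

pdf.save(tmpPath); // was: pdf.save('./temp/output.pdf')

var media = {
  mimeType: 'application/pdf',
  body: fs.createReadStream(tmpPath), // was: './temp/output.pdf'
};

res.setHeader('Content-Type', 'application/pdf');
res.send(fs.readFileSync(tmpPath)); // was: './temp/output.pdf'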
Related
Goal: Try to download a pdf file from Amazon S3 to my local machine via a NodeJS/VueJS application without creating a file on the server's filesystem.
Server: Node.js (v18.9.0), Express (4.17.1)
Middleware function that retrieves the file from S3, converts the stream into a base64 string, and sends that string to the client:
const filename = 'lets_go_to_the_snackbar.pdf';
const s3 = new AWS.S3(some access parameters);
const params = {
  Bucket: do_not_kick_this_bucket,
  Key: `yellowbrickroad/${filename}`
}
try {
  const data = await s3
    .getObject(params)
    .promise();
  const byte_string = Buffer.from(data.Body).toString('base64');
  res.send(byte_string);
} catch (err) {
  console.log(err);
}
Client: VueJS (v3.2.33)
The function in the component receives the byte string via an axios (v0.26.1) GET call to the server. The code to download is as follows:
getPdfContent: async function (filename) {
  const resp = await AxiosService.getPdf(filename) // GET request to server made here.
  const uriContent = `data:application/pdf;base64,${resp.data}`
  const link = document.createElement('a')
  link.href = uriContent
  link.download = filename
  document.body.appendChild(link) // Also tried omitting this line along with...
  link.click()
  link.remove() // ...omitting this line
}
Expected Result(s):
Browser opens a window to allow a directory to be selected as the file's destination.
Directory Selected.
File is downloaded.
Ice cream and mooncakes are served.
Actual Result(s):
Browser opens a window to allow a directory to be selected as the file's destination.
Directory Selected.
Receive Failed - Network Error message.
Lots of crying...
Browser: Chrome (Version 105.0.5195.125 (Official Build) (x86_64))
I read somewhere that Chrome will balk at files larger than 4MB, so I checked the S3 bucket; according to Amazon S3 the file size is a svelte 41.7KB.
After doing some reading, a possible solution was presented that I tried to implement. It involved making a change to the VueJs getPdfContent function as follows:
getPdfContent: async function (filename) {
  const resp = await AxiosService.getPdf(filename) // GET request to server made here.
  /**** This is the line that was changed ****/
  const uriContent = window.URL.createObjectURL(new Blob([resp.data], { type: 'application/pdf' }))
  const link = document.createElement('a')
  link.href = uriContent
  link.download = filename
  document.body.appendChild(link) // Also tried omitting this line along with...
  link.click()
  link.remove() // ...omitting this line
}
Actual Result(s) for updated code:
Browser opens a window to allow a directory to be selected as the file's destination.
Directory Selected.
PDF file downloaded.
Trying to open the file produces the message:
The file “lets_go_to_the_snackbar.pdf” could not be opened.
It may be damaged or use a file format that Preview doesn’t recognize.
I am able to download the file directly from S3 using the AWS S3 console with no problems opening the file.
I've read through similar postings and tried implementing their solutions, but found no joy. I would be highly appreciative if someone can:
Give me an idea of where I am going off the path towards reaching the goal
Point me towards the correct path.
Thank you in advance for your help.
After doing some more research I found the problem was how I was returning the data from the server back to the client. I did not need to modify the data received from the S3 service.
Server Code:
let filename = req.params.filename;
const params = {
  Bucket: do_not_kick_this_bucket,
  Key: `yellowbrickroad/${filename}`
}
try {
  const data = await s3
    .getObject(params)
    .promise();
  /* Here I did not modify the information returned */
  res.send(data.Body);
  res.end(); // redundant after res.send(), but harmless
} catch (err) {
  console.log(err);
}
On the client side my VueJS component receives a Blob object as the response
Client Code:
async getFile (filename) {
  let response = await AuthenticationService.downloadFile(filename)
  const uriContent = window.URL.createObjectURL(new Blob([response.data]))
  const link = document.createElement('a')
  link.setAttribute('href', uriContent)
  link.setAttribute('download', filename)
  document.body.appendChild(link)
  link.click()
  link.remove()
}
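One caveat: the body of AuthenticationService.downloadFile isn't shown here. For the Blob constructor to receive intact binary data, the underlying axios request presumably needs responseType: 'blob' (or 'arraybuffer'); a text-decoded body would corrupt the PDF. A hypothetical sketch, with the endpoint path as a placeholder:
import axios from 'axios'

// Hypothetical service method; responseType is the essential part
const downloadFile = (filename) =>
  axios.get(`/api/files/${filename}`, { responseType: 'blob' })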
In the end the goal was achieved; a file on S3 can be downloaded directly to a user's local machine without the application storing a file on the server.
I would like to mention Sunpun Sandaruwan's answer which gave me the final clue I needed to reach my goal.
I have uploaded a base64 img string to Google Drive via API in node express. After uploading the img, it is not viewable in Drive. I'm not sure on how to resolve this formatting issue. I know I could potentially save the img locally first, then upload the saved img file but I was hoping there is a simpler way.
My code:
const { google } = require('googleapis'); // assumed import; google is used below

const uploadImg = async (folderId, img) => {
  process.env['NODE_TLS_REJECT_UNAUTHORIZED'] = 0
  const scopes = [
    'https://www.googleapis.com/auth/drive'
  ];
  const auth = new google.auth.JWT(
    demoApiCreds.client_email, null,
    demoApiCreds.private_key, scopes
  );
  const drive = google.drive({ version: 'v3', auth });
  const fileMetadata = {
    'name': 'Client_Design_ScreenShotTest',
    'mimeType': 'image/jpeg',
    'parents': [folderId]
  };
  const uploadImg = img.split(/,(.+)/)[1]; // strip the dataURL prefix
  const media = {
    body: uploadImg
  }
  let res = await drive.files.create({
    resource: fileMetadata,
    media: media,
    fields: 'id',
  });
  console.log('the response is', res);
  console.log('the data is ', res.data);
  return res.data;
}
Edit:
The file is stored in Drive as a jpg, but the img is blank, and after the img is clicked Google Drive complains that the file cannot be read. The img is still blank after downloading.
The base64 img string is:
data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAhAAAADqCAYAAADzlnzfAAAAAXNSR0I...
I remove data:image/png;base64 before uploading as has been suggested in other threads. It fails with or without this prefix.
You want to upload an image to Google Drive using googleapis with node.js.
The image of img is the base64 data.
You have already been able to upload and download files to Google Drive using Drive API.
If my understanding is correct, how about this answer? Please think of this as just one of several answers.
Modification points:
Unfortunately, when base64 data is uploaded using googleapis, the base64 data is not decoded, and it is uploaded as text data. So when you open the uploaded file, you cannot see it as an image. If Content-Transfer-Encoding: base64 could be added to the header of the base64 data in the request body, the base64 data would be decoded and uploaded as an image. But when googleapis is used, at the current stage, this cannot be achieved.
In order to upload the base64 data encoded from an image as an image to Google Drive, how about the following modification?
Modified script:
In this modification, the base64 image is converted to the stream type, and uploaded. Please modify your script as follows.
From:
const uploadImg = img.split(/,(.+)/)[1];
const media = {
body: uploadImg
}
To:
const stream = require("stream"); // Added
const uploadImg = img.split(/,(.+)/)[1];
const buf = Buffer.from(uploadImg, "base64"); // Added: decode the base64 payload to binary
const bs = new stream.PassThrough(); // Added
bs.end(buf); // Added
const media = {
  body: bs // Modified
};
Note:
Even if 'mimeType':'image/jpeg' is used in fileMetadata, the image file is uploaded as image/png. But if, for example, 'mimeType':'application/pdf' is used in fileMetadata, the file is uploaded as application/pdf. Please be careful of this. I also recommend changing it to 'mimeType':'image/png', as mentioned in 10100111001's answer.
At "googleapis#43.0.0", both patterns of resource: fileMetadata and requestBody: fileMetadata work.
References:
Class Method: Buffer.from(string[, encoding])
Class: stream.PassThrough
Files: create in Drive API
If I misunderstood your question and this was not the direction you want, I apologize.
You need to change your mimeType to image/png.
See here for an explanation of what MIME types are.
Edit:
The property name for the fileMetadata is called requestBody instead of resource.
let res = await drive.files.create({
  requestBody: fileMetadata,
  media: media,
  fields: 'id',
});
https://github.com/googleapis/google-api-nodejs-client/blob/7e2b586e616e757b72f7a9b1adcd7d232c6b1bef/src/apis/drive/v3.ts#L3628
I had the same problem and solved it by adding "Content-Transfer-Encoding: base64" in the part of the request body where we write the content type and other headers. A rough sketch of that raw request is below.
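For illustration, here is a hedged sketch of what that raw request could look like against the Drive v3 multipart upload endpoint, bypassing googleapis so we control the part headers; the token handling and fetch availability (Node 18+, or node-fetch) are assumptions:
// Sketch: raw multipart/related upload where the media part carries
// Content-Transfer-Encoding: base64 so Drive decodes the payload.
const boundary = 'drive_upload_boundary'; // any string not present in the payload
const body =
  '--' + boundary + '\r\n' +
  'Content-Type: application/json; charset=UTF-8\r\n\r\n' +
  JSON.stringify({ name: 'Client_Design_ScreenShotTest', parents: [folderId] }) + '\r\n' +
  '--' + boundary + '\r\n' +
  'Content-Type: image/png\r\n' +
  'Content-Transfer-Encoding: base64\r\n\r\n' + // tells Drive to decode the payload
  uploadImg + '\r\n' + // the base64 string with the dataURL prefix stripped
  '--' + boundary + '--';

const response = await fetch(
  'https://www.googleapis.com/upload/drive/v3/files?uploadType=multipart&fields=id',
  {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer ' + accessToken, // assumed: token obtained from the JWT auth above
      'Content-Type': 'multipart/related; boundary=' + boundary,
    },
    body: body,
  }
);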
I am using react-native-fs and I am trying to save a base64 string of a pdf file to my Android emulator's file system.
I receive a base64 encoded pdf from the server.
I then decode the base64 string with the line:
var pdfBase64 = 'data:application/pdf;base64,'+base64Str;
saveFile() function
saveFile(filename, pdfBase64) {
  // create a path you want to write to
  var path = RNFS.DocumentDirectoryPath + '/' + filename;
  // write the file
  RNFS.writeFile(path, pdfBase64, 'base64').then((success) => {
    console.log('FILE WRITTEN!');
  })
  .catch((err) => {
    console.log("SaveFile()", err.message);
  });
}
Error
When I try saving the pdfBase64 the saveFile() function catches the following error:
bad base-64
Question
Can anyone tell me where or what I am doing wrong?
Thanks.
For anyone having the same problem, here is the solution.
Solution
react-native-pdf-view must take the file path to the saved pdf, not the base64 string itself.
Firstly, I used react-native-fetch-blob to request the pdf base64 from the server (because the RN fetch API does not yet support blobs).
I also discovered that react-native-fetch-blob has a FileSystem API which is far better documented and easier to understand than the react-native-fs library. (Check out its FileSystem API documentation.)
Receiving base64 pdf and saving it to a file path:
var RNFetchBlob = require('react-native-fetch-blob').default;
const DocumentDir = RNFetchBlob.fs.dirs.DocumentDir;

getPdfFromServer: function (uri_attachment, filename_attachment) {
  return new Promise((RESOLVE, REJECT) => {
    // Fetch attachment
    RNFetchBlob.fetch('GET', config.apiRoot + '/app/' + uri_attachment)
      .then((res) => {
        let base64Str = res.data;
        let pdfLocation = DocumentDir + '/' + filename_attachment;
        // write the raw base64 string (no dataURL prefix) to the file path
        RNFetchBlob.fs.writeFile(pdfLocation, base64Str, 'base64');
        RESOLVE(pdfLocation);
      });
  }).catch((error) => {
    // error handling
    console.log("Error", error);
  });
}
What I was doing wrong was that, instead of saving the raw base64Str to the file location as in the example above, I was saving it like this:
var pdf_base64 = 'data:application/pdf;base64,' + base64Str;
which was wrong.
Populate PDF view with file path:
<PDFView
  ref={(pdf) => { this.pdfView = pdf; }}
  src={pdfLocation}
  style={styles.pdf}
/>
There is a newer package (based on react-native-fetch-blob) that handles both fetching and displaying a PDF via URL: react-native-pdf. A rough usage sketch follows.
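For reference, a minimal hedged sketch of react-native-pdf usage (the URL is a placeholder; check the package README for the authoritative props):
import React from 'react';
import Pdf from 'react-native-pdf';

// Hypothetical usage: fetch and render a PDF directly from a URL
const MyPdfScreen = () => (
  <Pdf
    source={{ uri: 'https://example.com/some.pdf', cache: true }}
    style={{ flex: 1 }}
  />
);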
Remove the application type prefix from the base64 string and it works for me. Change:
var pdfBase64 = 'data:application/pdf;base64,'+base64Str;
To
var pdfBase64 = base64Str;
I am trying to find a solution to stream a file to Amazon S3 using a node js server, with these requirements:
Don't store a temp file on the server or hold the complete file in memory; buffering up to some limit (not the complete file) can be used for uploading.
No restriction on uploaded file size.
Don't freeze the server until the complete file is uploaded, because with a heavy file upload other requests' waiting time will unexpectedly increase.
I don't want to use direct file upload from the browser, because the S3 credentials would need to be shared in that case. Another reason to upload the file from the node js server is that some authentication may also need to be applied before uploading.
I tried to achieve this using node-multiparty, but it was not working as expected. You can see my solution and the issue at https://github.com/andrewrk/node-multiparty/issues/49. It works fine for small files but fails for a file of size 15MB.
Any solution or alternative?
You can now use streaming with the official Amazon SDK for node.js: see the section "Uploading a File to an Amazon S3 Bucket" or their example on GitHub.
What's even more awesome, you finally can do so without knowing the file size in advance. Simply pass the stream as the Body:
var fs = require('fs');
var zlib = require('zlib');
var AWS = require('aws-sdk'); // assumed import; AWS.S3 is used below

var body = fs.createReadStream('bigfile').pipe(zlib.createGzip());
var s3obj = new AWS.S3({params: {Bucket: 'myBucket', Key: 'myKey'}});
s3obj.upload({Body: body})
  .on('httpUploadProgress', function(evt) { console.log(evt); })
  .send(function(err, data) { console.log(err, data); });
For your information, the v3 SDK was published with a dedicated module to handle that use case: https://www.npmjs.com/package/@aws-sdk/lib-storage
Took me a while to find it.
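For reference, a minimal sketch of that v3 approach (bucket, key, and region are placeholders):
const fs = require('fs');
const { S3Client } = require('@aws-sdk/client-s3');
const { Upload } = require('@aws-sdk/lib-storage');

// Streams the body in parts without knowing the total size in advance
const upload = new Upload({
  client: new S3Client({ region: 'us-east-1' }),
  params: {
    Bucket: 'myBucket',
    Key: 'myKey',
    Body: fs.createReadStream('bigfile'), // any readable stream works
  },
});

upload.on('httpUploadProgress', (progress) => console.log(progress));
await upload.done();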
Give https://www.npmjs.org/package/streaming-s3 a try.
I used it for uploading several big files in parallel (>500MB), and it worked very well.
It is very configurable and also lets you track upload statistics.
You don't need to know the total size of the object, and nothing is written to disk.
If it helps anyone, I was able to stream from the client to s3 successfully (without memory or disk storage):
https://gist.github.com/mattlockyer/532291b6194f6d9ca40cb82564db9d2a
The server endpoint assumes req is a stream object; I sent a File object from the client, which modern browsers can send as binary data, with the file info set in the headers.
const fileUploadStream = (req, res) => {
  // get "body" args from header
  const { id, fn } = JSON.parse(req.get('body'));
  const Key = id + '/' + fn; // upload to s3 folder "id" with filename === fn
  const params = {
    Key,
    Bucket: bucketName, // set somewhere
    Body: req, // req is a stream
  };
  s3.upload(params, (err, data) => {
    if (err) {
      res.send('Error Uploading Data: ' + JSON.stringify(err) + '\n' + JSON.stringify(err.stack));
    } else {
      res.send(Key);
    }
  });
};
Yes, putting the file info in the headers breaks convention, but if you look at the gist it's much cleaner than anything else I found using streaming libraries, multer, busboy, etc...
+1 for pragmatism, and thanks to @SalehenRahman for his help. A hypothetical sketch of the matching client-side call is below.
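In this sketch, the endpoint path is an assumption; the 'body' header name matches req.get('body') in the server code above:
// Hypothetical client call: send the File object as the raw request body
const file = fileInput.files[0]; // a File from an <input type="file">
fetch('/upload', {
  method: 'POST',
  headers: { body: JSON.stringify({ id: 'user123', fn: file.name }) },
  body: file, // the File itself is sent as binary data
});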
I'm using the s3-upload-stream module in a working project here.
There are also some good examples from @raynos in his http-framework repository.
Alternatively you can look at https://github.com/minio/minio-js. It has a minimal set of abstracted APIs implementing the most commonly used S3 calls.
Here is an example of a streaming upload.
$ npm install minio
$ cat >> put-object.js << EOF
var Minio = require('minio')
var fs = require('fs')

// find out your s3 end point here:
// http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region

var s3Client = new Minio({
  url: 'https://<your-s3-endpoint>',
  accessKey: 'YOUR-ACCESSKEYID',
  secretKey: 'YOUR-SECRETACCESSKEY'
})

var localFile = 'your_localfile.zip';
var fileStream = fs.createReadStream(localFile); // read stream: we are uploading, not downloading
fs.stat(localFile, function(e, stat) {
  if (e) {
    return console.log(e)
  }
  s3Client.putObject('mybucket', 'hello/remote_file.zip', 'application/octet-stream', stat.size, fileStream, function(e) {
    return console.log(e) // should be null
  })
})
EOF
putObject() here is a fully managed single function call: for file sizes over 5MB it automatically does a multipart upload internally. You can also resume a failed upload, and it will start from where it left off by verifying previously uploaded parts.
Additionally, this library is isomorphic and can be used in browsers as well.