Base64 string img not viewable when uploaded via drive api - node.js

I have uploaded a base64 image string to Google Drive via the API in Node/Express. After uploading, the image is not viewable in Drive. I'm not sure how to resolve this formatting issue. I know I could potentially save the image locally first and then upload the saved image file, but I was hoping there is a simpler way.
My code:
const uploadImg = async (folderId, img) => {
  process.env['NODE_TLS_REJECT_UNAUTHORIZED'] = 0
  const scopes = [
    'https://www.googleapis.com/auth/drive'
  ];
  const auth = new google.auth.JWT(
    demoApiCreds.client_email, null,
    demoApiCreds.private_key, scopes
  );
  const drive = google.drive({ version: 'v3', auth });
  const fileMetadata = {
    'name': 'Client_Design_ScreenShotTest',
    'mimeType': 'image/jpeg',
    'parents': [folderId]
  };
  const uploadImg = img.split(/,(.+)/)[1];
  const media = {
    body: uploadImg
  }
  let res = await drive.files.create({
    resource: fileMetadata,
    media: media,
    fields: 'id',
  });
  console.log('the response is', res);
  console.log('the data is ', res.data);
  return res.data;
}
Edit:
The file is stored in Drive as a jpg, but the image is blank, and after
the image is clicked Google Drive complains that the file cannot be
read. The image is still blank after downloading.
The base64 image string is
data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAhAAAADqCAYAAADzlnzfAAAAAXNSR0I...
I remove the data:image/png;base64 prefix before uploading, as has been suggested in other threads. It fails with or without this prefix.

You want to upload an image to Google Drive using googleapis with Node.js.
The value of img is base64 data.
You have already been able to upload and download files to Google Drive using the Drive API.
If my understanding is correct, how about this answer? Please think of it as just one of several possible answers.
Modification points:
Unfortunately, when base64 data is uploaded using googleapis, the base64 data is not decoded, and the data is uploaded as text. So when you open the uploaded file, you cannot see it as an image. If Content-Transfer-Encoding: base64 could be added to the header of the base64 part in the request body, the base64 data would be decoded and uploaded as an image, but at the current stage this cannot be achieved through googleapis.
In order to upload base64 data encoded from an image to Google Drive as an image, how about the following modification?
Modified script:
In this modification, the base64 image is converted to a stream and uploaded. Please modify your script as follows.
From:
const uploadImg = img.split(/,(.+)/)[1];
const media = {
  body: uploadImg
}
To:
const stream = require("stream"); // Added
const uploadImg = img.split(/,(.+)/)[1];
const buf = Buffer.from(uploadImg, "base64"); // Added
const bs = new stream.PassThrough(); // Added
bs.end(buf); // Added
const media = {
  body: bs // Modified
};
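For reference, here is a minimal end-to-end sketch of the whole function with this modification applied (assuming, as in your original code, that google comes from the googleapis package and demoApiCreds holds the service account credentials):

const { google } = require('googleapis');
const stream = require('stream');

const uploadImg = async (folderId, img) => {
  const auth = new google.auth.JWT(
    demoApiCreds.client_email, null,
    demoApiCreds.private_key, ['https://www.googleapis.com/auth/drive']
  );
  const drive = google.drive({ version: 'v3', auth });

  // Strip the "data:image/png;base64," prefix and decode to binary.
  const base64Data = img.split(/,(.+)/)[1];
  const buf = Buffer.from(base64Data, 'base64');

  // Wrap the Buffer in a stream so googleapis uploads it as binary data.
  const bs = new stream.PassThrough();
  bs.end(buf);

  const res = await drive.files.create({
    resource: {
      name: 'Client_Design_ScreenShotTest',
      mimeType: 'image/png',
      parents: [folderId],
    },
    media: { body: bs },
    fields: 'id',
  });
  return res.data;
};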
Note:
Even if 'mimeType':'image/jpeg' is used in fileMetadata, the image file is uploaded as image/png. But, for example, if 'mimeType':'application/pdf' is used in fileMetadata, the file is uploaded as application/pdf. Please be careful about this. I also recommend changing it to 'mimeType':'image/png', as mentioned in 10100111001's answer.
At googleapis@43.0.0, both patterns of resource: fileMetadata and requestBody: fileMetadata work.
References:
Class Method: Buffer.from(string[, encoding])
Class: stream.PassThrough
Files: create in Drive API
If I misunderstood your question and this was not the direction you want, I apologize.

You need to change your mimeType to image/png.
See here for what MIME types are.
Edit:
The property name for the fileMetadata is called requestBody instead of resource.
let res = await drive.files.create({
  requestBody: fileMetadata,
  media: media,
  fields: 'id',
});
https://github.com/googleapis/google-api-nodejs-client/blob/7e2b586e616e757b72f7a9b1adcd7d232c6b1bef/src/apis/drive/v3.ts#L3628

I had the same problem and solved it by adding "Content-Transfer-Encoding: base64" to the part headers of the request body, alongside the content type and so on.
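For anyone who wants to try this route, below is a rough sketch of what it can look like when calling the Drive upload endpoint directly with a multipart/related request instead of going through googleapis; folderId, accessToken and base64Data are placeholders, and the exact request shape may differ from what I used:

// Runs inside an async function; uses Node 18+ built-in fetch.
const boundary = 'drive_upload_boundary';
const metadata = JSON.stringify({ name: 'screenshot.png', parents: [folderId] });

// The media part declares Content-Transfer-Encoding: base64 so the
// server decodes the body instead of storing it as text.
const body =
  `--${boundary}\r\n` +
  'Content-Type: application/json; charset=UTF-8\r\n\r\n' +
  `${metadata}\r\n` +
  `--${boundary}\r\n` +
  'Content-Type: image/png\r\n' +
  'Content-Transfer-Encoding: base64\r\n\r\n' +
  `${base64Data}\r\n` +
  `--${boundary}--`;

const res = await fetch('https://www.googleapis.com/upload/drive/v3/files?uploadType=multipart&fields=id', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${accessToken}`,
    'Content-Type': `multipart/related; boundary=${boundary}`,
  },
  body,
});
console.log(await res.json());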

Related

Lambda to S3 image upload shows a black background with white square

I am using CDK to upload an image file from a form-data multivalue request to S3. There are now no errors in the console, but what is saved to S3 is a black background with a white square, which I'm sure is down to a corrupt file or something.
Any thoughts as to what I'm doing wrong?
I'm using aws-lambda-multipart-parser to parse the form data.
In my console the actual image from the form is getting logged.
My upload file function looks like this
const uploadFile = async (image: any) => {
  const params = {
    Bucket: BUCKET_NAME,
    Key: image.filename,
    Body: image.content,
    ContentType: image.contentType,
  }
  return await S3.putObject(params).promise()
}
When I log image.content I get a log of the buffer, which seems to be the format I should be uploading the image in.
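For what it's worth, this is roughly how I would force the content into a Buffer if that turns out to be the problem (untested, and I haven't confirmed what type aws-lambda-multipart-parser actually returns):

// Untested idea: make sure the body handed to S3 is a Buffer,
// in case the parser returns the file content as a binary string.
const uploadFile = async (image: any) => {
  const body = Buffer.isBuffer(image.content)
    ? image.content
    : Buffer.from(image.content, 'binary');
  const params = {
    Bucket: BUCKET_NAME,
    Key: image.filename,
    Body: body,
    ContentType: image.contentType,
  };
  return await S3.putObject(params).promise();
};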
My CDK stack initialises the S3 contsruct like so.
const bucket = new s3.Bucket(this, "WidgetStore");
bucket.grantWrite(handler);
bucket.grantPublicAccess();
table.grantStreamRead(handler);
handler.addToRolePolicy(lambdaPolicy);
const api = new apigateway.RestApi(this, "widgets-api", {
  restApiName: "Widget Service",
  description: "This service serves widgets.",
  binaryMediaTypes: ['image/png', 'image/jpeg'],
});
Any ideas what I could be missing?
Thanks in advance

How to display images of products stored on aws s3 bucket

I was practicing on this tutorial
https://www.youtube.com/watch?v=NZElg91l_ms&t=1234s
It is working absolutely like a charm for me, but the thing is I am storing product images in the bucket, and let's say I upload 4 images, they all get uploaded.
But when I am displaying them I get an access denied error, as I am displaying the list and the repeated requests are maybe being detected as spam.
This is how I am trying to fetch them in my React app:
//rest of data is from mysql datbase (product name,price)
//100+ products
{ products.map((row) => (
  <React.Fragment key={row.imgurl}>
    <div className="product-hero"><img src={`http://localhost:3909/images/${row.imgurl}`} alt={row.productName} /></div>
    <div className="text-center">{row.productName}</div>
  </React.Fragment>
))}
As it fetches 100+ products from the db and 100 images from AWS, it fails.
Sorry for such a detailed question, but in short: how can I fetch all the product images from my bucket?
Note: I am aware that I can get only one image per call, so how can I get all the images one by one in my scenario?
//download code in my app.js
const { uploadFile, getFileStream } = require('./s3')
const app = express()
app.get('/images/:key', (req, res) => {
  console.log(req.params)
  const key = req.params.key
  const readStream = getFileStream(key)
  readStream.pipe(res)
})
//s3 file
// uploads a file to s3
function uploadFile(file) {
  const fileStream = fs.createReadStream(file.path)
  const uploadParams = {
    Bucket: bucketName,
    Body: fileStream,
    Key: file.filename
  }
  return s3.upload(uploadParams).promise()
}
exports.uploadFile = uploadFile

// downloads a file from s3
function getFileStream(fileKey) {
  const downloadParams = {
    Key: fileKey,
    Bucket: bucketName
  }
  return s3.getObject(downloadParams).createReadStream()
}
exports.getFileStream = getFileStream
It appears that your code is sending image requests to your back-end, which retrieves the objects from Amazon S3 and then serves the images in response to the request.
A much better method would be to have the URLs in the HTML page point directly to the images stored in Amazon S3. This would be highly scalable and will reduce the load on your web server.
This would require the images to be public so that the user's web browser can retrieve the images. The easiest way to do this would be to add a Bucket Policy that grants GetObject access to all users.
Alternatively, if you do not wish to make the bucket public, you can instead generate Amazon S3 pre-signed URLs, which are time-limited URLs that provides temporary access to a private object. Your back-end can calculate the pre-signed URL with a couple of lines of code, and the user's web browser will then be able to retrieve private objects from S3 for display on the page.
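For example, with the AWS SDK for JavaScript (v2), generating a pre-signed URL for a private object only takes a couple of lines; the bucket and key below are placeholders:

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Returns a time-limited URL the browser can use directly in an <img> tag.
const url = s3.getSignedUrl('getObject', {
  Bucket: 'my-product-images', // placeholder bucket name
  Key: 'product-123.jpg',      // placeholder object key
  Expires: 60 * 5,             // URL valid for 5 minutes
});

The back-end can then return these URLs alongside the product rows from the database, so that each <img> tag points straight at Amazon S3 instead of at your Express server.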
I did similar S3 image handling while implementing my blog's image upload functionality, but I did not use a read stream to upload my image.
Because nothing should be done until the image file is fully processed, I used fs.readFile(path, callback) instead to read the data.
My way generates Buffer data, but AWS S3 is smart enough to interpret this as an image. (I have only added a suffix to my filename; I don't know how to apply image headers...)
This is my part of code for reference:
fs.readFile(imgPath, (err, data) => {
  if (err) { throw err }
  // Once file is read, upload to AWS S3
  const objectParams = {
    Bucket: 'yuyuichiu-personal',
    Key: req.file.filename,
    Body: data
  }
  S3.putObject(objectParams, (err, data) => {
    // store image link and read image with link
  })
})
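Regarding the image headers mentioned above: as far as I know, S3 accepts a ContentType field in the same params object, something like the following (assuming the upload middleware, e.g. multer, exposes the MIME type as req.file.mimetype):

const objectParams = {
  Bucket: 'yuyuichiu-personal',
  Key: req.file.filename,
  Body: data,
  ContentType: req.file.mimetype, // e.g. 'image/png'; sets the Content-Type that S3 serves back
}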

How can I temporarily store a pdf in Firebase filesystem?

I am creating a pdf using JSPDF on server-side, in NodeJS. Once done, I want to create a new folder for the user in Google Drive, upload the pdf to said folder, and also send it to the client-side (browser) for the user to view.
There are two problems that I'm encountering. Firstly, if I send the pdf in the response -via pdf.output()- the images don't display correctly. They are distorted, as though each row of pixels is offset by some amount. A vertical line "|" instead renders as a diagonal "\". An example is shown below.
(Before and after screenshots of the distorted image are omitted here.)
My workaround for this was to instead save it to the filesystem using doc.save() and then send it to the browser using fs.readFileSync(filepath).
However, I've discovered that when running remotely, I don't have file permissions to be saving the pdf and reading it. And after some research and tinkering, I'm thinking that I cannot change these permissions. This is the error I get:
Error: EROFS: read-only file system, open './temp/output.pdf'
at Object.openSync (fs.js:443:3)
at Object.writeFileSync (fs.js:1194:35)
at Object.v.save (/workspace/node_modules/jspdf/dist/jspdf.node.min.js:86:50626)
etc...
So I have this jsPDF object, and I believe I need to either alter the permissions to allow writing/reading, or take the jsPDF object and, I guess, change its format to one accepted by Google Drive, such as a stream or buffer object.
The link below leads me to think these permissions can't be altered since it states: "These files are available in a read-only directory".
https://cloud.google.com/functions/docs/concepts/exec#file_system
I also have no idea 'where' the server filesystem is, or how to access it. Thus, I think the best course of action is to look at sending the pdf in different formats.
I've checked jsPDF documentation for types that pdf.output() can return. These include string, arraybuffer, window, blob, jsPDF.
https://rawgit.com/MrRio/jsPDF/master/docs/jsPDF.html#output
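One direction I'm considering, though I haven't tested it, is skipping the filesystem entirely by turning the arraybuffer output into a stream for the Drive upload:

const stream = require('stream');

// Untested idea: convert the jsPDF output to a Buffer and stream it to Drive
// without ever writing to disk.
const arrayBuffer = pdf.output('arraybuffer');
const pdfBuffer = Buffer.from(arrayBuffer);
const bodyStream = new stream.PassThrough();
bodyStream.end(pdfBuffer);

const media = {
  mimeType: 'application/pdf',
  body: bodyStream,
};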
My simplified code is as follows:
const express = require('express');
const fs = require('fs');
const { google } = require('googleapis'); // needed for google.auth.JWT and google.drive below
const app = express();
const { jsPDF } = require('jspdf');
const credentials = require(credentialsFilepath);
const scopes = [/* scopes in here */];
const auth = new google.auth.JWT(
  credentials.client_email, null,
  credentials.private_key, scopes
);
const drive = google.drive({version: 'v3', auth});
//=========================================================================
app.post('/submit', (req, res) => {
  var pdf = new jsPDF();
  // Set font, fontsize. Added some text, etc.
  pdf.text('blah blah', 10, 10);
  // Add image (signature) from canvas, which is passed as a dataURL
  pdf.addImage(img, 'JPEG', 10, 10, 50, 20);
  pdf.save('./temp/output.pdf');
  drive.files.create({
    resource: folderMetaData,
    fields: 'id'
  })
  .then(response => {
    // Store pdf in newly created folder
    var fileMetaData = {
      'name': 'filename.pdf',
      'parents': [response.data.id],
    };
    var media = {
      mimeType: 'application/pdf',
      body: fs.createReadStream('./temp/output.pdf'),
    };
    drive.files.create({
      resource: fileMetaData,
      media: media,
      fields: 'id'
    }, function(err, file) {
      if (err) {
        console.error('Error:', err);
      } else {
        // I have considered piping 'file' back in the response here but can't figure out how
        console.log('File uploaded');
      }
    });
  })
  .catch(error => {
    console.error('Error:', error);
  });
  // Finally, I attempt to send the pdf to client/browser
  res.setHeader('Content-Type', 'application/pdf');
  res.send(fs.readFileSync('./temp/output.pdf'));
});
Edit: After some more searching, I've found a similar question which explains that the fs module is for reading from and writing to the local filesystem.
EROFS error when executing a File Write function in Firebase
I eventually came to a solution after some further reading. I'm not sure who this will be useful for, but...
Turns out the Firebase filesystem has only one directory that you are allowed to write to (the rest is read-only). This directory is named tmp, and I accessed it using the tmp node module [installed with: npm i tmp], since trying to manually reference the path with pdf.save('./tmp/output.pdf') didn't work.
So the only changes to my code were to add in the lines:
var tmp = require('tmp');
var tmpPath = tmp.tmpNameSync();
and then replacing all the instances of './temp/output.pdf' with tmpPath
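Putting those changes into the earlier snippet, the relevant parts look roughly like this:

var tmp = require('tmp');
var tmpPath = tmp.tmpNameSync(); // a unique path under the writable temp directory

pdf.save(tmpPath); // write the jsPDF document to the temp path

var media = {
  mimeType: 'application/pdf',
  body: fs.createReadStream(tmpPath),
};
// ...and later, sending it back to the browser:
res.setHeader('Content-Type', 'application/pdf');
res.send(fs.readFileSync(tmpPath));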

How to upload images using Google Drive API v3 (Node.js) [duplicate]


How to decode base64 PDF string in Flutter?

I know there is a package called dart:convert which lets me decode a base64 image. But apparently, it doesn't work with PDF files. How can I decode a base64 PDF file in Flutter?
I want to store it in Firebase Storage (I know how to do it), but I need the File variable to do that.
I have a web service written in Node.js with a POST route. There, I create a pdf file and encode it to base64. The response is a base64 string; look at the code:
router.post('/pdf', (req, res, next) => {
  //res.send('PDF');
  const fname = req.body.fname;
  const lname = req.body.lname;
  var documentDefinition = {
    content: [ /* write your pdf with pdfMake.org */ ],
    styles: { /* write your styles */ }
  };
  const pdfDoc = pdfMake.createPdf(documentDefinition);
  pdfDoc.getBase64((data) => {
    res.send({ "base64": data });
  });
});
As you can see, it returns the pdf as a base64 string.
Now, in Flutter, I have written this:
http.post("https://mypostaddreess.com", body: json.encode({"data1": "data"}))
    .then((response) {
  print("Response status: ${response.statusCode}");
  print("Response body: ${response.body}");
  var data = json.decode(response.body);
  var pdf = base64.decode(data["base64"]);
});
I have the PDF in the variable 'pdf' as you see. But I don't know how to decode it to download the pdf or show it in my Flutter app.
@SwiftingDuster
A little addition: besides decoding, maybe it's also necessary to create the pdf file and open it.
createPdf() async {
  var bytes = base64Decode(widget.base64String.replaceAll('\n', ''));
  final output = await getTemporaryDirectory();
  final file = File("${output.path}/example.pdf");
  await file.writeAsBytes(bytes.buffer.asUint8List());
  print("${output.path}/example.pdf");
  await OpenFile.open("${output.path}/example.pdf");
  setState(() {});
}
Libraries needed:
1. open_file
2. path_provider
3. pdf
I think it's better to get the byte array and convert it into a pdf file.
Check out my answer here: Get pdf from blob data
This should convert base64 encoded pdf data into a byte array.
import 'dart:convert';
List<int> pdfDataBytes = base64.decode(pdfBase64);
The pdf and the image plugins seems to suit your needs for displaying pdf.
The code should be roughly like so:
import 'package:pdf/pdf.dart';
import 'package:image/image.dart';
...
Image img = decodeImage(pdfDataBytes);
PdfImage image = PdfImage(
  pdf,
  image: img.data.buffer.asUint8List(),
  width: img.width,
  height: img.height,
);
// Display it somehow
...
