I'm trying to create an image attachment via the smooch-core API.
I have the image as a base64-encoded Buffer, and I try something like this:
smoochClient.attachments
  .create({
    appId: appId,
    props: {
      for: 'message',
      access: 'public',
      appUserId: appUserId
    },
    source: myBuffer
  })
  .then(() => {
    console.log('OK');
  }).catch(err => {
    console.log(JSON.stringify(err));
  });
I get this error: "status":413,"statusText":"Payload Too Large"
(When I upload the same image through Postman it works fine, so the file isn't actually too big - I suspect the problem is how the Buffer is being sent.)
Does anyone know how I can send a buffer to this API?
Are you able to submit the base64 data directly in the Postman call?
Reading through the spec here, it looks like source should be a file path/name, not raw binary data.
The easy way may be to save the base64 data to an (appropriately encoded) file, then provide that file's path as source.
Otherwise, I'm not sure I'd go so far as to take apart api_instance.upload_attachment() to feed in the base64 data instead of having it open/read the specified filename.
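A minimal sketch of that suggestion (hedged: the temp path and the base64Data variable are placeholders, not names from the smooch docs):

const fs = require('fs');

// Hypothetical: write the decoded base64 payload to a file on disk,
// then pass that file's path as `source` in attachments.create()
const tmpPath = '/tmp/my-image.jpg'; // placeholder path
fs.writeFileSync(tmpPath, Buffer.from(base64Data, 'base64'));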
I found the following solution:
Create a temporary file, get a read stream from it, and send that stream as source instead of the myBuffer parameter. Here is the code for creating the temporary file:
async function getTempFileSource(bufferData) {
  const fs = require("fs");
  // remove the data-URL prefix (mime type), if present
  if (bufferData.startsWith('data:'))
    bufferData = bufferData.split('base64,')[1];
  // detect the file extension from the decoded buffer
  const type = await require('file-type').fromBuffer(Buffer.from(bufferData, 'base64'));
  if (!type) {
    console.log("getTempFileSource - The buffer data is corrupted", 'red');
    return null;
  }
  // create a temporary file with the detected extension
  const tempFile = require('tmp').fileSync({ postfix: '.' + type.ext });
  // append the decoded buffer data to the temp file
  fs.appendFileSync(tempFile.name, Buffer.from(bufferData, 'base64'));
  // create a read stream from the temp file
  const source = fs.createReadStream(tempFile.name);
  // remove the temp file
  tempFile.removeCallback();
  return source;
}
Here is the code for creating the attachment:
return new Promise(async (resolve, reject) => {
  const source = await getTempFileSource(bufferData);
  if (!source)
    resolve(null);
  else {
    session.smoochClient.attachments
      .create({
        appId: appId,
        props: {
          for: 'message',
          access: 'public',
          appUserId: appUserId
        },
        source: source
      })
      .then(res => {
        resolve(res);
      }).catch(err => {
        reject(err);
      });
  }
});
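Since the SDK evidently accepts a readable stream as source (that is what the temp-file version passes), it may be possible to skip the filesystem round-trip and build the stream directly from the decoded buffer. This is an untested sketch of that idea; whether smooch-core also needs a filename hint for the multipart upload is an open assumption:

const { Readable } = require('stream');

// Hedged alternative: decode the base64 payload and stream it directly,
// without creating a temporary file on disk.
function getBufferSource(bufferData) {
  if (bufferData.startsWith('data:'))
    bufferData = bufferData.split('base64,')[1];
  return Readable.from(Buffer.from(bufferData, 'base64'));
}

If the API still responds with 413 or rejects the bare stream, fall back to the temp-file approach above.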
As far as I know (please correct me if I'm wrong), the flow for downloading a file should be that the frontend makes a call to an API route and everything else happens on the server.
My task was to read data from Firestore and write it to a CSV file. I populate the CSV file with the data, but when I try to send it to the frontend, the only thing in the downloaded file is the first line containing the headers name and email (the file written on my computer is correctly filled with the data). This is my route:
import { NextApiHandler } from "next";
import fs from "fs";
import { stringify } from "csv-stringify";
import { firestore } from "../../firestore";
import { unstable_getServerSession } from "next-auth/next";
import { authOptions } from "./auth/[...nextauth]";

const exportFromFirestoreHandler: NextApiHandler = async (req, res) => {
  const session = await unstable_getServerSession(req, res, authOptions);
  if (!session) {
    return res.status(401).json({ message: "You must be authorized here" });
  }
  const filename = "guestlist.csv";
  const writableStream = fs.createWriteStream(filename);
  const columns = ["name", "email"];
  const stringifier = stringify({ header: true, columns });
  const querySnapshot = await firestore.collection("paprockibrzozowski").get();
  await querySnapshot.docs.forEach((entry) => {
    stringifier.write([entry.data().name, entry.data().email], "utf-8");
  });
  stringifier.pipe(writableStream);
  const csvFile = await fs.promises.readFile(
    `${process.cwd()}/${filename}`,
    "utf-8"
  );
  res.status(200).setHeader("Content-Type", "text/csv").send(csvFile);
};

export default exportFromFirestoreHandler;
Since I await the querySnapshot and await readFile, I would expect the entire content of the file to be sent to the frontend. Can you please tell me what I am doing wrong?
Thanks
In case anyone struggles with the same thing, here is the answer, based on @Nelloverflowc's suggestion (thank you for getting me this far). However, the file was not always populated with data; at first I tried this:
stringifier.on("close", async () => {
  const csvFile = fs.readFileSync(`${process.cwd()}/${filename}`, "utf-8");
  res
    .status(200)
    .setHeader("Content-Type", "text/csv")
    .setHeader("Content-Disposition", `attachment; filename=${filename}`)
    .send(csvFile);
});
stringifier.end();
The API of https://csv.js.org/ must have changed, because instead of on('finish') it is on('close') now. Reading the file synchronously did the job of always getting the file populated with the correct data, but along with it came this error:
API resolved without sending a response for /api/export-from-db, this may result in stalled requests.
The solution to that is to convert the file into a readable stream, like so:
try {
  const csvFile = fs.createReadStream(`${process.cwd()}/${filename}`);
  res
    .status(200)
    .setHeader("Content-Type", "text/csv")
    .setHeader("Content-Disposition", `attachment; filename=${filename}`)
    .send(csvFile);
} catch (error) {
  res.status(400).json({ error });
}
Here is the thread and the discussion that helped me:
Node.js send file in response
The await on that forEach is most definitely not doing what you expect it to do; also, you probably shouldn't use await and forEach together.
Either switch to using the Sync API of the csv-stringify library, or do something along these lines (assuming the first .get() actually resolves to the actual values from a promise):
[...]
stringifier.pipe(writableStream);
stringifier.on('finish', async () => {
  const csvFile = await fs.promises.readFile(
    `${process.cwd()}/${filename}`,
    "utf-8"
  );
  res.status(200).setHeader("Content-Type", "text/csv").send(csvFile);
});
for (const entry of querySnapshot.docs) {
  stringifier.write([entry.data().name, entry.data().email], "utf-8");
}
stringifier.end();
[...]
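For completeness, the Sync API route mentioned above could look roughly like this (a sketch assuming the csv-stringify/sync entry point; not taken from the original answer):

import { stringify } from "csv-stringify/sync";

// Build the whole CSV in memory, then send it - no streams or events involved.
const records = querySnapshot.docs.map((entry) => [entry.data().name, entry.data().email]);
const csv = stringify(records, { header: true, columns: ["name", "email"] });
res.status(200).setHeader("Content-Type", "text/csv").send(csv);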
I am using the Google Drive for Developers Drive API (V3) Node.js quickstart.
In particular I am concentrating on the following function, where I have customized the pageSize to 1 for testing and am calling my function read(file.name):
/**
 * Lists the names and IDs of up to 10 files.
 * @param {google.auth.OAuth2} auth An authorized OAuth2 client.
 */
function listFiles(auth) {
  const drive = google.drive({version: 'v3', auth});
  drive.files.list({
    pageSize: 1, // only find the last modified file in dev folder
    fields: 'nextPageToken, files(id, name)',
  }, (err, res) => {
    if (err) return console.log('The API returned an error: ' + err);
    const files = res.data.files;
    if (files.length) {
      console.log('Files:');
      files.map((file) => {
        console.log(`${file.name} (${file.id})`);
        read(file.name); // my function here
      });
    } else {
      console.log('No files found.');
    }
  });
}
// custom code - function to read and output file contents
function read(fileName) {
  const readableStream = fs.createReadStream(fileName, 'utf8');
  readableStream.on('error', function (error) {
    console.log(`error: ${error.message}`);
  });
  readableStream.on('data', (chunk) => {
    console.log(chunk);
  });
}
This code reads the file from the Google Drive folder that is synced. I am using this local folder for development. I have found the pageSize: 1 parameter produces the last file that has been modified in this local folder. Therefore my process has been:
Edit .js code file
Make minor edit on testfiles (first txt then gdoc) to ensure it is last modified
Run the code
I am testing a text file against a GDOC file. The filenames are atest.txt & 31832_226114__0001-00028.gdoc respectively. The outputs are as follows:
PS C:\Users\david\Google Drive\Technical-local\gDriveDev> node . gdocToTextDownload.js
Files:
atest.txt (1bm1E4s4ET6HVTrJUj4TmNGaxqJJRcnCC)
atest.txt this is a test file!!
PS C:\Users\david\Google Drive\Technical-local\gDriveDev> node . gdocToTextDownload.js
Files:
31832_226114__0001-00028 (1oi_hE0TTfsKG9lr8Wl7ahGNvMvXJoFj70LssGNFFjOg)
error: ENOENT: no such file or directory, open 'C:\Users\david\Google Drive\Technical-local\gDriveDev\31832_226114__0001-00028'
My question is:
Why does the script read the text file but not the gdoc?
At this point I must 'hard-code' the .gdoc file extension onto the file name in the function call to produce the required output (as in the text file example), e.g.
read('31832_226114__0001-00028.gdoc');
Which is obviously not what I want to do.
I am aiming to produce a script that will download a large number of gdocs that have been created from .jpg files.
------------------------- code completed below ------------------------
/**
 * Lists the names and IDs of pageSize number of files (using query to define folder of files)
 * @param {google.auth.OAuth2} auth An authorized OAuth2 client.
 */
function listFiles(auth) {
  const drive = google.drive({version: 'v3', auth});
  drive.files.list({
    corpora: 'user',
    pageSize: 100,
    // files in a parent folder that have not been trashed
    // get ID from Drive > Folder by looking at the URL after /folders/
    q: `'11Sejh6XG-2WzycpcC-MaEmDQJc78LCFg' in parents and trashed=false`,
    fields: 'nextPageToken, files(id, name)',
  }, (err, res) => {
    if (err) return console.log('The API returned an error: ' + err);
    const files = res.data.files;
    if (files.length) {
      var ids = [];
      var names = [];
      files.forEach(function(file, i) {
        ids.push(file.id);
        names.push(file.name);
      });
      ids.forEach((fileId, i) => {
        fileName = names[i];
        downloadFile(drive, fileId, fileName);
      });
    } else {
      console.log('No files found.');
    }
  });
}
/**
 * @param {google.auth.OAuth2} auth An authorized OAuth2 client.
 */
function downloadFile(drive, fileId, fileName) {
  // make sure you have valid path & permissions. Use UNIX filepath notation.
  const filePath = `/test/test1/${fileName}`;
  const dest = fs.createWriteStream(filePath);
  let progress = 0;
  drive.files.export(
    { fileId, mimeType: 'text/plain' },
    { responseType: 'stream' }
  ).then(res => {
    res.data
      .on('end', () => {
        console.log(' Done downloading');
      })
      .on('error', err => {
        console.error('Error downloading file.');
      })
      .on('data', d => {
        progress += d.length;
        if (process.stdout.isTTY) {
          process.stdout.clearLine();
          process.stdout.cursorTo(0);
          process.stdout.write(`Downloading ${fileName} ${progress} bytes`);
        }
      })
      .pipe(dest);
  });
}
My question is: Why does the script read the text file but not the gdoc?
This is because you're trying to download a Google Workspace document; only files with binary content can be downloaded using the drive.files.get method. For Google Workspace documents you need to use drive.files.export, as documented here.
From your code, I see you're only listing the files. You will need to identify the type of file you want to download; you can use the mimeType field to check whether you need the export method or get. For example, a Google Doc's mime type is application/vnd.google-apps.document, whereas a docx file (binary) would be application/vnd.openxmlformats-officedocument.wordprocessingml.document.
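A minimal sketch of that branching logic, using only calls already shown in this thread (the text/plain export mime type is just an example):

async function fetchFileStream(drive, file) {
  if (file.mimeType.startsWith('application/vnd.google-apps.')) {
    // Google Workspace document: must be exported to a concrete format
    return drive.files.export(
      { fileId: file.id, mimeType: 'text/plain' },
      { responseType: 'stream' }
    );
  }
  // Binary content: can be downloaded directly
  return drive.files.get(
    { fileId: file.id, alt: 'media' },
    { responseType: 'stream' }
  );
}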
Check the following working example:
Download a file from Google Drive
const fs = require("fs");

const getFile = async (drive, fileId, name) => {
  const res = await drive.files.get({ fileId, alt: "media" }, { responseType: "stream" });
  return new Promise((resolve, reject) => {
    const filePath = `/tmp/${name}`;
    console.log(`writing to ${filePath}`);
    const dest = fs.createWriteStream(filePath);
    let progress = 0;
    res.data
      .on("end", () => {
        console.log("🎉 Done downloading file.");
        resolve(filePath);
      })
      .on("error", (err) => {
        console.error("🚫 Error downloading file.");
        reject(err);
      })
      .on("data", (d) => {
        progress += d.length;
        console.log(`🕛 Downloaded ${progress} bytes`);
      })
      .pipe(dest);
  });
};

const fileKind = "drive#file";
let filesCounter = 0;
const drive = googleClient.drive({ version: "v3" });
const files = await drive.files.list();
// Only files with binary content can be downloaded. Use Export with Docs Editors files.
// Read more at https://developers.google.com/drive/api/v3/reference/files/get
// In this example, any docx file will be downloaded to a temp folder.
const onlyFiles = files.data.files.filter(
  (file) =>
    file.kind === fileKind &&
    file.mimeType === "application/vnd.openxmlformats-officedocument.wordprocessingml.document"
);
const numberOfFilesToDownload = onlyFiles.length;
console.log(`😏 About to download ${numberOfFilesToDownload} files`);
for await (const file of onlyFiles) {
  filesCounter++;
  console.log(`📁 Downloading file ${file.name}, ${filesCounter} of ${numberOfFilesToDownload}`);
  await getFile(drive, file.id, file.name);
}
The answer (as I see it) is that the Node.js script above is running on Windows and must therefore comply with the native file system inherited via the DOS/NT lineage of Windows. The gdoc extension, on the other hand, is a reference created by the Google Drive sync desktop client, and here is the important distinction: the gdoc file references a file stored on Google Drive (the reference lives in the sync folder on the hard drive, while the file itself lives in the cloud on Google Drive). It is therefore not an extension in the usual sense, where an extension marks a file type that a local application can access/read/write directly. So my test function read(fileName) above cannot read the .gdoc the same way it reads the .txt.
Therefore the correct way to access files on Google Drive from a local application is to use the file's ID. The filename is just a convenient way of labelling the IDs so that the user can meaningfully compare the downloaded copy of the file with the original on Google Drive.
(Refer to the original question.) Using the code under ------------------------- code completed below ------------------------ I have added these two functions to Google's Node.js quickstart, replacing the function listFiles(auth) and adding the function downloadFile(drive, fileId, fileName).
The complete script has been used to download multiple files (more than 50 at a time) to my hard drive. This is a useful piece of code in an OCR setup in which a gscript converts .JPG images of historic electoral rolls into readable text. Those gdocs are messy (they still contain the original image and colored fonts of various formats); downloading them as text files with the above script cleans them up. Of course, images are removed from the text files and the fonts are standardized to plain upper/lower-case text. So it's more than just a downloader: it's a filter as well.
I hope this is of some use to someone.
Hope you can help me with this one!
Here is the situation: I want to download a file from my React front end by sending a request to a certain endpoint on my Express back end.
Here is my controller for this route.
I build a query, parse the results to generate a CSV file, and send back that file.
When I console.log the response on the front-end side the data is there and it goes through; however, no dialog opens allowing the client to download the file to local disk.
module.exports.downloadFile = async (req, res) => {
  const sql = await buildQuery(req.query, 'members', connection)
  // Select the wanted data from the database
  connection.query(sql, (err, results, fields) => {
    if (err) throw err;
    // Convert the json into csv
    try {
      const csv = parse(results);
      // Save the file on server
      fs.writeFileSync(__dirname + '/export.csv', csv)
      res.setHeader('Content-disposition', 'attachment; filename=export.csv');
      res.download(__dirname + '/export.csv');
    } catch (err) {
      console.error(err)
    }
    // Reply with the csv file
    // Delete the file
  })
}
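(Side note: the '// Delete the file' step above is never implemented; a hedged sketch of one way to do it, using res.download's completion callback:

res.download(__dirname + '/export.csv', 'export.csv', (err) => {
  if (err) console.error(err);
  // remove the temporary file once the transfer has finished (or failed)
  fs.unlink(__dirname + '/export.csv', (unlinkErr) => {
    if (unlinkErr) console.error(unlinkErr);
  });
});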
Follow one of these functions as an example for your client-side code:
Promise chain (.then):
export const download = (url, filename) => {
  fetch(url, {
    mode: 'no-cors'
    /*
     * ALTERNATIVE MODE {
     *   mode: 'cors'
     * }
     */
  }).then((transfer) => {
    return transfer.blob(); // RETURN DATA TRANSFERRED AS BLOB
  }).then((bytes) => {
    let elm = document.createElement('a'); // CREATE A LINK ELEMENT IN DOM
    elm.href = URL.createObjectURL(bytes); // SET LINK ELEMENT'S CONTENTS
    elm.setAttribute('download', filename); // SET THE ELEMENT'S 'download' ATTRIBUTE TO THE FILENAME PARAM
    elm.click(); // TRIGGER ELEMENT TO DOWNLOAD
    elm.remove();
  }).catch((error) => {
    console.log(error); // OUTPUT ERRORS, SUCH AS CORS WHEN TESTING NON-LOCALLY
  });
}
Async/await:
export const download = async (url, filename) => {
  let response = await fetch(url, {
    mode: 'no-cors'
    /*
     * ALTERNATIVE MODE {
     *   mode: 'cors'
     * }
     */
  });
  try {
    let data = await response.blob();
    let elm = document.createElement('a'); // CREATE A LINK ELEMENT IN DOM
    elm.href = URL.createObjectURL(data); // SET LINK ELEMENT'S CONTENTS
    elm.setAttribute('download', filename); // SET THE ELEMENT'S 'download' ATTRIBUTE TO THE FILENAME PARAM
    elm.click(); // TRIGGER ELEMENT TO DOWNLOAD
    elm.remove();
  } catch (err) {
    console.log(err);
  }
}
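Usage from the React side might look like this, assuming the Express controller above is mounted at a route such as /api/members/download (the path here is hypothetical):

download('/api/members/download', 'export.csv');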
I'm using formidable to parse incoming files and store them on AWS S3.
While debugging the code I found out that formidable first saves the files to disk under /var/folders/, and over time unnecessary files pile up on disk, which could lead to a big problem later.
It was very silly of me to use code without fully understanding it, and now I have to figure out how to either remove the parsed file after saving it to S3, or save it to S3 without storing it on disk at all.
But the question is: how do I do it?
I would appreciate it if someone could point me in the right direction.
This is how I handle the files:
import formidable, { Files, Fields } from 'formidable';

const form = new formidable.IncomingForm();
form.parse(req, async (err: any, fields: Fields, files: Files) => {
  let uploadUrl = await util
    .uploadToS3({
      file: files.uploadFile,
      pathName: 'myPathName/inS3',
      fileKeyName: 'file',
    })
    .catch((err) => console.log('S3 error =>', err));
});
This is how I solved the problem:
When I parse the incoming multipart form data I have access to all the details of the files, because they have already been parsed and saved to the local disk on the server/my computer. So, using the path formidable gives me, I unlink/remove that file with Node's built-in fs.unlink function. Of course, I only remove the file after saving it to AWS S3.
This is the code:
import fs from 'fs';
import formidable, { Files, Fields } from 'formidable';

const form = new formidable.IncomingForm();
form.multiples = true;
form.parse(req, async (err: any, fields: Fields, files: Files) => {
  const pathArray = [];
  try {
    const s3Url = await util.uploadToS3(files);
    // do something with the s3Url
    pathArray.push(files.uploadFileName.path);
  } catch (error) {
    console.log(error)
  } finally {
    pathArray.forEach((element: string) => {
      fs.unlink(element, (err: any) => {
        if (err) console.error('error:', err);
      });
    });
  }
})
I also found a solution, which you can take a look at here, but due to the architecture I found it slightly hard to implement without changing my original code (or let's just say I didn't fully understand the given implementation).
I think I found it. According to the docs (see options.fileWriteStreamHandler): "you need to have a function that will return an instance of a Writable stream that will receive the uploaded file data. With this option, you can have any custom behavior regarding where the uploaded file data will be streamed for. If you are looking to write the file uploaded in other types of cloud storages (AWS S3, Azure blob storage, Google cloud storage) or private file storage, this is the option you're looking for. When this option is defined the default behavior of writing the file in the host machine file system is lost."
const form = formidable({
  fileWriteStreamHandler: someFunction,
});
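For the original S3 use case, such a handler could pipe each incoming file straight into an S3 multipart upload through a PassThrough stream. This is an untested sketch: it assumes the AWS SDK v3 Upload helper from @aws-sdk/lib-storage, and the region, bucket name and key below are placeholders:

import { PassThrough } from "stream";
import { S3Client } from "@aws-sdk/client-s3";
import { Upload } from "@aws-sdk/lib-storage";

const s3 = new S3Client({ region: "us-east-1" }); // placeholder region

const form = formidable({
  fileWriteStreamHandler: () => {
    const pass = new PassThrough();
    // start uploading as soon as formidable begins writing chunks
    const upload = new Upload({
      client: s3,
      params: { Bucket: "my-bucket", Key: `uploads/${Date.now()}`, Body: pass }, // placeholder bucket/key
    });
    upload.done().catch((err) => console.error("S3 upload error =>", err));
    return pass;
  },
});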
EDIT: My whole code
import formidable from "formidable";
import { Writable } from "stream";
import { Buffer } from "buffer";
import { v4 as uuidv4 } from "uuid";

export const config = {
  api: {
    bodyParser: false,
  },
};

const formidableConfig = {
  keepExtensions: true,
  maxFileSize: 10_000_000,
  maxFieldsSize: 10_000_000,
  maxFields: 2,
  allowEmptyFiles: false,
  multiples: false,
};

// promisify formidable
function formidablePromise(req, opts) {
  return new Promise((accept, reject) => {
    const form = formidable(opts);
    form.parse(req, (err, fields, files) => {
      if (err) {
        return reject(err);
      }
      return accept({ fields, files });
    });
  });
}

const fileConsumer = (acc) => {
  const writable = new Writable({
    write: (chunk, _enc, next) => {
      acc.push(chunk);
      next();
    },
  });
  return writable;
};

// inside the handler
export default async function handler(req, res) {
  const token = uuidv4();
  try {
    const chunks = [];
    const { fields, files } = await formidablePromise(req, {
      ...formidableConfig,
      // consume this, otherwise formidable tries to save the file to disk
      fileWriteStreamHandler: () => fileConsumer(chunks),
    });
    // do something with the files
    const contents = Buffer.concat(chunks);
    const bucketRef = storage.bucket("your bucket");
    const file = bucketRef.file(files.mediaFile.originalFilename);
    await file
      .save(contents, {
        public: true,
        metadata: {
          contentType: files.mediaFile.mimetype,
          metadata: { firebaseStorageDownloadTokens: token },
        },
      })
      .then(() => {
        file.getMetadata().then((data) => {
          const fileName = data[0].name;
          const media_path = `https://firebasestorage.googleapis.com/v0/b/${bucketRef?.id}/o/${fileName}?alt=media&token=${token}`;
          console.log("File link", media_path);
        });
      });
  } catch (e) {
    // handle errors
    console.log("ERR PREJ ...", e);
  }
}
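Note that fileConsumer buffers the entire upload in memory (the chunks are later joined with Buffer.concat), which is fine under the 10 MB maxFileSize configured above but would be worth rethinking for very large files.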
Hey guys, I'm confronting a very strange behaviour with GridFS while trying to upload a file.
I send my file as form data together with a code which should be set in the files' metadata.
The files are saved correctly, and the originalname field is always added to the metadata, but the code field, which is a req.body parameter, behaves very strangely.
files.ts
uploadFileFormSubmit(event) {
  const formData = new FormData();
  formData.append('file', event.target.files.item(0));
  formData.append('code', this.courseCode);
  this.fileService.uploadFile(formData).subscribe(res => ....
fileService.ts
uploadFile(data): Observable<GeneralResponse> {
  return this.http.post<GeneralResponse>('/files/uploadFile', data);
}
here is the backend part:
files.js (back-end)
const storage = new GridFsStorage({
  url: dbUrl,
  file: (req, file) => {
    return new Promise((resolve, reject) => {
      crypto.randomBytes(16, (err, buf) => {
        if (err) {
          return reject(err);
        }
        const filename = buf.toString('hex') + path.extname(file.originalname);
        console.log(req.body)
        const code = JSON.parse(JSON.stringify(req.body));
        console.log(code)
        const fileInfo = {
          filename: filename,
          metadata: {
            originalname: file.originalname,
            materialCode: code.code
          },
          bucketName: 'files'
        };
        resolve(fileInfo);
      });
    });
  }
});
As you can see, I parse the req.body in order to get my property (I found this solution here, because req.body was [Object: null prototype] { code: 'myCode' }).
For some files this code value is passed into the metadata, but not always.
Note that there are two console.logs (one before and one after JSON.parse()):
the first (null) object is an Excel file,
the second is a PDF,
the third is a JPG file,
the fourth is a PNG file.
Maybe it's something to do with the extensions, but I cannot imagine why req.body sometimes gets parsed and the code ends up in the metadata, but other times not :/
So, what can cause this behaviour? Thanks for the help in advance :D