Upload file with apollo-upload-client and graphql-yoga 3.x - node.js

In the 3.x versions of graphql-yoga, file uploads use the scalar type File in queries, but apollo-upload-client uses Upload, so how can I make the two work together?

The easy answer is that it just works: use Upload instead of File in the query.
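For example, with apollo-upload-client's createUploadLink in place, the client-side operation can declare the variable as Upload even though the yoga schema declares the argument as File (a sketch; the operation name is illustrative):

import { gql, useMutation } from '@apollo/client'

// the operation uses Upload, the yoga schema uses File; apollo-upload-client
// turns the request into a multipart upload that yoga accepts
const PROFILE_IMAGE_UPLOAD = gql`
  mutation ProfileImageUpload($file: Upload!) {
    profileImageUpload(file: $file)
  }
`

// usage sketch: pass a browser File object as the variable
// const [upload] = useMutation(PROFILE_IMAGE_UPLOAD)
// await upload({ variables: { file } })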

This is slightly off topic, but you can make the setup simpler by just sending a File. You need to remove apollo-upload-client from the client's dependency list, and remove the corresponding upload handling on the backend. Pure file upload example:
schema.graphql
scalar File
extend type Mutation {
profileImageUpload(file: File!): String!
}
resolver.ts
// assumes `import fs from 'node:fs'` and `import sharp from 'sharp'` at the top of resolver.ts
profileImageUpload: async (_, { file }: { file: File }) => {
  // get a ReadableStream from the uploaded blob
  const readableStream = file.stream()
  const reader = readableStream.getReader()
  console.log('file', file.type)
  let _file: Buffer | undefined
  while (true) {
    // for each iteration: value is the next blob fragment
    const { done, value } = await reader.read()
    if (done) {
      // no more data in the stream
      console.log('all blob processed.')
      break
    }
    if (value)
      _file = Buffer.concat([_file || Buffer.alloc(0), Buffer.from(value)])
  }
  if (_file) {
    const image = sharp(_file)
    const metadata = await image.metadata()
    console.log(metadata, 'metadata')
    try {
      // resize to 600x600, convert to webp and write it out
      const webpImage = await sharp(_file).resize(600, 600).webp().toBuffer()
      fs.writeFileSync('test.webp', webpImage)
      console.log(webpImage, 'image')
    }
    catch (error) {
      console.error(error)
    }
  }
  return 'a'
},
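For reference, here is a minimal sketch of sending the file from the client without apollo-upload-client, using a plain fetch and the GraphQL multipart request format that yoga understands (the /graphql URL and the file variable, a browser File object, are assumptions):

const body = new FormData()
body.append(
  'operations',
  JSON.stringify({
    query: 'mutation ($file: File!) { profileImageUpload(file: $file) }',
    variables: { file: null },
  })
)
// map form field "0" onto the file variable
body.append('map', JSON.stringify({ '0': ['variables.file'] }))
body.append('0', file)

const response = await fetch('/graphql', { method: 'POST', body })
const result = await response.json()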

Related

how to create api route that will send a CSV file to the frontend in Next.js

As far as I know (correct me if I'm wrong, please), the flow of downloading a file should be that the frontend makes a call to an API route and everything else happens on the server.
My task was to read data from Firestore and write it to a CSV file. I populated the CSV file with the data, but when I try to send it to the frontend, the only thing in the downloaded file is the first line containing the headers name and email (the file written on my computer is correctly filled with the data). This is my route:
import { NextApiHandler } from "next";
import fs from "fs";
import { stringify } from "csv-stringify";
import { firestore } from "../../firestore";
import { unstable_getServerSession } from "next-auth/next";
import { authOptions } from "./auth/[...nextauth]";
const exportFromFirestoreHandler: NextApiHandler = async (req, res) => {
  const session = await unstable_getServerSession(req, res, authOptions);
  if (!session) {
    return res.status(401).json({ message: "You must be authorized here" });
  }
  const filename = "guestlist.csv";
  const writableStream = fs.createWriteStream(filename);
  const columns = ["name", "email"];
  const stringifier = stringify({ header: true, columns });
  const querySnapshot = await firestore.collection("paprockibrzozowski").get();
  await querySnapshot.docs.forEach((entry) => {
    stringifier.write([entry.data().name, entry.data().email], "utf-8");
  });
  stringifier.pipe(writableStream);
  const csvFile = await fs.promises.readFile(
    `${process.cwd()}/${filename}`,
    "utf-8"
  );
  res.status(200).setHeader("Content-Type", "text/csv").send(csvFile);
};
export default exportFromFirestoreHandler;
Since I await querySnapshot and await readFile, I would expect the entire content of the file to be sent to the frontend. Can you please tell me what I am doing wrong?
Thanks
If anyone struggles with the same thing, here is the answer, based on @Nelloverflowc's answer below (thank you for getting me this far). However, the file was not always populated with data. At first I tried like so:
stringifier.on("close", async () => {
const csvFile = fs.readFileSync(`${process.cwd()}/${filename}`, "utf-8");
res
.status(200)
.setHeader("Content-Type", "text/csv")
.setHeader("Content-Disposition", `attachment; filename=${filename}`)
.send(csvFile);
});
stringifier.end();
The API of https://csv.js.org/ must have changed, because instead of on('finish') it is on('close') now. Reading the file synchronously did the job of always getting the file populated with the correct data, but along with it there was an error:
API resolved without sending a response for /api/export-from-db, this may result in stalled requests.
The solution to that is to send the file as a readable stream, like so:
try {
  const csvFile = fs.createReadStream(`${process.cwd()}/${filename}`);
  res
    .status(200)
    .setHeader("Content-Type", "text/csv")
    .setHeader("Content-Disposition", `attachment; filename=${filename}`)
    .send(csvFile);
} catch (error) {
  res.status(400).json({ error });
}
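An equivalent way to do this (a sketch, not from the original answer) is to set the headers and pipe the read stream into the response:

// sketch: pipe the CSV read stream straight into the Node response
const csvStream = fs.createReadStream(`${process.cwd()}/${filename}`);
res
  .status(200)
  .setHeader("Content-Type", "text/csv")
  .setHeader("Content-Disposition", `attachment; filename=${filename}`);
csvStream.pipe(res);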
Here is the thread and the discussion that helped me:
Node.js send file in response
The await on that forEach is most definitely not doing what you expect it to do; also, you probably shouldn't use await and forEach together.
Either switch to using the Sync API of the csv-stringify library, or do something along these lines (assuming the first .get() promise actually resolves to the values you expect):
[...]
stringifier.pipe(writableStream);
stringifier.on('finish', async () => {
  const csvFile = await fs.promises.readFile(
    `${process.cwd()}/${filename}`,
    "utf-8"
  );
  res.status(200).setHeader("Content-Type", "text/csv").send(csvFile);
});
for (const entry of querySnapshot.docs) {
  stringifier.write([entry.data().name, entry.data().email], "utf-8");
}
stringifier.end();
[...]
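For the first alternative, here is a minimal sketch using csv-stringify's Sync API (it reuses querySnapshot and res from the handler above; the import path assumes a recent csv-stringify release):

import { stringify as stringifySync } from "csv-stringify/sync";

// build all rows in memory and stringify them in one call
const records = querySnapshot.docs.map((entry) => [
  entry.data().name,
  entry.data().email,
]);
const csv = stringifySync(records, { header: true, columns: ["name", "email"] });
res.status(200).setHeader("Content-Type", "text/csv").send(csv);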

retrieve image from mongodb and display on client side

Now I'm using Express, Node.js, and MongoDB. I just saw that images can be stored in MongoDB with multer and GridFS storage, and that part works.
And I need to get them back to the client side. I guess the image can be converted from that chunk binary back to an image, but I'm really not sure how to do so. My ultimate purpose is to display a menu with name, price, and picture from MongoDB, which I uploaded.
Does anyone know how to retrieve the images and send the image files from the controller to the boundary class?
Additional resources:
// this is the entity class, which obtains information about image files
static async getImages(menu) {
  try {
    let filter = Object.values(menu.image)
    const files = await this.files.find({ filename: { $in: filter } }).toArray()
    let fileInfos = []
    for (const file of files) {
      let chunk = await this.chunks.find({ files_id: file._id }).toArray()
      console.log(chunk.data)
      fileInfos.push(chunk.data)
    }
    return fileInfos
  } catch (err) {
    console.log(`Unable to get files: ${err.message}`)
  }
}
So the chunk object contains this:
{
_id: new ObjectId("627a28cda6d7935899174cd4"),
files_id: new ObjectId("627a28cda6d7935899174cd3"),
n: 0,
data: new Binary(Buffer.from("89504e470d0a1a0a0000000d49484452000000180000001808020000006f15aaaf0000000674524e530000000000006ea607910000009449444154789cad944b12c0200843a5e3fdaf9c2e3a636d093f95a586f004b5b5e30100c0b2f8daac6d1a25a144e4b74288325e5a23d6b6aea965b3e643e4243b2cc428f472908f35bb572dace8d4652e485bab83f4c84a0030b6347e3cb5cc28dbb84721ff23704c17a7661ad1ee96dc5f22ff5061f458e29621447e4ec8557ba585a99152b97bb4f5d5d68c92532b10f967bc015ce051246ff76d8b0000000049454e44ae426082", "hex"), 0)
}
// this is the controller class
static async apiViewMenu(_req, res) {
  try {
    let menus = await MenusDAO.getAllMenus()
    for (const menu of menus) {
      menu.images = await ImagesDAO.getImages(menu)
    }
    // return the menus list
    res.json(menus)
  } catch (err) {
    res.status(400).json({ error: err.message })
  }
}
I did not handle converting this buffer data to an image because I do not know how...
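A common way to handle this (a sketch, not from the original post) is to concatenate each file's ordered chunk buffers and expose them as a base64 data URI that the client can put straight into an img tag's src:

// sketch: chunksCollection is the GridFS fs.chunks collection used by getImages above
static async getImageDataUri(file, chunksCollection) {
  const chunks = await chunksCollection
    .find({ files_id: file._id })
    .sort({ n: 1 })
    .toArray()
  // chunk.data is a BSON Binary; .buffer exposes the raw bytes
  const buffer = Buffer.concat(chunks.map((chunk) => chunk.data.buffer))
  const contentType = file.contentType || 'image/png' // fallback type is an assumption
  return `data:${contentType};base64,${buffer.toString('base64')}`
}

The resulting strings could be pushed into fileInfos in place of chunk.data and used directly by the client.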

Smooch - create attachments from buffer

I'm trying to create an image attachment via the smooch-core API.
I have the image as a base64 buffer, and I try something like this:
smoochClient.attachments
  .create({
    appId: appId,
    props: {
      for: 'message',
      access: 'public',
      appUserId: appUserId
    },
    source: myBuffer
  })
  .then(() => {
    console.log('OK');
  }).catch(err => {
    console.log(JSON.stringify(err));
  });
I get this error: "status":413,"statusText":"Payload Too Large"
[When I upload this image normally through Postman it works fine, so it's not too big. I guess the problem is how the buffer is being sent.]
Anyone know how I can send a buffer to this API?
Are you able to submit the base64 data directly in the postman call?
Reading through the spec here it looks like source should be a filepath/name, and not raw binary data.
The easy way may be to save the base64 data to a[n appropriately encoded] file, then provide that file's path as source
Otherwise I'm not sure I'd go so far as to take apart api_instance.upload_attachment() to feed in the base64 data instead of opening/reading from the specified filename.
I found the following solution:
Create a temporary file to get its read stream, and send that as source instead of the myBuffer parameter. Here is the code for creating the temporary file:
async getTempFileSource(bufferData) {
  const fs = require("fs");
  // remove the mime type prefix, if present
  if (bufferData.startsWith('data:'))
    bufferData = bufferData.split('base64,')[1];
  // get the file extension from the buffer contents
  const type = await require('file-type').fromBuffer(Buffer.from(bufferData, 'base64'));
  if (!type) {
    console.log("getTempFileSource - The buffer data is corrupted", 'red');
    return null;
  }
  // create a temporary file
  const tempFile = require('tmp').fileSync({ postfix: '.' + type.ext });
  // append the buffer data to the temp file
  fs.appendFileSync(tempFile.name, Buffer.from(bufferData, 'base64'));
  // create a read stream from the temp file
  const source = fs.createReadStream(tempFile.name);
  // remove the temp file
  tempFile.removeCallback();
  return source;
}
Here is the code for creating the attachment:
return new Promise(async (resolve, reject) => {
  const source = await getTempFileSource(bufferData);
  if (!source)
    resolve(null);
  else {
    session.smoochClient.attachments
      .create({
        appId: appId,
        props: {
          for: 'message',
          access: 'public',
          appUserId: appUserId
        },
        source: source
      })
      .then(res => {
        resolve(res);
      }).catch(err => {
        reject(err);
      });
  }
});

how to make formidable not save to var/folders on nodejs and express app

I'm using formidable to parse incoming files and store them on AWS S3.
While debugging the code, I found out that formidable first saves the file to disk at /var/folders/, and over time unnecessary files stack up on disk, which could become a big problem later.
It was very silly of me to use code without fully understanding it, and now I have to figure out how to either remove the parsed file after saving it to S3, or save it to S3 without storing it on disk at all.
But the question is: how do I do it?
I would appreciate it if someone could point me in the right direction.
This is how I handle the files:
import formidable, { Files, Fields } from 'formidable';

const form = new formidable.IncomingForm();
form.parse(req, async (err: any, fields: Fields, files: Files) => {
  let uploadUrl = await util
    .uploadToS3({
      file: files.uploadFile,
      pathName: 'myPathName/inS3',
      fileKeyName: 'file',
    })
    .catch((err) => console.log('S3 error =>', err));
});
This is how I solved this problem:
When I parse incoming multipart form data, I have access to all the details of the files, because they have already been parsed and saved to the local disk on the server/my computer. So, using the path variable given to me by formidable, I unlink/remove each file with Node's built-in fs.unlink function. Of course, I only remove the file after saving it to AWS S3.
This is the code:
import fs from 'fs';
import formidable, { Files, Fields } from 'formidable';

const form = new formidable.IncomingForm();
form.multiples = true;
form.parse(req, async (err: any, fields: Fields, files: Files) => {
  const pathArray = [];
  try {
    const s3Url = await util.uploadToS3(files);
    // do something with the s3Url
    pathArray.push(files.uploadFileName.path);
  } catch (error) {
    console.log(error);
  } finally {
    pathArray.forEach((element: string) => {
      fs.unlink(element, (err: any) => {
        if (err) console.error('error:', err);
      });
    });
  }
});
I also found a solution which you can take a look at here, but due to the architecture I found it slightly hard to implement without changing my original code (or let's just say I didn't fully understand the given implementation).
I think I found it. According to the docs (see options.fileWriteStreamHandler): "you need to have a function that will return an instance of a Writable stream that will receive the uploaded file data. With this option, you can have any custom behavior regarding where the uploaded file data will be streamed for. If you are looking to write the file uploaded in other types of cloud storages (AWS S3, Azure blob storage, Google cloud storage) or private file storage, this is the option you're looking for. When this option is defined the default behavior of writing the file in the host machine file system is lost."
const form = formidable({
  fileWriteStreamHandler: someFunction,
});
EDIT: My whole code
import formidable from "formidable";
import { Writable } from "stream";
import { Buffer } from "buffer";
import { v4 as uuidv4 } from "uuid";

export const config = {
  api: {
    bodyParser: false,
  },
};

const formidableConfig = {
  keepExtensions: true,
  maxFileSize: 10_000_000,
  maxFieldsSize: 10_000_000,
  maxFields: 2,
  allowEmptyFiles: false,
  multiples: false,
};

// promisify formidable
function formidablePromise(req, opts) {
  return new Promise((accept, reject) => {
    const form = formidable(opts);
    form.parse(req, (err, fields, files) => {
      if (err) {
        return reject(err);
      }
      return accept({ fields, files });
    });
  });
}

// collects the uploaded bytes into the `acc` array instead of a file on disk
const fileConsumer = (acc) => {
  const writable = new Writable({
    write: (chunk, _enc, next) => {
      acc.push(chunk);
      next();
    },
  });
  return writable;
};

// inside the handler (`storage` is the initialized storage client; its import is not shown in the original)
export default async function handler(req, res) {
  const token = uuidv4();
  try {
    const chunks = [];
    const { fields, files } = await formidablePromise(req, {
      ...formidableConfig,
      // consume this, otherwise formidable tries to save the file to disk
      fileWriteStreamHandler: () => fileConsumer(chunks),
    });
    // do something with the files
    const contents = Buffer.concat(chunks);
    const bucketRef = storage.bucket("your bucket");
    const file = bucketRef.file(files.mediaFile.originalFilename);
    await file
      .save(contents, {
        public: true,
        metadata: {
          contentType: files.mediaFile.mimetype,
          metadata: { firebaseStorageDownloadTokens: token },
        },
      })
      .then(() => {
        file.getMetadata().then((data) => {
          const fileName = data[0].name;
          const media_path = `https://firebasestorage.googleapis.com/v0/b/${bucketRef?.id}/o/${fileName}?alt=media&token=${token}`;
          console.log("File link", media_path);
        });
      });
  } catch (e) {
    // handle errors
    console.log("ERR PREJ ...", e);
  }
}
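Since the original question targets AWS S3, here is a minimal sketch of the same fileWriteStreamHandler idea that streams the upload straight to S3 instead of buffering it in memory (the bucket name and key are assumptions, and it reuses formidablePromise and formidableConfig from the listing above):

import { PassThrough } from "stream";
import { S3Client } from "@aws-sdk/client-s3";
import { Upload } from "@aws-sdk/lib-storage";

const s3 = new S3Client({}); // region/credentials come from the environment

export default async function handler(req, res) {
  // everything formidable writes into `pass` is consumed by the S3 upload
  const pass = new PassThrough();
  const upload = new Upload({
    client: s3,
    params: { Bucket: "my-bucket", Key: "my-upload-key", Body: pass }, // assumptions
  });

  const uploadDone = upload.done();
  await formidablePromise(req, {
    ...formidableConfig,
    fileWriteStreamHandler: () => pass,
  });
  await uploadDone;
  res.status(200).json({ uploaded: true });
}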

How to modify a stream from request to a new response stream in HummusJS

I need to inject a png into a PDF using HummusJS.
In the production version of my API, I'll be receiving a post request containing a base64 string that I'll convert to a binary reader stream. (In the example below, I'm using a test read from a local file.)
Here's my test input stream (which is defined in a "create" object factory):
hummusReadStreamObject (filePath) {
  let fileData = fs.readFileSync(filePath).toString('hex');
  let result = []
  for (var i = 0; i < fileData.length; i += 2) {
    result.push('0x' + fileData[i] + '' + fileData[i + 1])
  }
  return new hummus.PDFRStreamForBuffer(result)
},
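As an aside, PDFRStreamForBuffer can usually be handed the Buffer from readFileSync directly, which avoids building the array of hex strings (a sketch; worth verifying against the hummus version in use):

hummusReadStreamObject (filePath) {
  // sketch: pass the raw Buffer straight to hummus
  return new hummus.PDFRStreamForBuffer(fs.readFileSync(filePath))
},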
I generate a PNG (I have confirmed the PNG is written and valid) and then perform the modification during the write stream's 'finish' event:
png.on('finish', function () {
  create.modifiedPDFResponseStream({
    pngFileName,
    readStream: create.hummusReadStreamObject('/foo/file/path'),
    res
  })
})
The modifiedPDFResponseStream now pulls in the PNG and is supposed to append it to the file:
modifiedPDFResponseStream ({ pngFileName, readStream, res }) {
  const writeStream = new hummus.PDFStreamForResponse(res)
  const pdfWriter = hummus.createWriterToModify(
    readStream,
    writeStream,
    { log: `path/to/error.txt` })
  debugger
  const pageModifier = new hummus.PDFPageModifier(pdfWriter, 0);
  if (pngFileName) {
    const ctx = pageModifier.startContext().getContext()
    ctx.drawImage(2, 2, `./path/to/${pngFileName}`)
    pageModifier.endContext().writePage()
    pdfWriter.end()
  }
}
I sense that I'm close to a solution; the error logs do not report any problems, but I receive the following exception when debugging via Chrome:
events.js:183 Uncaught Error: write after end
at write_ (_http_outgoing.js:622:15)
at ServerResponse.write (_http_outgoing.js:617:10)
at PDFStreamForResponse.write
Is the issue something to do with the fact that the stream is being populated during the .png's .on('finish', ...) event? If so, is there a synchronous approach I could take that would mitigate this problem?
Inspired by the solution from @JAM in "wait for all streams to finish - stream a directory of files":
I solved this issue by having the .pngWriteStream method return a promise that resolves when the write stream finishes:
return new Promise((resolve, reject) => {
  try {
    writeStream.on('finish', resolve)
  } catch (e) {
    $.logger.error(e.message)
    reject()
  }
})
so instead of:
const png = create.pngWriteStream({ ... })
png.on('finish', function () {
  create.modifiedPDFResponseStream({
    pngFileName,
    readStream: create.hummusReadStreamObject('/foo/file/path'),
    res
  })
})
I await the resolution of the promise
const png = await create.pngWriteStream({
  fileName: pngFileName,
  value: '123456789'
})
and the img file is available for hummus.js to write to the pdf!
create.modifiedPDFResponseStream({
  pngFileName,
  readStream: create.hummusReadStreamObject('/foo/file/path'),
  writeStream: new hummus.PDFStreamForResponse(res)
})
