I'm using Express, Node.js, and MongoDB. I just learned that images can be stored in MongoDB with multer and GridFS storage, and it works.
Now I need to get the images back to the client side. I guess the image can be converted from that binary chunk data back into an image, but I'm not really sure how to do so. My ultimate purpose is to display a menu with the name, price, and picture that I uploaded to MongoDB.
Does anyone know how to retrieve the image and send the file from the controller to the boundary class?
Additional resources:
// entity class method for obtaining information about an image file
static async getImages(menu) {
  try {
    let filter = Object.values(menu.image)
    const files = await this.files.find({ filename: { $in: filter } }).toArray()
    let fileInfos = []
    for (const file of files) {
      // toArray() yields an ARRAY of chunk documents, so chunk.data on the
      // array itself would be undefined; a GridFS file may also span several
      // chunks, which have to be concatenated in order of n
      const chunks = await this.chunks
        .find({ files_id: file._id })
        .sort({ n: 1 })
        .toArray()
      const data = Buffer.concat(chunks.map((c) => c.data.buffer))
      fileInfos.push(data)
    }
    return fileInfos
  } catch (err) {
    console.log(`Unable to get files: ${err.message}`)
  }
}
**So a chunk document contains this:**
{
  _id: new ObjectId("627a28cda6d7935899174cd4"),
  files_id: new ObjectId("627a28cda6d7935899174cd3"),
  n: 0,
  data: new Binary(Buffer.from("89504e470d0a1a0a0000000d49484452000000180000001808020000006f15aaaf0000000674524e530000000000006ea607910000009449444154789cad944b12c0200843a5e3fdaf9c2e3a636d093f95a586f004b5b5e30100c0b2f8daac6d1a25a144e4b74288325e5a23d6b6aea965b3e643e4243b2cc428f472908f35bb572dace8d4652e485bab83f4c84a0030b6347e3cb5cc28dbb84721ff23704c17a7661ad1ee96dc5f22ff5061f458e29621447e4ec8557ba585a99152b97bb4f5d5d68c92532b10f967bc015ce051246ff76d8b0000000049454e44ae426082", "hex"), 0)
}
// controller class
static async apiViewMenu(_req, res) {
  try {
    let menus = await MenusDAO.getAllMenus()
    for (const menu of menus) {
      menu.images = await ImagesDAO.getImages(menu)
    }
    // return the menus list
    res.json(menus)
  } catch (err) {
    res.status(400).json({ error: err.message })
  }
}
I did not handle converting this buffer data to an image because I do not know how...
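As a sketch of the conversion step (the helper name and the hard-coded mime type are my assumptions, not part of the code above): the chunk documents can be stitched back together and encoded as a data URL, which an `<img>` tag can render directly:

```javascript
// Turn GridFS chunk documents into something an <img> tag can render.
// `chunks` stands in for the array returned by this.chunks.find(...).toArray().
function chunksToDataUrl(chunks, mimeType = "image/png") {
  // GridFS splits one file across ordered chunks; stitch them back together
  const sorted = [...chunks].sort((a, b) => a.n - b.n);
  // each chunk's data is a BSON Binary; its .buffer property holds the raw bytes
  const buffer = Buffer.concat(sorted.map((c) => c.data.buffer));
  // a data URL can be dropped straight into <img src="...">
  return `data:${mimeType};base64,${buffer.toString("base64")}`;
}

// mimicking the chunk document shown above (Binary exposes .buffer)
const chunk = { n: 0, data: { buffer: Buffer.from("89504e470d0a1a0a", "hex") } };
console.log(chunksToDataUrl([chunk]).slice(0, 30)); // data:image/png;base64,iVBORw0K
```

A data URL keeps everything in one JSON response, which fits the menu use case; for larger images, streaming through the driver's `GridFSBucket.openDownloadStreamByName` into a dedicated Express route avoids inflating the JSON payload.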
In the 3.x versions of graphql-yoga, file uploads use the scalar type File in queries, but apollo-upload-client uses Upload, so how can I make them work together?
The easy answer is that it just works if you use Upload instead of File in the query.
This is off topic, but you can build a simpler solution by just sending a File. You need to remove apollo-upload-client from the client, and also from the backend. Pure file upload example:
schema.graphql
scalar File
extend type Mutation {
  profileImageUpload(file: File!): String!
}
resolver.ts
profileImageUpload: async (_, { file }: { file: File }) => {
  // get a ReadableStream from the blob
  const readableStream = file.stream()
  const stream = readableStream.getReader()
  console.log('file', file.type)
  let _file: Buffer | undefined
  while (true) {
    // for each iteration: value is the next blob fragment
    const { done, value } = await stream.read()
    if (done) {
      // no more data in the stream
      console.log('all blob processed.')
      break
    }
    if (value)
      _file = Buffer.concat([_file || Buffer.alloc(0), Buffer.from(value)])
  }
  if (_file) {
    const image = sharp(_file)
    const metadata = await image.metadata()
    console.log(metadata, 'metadata')
    try {
      const resized = await sharp(_file).resize(600, 600).webp().toBuffer()
      fs.writeFileSync('test.webp', resized)
      console.log(resized, 'image')
    } catch (error) {
      console.error(error)
    }
  }
  return 'a'
},
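As an aside, the manual reader loop can likely be collapsed: the WHATWG File/Blob interface the resolver receives already exposes `arrayBuffer()`, so (assuming `file` really is a standard File/Blob) the accumulation reduces to one line:

```javascript
// sketch: replacing the stream-reader loop, assuming `file` is a WHATWG File/Blob
async function fileToBuffer(file) {
  // arrayBuffer() resolves once every fragment of the blob has been read
  return Buffer.from(await file.arrayBuffer());
}

// quick demonstration with a Blob standing in for an uploaded file
const blob = new Blob(["hello upload"]);
fileToBuffer(blob).then((buf) => console.log(buf.toString())); // hello upload
```

The resulting Buffer feeds into `sharp(...)` exactly like `_file` does above.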
I'm using html-pdf to generate a PDF document on the server and then exporting it to the frontend to be downloaded. The PDF generates correctly on the server; however, all the client downloads is a PDF with the correct number of pages but no content. I'm saving the file locally on the server, and if I pull that document up in my browser it is indeed formatted exactly how I want it. But no matter what I do, all the client receives is several blank pages.
I've read in several SO posts about byte-shaving, and that the content isn't being received correctly because of incorrect formatting (i.e. the document is in utf-8 or some such thing) and needs to be in base64. I was successful in creating a base64 document (not shown here), but that created a new problem: the downloaded document was no longer readable, and the server-side document also ended up blank.
I'm at my wits' end. The document is correct, but I cannot receive it correctly on the client. I'm hoping some fresh eyes can make sense of what I can't see.
server
const pdf = require("html-pdf");
const asyncHandler = require("../middleware/async");
const { error: errorHandler } = require("../middleware/error");
const fs = require("fs");
const path = require("path");
const Deck = require("../models/deckModel");
const DeckListTemp = require("../templates/DeckListTemp");
const Dynamic = require("../models/dynamicModel");
/**
 * @description Generate PDF document of deck list
 * @param {Number} deckId - Deck ID
 * @returns {String} - PDF document
 * @author Austin Howard
 * @version 1.0.0
 * @since 1.0.0
 * @date 2022-5-30
 */
module.exports = asyncHandler(async (req, response, next) => {
  try {
    // find the deck
    const deck = await Deck.findById(req.params.deckId);
    // get the logo of the application
    const logo = await Dynamic.findOne({ type: "Logo" });
    // check to see if the deck and logo exist
    if (!deck || !logo) {
      return response.status(404).json({
        message: "Cannot find deck or logo on Server",
      });
    }
    // sort cards by name (Array.prototype.sort is synchronous, no await needed)
    deck.cards.sort((a, b) => {
      if (a.name < b.name) {
        return -1;
      } else if (a.name > b.name) {
        return 1;
      } else {
        return 0;
      }
    });
    // pass the deck to the template
    const html = DeckListTemp(deck, logo.value);
    // strip forward slashes from the deck name, since a slash would send
    // the pdf into an unintended subdirectory
    deck.deck_name = deck.deck_name.replace(/\//g, " ");
    // wrap the pdf creation so we can await it
    await Promise.all([
      // create the pdf
      pdf
        .create(html, {
          encoding: "UTF-8",
          format: "A3",
          border: {
            top: "1in",
            right: "1in",
            bottom: "1in",
            left: "1in",
          },
        })
        .toFile(
          `${__dirname}../../../public/pdf/${deck.deck_name
            .substring(0, 50)
            .replace(/\//g, " ")}.pdf`,
          async function (err, res) {
            try {
              if (err) {
                console.log(err);
              } else {
                console.log(`Document created for ${deck.deck_name}`);
                console.log(
                  `creating readable stream from ${deck.deck_name
                    .substring(0, 50)
                    .replace(/\//g, " ")}.pdf`
                );
                // create a buffer from the file so it can be sent to the client
                // (Buffer.from and readFileSync are synchronous, no await needed)
                const buffer = Buffer.from(
                  fs.readFileSync(
                    `${__dirname}../../../public/pdf/${deck.deck_name
                      .substring(0, 50)
                      .replace(/\//g, " ")}.pdf`
                  )
                );
                response.setHeader(
                  "Content-disposition",
                  `attachment; filename=${deck.deck_name}.pdf`
                );
                await response.sendFile(
                  path
                    .join(
                      `${__dirname}../../../public/pdf/${deck.deck_name
                        .substring(0, 50)
                        .replace(/\//g, " ")}.pdf`
                    )
                    .toString("base64"),
                  (err) => {
                    if (err) {
                      console.log(err);
                    } else {
                      console.log("downloading");
                    }
                  }
                );
                // fs.unlinkSync(...) // cleanup of the generated file, currently disabled
              }
            } catch (error) {
              console.error(error);
            }
          }
        ),
    ]);
  } catch (error) {
    console.error(error);
    response.status(500).json({
      success: false,
      message: `Server Error - ${error.message}`,
    });
  }
});
client
import axios from "axios";
import {
  DECK_PREVIEW_REQUEST,
  DECK_PREVIEW_SUCCESS,
} from "../../constants/deckConstants";
import { setAlert } from "../../utils/alert";
export const generatePdf = (deck) => async (dispatch) => {
  try {
    dispatch({ type: DECK_PREVIEW_REQUEST });
    const config = {
      headers: {
        "Content-Type": "application/force-download",
      },
    };
    const data = await axios.post(`/api/deck/${deck._id}/pdf`, config);
    console.log(data.data);
    // Create blob link to download
    const url = window.URL.createObjectURL(
      new Blob([data.data], { type: "application/octet-stream" })
    );
    const link = document.createElement("a");
    link.href = url;
    link.setAttribute("download", `${deck.deck_name}.pdf`);
    // Append link element to the page
    document.body.appendChild(link);
    // Start download
    link.click();
    // Clean up and remove the link
    link.parentNode.removeChild(link);
    dispatch({ type: DECK_PREVIEW_SUCCESS });
  } catch (error) {
    console.error(error);
    dispatch(setAlert(`problem generating pdf ${error}`, "danger"));
  }
};
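Two details in this action are worth flagging (assumptions about intent, since I can't run the app): `axios.post(url, data, config)` takes the request body as its second argument, so the call above sends `config` as the body and applies no config at all; and without a `responseType`, axios text-decodes the binary PDF, which matches the blank-pages symptom. A sketch of the corrected call, factored so the HTTP client is injectable:

```javascript
// `axios` is passed in to keep the sketch self-contained and testable
async function downloadDeckPdf(axios, deck) {
  const res = await axios.post(`/api/deck/${deck._id}/pdf`, null, {
    responseType: "blob", // keep the pdf bytes intact instead of utf-8 text
  });
  return res.data; // a Blob, ready for URL.createObjectURL(...)
}
```

The returned Blob then plugs into the existing `createObjectURL` / anchor-click code unchanged.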
I am trying to upload large PDF files to Elasticsearch to index them.
uploadPDFDocument: async (req, res, next) => {
  try {
    let data = req.body;
    let client = await cloudSearchController.getElasticSearchClient();
    const documentData = await fs.readFile("./large.pdf");
    const encodedData = Buffer.from(documentData).toString('base64');
    let document = {
      id: 'my_id_7',
      index: 'my-index-000001',
      pipeline: 'attachment',
      timeout: '5m',
      body: {
        data: encodedData
      }
    }
    let response = await client.create(document);
    console.log(response);
    return res.status(200).send(response);
  } catch (error) {
    console.log(error.stack);
    return next(error);
  }
},
The above code works for small PDF files, and I am able to extract data from them and index it. But for large PDF files I get a timeout exception.
Is there another way to do this without the timeout issue?
I have read about fscrawler, filebeats, and logstash, but they all seem to deal with logs rather than PDF files.
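One knob worth checking (an assumption about the cause, since the error text isn't shown): the `timeout: '5m'` in the request only bounds work inside the cluster, while the JavaScript client's own socket timeout defaults to 30 seconds and is configured separately as `requestTimeout`. A configuration sketch:

```javascript
const { Client } = require("@elastic/elasticsearch");

// raise the client-side socket timeout so long attachment-pipeline runs
// are not aborted by the HTTP layer (the endpoint below is a placeholder)
const client = new Client({
  node: "http://localhost:9200",
  requestTimeout: 10 * 60 * 1000, // ms; the default is 30000
});

// the same option can also be passed per request as transport options:
// await client.create(document, { requestTimeout: 10 * 60 * 1000 });
```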
I use a POST request to upload a picture and store the image data on my server, but some of the image data is lost:
let storePic = function (imgData) {
  const base64Data = imgData.replace(/^data:image\/\w+;base64,/, "");
  const dataBuffer = new Buffer.alloc(5000, base64Data, 'base64')
  fs.writeFile(imgPath, dataBuffer, (err) => {
    if (err) {
      console.log('fail to store image')
    } else {
      console.log('success to store image')
    }
  })
}
When I get the image from the server, it is broken:
You should use Buffer.from(base64Data, 'base64') instead, or the data gets truncated.
Imo it's slightly better to match out the image data rather than just presume it's there:
let matches = imgData.match(/^data:([A-Za-z-+\/]+);base64,(.+)$/)
if (!matches || matches.length !== 3) throw new Error('Invalid base64 image URI')
// matches[1] contains the mime-type, which is handy for a lot of things
fs.writeFile(imgPath, Buffer.from(matches[2], 'base64'), (err) => {
  if (err) console.log('fail to store image')
})
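The truncation is easy to reproduce (a minimal sketch with a toy payload rather than a real image): `Buffer.alloc(size, fill, encoding)` creates a fixed-size buffer and uses the decoded data only as a repeating fill pattern, while `Buffer.from` decodes exactly the bytes you gave it:

```javascript
const original = "a tiny stand-in for real image bytes";
const base64Data = Buffer.from(original).toString("base64");

// Buffer.from decodes the payload and nothing else -- round-trips cleanly
const good = Buffer.from(base64Data, "base64");
console.log(good.toString()); // a tiny stand-in for real image bytes

// Buffer.alloc(5000, fill, encoding) makes a 5000-byte buffer and tiles the
// decoded data across it: wrong length, repeated bytes, broken image
const bad = Buffer.alloc(5000, base64Data, "base64");
console.log(bad.length); // 5000, regardless of the real payload size
```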
I need to save data and a file as a new project in my MongoDB. For this I am using formidable.
My POST method looks like this:
exports.create = async (req, res) => {
  let form = new formidable.IncomingForm();
  form.keepExtensions = true;
  form.parse(req, (err, fields, files) => {
    if (err) {
      return res
        .status(400)
        .json({ errors: [{ msg: 'Image could not be uploaded' }] });
    }
    const {
      title,
      description,
      photo,
      tags,
      git,
      demo,
      projectType,
    } = fields;
    const projectFields = {};
    projectFields.creator = req.user._id;
    if (title) projectFields.title = title;
    if (description) projectFields.description = description;
    if (photo) projectFields.photo = photo;
    if (projectType) projectFields.projectType = projectType;
    if (tags) {
      projectFields.tags = tags.split(',').map((tag) => tag.trim());
    }
    // get links object
    projectFields.links = {};
    if (git) projectFields.links.git = git;
    if (demo) projectFields.links.demo = demo;
    // 1 MB = 1000000 bytes
    // the field name 'photo' must match the client side
    if (files.photo) {
      if (files.photo.size > 1000000) {
        return res.status(400).json({
          errors: [{ msg: 'Image could not be uploaded. File too big.' }],
        });
      }
      // this relates to data in the product schema
      project.photo.data = fs.readFileSync(files.photo.path);
      project.photo.contentType = files.photo.type;
    }
  });
I want to use async/await, so I am using try{}catch(err){} around my project.save(). I am initializing all my fields, including the nested links object. Unfortunately this is not working as I expected: right now my POST returns a 500. I have been sitting on this for a while, and at this point it is getting messy without my being any closer to a solution.
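For the 500 itself, two things stand out: the snippet assigns to `project.photo` without ever creating a `project` (a ReferenceError inside the callback), and `form.parse` takes a callback, so an outer `try { await project.save() } catch {}` never sees its result. A common shape (a sketch; `Project` is the assumed mongoose model) is to promisify the parse step first:

```javascript
// wrap formidable's callback API in a Promise so the whole handler
// can stay inside one async/await try/catch
function parseForm(form, req) {
  return new Promise((resolve, reject) => {
    form.parse(req, (err, fields, files) => {
      if (err) reject(err);
      else resolve({ fields, files });
    });
  });
}

// hypothetical usage inside exports.create:
// const { fields, files } = await parseForm(form, req);
// const project = new Project(projectFields); // build fields as above
// await project.save();
// res.json(project);
```

With this wrapper, the save and the field validation live in one try/catch, and whatever threw the 500 surfaces as a catchable error instead of an unhandled one.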