How can I make a query in GraphQL that downloads a PDF using "pdfmake"?
@Resolver()
export class PdfResolver {
  @Authorized()
  @Query(() => String)
  async createPdf() {
    const fs = require("fs");
    const Pdfmake = require("pdfmake");
    const fonts = {
      Roboto: {
        normal: "fonts/roboto/Roboto-Regular.ttf",
        bold: "fonts/roboto/Roboto-Medium.ttf",
        italics: "fonts/roboto/Roboto-Italic.ttf",
        bolditalics: "fonts/roboto/Roboto-MediumItalic.ttf",
      },
    };
    const pdfmake = new Pdfmake(fonts);
    const docDefinition = {
      content: ["Hello World!"],
    };
    const pdfDoc = pdfmake.createPdfKitDocument(docDefinition, {});
    pdfDoc.pipe(fs.createWriteStream("pdfs/test.pdf"));
    pdfDoc.end();
    return "Pdf created successfully";
  }
}
This query creates a PDF inside my project. What I need instead is for the PDF to be downloaded when the query is called. I have seen that the documentation mentions a download method, but it is for the frontend and I don't know whether I can use it here.
Put your file somewhere temporarily (e.g. S3 if you're on AWS),
send the URI of this file to the client in the GraphQL response data,
and have the client download from the provided URI.
You can then either send another mutation to the server to remove the successfully downloaded file, or let it expire after some timeout.
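A minimal sketch of that flow, assuming the AWS SDK v2 (aws-sdk); the bucket name, key scheme, and expiry are assumptions, not part of the original question:
// Hypothetical sketch: upload the generated PDF to S3 and return a presigned URL.
// "my-temp-bucket" and the key scheme are placeholders.
const AWS = require("aws-sdk");
const s3 = new AWS.S3();

@Resolver()
export class PdfResolver {
  @Authorized()
  @Query(() => String)
  async createPdf() {
    // pdfmake is the same Pdfmake instance (with fonts) from the question.
    const pdfDoc = pdfmake.createPdfKitDocument({ content: ["Hello World!"] }, {});
    // Collect the PDF in memory instead of piping it to a local file.
    const chunks: Buffer[] = [];
    pdfDoc.on("data", (chunk) => chunks.push(chunk));
    const finished = new Promise((resolve) => pdfDoc.on("end", resolve));
    pdfDoc.end();
    await finished;

    const Key = `pdfs/${Date.now()}-test.pdf`;
    await s3.upload({ Bucket: "my-temp-bucket", Key, Body: Buffer.concat(chunks) }).promise();

    // Return a short-lived presigned URL; the client downloads from it.
    return s3.getSignedUrlPromise("getObject", { Bucket: "my-temp-bucket", Key, Expires: 300 });
  }
}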
Goal: Try to download a pdf file from Amazon S3 to my local machine via a NodeJS/VueJS application without creating a file on the server's filesystem.
Server: Node.js (v18.9.0), Express (4.17.1)
Middleware function that retrieves the file from S3, converts the stream into a base64 string, and sends that string to the client:
const filename = 'lets_go_to_the_snackbar.pdf';
const s3 = new AWS.S3(some access parameters);
const params = {
  Bucket: do_not_kick_this_bucket,
  Key: `yellowbrickroad/${filename}`
}
try {
  const data = await s3
    .getObject(params)
    .promise();
  const byte_string = Buffer.from(data.Body).toString('base64');
  res.send(byte_string);
} catch (err) {
  console.log(err);
}
Client: VueJS (v3.2.33)
A function in the component receives the byte string via an axios (v0.26.1) GET call to the server. The code to download is as follows:
getPdfContent: async function (filename) {
  const resp = await AxiosService.getPdf(filename) // Get request to server made here.
  const uriContent = `data:application/pdf;base64,${resp.data}`
  const link = document.createElement('a')
  link.href = uriContent
  link.download = filename
  document.body.appendChild(link) // Also tried omitting this line along with...
  link.click()
  link.remove() // ...omitting this line
}
Expected Result(s):
Browser opens a window to allow a directory to be selected as the file's destination.
Directory Selected.
File is downloaded.
Ice cream and mooncakes are served.
Actual Result(s):
Browser opens a window to allow a directory to be selected as the file's destination.
Directory Selected.
Receive Failed - Network Error message.
Lots of crying...
Browser: Chrome (Version 105.0.5195.125 (Official Build) (x86_64))
Read somewhere that Chrome will balk at files larger than 4MB, so I checked the S3 bucket and according to Amazon S3 the file size is a svelte 41.7KB.
After doing some reading, a possible solution presented itself, which I tried to implement: changing the VueJS getPdfContent function as follows:
getPdfContent: async function (filename) {
  const resp = await AxiosService.getPdf(filename) // Get request to server made here.
  /**** This is the line that was changed ****/
  const uriContent = window.URL.createObjectURL(new Blob([resp.data], { type: 'application/pdf' }))
  const link = document.createElement('a')
  link.href = uriContent
  link.download = filename
  document.body.appendChild(link) // Also tried omitting this line along with...
  link.click()
  link.remove() // ...omitting this line
}
Actual Result(s) for updated code:
Browser opens a window to allow a directory to be selected as the file's destination.
Directory Selected.
PDF file downloaded.
Trying to open the file produces the message:
The file “lets_go_to_the_snackbar.pdf” could not be opened.
It may be damaged or use a file format that Preview doesn’t recognize.
I am able to download the file directly from S3 using the AWS S3 console with no problems opening the file.
I've read through similar postings and tried implementing their solutions, but found no joy. I would be highly appreciative if someone can:
Give me an idea of where I am going off the path towards reaching the goal, or
Point me towards the correct path.
Thank you in advance for your help.
After doing some more research I found the problem was how I was returning the data from the server back to the client. I did not need to modify the data received from the S3 service.
Server Code:
let filename = req.params.filename;
const params = {
  Bucket: do_not_kick_this_bucket,
  Key: `yellowbrickroad/${filename}`
}
try {
  const data = await s3
    .getObject(params)
    .promise();
  /* Here I did not modify the information returned */
  res.send(data.Body);
  res.end();
} catch (err) {
  console.log(err);
}
On the client side my VueJS component receives a Blob object as the response
Client Code:
async getFile (filename) {
  let response = await AuthenticationService.downloadFile(filename)
  const uriContent = window.URL.createObjectURL(new Blob([response.data]))
  const link = document.createElement('a')
  link.setAttribute('href', uriContent)
  link.setAttribute('download', filename)
  document.body.appendChild(link)
  link.click()
  link.remove()
}
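One detail left implicit here: for response.data to arrive as something a Blob can be built from without corruption, the axios call behind AuthenticationService.downloadFile presumably requests a binary response type. A minimal sketch of such a wrapper (the service shape and endpoint path are assumptions, not from the original answer):
// Hypothetical wrapper; the endpoint path is an assumption.
downloadFile(filename) {
  return axios.get(`/api/files/${filename}`, { responseType: 'blob' })
}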
In the end the goal was achieved; a file on S3 can be downloaded directly to a user's local machine without the application storing a file on the server.
I would like to mention Sunpun Sandaruwan's answer, which gave me the final clue I needed to reach my goal.
I have a Node.js (16.13.1) REST API using Express and one of my endpoints receives one or more uploaded files. The client (web app) uses FormData into which the files are appended. Once they're submitted to my API, the code there uses multer to grab the files from the request object.
Now I'm having trouble trying to send those same files to another API. multer attaches the files to req.files, and each file object in that array has several properties, one of which is buffer. I tried using the stream package's Duplex object to convert this buffer to a stream so that I could append the file to another FormData object, but when the server the second API is running on receives the request, I get an error from the web server saying that "a potentially dangerous request.form value was detected from the client".
Any suggestions?
I am working on a Nest project and was facing this issue too. After some research I found that we need to create a Readable from the file's Buffer; that is working for me.
// Controller
@UseInterceptors(FileInterceptor('file'))
async uploadFile(@UploadedFile() file: Express.Multer.File) {
  return this.apiservice.uploadFile(file);
}

// Service
uploadFile(file: Express.Multer.File) {
  // Create a Readable stream from the in-memory buffer multer provides.
  const readstream = Readable.from(file.buffer);
  const form = new FormData();
  // Append the stream (not the multer file object) along with its original name.
  form.append('file', readstream, { filename: file.originalname });
  const url = `api_endpoint`;
  const config: AxiosRequestConfig = {
    // form.getHeaders() supplies the multipart boundary for us.
    headers: form.getHeaders(),
  };
  return axios.post(url, form, config);
}
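One note on the config above: with the form-data package in Node, the multipart boundary is generated by the form itself, which is why the headers come from form.getHeaders(). Hand-writing 'Content-Type': 'multipart/form-data' without a boundary is a common reason a receiving server rejects the body as malformed.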
I'm using TinyMCE in a custom field for the KeystoneJS AdminUI, which is a React app. I'd like to upload images from the React front to the KeystoneJS GraphQL back. I can upload the images using a REST endpoint I added to the Keystone server -- passing TinyMCE an images_upload_handler callback -- but I'd like to take advantage of Keystone's already-built GraphQL endpoint for an Image list/type I've created.
I first tried to use the approach detailed in this article, using axios to upload the image
const getGQL = (theFile) => {
  const query = gql`
    mutation upload($file: Upload!) {
      createImage(file: $file) {
        id
        file {
          path
          filename
        }
      }
    }
  `;
  // The operation contains the mutation itself as "query"
  // and the variables that are associated with the arguments.
  // The file variable is null because we can only pass text
  // in operation variables.
  const operation = {
    query,
    variables: {
      file: null
    }
  };
  // This map is used to associate the file saved in the body
  // of the request under "0" with the operation variable "variables.file".
  const map = {
    '0': ['variables.file']
  };
  // This is the body of the request;
  // the FormData constructor builds a multipart/form-data request body.
  // Here we add the operation, map, and file to upload.
  const body = new FormData();
  body.append('operations', JSON.stringify(operation));
  body.append('map', JSON.stringify(map));
  body.append('0', theFile);
  // Create the options of our POST request
  const opts = {
    method: 'post',
    url: 'http://localhost:4545/admin/api',
    body
  };
  // @ts-ignore
  return axios(opts);
};
but I'm not sure what to pass as theFile -- TinyMCE's images_upload_handler, from which I need to call the image upload, accepts a blobInfo object which contains functions to give me the file name and the blob itself.
The file name doesn't work, and neither does the blob -- both give me a 500 server error, and the error message isn't any more specific.
I would prefer to use a GraphQL client to upload the image -- another SO article suggests using apollo-upload-client. However, I'm operating within the KeystoneJS environment, and Apollo-upload-client says
Apollo Client can only have 1 “terminating” Apollo Link that sends the
GraphQL requests; if one such as apollo-link-http is already setup,
remove it.
I believe Keystone has already set up Apollo-link-http (it comes up multiple times on search), so I don't think I can use Apollo-upload-client.
The UploadLink is just a drop-in replacement for HttpLink. There's no reason you shouldn't be able to use it. There's a demo KeystoneJS app here that shows the Apollo Client configuration, including using createUploadLink.
Actual usage of the mutation with the Upload scalar is shown here.
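For reference, a minimal Apollo Client configuration using createUploadLink (the endpoint URL is taken from the question; the rest is standard apollo-upload-client usage):
import { ApolloClient, InMemoryCache } from '@apollo/client';
import { createUploadLink } from 'apollo-upload-client';

// createUploadLink terminates the link chain in place of HttpLink and
// switches to a multipart/form-data request whenever variables contain files.
const apolloClient = new ApolloClient({
  link: createUploadLink({ uri: 'http://localhost:4545/admin/api' }),
  cache: new InMemoryCache(),
});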
Looking at the source code, you should be able to use a custom image handler and call blob on the provided blobInfo object. Something like this:
tinymce.init({
  images_upload_handler: async function (blobInfo, success, failure) {
    const image = blobInfo.blob()
    try {
      // mutate takes a single options object with mutation and variables.
      await apolloClient.mutate({
        mutation: gql` mutation($image: Upload!) { ... } `,
        variables: { image }
      })
      success()
    } catch (e) {
      failure(e)
    }
  }
})
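(A side note, not from the original answer: in TinyMCE 6 the success/failure callbacks were dropped and images_upload_handler instead returns a Promise resolving to the uploaded image URL, so the snippet above matches the TinyMCE 4/5 handler signature the question targets.)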
I used to have the same problem and solved it with Apollo upload link. When the app got into the production phase, I realized that Apollo Client took a third of the gzipped built files, so I created a minimal GraphQL client just for Keystone use, with automatic image upload. The package is available on npm: https://www.npmjs.com/package/@sylchi/keystone-graphql-client
A usage example that will upload the GitHub logo to a user profile, assuming there is a user with an avatar field set as a file:
import { mutate } from '@sylchi/keystone-graphql-client'

const getFile = () => fetch('https://github.githubassets.com/images/modules/logos_page/GitHub-Mark.png',
  {
    mode: "cors",
    cache: "no-cache"
  })
  .then(response => response.blob())
  .then(blob => {
    return new File([blob], "file.png", { type: "image/png" })
  });

getFile().then(file => {
  const options = {
    mutation: `
      mutation($id: ID!, $data: UserUpdateInput!){
        updateUser(id: $id, data: $data){
          id
        }
      }
    `,
    variables: {
      id: "5f5a7f712a64d9db72b30602", // replace with user id
      data: {
        avatar: file
      }
    }
  };
  mutate(options).then(result => console.log(result));
});
The whole package is just 50 LOC with one dependency :)
The easiest way for me was to use graphql-request. The advantage is that you don't need to set any header prop manually, and it takes the variables you need straight from the images_upload_handler, as the docs describe.
I did it this way:
const { request, gql } = require('graphql-request')

const query = gql`
  mutation IMAGE ($file: Upload!) {
    createImage (data: {
      file: $file,
    }) {
      id
      file {
        publicUrl
      }
    }
  }
`

images_upload_handler = (blobInfo, success) => {
  //                       ^         ^ variables you get from tinymce
  const variables = {
    file: blobInfo.blob()
  }
  request(GRAPHQL_API_URL, query, variables)
    .then(data => {
      console.log(data)
      success(data.createImage.file.publicUrl)
    })
}
For Keystone 5, editorConfig will strip out functions, so I cloned the field and set the function in the views/Field.js file.
Good luck ( ^_^)/*
I am trying to return a file from GridFS using my Nest controller. As far as I can tell, Nest is not respecting my custom Content-Type header, which I set to application/zip, as I am receiving a text content type upon return (see screenshot).
[Screenshot: response data showing the wrong Content-Type header]
My Nest controller looks like this:
@Get(':owner/:name/v/:version/download')
@Header('Content-Type', 'application/zip')
async downloadByVersion(@Param('owner') owner: string, @Param('name') name: string, @Param('version') version: string, @Res() res): Promise<any> {
  let bundleData = await this.service.getSwimbundleByVersion(owner, name, version);
  let downloadFile = await this.service.downloadSwimbundle(bundleData['meta']['fileData']['_id']);
  return downloadFile.pipe(res);
}
Here is the service call
downloadSwimbundle(fileId: string): Promise<GridFSBucketReadStream> {
  return this.repository.getFile(fileId)
}
which is essentially a pass-through to this:
async getFile(fileId: string): Promise<GridFSBucketReadStream> {
  const db = await this.dbSource.db;
  const bucket = new GridFSBucket(db, { bucketName: this.collectionName });
  const downloadStream = bucket.openDownloadStream(new ObjectID(fileId));
  // openDownloadStream returns the stream synchronously, so this wrapper
  // could simply be `return downloadStream;`.
  return new Promise<GridFSBucketReadStream>(resolve => {
    resolve(downloadStream)
  });
}
My end goal is to call the download endpoint and have a browser register that it is a zip file and download it instead of seeing the binary in the browser. Any guidance on what needs to be done to get there would be greatly appreciated. Thanks for reading
You also need to set the Content-Disposition header with a file name. You can use the @Header() decorator if the file name will always be the same, or setHeader directly on the response object if you need to be able to send back different file names based on some parameter in your controller.
Both of the following example controller methods work for sending back a downloadable file to the browser from my local file system.
@Get('/test')
@Header('Content-Type', 'application/pdf')
@Header('Content-Disposition', 'attachment; filename=something.pdf')
getTest(@Res() response: Response) {
  const data = createReadStream(path.join(__dirname, 'test.pdf'));
  data.pipe(response);
}

@Get('/test')
@Header('Content-Type', 'application/pdf')
getTest(@Res() response: Response) {
  const data = createReadStream(path.join(__dirname, 'test.pdf'));
  response.setHeader(
    'Content-Disposition',
    'attachment; filename=another.pdf',
  );
  data.pipe(response);
}
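Applied to the GridFS controller from the question, a hypothetical adaptation (the file name scheme is an assumption) could look like this:
@Get(':owner/:name/v/:version/download')
@Header('Content-Type', 'application/zip')
async downloadByVersion(
  @Param('owner') owner: string,
  @Param('name') name: string,
  @Param('version') version: string,
  @Res() res: Response,
) {
  const bundleData = await this.service.getSwimbundleByVersion(owner, name, version);
  const downloadFile = await this.service.downloadSwimbundle(bundleData['meta']['fileData']['_id']);
  // The file name varies per request, so set Content-Disposition on the response object.
  res.setHeader('Content-Disposition', `attachment; filename=${name}-${version}.zip`);
  downloadFile.pipe(res);
}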
I am using react-native-fs and I am trying to save a base64 string of a PDF file to my Android emulator's file system.
I receive the base64-encoded PDF from the server.
I then prepend the data URI scheme to the base64 string with the line:
var pdfBase64 = 'data:application/pdf;base64,' + base64Str;
saveFile() function
saveFile(filename, pdfBase64){
  // create a path you want to write to
  var path = RNFS.DocumentDirectoryPath + '/' + filename;
  // write the file
  RNFS.writeFile(path, pdfBase64, 'base64').then((success) => {
    console.log('FILE WRITTEN!');
  })
  .catch((err) => {
    console.log("SaveFile()", err.message);
  });
}
Error
When I try saving the pdfBase64 the saveFile() function catches the following error:
bad base-64
Question
Can anyone tell me where I am going wrong?
Thanks.
For anyone having the same problem, here is the solution.
Solution
react-native-pdf-view must take the file path to the saved base64 PDF.
First, I used react-native-fetch-blob to request the base64 PDF from the server (because the RN fetch API does not yet support blobs).
I also discovered that react-native-fetch-blob has a FileSystem API that is far better documented and easier to understand than the react-native-fs library. (Check out its FileSystem API documentation.)
Receiving the base64 PDF and saving it to a file path:
var RNFetchBlob = require('react-native-fetch-blob').default;
const DocumentDir = RNFetchBlob.fs.dirs.DocumentDir;

getPdfFromServer: function (uri_attachment, filename_attachment) {
  return new Promise((RESOLVE, REJECT) => {
    // Fetch attachment
    RNFetchBlob.fetch('GET', config.apiRoot + '/app/' + uri_attachment)
      .then((res) => {
        let base64Str = res.data;
        let pdfLocation = DocumentDir + '/' + filename_attachment;
        // Write the raw base64 string (no data URI prefix) to the file.
        RNFetchBlob.fs.writeFile(pdfLocation, base64Str, 'base64');
        RESOLVE(pdfLocation);
      })
  }).catch((error) => {
    // error handling
    console.log("Error", error)
  });
}
What I was doing wrong: instead of saving the plain base64 string to the file location as in the example above, I was saving it like this:
var pdf_base64= 'data:application/pdf;base64,'+pdf_base64Str;
which was wrong.
Populate PDF view with file path:
<PDFView
  ref={(pdf) => { this.pdfView = pdf; }}
  src={pdfLocation}
  style={styles.pdf}
/>
There is a new package to handle the fetching (based on react-native-fetch-blob) and displaying of the PDF via URL: react-native-pdf.
Remove the application type from the base64 string and it works for me. Change
var pdfBase64 = 'data:application/pdf;base64,' + base64Str;
to
var pdfBase64 = base64Str;