How can one upload an image to a KeystoneJS GraphQL endpoint? - node.js

I'm using TinyMCE in a custom field for the KeystoneJS AdminUI, which is a React app. I'd like to upload images from the React front end to the KeystoneJS GraphQL back end. I can upload the images using a REST endpoint I added to the Keystone server -- passing TinyMCE an images_upload_handler callback -- but I'd like to take advantage of Keystone's already-built GraphQL endpoint for an Image list/type I've created.
I first tried to use the approach detailed in this article, using axios to upload the image:
const getGQL = (theFile) => {
  const query = gql`
    mutation upload($file: Upload!) {
      createImage(file: $file) {
        id
        file {
          path
          filename
        }
      }
    }
  `;
  // The operation contains the mutation itself as "query"
  // and the variables that are associated with the arguments.
  // The file variable is null because we can only pass text
  // in operation variables.
  const operation = {
    query,
    variables: {
      file: null
    }
  };
  // This map is used to associate the file saved in the body
  // of the request under "0" with the operation variable "variables.file"
  const map = {
    '0': ['variables.file']
  };
  // This is the body of the request.
  // The FormData constructor builds a multipart/form-data request body.
  // Here we add the operation, map, and file to upload.
  const body = new FormData();
  body.append('operations', JSON.stringify(operation));
  body.append('map', JSON.stringify(map));
  body.append('0', theFile);
  // Create the options of our POST request
  const opts = {
    method: 'post',
    url: 'http://localhost:4545/admin/api',
    body
  };
  // @ts-ignore
  return axios(opts);
};
but I'm not sure what to pass as theFile -- TinyMCE's images_upload_handler, from which I need to call the image upload, accepts a blobInfo object which contains functions to give me the file name, the blob itself, a base64 encoding, and so on.
Neither the file name nor the blob works -- both give me 500 server errors, and the error message isn't any more specific.
I would prefer to use a GraphQL client to upload the image -- another SO answer suggests using apollo-upload-client. However, I'm operating within the KeystoneJS environment, and apollo-upload-client's docs say:
Apollo Client can only have 1 “terminating” Apollo Link that sends the
GraphQL requests; if one such as apollo-link-http is already setup,
remove it.
I believe Keystone has already set up Apollo-link-http (it comes up multiple times on search), so I don't think I can use Apollo-upload-client.

The UploadLink is just a drop-in replacement for HttpLink. There's no reason you shouldn't be able to use it. There's a demo KeystoneJS app here that shows the Apollo Client configuration, including using createUploadLink.
Actual usage of the mutation with the Upload scalar is shown here.
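For reference, a minimal sketch of such a client configuration, assuming the same /admin/api endpoint from the question:
import { ApolloClient, InMemoryCache } from '@apollo/client'
import { createUploadLink } from 'apollo-upload-client'

// createUploadLink replaces the default terminating HTTP link; it switches to a
// multipart/form-data request whenever a variable contains a File or Blob
const apolloClient = new ApolloClient({
  link: createUploadLink({ uri: 'http://localhost:4545/admin/api' }),
  cache: new InMemoryCache(),
})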
Looking at the source code, you should be able to use a custom image handler and call blob on the provided blobInfo object. Something like this:
tinymce.init({
  images_upload_handler: async function (blobInfo, success, failure) {
    const image = blobInfo.blob()
    try {
      // mutate takes a single options object with mutation and variables
      await apolloClient.mutate({
        mutation: gql` mutation($image: Upload!) { ... } `,
        variables: { image }
      })
      success()
    } catch (e) {
      failure(e)
    }
  }
})
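The mutation body elided above can mirror the createImage mutation from the question; a sketch, assuming the same Image list and field names:
const UPLOAD_IMAGE = gql`
  mutation upload($file: Upload!) {
    createImage(file: $file) {
      id
      file {
        path
        filename
      }
    }
  }
`

// Inside images_upload_handler: the blob is passed directly as the Upload
// variable; the upload link takes care of the multipart encoding
await apolloClient.mutate({
  mutation: UPLOAD_IMAGE,
  variables: { file: blobInfo.blob() },
})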

I used to have the same problem and solved it with Apollo upload link. When the app reached the production phase, I realized that Apollo Client made up a third of the gzipped build files, so I created a minimal GraphQL client just for Keystone use, with automatic image upload. The package is available on npm: https://www.npmjs.com/package/@sylchi/keystone-graphql-client
Usage example that uploads the GitHub logo to a user profile, assuming there is a User with an avatar field of the File type:
import { mutate } from '@sylchi/keystone-graphql-client'

const getFile = () => fetch('https://github.githubassets.com/images/modules/logos_page/GitHub-Mark.png',
  {
    mode: "cors",
    cache: "no-cache"
  })
  .then(response => response.blob())
  .then(blob => {
    return new File([blob], "file.png", { type: "image/png" });
  });

getFile().then(file => {
  const options = {
    mutation: `
      mutation($id: ID!, $data: UserUpdateInput!){
        updateUser(id: $id, data: $data){
          id
        }
      }
    `,
    variables: {
      id: "5f5a7f712a64d9db72b30602", // replace with user id
      data: {
        avatar: file
      }
    }
  };
  mutate(options).then(result => console.log(result));
});
The whole package is just 50 LOC with one dependency :)

The easiest way for me was to use graphql-request. The advantage is that you don't need to set any header props manually, and it uses the variables you get from images_upload_handler, as the docs describe.
I did it this way:
const { request, gql } = require('graphql-request')

const query = gql`
  mutation IMAGE ($file: Upload!) {
    createImage (data: {
      file: $file,
    }) {
      id
      file {
        publicUrl
      }
    }
  }
`
images_upload_handler = (blobInfo, success) => {
  //                      ^ variables you get from tinymce
  const variables = {
    file: blobInfo.blob()
  }
  request(GRAPHQL_API_URL, query, variables)
    .then(data => {
      console.log(data)
      success(data.createImage.file.publicUrl)
    })
}
For Keystone 5, editorConfig would strip out functions, so I cloned the field and set the function in the views/Field.js file.
Good luck ( ^_^)/*

Related

Download a file generated by "pdfmake" with a graphQL query

How can I make a query in GraphQL that can download a pdf using "pdfmake"?
@Resolver()
export class PdfResolver {
  @Authorized()
  @Query(() => String)
  async CretePdf() {
    const fs = require("fs");
    const Pdfmake = require("pdfmake");
    const fonts = {
      Roboto: {
        normal: "fonts/roboto/Roboto-Regular.ttf",
        bold: "fonts/roboto/Roboto-Medium.ttf",
        italics: "fonts/roboto/Roboto-Italic.ttf",
        bolditalics: "fonts/roboto/Roboto-MediumItalic.ttf",
      },
    };
    const pdfmake = new Pdfmake(fonts);
    const docDefinition = {
      content: ["Hello World!"],
    };
    const pdfDoc = pdfmake.createPdfKitDocument(docDefinition, {});
    pdfDoc.pipe(fs.createWriteStream("pdfs/test.pdf"));
    pdfDoc.end();
    return "Pdf created successfully";
  }
}
This query creates a pdf within my project. What I need is that, when this query is called, the pdf is downloaded instead. I have seen that there is a method called download in the documentation, but it is for the frontend and I don't know if I can use it.
1. Put your file somewhere temporary (e.g. S3 if you're on AWS).
2. Send the URI of this file to the client in the GraphQL response data.
3. Have the client download from the provided URI.
You can then either send another mutation to the server to remove the successfully downloaded file, or let it expire after some timeout; a sketch of the upload-and-presign step follows below.
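Here is a minimal sketch of that flow using the AWS SDK v3 presigner; the bucket name, key scheme, and helper name are hypothetical, and credentials are assumed to come from the environment:
import { Readable } from "stream";
import { S3Client, PutObjectCommand, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({});

// Collects the PDFKit document stream into a buffer, uploads it to S3,
// and returns a short-lived download URL to put in the GraphQL response.
// Call pdfDoc.end() before handing the stream to this helper.
async function uploadPdfAndGetUrl(pdfDoc: Readable): Promise<string> {
  const chunks: Buffer[] = [];
  for await (const chunk of pdfDoc) chunks.push(chunk as Buffer);

  const key = `pdfs/test-${Date.now()}.pdf`; // hypothetical key scheme
  await s3.send(new PutObjectCommand({
    Bucket: "pdf-downloads", // hypothetical bucket
    Key: key,
    Body: Buffer.concat(chunks),
    ContentType: "application/pdf",
  }));

  // Presigned GET URL the client can download from; expires in 10 minutes
  return getSignedUrl(s3, new GetObjectCommand({ Bucket: "pdf-downloads", Key: key }), {
    expiresIn: 600,
  });
}
In the resolver above, CretePdf could then return await uploadPdfAndGetUrl(pdfDoc) instead of the success string, and the client simply opens the returned URL.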

How to correctly use the Next.js API

I've just started using Next.js with MongoDB and I have a question about how I should organize the API route files.
I have a simple application that adds, updates, and deletes documents of a MongoDB collection. For each operation I created a .ts file inside the api folder.
For example, my new_task.ts file looks like this:
import type { NextApiRequest, NextApiResponse } from 'next'
import type { Collection } from 'mongodb'
import clientPromise from '../../lib/mongodb' // path assumed; wherever the shared client promise lives
// Task is the app's own type for a diary task

export default async function AddTask(req: NextApiRequest, res: NextApiResponse) {
  const task: Task = req.body
  const client = await clientPromise;
  const db = client.db("diary");
  const myCollection: Collection = db.collection('tasks');
  try {
    await myCollection.insertOne(task)
    res.send('Success')
  } catch (error) {
    res.status(400).json({ error })
    console.log(error)
  }
}
Everything is working OK, but I think the file organization is kind of messy. Is there a way to put every operation inside just one file? Or, to do so, would I have to build a custom server with Express?
Thanks
In one route function, you can check the request object req to see if the HTTP request method is GET, POST, PUT, PATCH, or DELETE. Depending on the method, you can call a different function.
Here is an example from the Next.js docs:
import type { NextApiRequest, NextApiResponse } from 'next'

export default function userHandler(req: NextApiRequest, res: NextApiResponse) {
  const {
    query: { id, name },
    method,
  } = req

  switch (method) {
    case 'GET':
      // Get data from your database
      res.status(200).json({ id, name: `User ${id}` })
      break
    case 'PUT':
      // Update or create data in your database
      res.status(200).json({ id, name: name || `User ${id}` })
      break
    default:
      res.setHeader('Allow', ['GET', 'PUT'])
      res.status(405).end(`Method ${method} Not Allowed`)
  }
}
Another thing you can do to make your code more reusable and easier to maintain is to write reusable function definitions in a lib folder and then import them into your api route files when you want to use them.
Have you tried creating a file in the lib folder, writing function definitions there for MongoDB, and then importing those definitions into your api route file?
Then call the appropriate function depending upon the request method.
In ./lib/mongodb, write a function definition and import any Mongo-related imports you need:
export async function updateUserInfo(parameters) {
  // . . . your code needs to return something, probably an array or object from MongoDB
}
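One way to flesh that out, reusing the clientPromise pattern and "diary" database from the question (the collection name and the clientPromise import path are assumptions):
// ./lib/mongodb.ts (sketch)
import { Collection, ObjectId } from 'mongodb'
import clientPromise from './mongodb-client' // hypothetical path to the shared client promise

export async function updateUserInfo(id: string, name: string) {
  const client = await clientPromise
  const users: Collection = client.db('diary').collection('users') // assumed collection
  // Return the MongoDB result so the route can inspect matchedCount, etc.
  return users.updateOne({ _id: new ObjectId(id) }, { $set: { name } })
}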
In your api route file, import that function definition.
import { updateUserInfo } from "../../lib/mongodb"
Inside your route function, call updateUserInfo and pass whatever arguments you need based on the parameters you put in the definition. Handle its return value using await.
import type { NextApiRequest, NextApiResponse } from 'next'
import { updateUserInfo } from "../../lib/mongodb"

// The handler must be async so it can await updateUserInfo
export default async function userHandler(req: NextApiRequest, res: NextApiResponse) {
  const {
    query: { id, name },
    method,
  } = req

  switch (method) {
    case 'GET':
      // Get data from your database
      res.status(200).json({ id, name: `User ${id}` })
      break
    case 'PUT': {
      // Update or create data in your database
      const updateResult = await updateUserInfo( . . . )
      // FIX THE OBJECT IN .JSON BELOW TO SUIT YOUR CODE
      res.status(200).json({ id, name: name || `User ${id}` })
      break
    }
    default:
      res.setHeader('Allow', ['GET', 'PUT'])
      res.status(405).end(`Method ${method} Not Allowed`)
  }
}
You can reuse updateUserInfo anywhere you have arguments for the required parameters.
Also, consider when you are calling the API route: at build time or afterwards. At build time you call it from static functions, and afterwards you call it from the client side.
So by using the lib file for function definitions, you can reuse them in server functions and static functions.
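For example, a client-side call to the PUT branch of the example route might look like this (a sketch; the /api/user path and the variable names are assumptions):
// Somewhere in a component or event handler on the client
const res = await fetch(`/api/user?id=${userId}&name=${encodeURIComponent(newName)}`, {
  method: 'PUT',
});
const updated = await res.json();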
The structure of the files inside the api folder is your api architecture, so its organization depends upon your application's needs. You can use static and dynamic routes, as you may already know.
Consider API best practices when designing your architecture.

How can I properly create redirects from an array in Gatsby

I am working with Gatsby and WordPress. I am trying to redirect some URLs using the Gatsby redirect API. I write the query to get an object, use the map method to create an array of the items I need from that object, and then run a forEach over it to get the individual data, but it fails when running the development server.
What is the right way to do this?
const { createRedirect } = actions;

const yoastRedirects = graphql(`
  {
    wp {
      seo {
        redirects {
          format
          origin
          target
          type
        }
      }
    }
  }
`)

const redirectOriginUrls = yoastRedirects.wp.seo.redirects.map(redirect => (redirect.origin))
const redirectTargetUrls = yoastRedirects.wp.seo.redirects.map(redirect => (
  redirect.target
))

redirectOriginUrls.forEach(redirectOriginUrl => (
  redirectTargetUrls.forEach(redirectTargetUrl => (
    createRedirect({
      fromPath: `/${redirectOriginUrl}`,
      toPath: `/${redirectTargetUrl}`,
      isPermanent: true
    })
  ))
))
The createRedirect API needs to receive a structure like:
exports.createPages = ({ graphql, actions }) => {
  const { createRedirect } = actions

  createRedirect({ fromPath: '/old-url', toPath: '/new-url', isPermanent: true })
  createRedirect({ fromPath: '/url', toPath: '/zn-CH/url', Language: 'zn' })
  createRedirect({ fromPath: '/not_so-pretty_url', toPath: '/pretty/url', statusCode: 200 })

  // Create pages
}
In your case, you are not accessing the fetched data correctly. Assuming that the loops are properly done, you must do:
let redirectOriginUrls = [];
let redirectTargetUrls = [];

yoastRedirects.data.wp.seo.redirects.map(redirect => {
  return redirectOriginUrls.push(redirect.origin)
});

yoastRedirects.data.wp.seo.redirects.map(redirect => {
  return redirectTargetUrls.push(redirect.target)
})
Instead of:
const redirectOriginUrls = yoastRedirects.wp.seo.redirects.map(redirect=>(redirect.origin))
const redirectTargetUrls = yoastRedirects.wp.seo.redirects.map(redirect=>(
redirect.target
))
Notice the .data addition in the nested object.
In addition, keep in mind that createRedirect will only work when there is hosting infrastructure behind it, such as AWS or Netlify, both of which have plugin integrations with Gatsby; such plugins can also generate meta-refresh HTML files so the redirects work on any static file host.
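Putting both fixes together, a minimal gatsby-node.js sketch (assuming the Yoast redirect query from the question; note that graphql returns a promise, so it must be awaited, and that pairing each origin with its own target in a single loop avoids the nested forEach, which would otherwise redirect every origin to every target):
exports.createPages = async ({ graphql, actions }) => {
  const { createRedirect } = actions;

  const yoastRedirects = await graphql(`
    {
      wp {
        seo {
          redirects {
            origin
            target
          }
        }
      }
    }
  `);

  // One redirect per entry, each origin paired with its own target
  yoastRedirects.data.wp.seo.redirects.forEach(({ origin, target }) => {
    createRedirect({
      fromPath: `/${origin}`,
      toPath: `/${target}`,
      isPermanent: true,
    });
  });
};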

Google Cloud Tasks Node.js API: Get queue stats?

I want to obtain the stats field for a queue in Google Cloud Tasks using the Node.js client library @google-cloud/tasks. The stats field only exists in the v2beta3 version; however, to get it we need to pass a query param readMask=*, and I don't know how to pass it using the client lib.
I tried using the otherArgs param, but it's not working.
const tasks = require('@google-cloud/tasks');

const client = new tasks.v2beta3.GoogleCloudTasks()

// Get queue containing stats
const queue = await client.getQueue({name: '..'}, {otherArgs: {readMask: '*'}})
The readMask specifies which paths of the response object to fetch. The response will still include all possible paths, with placeholders like null, UNSPECIFIED, etc. for the ones you didn't request, and the actual values for the ones you did.
const request = {
  ...
  readMask: { paths: ['name', 'stats', 'state', ...] }
};
getQueue
const { v2beta3 } = require('@google-cloud/tasks');

const tasksClient = new v2beta3.CloudTasksClient();

async function main() {
  const request = {
    name: 'projects/PROJECT/locations/LOCATION/queues/QUEUE',
    readMask: { paths: ['name', 'stats'] }
  };
  const [response] = await tasksClient.getQueue(request);
  console.log(response);
}

main();

/*
{
  name: 'projects/PROJECT/locations/LOCATION/queues/QUEUE',
  ...
  stats: {
    tasksCount: '113',
    oldestEstimatedArrivalTime: null,
    executedLastMinuteCount: '0',
    concurrentDispatchesCount: '0',
    effectiveExecutionRate: 500
  }
}
*/
listQueues
const { v2beta3 } = require('@google-cloud/tasks');

const tasksClient = new v2beta3.CloudTasksClient();

async function main() {
  const request = {
    parent: 'projects/PROJECT/locations/LOCATION',
    readMask: { paths: ['name', 'stats'] }
  };
  const [response] = await tasksClient.listQueues(request);
  console.log(response);
}

main();

/*
[
  {
    name: 'projects/PROJECT/locations/LOCATION/queues/QUEUE',
    ...
    stats: {
      tasksCount: '1771',
      oldestEstimatedArrivalTime: [Object],
      executedLastMinuteCount: '0',
      concurrentDispatchesCount: '0',
      effectiveExecutionRate: 500
    }
  },
  ...
]
*/
Taking a look at the source code of the client library, I see no reference to the readMask parameter as specified in the v2beta3 version of the REST API's projects.locations.queues.get method.
The relevant method in the Node.js client library, getQueue(), expects a request of type IGetQueueRequest, which doesn't have a readMask parameter and only expects the name property.
Nonetheless, this implementation might change in the future to include a relevant way to get the stats.
Regarding the REST API itself, there is an error in the public docs in the readMask section, as * is not a valid character. If you want to get the Queue.stats field you should simply enter stats in the readMask parameter. If you want all the relevant fields you should enter all of them (e.g. name,rateLimits,retryConfig,state,taskTtl,tombstoneTtl,type,stats should get all the fields returned by calling the method, plus the Queue.stats field).
As a workaround, if you expand the Try this API section of the docs for the relevant method and click the JAVASCRIPT tab, you can get the relevant code for building the request.
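For reference, the request built that way boils down to a single GET with the readMask query parameter; a sketch with placeholder values:
// PROJECT, LOCATION, QUEUE, and ACCESS_TOKEN are placeholders
const res = await fetch(
  'https://cloudtasks.googleapis.com/v2beta3/projects/PROJECT/locations/LOCATION/queues/QUEUE?readMask=stats',
  { headers: { Authorization: 'Bearer ACCESS_TOKEN' } }
);
const queue = await res.json(); // the Queue resource, including the stats field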
EDIT JANUARY 23rd 2020
The documentation was corrected and now states that:
[Queue.stats] will be returned only if it was explicitly specified in the mask.
This means that simply writing stats in the readMask field will return the stats.

NestJS: return a file from GridFS

I am trying to return a file from GridFS using my Nest controller. As far as I can tell, Nest is not respecting my custom Content-Type header, which I set to application/zip, as I am receiving a text content type upon return (see screenshot).
[Screenshot: response data showing the wrong Content-Type header]
My Nest controller looks like this:
@Get(':owner/:name/v/:version/download')
@Header('Content-Type', 'application/zip')
async downloadByVersion(@Param('owner') owner: string, @Param('name') name: string, @Param('version') version: string, @Res() res): Promise<any> {
  let bundleData = await this.service.getSwimbundleByVersion(owner, name, version);
  let downloadFile = await this.service.downloadSwimbundle(bundleData['meta']['fileData']['_id']);
  return res.pipe(downloadFile);
}
Here is the service call
downloadSwimbundle(fileId: string): Promise<GridFSBucketReadStream> {
  return this.repository.getFile(fileId)
}
which is essentially a pass-through to this.
async getFile(fileId: string): Promise<GridFSBucketReadStream> {
  const db = await this.dbSource.db;
  const bucket = new GridFSBucket(db, { bucketName: this.collectionName });
  const downloadStream = bucket.openDownloadStream(new ObjectID(fileId));
  return new Promise<GridFSBucketReadStream>(resolve => {
    resolve(downloadStream)
  });
}
My end goal is to call the download endpoint and have the browser register that it is a zip file and download it, instead of showing the binary in the browser. Any guidance on what needs to be done to get there would be greatly appreciated. Thanks for reading.
You also need to set the Content-Disposition header with a file name. You can use the @Header() decorator if the file name will always be the same, or setHeader directly on the response object if you need to send back different file names based on some parameter in your controller.
Both of the following example controller methods work for sending back a downloadable file to the browser from my local file system:
// Imports used by both snippets
import { Get, Header, Res } from '@nestjs/common';
import { Response } from 'express';
import { createReadStream } from 'fs';
import * as path from 'path';

@Get('/test')
@Header('Content-Type', 'application/pdf')
@Header('Content-Disposition', 'attachment; filename=something.pdf')
getTest(@Res() response: Response) {
  const data = createReadStream(path.join(__dirname, 'test.pdf'));
  data.pipe(response);
}

@Get('/test')
@Header('Content-Type', 'application/pdf')
getTest(@Res() response: Response) {
  const data = createReadStream(path.join(__dirname, 'test.pdf'));
  response.setHeader(
    'Content-Disposition',
    'attachment; filename=another.pdf',
  );
  data.pipe(response);
}
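Applied to the original controller, the same pattern might look like this sketch (the Content-Disposition filename scheme is illustrative):
@Get(':owner/:name/v/:version/download')
@Header('Content-Type', 'application/zip')
async downloadByVersion(
  @Param('owner') owner: string,
  @Param('name') name: string,
  @Param('version') version: string,
  @Res() res: Response,
): Promise<void> {
  const bundleData = await this.service.getSwimbundleByVersion(owner, name, version);
  const downloadFile = await this.service.downloadSwimbundle(bundleData['meta']['fileData']['_id']);

  // A filename in Content-Disposition is what triggers the browser download
  res.setHeader('Content-Disposition', `attachment; filename=${name}-${version}.zip`);

  // Pipe the GridFS read stream into the response (not the other way around)
  downloadFile.pipe(res);
}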
