Using promises with PDFMake on Firebase Cloud Functions - node.js

I am using PDFMake (a variant of PDFKit) to generate PDFs on Firebase Cloud Functions using a realtime database trigger. The function gets all relevant data from the database and then passes it to the function that is supposed to generate the PDF.
All this is done using Promises. Everything works fine until the point where the PDF is actually generated.
Here's the code in my main event listener:
exports.handler = (admin, event, storage) => {
const quotationData = event.data.val();
// We must return a Promise when performing async tasks inside Functions
// Eg: Writing to realtime db
const companyId = event.params.companyId;
settings.getCompanyProfile(admin, companyId)
.then((profile) => {
return quotPdfHelper.generatePDF(fonts, profile, quotationData, storage);
})
.then(() => {
console.log('Generation Successful. Pass for email');
})
.catch((err) => {
console.log(`Error: ${err}`);
});
};
To generate the PDF, here's my code:
exports.generatePDF = (fonts, companyInfo, quotationData, storage) => {
const printer = new PdfPrinter(fonts);
const docDefinition = {
content: [
{
text: [
{
text: `${companyInfo.title}\n`,
style: 'companyHeader',
},
`${companyInfo.addr_line1}, ${companyInfo.addr_line2}\n`,
`${companyInfo.city} (${companyInfo.state}) - INDIA\n`,
`Email: ${companyInfo.email} • Web: ${companyInfo.website}\n`,
`Phone: ${companyInfo.phone}\n`,
`GSTIN: ${companyInfo.gst_registration_number} • PAN: AARFK6552G\n`,
],
style: 'body',
//absolutePosition: {x: 20, y: 45}
},
],
styles: {
companyHeader: {
fontSize: 18,
bold: true,
},
body: {
fontSize: 10,
},
},
pageMargins: 20,
};
return new Promise((resolve, reject) => {
// const bucket = storage.bucket(`${PROJECT_ID}.appspot.com`);
// const filename = `${Date.now()}-quotation.pdf`;
// const file = bucket.file(filename);
// const stream = file.createWriteStream({ resumable: false });
const pdfDoc = printer.createPdfKitDocument(docDefinition);
// pdfDoc.pipe(stream);
const chunks = [];
let result = null;
pdfDoc.on('data', (chunk) => {
chunks.push(chunk);
});
pdfDoc.on('error', (err) => {
reject(err);
});
pdfDoc.on('end', () => {
result = Buffer.concat(chunks);
resolve(result);
});
pdfDoc.end();
});
};
What could be wrong here that is preventing the promise, and thereby the quotation code, from executing as intended?
In the Firebase log, all I see is: Function execution took 3288 ms, finished with status: 'ok'

Based on the execution time and lack of errors, it looks like you're successfully creating the buffer for the PDF but you're not actually returning it from the function.
.then((profile) => {
return quotPdfHelper.generatePDF(fonts, profile, quotationData, storage);
})
.then(() => {
console.log('Generation Successful. Pass for email');
})
In the code above, you're passing the result to the next then block, but then returning undefined from that block. The end result of this Promise chain will be undefined. To pass the result through, you'd want to return it at the end of the Promise chain:
.then((profile) => {
return quotPdfHelper.generatePDF(fonts, profile, quotationData, storage);
})
.then(buffer => {
console.log('Generation Successful. Pass for email');
return buffer;
})
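One further thing worth checking, based on the handler shown in the question (a minimal sketch, not part of the original answer): the top-level handler never returns the Promise chain, so Cloud Functions can report 'ok' and terminate before the PDF work settles. Returning the chain from the handler would look like this:
exports.handler = (admin, event, storage) => {
  const quotationData = event.data.val();
  const companyId = event.params.companyId;
  // Returning the chain tells Cloud Functions to wait for the async work to finish
  return settings.getCompanyProfile(admin, companyId)
    .then((profile) => quotPdfHelper.generatePDF(fonts, profile, quotationData, storage))
    .then((buffer) => {
      console.log('Generation Successful. Pass for email');
      return buffer;
    })
    .catch((err) => {
      console.log(`Error: ${err}`);
    });
};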

I'm trying to experiment with generating a PDF using a Firebase Cloud Function, but I am stuck on defining the fonts parameter.
Here's my definition:
var fonts = {
Roboto: {
normal: './fonts/Roboto-Regular.ttf',
bold: './fonts/Roboto-Bold.ttf',
italics: './fonts/Roboto-Italic.ttf',
bolditalics: './fonts/Roboto-BoldItalic.ttf'
}
};
I've created a fonts folder which contains the four files above. However, wherever I put the fonts folder (in the root, in the functions folder, or in the node_modules folder), I get the error 'no such file or directory' when deploying the functions. Any advice would be very much appreciated.
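One common approach (a sketch, assuming the font files are kept inside the functions directory, e.g. functions/fonts/, so they are deployed together with the function code) is to resolve the paths relative to __dirname rather than using relative './fonts/...' paths, which depend on the process's working directory:
const path = require('path');

// Assumed layout: the .ttf files live in functions/fonts/ next to index.js
const fonts = {
  Roboto: {
    normal: path.join(__dirname, 'fonts', 'Roboto-Regular.ttf'),
    bold: path.join(__dirname, 'fonts', 'Roboto-Bold.ttf'),
    italics: path.join(__dirname, 'fonts', 'Roboto-Italic.ttf'),
    bolditalics: path.join(__dirname, 'fonts', 'Roboto-BoldItalic.ttf'),
  },
};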

Related

Google Cloud Tasks: How to update a task's time to live?

Description:
I have created a Firebase app where a user can insert a Firestore document. When this document is created, a timestamp is added so that it can be automatically deleted by a cloud function after x amount of time.
After the document is created, an http/onCreate cloud function is triggered successfully, and it creates a Cloud Task, which then deletes the document at the scheduled time.
export const onCreatePost = functions
.region(region)
.firestore.document('/boxes/{id}')
.onCreate(async (snapshot) => {
const data = snapshot.data() as ExpirationDocData;
// Box creation timestamp.
const { timestamp } = data;
// The path of the firebase document('/myCollection/{docId}').
const docPath = snapshot.ref.path;
await scheduleCloudTask(timestamp, docPath)
.then(() => {
console.log('onCreate: cloud task created successfully.');
})
.catch((error) => {
console.error(error);
});
});
export const scheduleCloudTask = async (timestamp: number, docPath: string) => {
// Convert timestamp to seconds.
const timestampToSeconds = timestamp / 1000;
// Doc time to live in seconds
const documentLifeTime = 20;
const expirationAtSeconds = timestampToSeconds + documentLifeTime;
// The Firebase project ID.
const project = 'my-project';
// Cloud Tasks -> firestore time to live queue.
const queue = 'my-queue';
const queuePath: string = tasksClient.queuePath(project, region, queue);
// The url to the callback function.
// That gets invoked by Google Cloud Tasks when the deadline is reached.
const url = `https://${region}-${project}.cloudfunctions.net/callbackFn`;
const payload: ExpirationTaskPayload = { docPath };
// Google Cloud IAM & Admin principal account.
const serviceAccountEmail = 'myServiceAccount@appspot.gserviceaccount.com';
// Configuration for the Cloud Task
const task = {
httpRequest: {
httpMethod: 'POST',
url,
oidcToken: {
serviceAccountEmail,
},
body: Buffer.from(JSON.stringify(payload)).toString('base64'),
headers: {
'Content-Type': 'application/json',
},
},
scheduleTime: {
seconds: expirationAtSeconds,
},
};
await tasksClient.createTask({
parent: queuePath,
task,
});
};
export const callbackFn = functions
.region(region)
.https.onRequest(async (req, res) => {
const payload = req.body as ExpirationTaskPayload;
try {
await admin.firestore().doc(payload.docPath).delete();
res.sendStatus(200);
} catch (error) {
console.error(error);
res.status(500).send(error);
}
});
Problem:
The user can also extend the time to live for the document. When that happens, the timestamp is successfully updated in the Firestore document, and an http/onUpdate cloud function runs as expected.
As shown below, I tried to update the Cloud Task's time to live by calling the scheduleCloudTask function again, which obviously does not work and, I guess, just creates another task for the document.
export const onDocTimestampUpdate = functions
.region(region)
.firestore.document('/myCollection/{docId}')
.onUpdate(async (change, context) => {
const before = change.before.data() as ExpirationDocData;
const after = change.after.data() as ExpirationDocData;
if (before.timestamp < after.timestamp) {
const docPath = change.before.ref.path;
await scheduleCloudTask(after.timestamp, docPath)
.then((res) => {
console.log('onUpdate: cloud task created successfully.');
return;
})
.catch((error) => {
console.error(error);
});
} else return;
});
I have not been able to find documentation or examples where an updateTask() or similar method is used to update an existing task.
Should I use the deleteTask() method and then the createTask() method to create a new task after the document's timestamp is updated?
Thanks in advance,
Cheers!
Yes, that's how you have to do it. There is no API to update a task.
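A rough sketch of that delete-and-recreate approach (assuming the full task name returned by createTask() is kept somewhere it can be looked up later, e.g. in a hypothetical expirationTaskName field on the Firestore document):
// In scheduleCloudTask, capture the created task's full resource name:
const [createdTask] = await tasksClient.createTask({ parent: queuePath, task });
await admin.firestore().doc(docPath).update({ expirationTaskName: createdTask.name });

// In onDocTimestampUpdate, delete the old task and schedule a new one:
const { expirationTaskName } = after; // hypothetical field holding the stored task name
await tasksClient
  .deleteTask({ name: expirationTaskName })
  .catch((err) => console.error('deleteTask failed (task may have already run):', err));
await scheduleCloudTask(after.timestamp, docPath);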

Generated sitemaps are corrupted using the sitemap library for Node.js

I'm using a library called sitemap to generate files from an array of objects constructed at runtime. My goal is to upload these generated sitemaps to an S3 bucket.
So far, the function is hosted on AWS Lambda and uploads the generated files correctly to the bucket.
My problem is that the generated sitemaps are corrupted. When I run the function locally, they are generated correctly without any issues.
Here's my handler:
module.exports.handler = async () => {
try {
console.log("inside handler....");
await clearGeneratedSitemapsFromTmpDir();
const sms = new SitemapAndIndexStream({
limit: 10000,
getSitemapStream: (i) => {
const sitemapStream = new SitemapStream({
lastmodDateOnly: true,
});
const linkPath = `/sitemap-${i + 1}.xml`;
const writePath = `/tmp/${linkPath}`;
sitemapStream.pipe(createWriteStream(resolve(writePath)));
return [new URL(linkPath, hostName).toString(), sitemapStream];
},
});
const data = await generateSiteMap();
sms.pipe(createWriteStream(resolve("/tmp/sitemap-index.xml")));
// data.forEach((item) => sms.write(item));
Readable.from(data).pipe(sms);
sms.end();
await uploadToS3();
await clearGeneratedSitemapsFromTmpDir();
} catch (error) {
console.log("🚀 ~ file: index.js ~ line 228 ~ exec ~ error", error);
Sentry.captureException(error);
}
};
The data variable has an array of around 11k items, so according to the code above, two sitemap files would be generated (the first 10k items in one, the rest in the second), in addition to a sitemap index that lists the two generated sitemaps.
Here's my uploadToS3 function:
const uploadToS3 = async () => {
try {
console.log("uploading to s3....");
const files = await getGeneratedXmlFilesNames();
for (let i = 0; i < files.length; i += 1) {
const file = files[i];
const filePath = `/tmp/${file}`;
// const stream = createReadStream(resolve(filePath));
const fileRead = await readFileAsync(filePath, { encoding: "utf-8" });
const params = {
Body: fileRead,
Key: `${file}`,
ACL: "public-read",
ContentType: "application/xml",
ContentDisposition: "inline",
};
// const result = await s3Client.upload(params).promise();
const result = await s3Client.putObject(params).promise();
console.log(
"🚀 ~ file: index.js ~ line 228 ~ uploadToS3 ~ result",
result
);
}
} catch (error) {
console.log("uploadToS3 => error", error);
// Sentry.captureException(error);
}
};
And here's the function that cleans up the generated files from Lambda's /tmp directory after the upload to S3:
const clearGeneratedSitemapsFromTmpDir = async () => {
try {
console.log("cleaning up....");
const readLocalTempDirDir = await readDirAsync("/tmp");
const xmlFiles = readLocalTempDirDir.filter((file) =>
file.includes(".xml")
);
for (const file of xmlFiles) {
await unlinkAsync(`/tmp/${file}`);
console.log("deleting file....");
}
} catch (error) {
console.log(
"🚀 ~ file: index.js ~ line 207 ~ clearGeneratedSitemapsFromTmpDir ~ error",
error
);
}
};
My hunch is that the issue is related to streams, as I haven't fully understood them yet.
Any help here is highly appreciated.
Side note: I tried sleeping for 10s before uploading, but that didn't work either.
As a workaround, I did this:
const data = await generateSiteMap();
const logger = createWriteStream(resolve("/tmp/all-urls.json.txt"), {
flags: "a",
});
data.forEach((el) => {
logger.write(JSON.stringify(el));
logger.write("\n");
});
logger.end();
const stream = lineSeparatedURLsToSitemapOptions(
createReadStream(resolve("/tmp/all-urls.json.txt"))
)
.pipe(sms)
.pipe(createWriteStream(resolve("/tmp/sitemap-index.xml")));
await new Promise((fulfill) => stream.on("finish", fulfill));
await uploadToS3();
await clearGeneratedSitemapsFromTmpDir();
I will keep the question open in case somebody answers it correctly.
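One likely explanation, consistent with the workaround above, is that uploadToS3 runs before the piped write streams have finished flushing to /tmp, so the files are read mid-write. A minimal (untested) sketch of waiting for at least the index pipeline to finish before uploading, using the same sms stream as in the handler:
const indexWriteStream = createWriteStream(resolve("/tmp/sitemap-index.xml"));
sms.pipe(indexWriteStream);
Readable.from(data).pipe(sms);
// wait for the index file to be fully written before touching /tmp
await new Promise((fulfill, reject) => {
  indexWriteStream.on("finish", fulfill);
  indexWriteStream.on("error", reject);
});
// the per-sitemap write streams created in getSitemapStream may need the same treatment
await uploadToS3();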

How to get a docx file generated in Node saved to Firebase Storage

Hi, I am quite new to docxtemplater, but I absolutely love how it works. Right now I am able to generate a new docx document as follows:
const functions = require('firebase-functions');
const admin = require('firebase-admin');
const {Storage} = require('@google-cloud/storage');
var PizZip = require('pizzip');
var Docxtemplater = require('docxtemplater');
admin.initializeApp();
const BUCKET = 'gs://myapp.appspot.com';
exports.test2 = functions.https.onCall((data, context) => {
// The error object contains additional information when logged with JSON.stringify (it contains a properties object containing all suberrors).
function replaceErrors(key, value) {
if (value instanceof Error) {
return Object.getOwnPropertyNames(value).reduce(function(error, key) {
error[key] = value[key];
return error;
}, {});
}
return value;
}
function errorHandler(error) {
console.log(JSON.stringify({error: error}, replaceErrors));
if (error.properties && error.properties.errors instanceof Array) {
const errorMessages = error.properties.errors.map(function (error) {
return error.properties.explanation;
}).join("\n");
console.log('errorMessages', errorMessages);
// errorMessages is a humanly readable message looking like this :
// 'The tag beginning with "foobar" is unopened'
}
throw error;
}
let file_name = 'example.docx';// this is the file saved in my firebase storage
const File = storage.bucket(BUCKET).file(file_name);
const readable = File.createReadStream();
var buffers = [];
readable.on('data', (buffer) => {
buffers.push(buffer);
});
readable.on('end', () => {
var buffer = Buffer.concat(buffers);
var zip = new PizZip(buffer);
var doc;
try {
doc = new Docxtemplater(zip);
doc.setData({
first_name: 'Fred',
last_name: 'Flinstone',
phone: '0652455478',
description: 'Web app'
});
try {
doc.render();
doc.pipe(remoteFile2.createReadStream());
}
catch (error) {
errorHandler(error);
}
} catch(error) {
errorHandler(error);
}
});
});
My issue is that I keep getting an error that doc.pipe is not a function. I am quite new to Node.js, but is there a way to have the newly generated doc saved directly to Firebase Storage after doc.render()?
Taking a look at the type of doc, we find that it is a Docxtemplater object, and doc.pipe is not a function of that class. To get the file out of Docxtemplater, we need to use doc.getZip() to return the file (this will be either a JSZip v2 or a PizZip instance, based on what we passed to the constructor). Now that we have the zip object, we need to generate the binary data of the zip, which is done using generate({ type: 'nodebuffer' }) (to get a Node.js Buffer containing the data). Unfortunately, because the docxtemplater library doesn't support JSZip v3+, we can't make use of the generateNodeStream() method to get a stream to use with pipe().
With this buffer, we can either reupload it to Cloud Storage or send it back to the client that is calling the function.
The first option is relatively simple to implement:
import { v4 as uuidv4 } from 'uuid';
/* ... */
const contentBuffer = doc.getZip()
.generate({type: 'nodebuffer'});
const targetName = "compiled.docx";
const targetStorageRef = admin.storage().bucket()
.file(targetName);
await targetStorageRef.save(contentBuffer);
// send back the bucket-name pair to the caller
return { bucket: targetStorageRef.bucket.name, name: targetName };
However, sending the file itself back to the client isn't as easy, because this involves switching to an HTTP Event Function (functions.https.onRequest), since a Callable Cloud Function can only return JSON-compatible data. Here we have a middleware function that takes a callable's handler function but supports returning binary data to the client.
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";
import corsInit from "cors";
admin.initializeApp();
const cors = corsInit({ origin: true }); // TODO: Tighten
function callableRequest(handler) {
if (!handler) {
throw new TypeError("handler is required");
}
return (req, res) => {
cors(req, res, (corsErr) => {
if (corsErr) {
console.error("Request rejected by CORS", corsErr);
res.status(412).json({ error: "cors", message: "origin rejected" });
return;
}
// for validateFirebaseIdToken, see https://github.com/firebase/functions-samples/blob/main/authorized-https-endpoint/functions/index.js
validateFirebaseIdToken(req, res, async () => { // validateFirebaseIdToken won't pass errors to `next()`; async so we can await the handler below
try {
const data = req.body;
const context = {
auth: req.user ? { token: req.user, uid: req.user.uid } : null,
instanceIdToken: req.get("Firebase-Instance-ID-Token"), // this is used with FCM
rawRequest: req
};
let result: any = await handler(data, context);
if (result && typeof result === "object" && "buffer" in result) {
res.writeHead(200, [
["Content-Type", res.contentType],
["Content-Disposition", "attachment; filename=" + res.filename]
]);
res.end(result.buffer);
} else {
result = functions.https.encode(result);
res.status(200).send({ result });
}
} catch (err) {
if (!(err instanceof functions.https.HttpsError)) {
// This doesn't count as an 'explicit' error.
console.error("Unhandled error", err);
err = new functions.https.HttpsError("internal", "INTERNAL");
}
const { status } = err.httpErrorCode;
const body = { error: err.toJSON() };
res.status(status).send(body);
}
});
});
};
}
functions.https.onRequest(callableRequest(async (data, context) => {
/* ... */
const contentBuffer = doc.getZip()
.generate({type: "nodebuffer"});
const targetName = "compiled.docx";
return {
buffer: contentBuffer,
contentType: "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
filename: targetName
}
}));
In your current code, there are a number of odd segments where you have nested try-catch blocks and variables in different scopes. To help combat this, we can make use of File#download(), which returns a Promise that resolves with the file contents in a Node.js Buffer, and File#save(), which returns a Promise that resolves when the given Buffer has been uploaded.
Rolling this together for reuploading to Cloud Storage gives:
// This code is based off the examples provided for docxtemplater
// Copyright (c) Edgar HIPP [Dual License: MIT/GPLv3]
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";
import PizZip from "pizzip";
import Docxtemplater from "docxtemplater";
admin.initializeApp();
// The error object contains additional information when logged with JSON.stringify (it contains a properties object containing all suberrors).
function replaceErrors(key, value) {
if (value instanceof Error) {
return Object.getOwnPropertyNames(value).reduce(
function (error, key) {
error[key] = value[key];
return error;
},
{}
);
}
return value;
}
function errorHandler(error) {
console.log(JSON.stringify({ error: error }, replaceErrors));
if (error.properties && error.properties.errors instanceof Array) {
const errorMessages = error.properties.errors
.map(function (error) {
return error.properties.explanation;
})
.join("\n");
console.log("errorMessages", errorMessages);
// errorMessages is a humanly readable message looking like this :
// 'The tag beginning with "foobar" is unopened'
}
throw error;
}
exports.test2 = functions.https.onCall(async (data, context) => {
const file_name = "example.docx"; // this is the file saved in my firebase storage
const templateRef = await admin.storage().bucket()
.file(file_name);
const template_content = (await templateRef.download())[0];
const zip = new PizZip(template_content);
let doc;
try {
doc = new Docxtemplater(zip);
} catch (error) {
// Catch compilation errors (errors caused by the compilation of the template : misplaced tags)
errorHandler(error);
}
doc.setData({
first_name: "Fred",
last_name: "Flinstone",
phone: "0652455478",
description: "Web app",
});
try {
doc.render();
} catch (error) {
errorHandler(error);
}
const contentBuffer = doc.getZip().generate({ type: "nodebuffer" });
// do something with contentBuffer
// e.g. reupload to Cloud Storage
const targetName = "compiled.docx";
const targetStorageRef = admin.storage().bucket().file(targetName);
await targetStorageRef.save(contentBuffer);
return { bucket: targetStorageRef.bucket.name, name: targetName };
});
In addition to returning a bucket-name pair to the caller, you may also consider returning an access URL. This could be a signed URL that can last for up to 7 days; a download token URL (like getDownloadURL(), process described here) that lasts until the token is revoked; a Google Storage URI (gs://BUCKET_NAME/FILE_NAME), which is not an access URL but can be passed to a client SDK that can access the file if the client passes the storage security rules; or the file's public URL (after the file has been marked public).
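As a minimal sketch of the signed-URL variant (using the same targetStorageRef and targetName as above; the 24-hour expiry is an arbitrary example):
const [signedUrl] = await targetStorageRef.getSignedUrl({
  action: "read",
  expires: Date.now() + 24 * 60 * 60 * 1000, // up to a maximum of 7 days
});
return { bucket: targetStorageRef.bucket.name, name: targetName, url: signedUrl };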
Based on the above code, you should be able to merge in returning the file directly yourself.

how to make formidable not save to var/folders on nodejs and express app

I'm using formidable to parse incoming files and store them on AWS S3.
While debugging the code, I found out that formidable first saves the files to disk at /var/folders/, and over time unnecessary files stack up on disk, which could lead to a big problem later.
It was very silly of me to use code without fully understanding it, and now I have to figure out how to either remove the parsed file after saving it to S3, or save it to S3 without storing it on disk at all.
But the question is: how do I do it?
I would appreciate it if someone could point me in the right direction.
This is how I handle the files:
import formidable, { Files, Fields } from 'formidable';
const form = new formidable.IncomingForm();
form.parse(req, async (err: any, fields: Fields, files: Files) => {
let uploadUrl = await util
.uploadToS3({
file: files.uploadFile,
pathName: 'myPathName/inS3',
fileKeyName: 'file',
})
.catch((err) => console.log('S3 error =>', err));
});
This is how I solved this problem:
When I parse the incoming form-multipart data, I have access to all the details of the files, because they have already been parsed and saved to the local disk on the server/my computer. So, using the path variable given to me by formidable, I unlink/remove that file with Node's built-in fs.unlink function. Of course, I only remove the file after saving it to AWS S3.
This is the code:
import fs from 'fs';
import formidable, { Files, Fields } from 'formidable';
const form = new formidable.IncomingForm();
form.multiples = true;
form.parse(req, async (err: any, fields: Fields, files: Files) => {
const pathArray = [];
try {
const s3Url = await util.uploadToS3(files);
// do something with the s3Url
pathArray.push(files.uploadFileName.path);
} catch(error) {
console.log(error)
} finally {
pathArray.forEach((element: string) => {
fs.unlink(element, (err: any) => {
if (err) console.error('error:',err);
});
});
}
})
I also found a solution, which you can take a look at here, but due to the architecture I found it slightly hard to implement without changing my original code (or let's just say I didn't fully understand the given implementation).
I think I found it. According to the docs (see options.fileWriteStreamHandler): "you need to have a function that will return an instance of a Writable stream that will receive the uploaded file data. With this option, you can have any custom behavior regarding where the uploaded file data will be streamed for. If you are looking to write the file uploaded in other types of cloud storages (AWS S3, Azure blob storage, Google cloud storage) or private file storage, this is the option you're looking for. When this option is defined the default behavior of writing the file in the host machine file system is lost."
const form = formidable({
fileWriteStreamHandler: someFunction,
});
EDIT: My whole code
import formidable from "formidable";
import { Writable } from "stream";
import { Buffer } from "buffer";
import { v4 as uuidv4 } from "uuid";
export const config = {
api: {
bodyParser: false,
},
};
const formidableConfig = {
keepExtensions: true,
maxFileSize: 10_000_000,
maxFieldsSize: 10_000_000,
maxFields: 2,
allowEmptyFiles: false,
multiples: false,
};
// promisify formidable
function formidablePromise(req, opts) {
return new Promise((accept, reject) => {
const form = formidable(opts);
form.parse(req, (err, fields, files) => {
if (err) {
return reject(err);
}
return accept({ fields, files });
});
});
}
const fileConsumer = (acc) => {
const writable = new Writable({
write: (chunk, _enc, next) => {
acc.push(chunk);
next();
},
});
return writable;
};
// inside the handler
export default async function handler(req, res) {
const token = uuidv4();
try {
const chunks = [];
const { fields, files } = await formidablePromise(req, {
...formidableConfig,
// consume this, otherwise formidable tries to save the file to disk
fileWriteStreamHandler: () => fileConsumer(chunks),
});
// do something with the files
const contents = Buffer.concat(chunks);
const bucketRef = storage.bucket("your bucket");
const file = bucketRef.file(files.mediaFile.originalFilename);
await file
.save(contents, {
public: true,
metadata: {
contentType: files.mediaFile.mimetype,
metadata: { firebaseStorageDownloadTokens: token },
},
})
.then(() => {
file.getMetadata().then((data) => {
const fileName = data[0].name;
const media_path = `https://firebasestorage.googleapis.com/v0/b/${bucketRef?.id}/o/${fileName}?alt=media&token=${token}`;
console.log("File link", media_path);
});
});
} catch (e) {
// handle errors
console.log("ERR PREJ ...", e);
}
}

Download files before build in gatsby wordpress

I have a client whose PDFs need to be readable in the browser, without the user having to download them first. It turned out not to be an option to do this through WordPress, so I thought I could download them in Gatsby before every build if they don't already exist, and I was wondering whether this is possible.
I found this repo: https://github.com/jamstack-cms/jamstack-ecommerce
that shows a way to do it with this code:
function getImageKey(url) {
const split = url.split('/')
const key = split[split.length - 1]
const keyItems = key.split('?')
const imageKey = keyItems[0]
return imageKey
}
function getPathName(url, pathName = 'downloads') {
let reqPath = path.join(__dirname, '..')
let key = getImageKey(url)
key = key.replace(/%/g, "")
const rawPath = `${reqPath}/public/${pathName}/${key}`
return rawPath
}
async function downloadImage (url) {
return new Promise(async (resolve, reject) => {
const path = getPathName(url)
const writer = fs.createWriteStream(path)
const response = await axios({
url,
method: 'GET',
responseType: 'stream'
})
response.data.pipe(writer)
writer.on('finish', resolve)
writer.on('error', reject)
})
}
but it doesn't seem to work if I put it in my createPages, and I can't use it outside of it either, because I don't have access to graphql to query the data first.
Any idea how to do this?
The WordPress source example's createPages is already defined as async:
exports.createPages = async ({ graphql, actions }) => {
... so you can already use await to download your file(s) just after querying the data (and before the createPage() call). It should (NOT TESTED) be as easy as:
// Check for any errors
if (result.errors) {
console.error(result.errors)
}
// Access query results via object destructuring
const { allWordpressPage, allWordpressPost } = result.data
const pageTemplate = path.resolve(`./src/templates/page.js`)
for (const edge of allWordpressPage.edges) {
// for one file per edge
// url taken/constructed from some edge property
await downloadImage (url);
createPage({
Of course, for multiple files you should use Promise.all to wait for all the returned download promises to resolve before creating the page:
for (const edge of allWordpressPage.edges) {
// for multiple files per edge (page)
// url taken/constructed from some edge properties in a loop
// adapt 'paths' of iterable (edge.xxx.yyy...)
// and/or downloadImage(image) argument, e.g. 'image.someUrl'
await Promise.all(
edge.node.someImageArrayNode.map(image => downloadImage(image))
);
createPage({
If you need to pass/update image nodes (for component usage), you should be able to mutate the nodes, e.g.:
await Promise.all(
edge.node.someImageArrayNode.map(image => {
image["fullUrl"] = `/publicPath/${image.url}`;
return downloadImage(image.url); // return Promise at the end
})
);
createPage({
path: slugify(item.name),
component: ItemView,
context: {
content: item,
title: item.name,
firstImageUrl: edge.node.someImageArrayNode[0].fullUrl,
images: edge.node.someImageArrayNode
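Pulling those fragments together, a rough (untested) createPages sketch could look like the following; allWordpressPage, someImageArrayNode, and name follow the placeholder names used above rather than real schema fields, and slugify, path, getImageKey, and downloadImage are assumed to be required/defined as in the earlier snippets:
exports.createPages = async ({ graphql, actions }) => {
  const { createPage } = actions;
  const result = await graphql(`
    { allWordpressPage { edges { node { name someImageArrayNode { url } } } } }
  `);
  if (result.errors) {
    console.error(result.errors);
  }
  const { allWordpressPage } = result.data;
  const pageTemplate = path.resolve(`./src/templates/page.js`);
  for (const edge of allWordpressPage.edges) {
    // download every file for this page, mutating the nodes with their local URLs
    await Promise.all(
      edge.node.someImageArrayNode.map((image) => {
        image.fullUrl = `/downloads/${getImageKey(image.url)}`;
        return downloadImage(image.url);
      })
    );
    createPage({
      path: slugify(edge.node.name),
      component: pageTemplate,
      context: {
        title: edge.node.name,
        firstImageUrl: edge.node.someImageArrayNode[0].fullUrl,
        images: edge.node.someImageArrayNode,
      },
    });
  }
};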
