I am using WebViewer and want to rotate individual pages and save the rotation to the database.
Right now I am only able to rotate the whole PDF.
I am following this doc https://www.pdftron.com/documentation/web/guides/manipulation/rotate/
but am not able to understand much of it.
export default function PdfTron(props: any): ReactElement {
const viewer = useRef<HTMLDivElement>(null);
const {DrawingLibDetailsState, DrawingLibDetailsDispatch}: any = useContext(DrawingLibDetailsContext);
const [newInstance, setNewInstance] = useState<any>(null);
const [currentPage, setCurrentPage] = useState<any>(null);
const {dispatch, state }:any = useContext(stateContext);
//console.log("currentPage in state",currentPage)
useEffect(() => {
WebViewer(
{
path: '/webviewer/lib',
licenseKey: process.env["REACT_APP_PDFTRON_LICENSE_KEY"],
initialDoc: '',
filename: 'drawings',
extension: "pdf",
isReadOnly: true,
fullAPI: true,
disabledElements: [
// 'leftPanelButton',
// // 'selectToolButton',
// 'stickyToolButton',
// 'toggleNotesButton',
]
},
viewer.current as HTMLDivElement,
).then((instance: any) => {
setNewInstance(instance)
// you can now call WebViewer APIs here...
});
}, []);
useEffect(() => {
if(DrawingLibDetailsState?.parsedFileUrl?.url && newInstance ){
const s3Key = DrawingLibDetailsState?.parsedFileUrl?.s3Key;
const pageNum = s3Key.split('/')[s3Key.split('/').length-1].split('.')[0];
const fileName = DrawingLibDetailsState?.drawingLibDetails[0]?.fileName?.replace(".pdf", "");
const downloadingFileName = `page${pageNum}_${fileName}`;
newInstance.loadDocument(DrawingLibDetailsState?.parsedFileUrl?.url, {extension: "pdf",
filename: downloadingFileName ? downloadingFileName : 'drawing',})
const { documentViewer } = newInstance.Core;
const pageRotation = newInstance.Core.PageRotation;
const clickDocument = newInstance.Core.DocumentViewer.Click;
const pageNumber = newInstance.Core.pageNum;
//get page rotation from the PDF
documentViewer.addEventListener('rotationUpdated', (rotation: number) => {
updateRotation(rotation)
})
// trigger an event after the document loaded
documentViewer.addEventListener('documentLoaded', async() => {
const doc = documentViewer.getDocument();
const rotation = DrawingLibDetailsState?.drawingLibDetails[0]?.sheetsReviewed?.pdfRotation ?
DrawingLibDetailsState?.drawingLibDetails[0]?.sheetsReviewed?.pdfRotation : 0
documentViewer.setRotation(rotation)
})
documentViewer.addEventListener('pageNumberUpdated', () => {
DrawingLibDetailsDispatch(setDrawingPageNumber(0));
})
}
}, [DrawingLibDetailsState?.parsedFileUrl?.url, newInstance]);
useEffect(() => {
if(DrawingLibDetailsState?.drawingPageNum && newInstance ){
const { documentViewer, PDFNet } = newInstance.Core;
PDFNet.initialize()
documentViewer.addEventListener('documentLoaded',async () => {
await PDFNet.initialize()
const pdfDoc = documentViewer.getDocument();
const doc = await pdfDoc.getPDFDoc();
newInstance.UI.pageManipulationOverlay.add([
{
type: 'customPageOperation',
header: 'Custom options',
dataElement: 'customPageOperations',
operations: [
{
title: 'Alert me',
img: '/path-to-image',
onClick: (selectedPageNumbers:any) => {
alert(`Selected thumbnail pages: ${selectedPageNumbers}`);
},
dataElement: 'customPageOperationButton',
},
],
},
{ type: 'divider' },
]);
documentViewer.setCurrentPage(DrawingLibDetailsState?.drawingPageNum, true);
});
documentViewer.setCurrentPage(DrawingLibDetailsState?.drawingPageNum, true);
}
}, [DrawingLibDetailsState?.drawingPageNum]);
useEffect(() => {
if(props?.drawingSheetsDetails?.fileSize){
fetchSheetUrl(props?.drawingSheetsDetails)
}
}, [props?.drawingSheetsDetails]);
const fetchSheetUrl = (file: any) => {
const payload = [{
fileName: file.fileName,
key: file.sourceKey,
expiresIn: 100000000,
// processed: true
}];
getSheetUrl(payload);
}
const getSheetUrl = async (payload: any) => {
try {
dispatch(setIsLoading(true));
const fileUploadResponse = await postApi('V1/S3/downloadLink', payload);
if(fileUploadResponse.success){
const fileData = {
s3Key: payload[0].key,
url: fileUploadResponse.success[0].url
}
DrawingLibDetailsDispatch(setParsedFileUrl(fileData));
}
dispatch(setIsLoading(false));
} catch (error) {
Notification.sendNotification(error, AlertTypes.warn);
dispatch(setIsLoading(false));
}
}
const updateRotation = (rotation: number) => {
props.updateRotation(rotation)
}
return (
<>
<div className="webviewer" ref={viewer}></div>
</>
)
}
In WebViewer 8.0 you would need to enable the left panel by default when the document is loaded, and then use event delegation on the left panel to watch for clicks on the single-page rotation buttons.
const { documentViewer } = instance.Core
documentViewer.addEventListener('documentLoaded',()=>{
let panelElement = instance.docViewer.getScrollViewElement().closest('#app').querySelector('[data-element="thumbnailsPanel"]');
if (!panelElement) {
instance.UI.toggleElementVisibility('leftPanel');
panelElement = instance.docViewer.getScrollViewElement().closest('#app').querySelector('[data-element="thumbnailsPanel"]');
}
panelElement.addEventListener('click', (e) => {
if (e.target.dataset?.element === 'thumbRotateClockwise' || e.target.dataset?.element === 'thumbRotateCounterClockwise') {
// The single page rotations are performed asynchronously and there are no events firing in 8.0, so we have to manually add a delay until the page finishes rotating itself.
setTimeout(() => {
const pageNumber = parseInt(e.target.parentElement.previousSibling.textContent);
const rotation = instance.docViewer.getDocument().getPageRotation(pageNumber);
console.log('page ', pageNumber, ' self rotation is ', rotation);
}, 500);
}
});
})
If you have the option to upgrade to the latest WebViewer, you can listen to the 'pagesUpdated' event on documentViewer and the code becomes shorter and cleaner:
const { documentViewer } = instance.Core
documentViewer.addEventListener('pagesUpdated',(changes)=>{
changes.contentChanged.forEach(pageNumber=>{
const rotation = documentViewer.getDocument().getPageRotation(pageNumber)
console.log('page ', pageNumber, ' self rotation is ', rotation);
})
})
For both situations, when you load the document back, you can use documentViewer.getDocument().rotatePages to restore your saved rotations.
Assuming we have the saved page rotations structured like this:
const rotationData = [
{pageNumber: 1, rotation: 180},
{pageNumber: 3, rotation: 90},
{pageNumber: 4, rotation: 270},
]
We can use the following code to rotate our individual pages back:
const { documentViewer } = instance.Core
documentViewer.addEventListener('documentLoaded',()=>{
rotationData.forEach(page=>{
const originalRotation = documentViewer.getDocument().getPageRotation(page.pageNumber)
if (originalRotation !== page.rotation) {
documentViewer.getDocument().rotatePages([page.pageNumber], (page.rotation-originalRotation)/90);
}
})
})
I am trying to test an API by mocking the database function, but the imported function is being called instead of the mocked one.
Here are the code snippets
const supertest = require('supertest');
const axios = require('axios');
const querystring = require('querystring');
const { app } = require('../app');
const DEF = require('../Definition');
const tripDb = require('../database/trip');
const request = supertest.agent(app); // Agent can store cookies after login
const { logger } = require('../Log');
describe('trips route test', () => {
let token = '';
let companyId = '';
beforeAll(async (done) => {
// do something before anything else runs
logger('Jest starting!');
const body = {
username: process.env.EMAIL,
password: process.env.PASSWORD,
grant_type: 'password',
client_id: process.env.NODE_RESOURCE,
client_secret: process.env.NODE_SECRET,
};
const config = {
method: 'post',
url: `${process.env.AUTH_SERV_URL}/auth/realms/${process.env.REALM}/protocol/openid-connect/token`,
headers: {
'Content-Type': 'application/x-www-form-urlencoded',
},
data: querystring.stringify(body),
};
const res = await axios(config);
token = res.data.access_token;
done();
});
const shutdown = async () => {
await new Promise((resolve) => {
DEF.COM.RCLIENT.quit(() => {
logger('redis quit');
resolve();
});
});
// redis.quit() creates a thread to close the connection.
// We wait until all threads have been run once to ensure the connection closes.
await new Promise(resolve => setImmediate(resolve));
};
afterAll(() => shutdown());
test('post correct data', async (done) => {
const createTripMock = jest.spyOn(tripDb, 'addTrip').mockImplementation(() => Promise.resolve({
pk: `${companyId}_trip`,
uid: '1667561135293773',
lsi1: 'Kotha Yatra',
lsi2: companyId,
name: 'Kotha Yatra',
companyId,
origin: {
address: 'Goa, India',
location: {
lat: 15.2993265,
lng: 74.12399599999999,
},
},
destination: {
address: 'Norway',
location: {
lat: 60.47202399999999,
lng: 8.468945999999999,
},
},
path: [
{
lat: 15.2993265,
lng: 74.12399599999999,
},
{
lat: 60.47202399999999,
lng: 8.468945999999999,
},
],
isDeleted: false,
currentVersion: 1,
geofences: [],
}));
const response = await request.post('/api/trips').set('Authorization', `Bearer ${token}`).send(tripPayload);
expect(createTripMock).toHaveBeenCalled();
expect(response.status).toEqual(200);
expect(response.body.status).toBe('success');
done();
});
});
The database function:
const addTrip = (trip) => {
// const uid = trip.uid ? trip.uid : (Date.now() * 1000) + Math.round(Math.random() * 1000);
const uid = (Date.now() * 1000) + Math.round(Math.random() * 1000);
const item = {
pk: `${trip.companyId}_trip`,
uid: `v${trip.version ? trip.version : 0}#${uid}`,
lsi1: trip.name,
lsi2: trip.companyId,
name: trip.name,
companyId: trip.companyId,
origin: trip.origin,
destination: trip.destination,
path: trip.path,
isDeleted: false,
};
if (!trip.version || trip.version === 0) {
item.currentVersion = 1;
} else {
item.version = trip.version;
}
if (trip.geofences) item.geofences = trip.geofences;
const params = {
TableName: TN,
Item: item,
ConditionExpression: 'attribute_not_exists(uid)',
};
// console.log('params ', params);
return new Promise((resolve, reject) => {
ddb.put(params, (err, result) => {
// console.log('err ', err);
if (err) {
if (err.code === 'ConditionalCheckFailedException') return reject(new Error('Trip id or name already exists'));
return reject(err);
}
if (!trip.version || trip.version === 0) {
const newItem = { ...item };
delete newItem.currentVersion;
newItem.version = 1;
newItem.uid = `v1#${item.uid.split('#')[1]}`;
const newParams = {
TableName: TN,
Item: newItem,
ConditionExpression: 'attribute_not_exists(uid)',
};
// console.log('new params ', newParams);
ddb.put(newParams, (v1Err, v1Result) => {
// console.log('v1 err ', v1Err);
if (v1Err) return reject(v1Err);
item.uid = item.uid.split('#')[1];
return resolve(item);
});
} else {
item.uid = item.uid.split('#')[1];
return resolve(item);
}
});
});
};
module.exports = {
addTrip,
};
I was mocking the above database function when making a request to the add API; instead, the original function is being called, even though I was getting the result that I had written in the mock implementation.
What should I do to just mock the result when the function is called, with no execution of the original implementation?
Even this did not give an error:
expect(createTripMock).toHaveBeenCalled();
Still, the actual database call is happening.
I tried using mockReturnValue, mockReturnValueOnce, and mockImplementationOnce, but no luck.
Can anyone help me with this?
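One common cause, offered as a sketch rather than a diagnosis: jest.spyOn replaces a property on the module object, so if the code under test captured the function reference itself (for example via destructuring at import time) before the spy was installed, it keeps calling the original. A plain-Node illustration of the difference, using stand-in names:

```javascript
// Stand-in for the trip module (names are illustrative, not the real module)
const tripDb = { addTrip: () => 'real result' };

// The code under test grabbed the function itself at import time
const capturedAddTrip = tripDb.addTrip;

// Later, a spy replaces the property on the module object
// (essentially what jest.spyOn(tripDb, 'addTrip').mockImplementation(...) does)
tripDb.addTrip = () => 'mocked result';

console.log(capturedAddTrip()); // 'real result'  -> the captured original still runs
console.log(tripDb.addTrip());  // 'mocked result' -> only property access sees the mock
```

If that matches your setup, the usual fix is jest.mock('../database/trip', ...) at the top of the test file, before '../app' is required, so the route handler only ever receives the mocked module.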
I tried to filter posts by category, but it's not working on the frontend.
I want a user who clicks on a particular category to get the posts in that category.
this is my backend (NODEJS)
exports.getMovies = async (req, res) => {
const { pageNo = 0, limit = 10 } = req.query;
// filter category
let filter = {};
if (req.query.categories) {
filter = { category: req.query.categories.split(",") };
}
const movies = await Movie.find(filter)
.populate("category comments")
.sort({ createdAt: -1 })
.skip(parseInt(pageNo) * parseInt(limit))
.limit(parseInt(limit));
const results = movies.map((movie) => ({
id: movie._id,
title: movie.title,
poster: movie.poster?.url,
responsivePosters: movie.poster?.responsive,
category: movie.category,
comments: movie.comments,
genres: movie.genres,
status: movie.status,
}));
res.json({ movies: results });
};
The front end API
export const getMovies = async (pageNo, limit, filter) => {
const token = getToken();
try {
const { data } = await client(
`/movie/movies?pageNo=${pageNo}&limit=${limit}&filter=${filter}`,
{
headers: {
authorization: "Bearer " + token,
"content-type": "multipart/form-data",
},
}
);
return data;
} catch (error) {
return catchError(error);
}
};
The front end CATEGORY COMPONENT
I want the user to filter the post by category by clicking on the category
export default function AllCategory() {
const [allCategories, setAllCategories] = useState([]);
const fetchCategories = async () => {
const res = await getCategoryForUsers();
setAllCategories(res);
};
useEffect(() => {
fetchCategories();
}, []);
return (
<div className=''>
<ul className=' space-x-4 '>
{allCategories.map((c, index) => {
return <li key={index}>{c.title}</li>;
})}
</ul>
</div>
);
}
You'll want to use a filter first in another function
const [selectedCategories, setCurrentlySelectedCategories] = useState([]);
const handleSelectCategory = (category) => {
const currentlySelected = allCategories.filter((item) => item.category === category);
setCurrentlySelectedCategories(currentlySelected);
}
Now just call map on the selectedCategories
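Separately, note that the backend above reads req.query.categories while the frontend request sends filter=. As a sketch (using only names from the snippets above), the request URL needs to carry the selected category values under the parameter name the backend expects:

```javascript
// Build the movies URL the way the backend parses it: a comma-separated
// `categories` query parameter (pageNo/limit as in the frontend snippet).
const buildMoviesUrl = (pageNo, limit, categories) =>
  `/movie/movies?pageNo=${pageNo}&limit=${limit}&categories=${encodeURIComponent(
    categories.join(',')
  )}`;

console.log(buildMoviesUrl(0, 10, ['Action', 'Drama']));
// /movie/movies?pageNo=0&limit=10&categories=Action%2CDrama
```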
I'm learning React Native/React.js with Node.js (NestJS) and Firebase, and I have some problems with updating live data. For example, I have a bell icon representing notifications, and I want it to update whenever the data changes and show the number of unread notifications from the database. My code:
API
async getNotifications(data: any): Promise<any> {
const receiverId = data.userId;
const warehouseId = await this.getKhoId(receiverId);
const ref = db.ref('/Notification');
const result = [];
await ref.once('value', function (snapshot) {
if (snapshot.val())
for (let j = 0; j < warehouseId.length; j++)
for (let i = 0; i < snapshot.val().length; i++) {
if (
snapshot.val()[i] &&
snapshot.val()[i].warehouseId == warehouseId[j]
) {
result.push({
content: snapshot.val()[i].content,
read: snapshot.val()[i].read,
time: snapshot.val()[i].time,
number: i,
});
}
}
else return 'None';
});
return result;
}
React Native
useEffect(() => {
const handleData = async () => {
const userId = await SecureStore.getItemAsync("userId");
const numberOfNoti = await axios({
method: "post",
url: "http://192.168.1.8:5000/getNotifications",
data: {
userId: userId,
},
}).then(function (response) {
let val = 0;
if (response.data) {
response.data.forEach((item) => {
if (item.read === 0) {
val++;
}
});
}
return val;
});
setNumberOfNoti(numberOfNoti);
};
handleData();
}, []);
and the component AppBar contain the bell icon:
{numberOfNoti !== 0 ? (
<Badge size={16} style={{ position: "absolute", top: 5, right: 5 }}>
{numberOfNoti}
</Badge>
) : (
void 0
)}
How can I live-update the number in the Badge when the data in Firebase changes? I also have a Notification component that contains a list of notifications and updates the state of each notification (from unread to read onPress), and I want the Badge number to change too.
I realize I can call the API continuously using setInterval, and it looks something like this. I don't know whether it's a proper way or not, but it runs fine.
useEffect(() => {
const interval = setInterval(() => {
const handleData = async () => {
const userId = await SecureStore.getItemAsync("userId");
const numberOfNoti = await axios({
method: "post",
url: "http://192.168.1.2:5000/getNotifications",
data: {
userId: userId,
},
}).then(function (response) {
let val = 0;
if (response.data) {
response.data.forEach((item) => {
if (item.read === 0) {
val++;
}
});
}
return val;
});
setNumberOfNoti(numberOfNoti);
};
handleData();
}, 1000);
return () => clearInterval(interval);
}, []);
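The unread-count logic inside the .then() callback can be factored into a pure helper, so it stays the same whether it is fed by this setInterval polling or later by a push mechanism such as the Firebase client SDK's realtime value listener. A minimal sketch, with the notification shape assumed from the code above:

```javascript
// Counts notifications whose `read` flag is 0; tolerates a missing payload
const countUnread = (notifications) =>
  (notifications || []).filter((item) => item.read === 0).length;

console.log(countUnread([{ read: 0 }, { read: 1 }, { read: 0 }])); // 2
console.log(countUnread(null)); // 0
```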
I am trying to achieve cursor pagination on my data table (material-ui-datatables).
I am using React for the front end, Express for the backend, and MongoDB for storage. I want to pass the page number, page limit, and the previous and next cursors in the request from my data table to the API, and I am using the mongo-cursor-pagination plugin.
import React, { useState, useEffect } from "react";
import MUIDataTable from "mui-datatables";
import axios from "axios";
import PropagateLoader from "react-spinners/PropagateLoader";
// employee_info
function employee_info() {
let [loading, setLoading] = useState(true);
const [Response, setResponse] = useState([]);
const get_employee_details = () => {
axios
.get(configData.SERVER_URL + "/api/get_employee_info")
.then((res) => {
setResponse(res.data);
setLoading(false);
});
};
useEffect(() => {
const interval = setInterval(() => get_employee_details(), 10000);
return () => {
clearInterval(interval);
};
}, []);
if (loading === true) {
return (
<div style={style}>
<PropagateLoader loading={loading} color={"#36D7B7"} size={30} />
</div>
);
} else {
return EmployeeInfoTable(Response);
}
}
//DataTable
function EmployeeInfoTable(value) {
if (
typeof value == "undefined" ||
value == null ||
value.length == null ||
value.length < 0
) {
return <div></div>;
}
const columns = [
{ label: "Employee_ID", name: "employee_id" },
{ label: "Name", name: "name" },
{ label: "Department", name: "department" },
{ label: "Manager", name: "manager" },
];
const data = value.map((item) => {
return [
item.employee_id,
item.name,
item.department,
item.manager,
];
});
const options = {
caseSensitive: true,
responsive: "standard",
selectableRows: "none",
filter: false,
download: false,
print: false,
viewColumns: false,
};
return (
<MUIDataTable
title={"Employee_Details"}
data={data}
columns={columns}
options={options}
/>
);
}
Service API
const MongoPaging = require('mongo-cursor-pagination');
const express = require("express");
const router = express.Router();
router.get('/get_employee_info', async (req, res, next) => {
try {
const result = await MongoPaging.find(db.collection('employee'), {
query: {
employee: req.employee_id
},
paginatedField: 'created',
fields: {
manger: req.manger,
},
limit: req.query.limit,
next: req.query.next,
previous: req.query.previous,
});
res.json(result);
} catch (err) {
next(err);
}
})
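On the client side, cursor paging means feeding the next/previous tokens from each response back into the following request. A sketch of building that request URL (the next/previous fields are assumed from mongo-cursor-pagination's result shape; the route path matches the snippet above):

```javascript
// Cursors remembered from the last response; start with none
let cursors = { next: null, previous: null };

// Build the query string the route above reads via req.query
const buildQuery = (limit, direction) => {
  const params = new URLSearchParams({ limit: String(limit) });
  if (direction === 'next' && cursors.next) params.set('next', cursors.next);
  if (direction === 'previous' && cursors.previous) params.set('previous', cursors.previous);
  return `/api/get_employee_info?${params.toString()}`;
};

console.log(buildQuery(10, 'next')); // /api/get_employee_info?limit=10
// after each response: cursors = { next: result.next, previous: result.previous };
```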
First, let me say, I am very new to backend applications and Node.js. I primarily do mobile development, so my knowledge of the language is limited.
I have an endpoint in Firebase Functions that builds and saves a PDF from data in Firestore and images in Storage. The PDF building works just fine, and I am not getting any errors. However, the final piece of code to save the PDF doesn't execute consistently. I have log statements that never fire, yet sometimes the PDF is saved. I assume it has something to do with my use of async methods, but I'm not sure. Is there anything blatantly wrong with this code? This is the entirety of the code I am using.
const admin = require('firebase-admin');
const firebase_tools = require('firebase-tools');
const functions = require('firebase-functions');
const Printer = require('pdfmake');
const fonts = require('pdfmake/build/vfs_fonts.js');
const { Storage } = require('@google-cloud/storage');
const url = require('url');
const https = require('https')
const os = require('os');
const fs = require('fs');
const path = require('path');
const storage = new Storage();
const bucketName = '<BUCKET NAME REMOVED FOR THIS QUESTION>'
admin.initializeApp({
serviceAccountId: 'firebase-adminsdk-ofnne@perimeter1-d551f.iam.gserviceaccount.com',
storageBucket: bucketName
});
const bucket = admin.storage().bucket()
const firestore = admin.firestore()
const fontDescriptors = {
Roboto: {
normal: Buffer.from(fonts.pdfMake.vfs['Roboto-Regular.ttf'], 'base64'),
bold: Buffer.from(fonts.pdfMake.vfs['Roboto-Medium.ttf'], 'base64'),
italics: Buffer.from(fonts.pdfMake.vfs['Roboto-Italic.ttf'], 'base64'),
bolditalics: Buffer.from(fonts.pdfMake.vfs['Roboto-Italic.ttf'], 'base64'),
}
};
function buildLog(data) {
const filePath = data.imageReference;
const fileName = path.basename(filePath);
const tempFilePath = path.join(os.tmpdir(), fileName);
return {
stack: [
{
image: tempFilePath,
fit: [130, 220]
},
{
text: data["logEventType"],
style: 'small'
},
{
text: data["date"],
style: 'small'
}
],
unbreakable: true,
width: 130
}
}
function buildLogsBody(data) {
var body = [];
var row = []
var count = 0
data.forEach(function(logData) {
const log = buildLog(logData)
row.push(log)
count = count + 1
if (count == 4) {
body.push([{columns: row, columnGap: 14}])
body.push([{text: '\n'}])
row = []
count = 0
}
});
body.push([{columns: row, columnGap: 14}])
return body;
}
function title(incidentTitle, pageNumber, logCount, messageCount) {
var pageTitle = "Incident Summary"
const logPageCount = Math.ceil(logCount / 8)
if (messageCount > 0 && pageNumber > logPageCount) {
pageTitle = "Message History"
}
var body = [{
text: incidentTitle + ' | ' + pageTitle,
style: 'header'
}]
return body
}
function messageBody(message) {
var body = {
stack: [
{
columns: [
{width: 'auto', text: message['senderName'], style: 'messageSender'},
{text: message['date'], style: 'messageDate'},
],
columnGap: 8,
lineHeight: 1.5
},
{text: message['content'], style: 'message'},
{text: '\n'}
],
unbreakable: true
}
return body
}
function buildMessageHistory(messages) {
var body = []
if (messages.length > 0) {
body.push({ text: "", pageBreak: 'after' })
}
messages.forEach(function(message) {
body.push(messageBody(message))
body.push('\n')
})
return body
}
const linebreak = "\n"
async function downloadImages(logs) {
await Promise.all(logs.map(async (log) => {
functions.logger.log('Image download started for ', log);
const filePath = log.imageReference;
const fileName = path.basename(filePath);
const tempFilePath = path.join(os.tmpdir(), fileName);
await bucket.file(filePath).download({destination: tempFilePath});
functions.logger.log('Image downloaded locally to', tempFilePath);
}));
}
//////////// PDF GENERATION /////////////////
exports.generatePdf = functions.https.onCall(async (data, context) => {
console.log("PDF GENERATION STARTED **************************")
// if (request.method !== "GET") {
// response.send(405, 'HTTP Method ' + request.method + ' not allowed');
// return null;
// }
const teamId = data.teamId;
const incidentId = data.incidentId;
const incidentRef = firestore.collection('teams/').doc(teamId).collection('/history/').doc(incidentId);
const incidentDoc = await incidentRef.get()
const messages = []
const logs = []
if (!incidentDoc.exists) {
throw new functions.https.HttpsError('not-found', 'Incident history not found.');
}
const incident = incidentDoc.data()
const incidentTitle = incident["name"]
const date = "date" //incident["completedDate"]
const address = incident["address"]
const eventLogRef = incidentRef.collection('eventLog')
const logCollection = await eventLogRef.get()
logCollection.forEach(doc => {
logs.push(doc.data())
})
functions.logger.log("Checking if images need to be downloaded");
if (logs.length > 0) {
functions.logger.log("Image download beginning");
await downloadImages(logs);
}
functions.logger.log("Done with image download");
const messagesRef = incidentRef.collection('messages')
const messageCollection = await messagesRef.get()
messageCollection.forEach(doc => {
messages.push(doc.data())
})
////////////// DOC DEFINITION ///////////////////////
const docDefinition = {
pageSize: { width: 612, height: 792 },
pageOrientation: 'portrait',
pageMargins: [24,60,24,24],
header: function(currentPage, pageCount, pageSize) {
var headerBody = {
columns: [
title(incidentTitle, currentPage, logs.length, messages.length),
{
text: 'Page ' + currentPage.toString() + ' of ' + pageCount,
alignment: 'right',
style: 'header'
}
],
margin: [24, 24, 24, 0]
}
return headerBody
},
content: [
date,
linebreak,
address,
linebreak,
{ text: [
{ text: 'Incident Commander:', style: 'header' },
{ text: ' Daniel', style: 'regular'},
]
},
linebreak,
{
text: [
{ text: 'Members involved:', style: 'header' },
{text: ' Shawn, Zack, Gabe', style: 'regular'},
]
},
linebreak,
buildLogsBody(logs),
buildMessageHistory(messages)
],
pageBreakBefore: function(currentNode, followingNodesOnPage, nodesOnNextPage, previousNodesOnPage) {
return currentNode.headlineLevel === 1 && followingNodesOnPage.length === 0;
},
styles: {
header: {
fontSize: 16,
bold: true
},
regular: {
fontSize: 16,
bold: false
},
messageSender: {
fontSize: 14,
bold: true
},
message: {
fontSize: 14
},
messageDate: {
fontSize: 14,
color: 'gray'
}
}
}
const printer = new Printer(fontDescriptors);
const pdfDoc = printer.createPdfKitDocument(docDefinition);
var chunks = []
const pdfName = `${teamId}/${incidentId}/report.pdf`;
pdfDoc.on('data', function (chunk) {
chunks.push(chunk);
});
pdfDoc.on('end', function () {
functions.logger.log("PDF on end started")
const result = Buffer.concat(chunks);
// Upload generated file to the Cloud Storage
const fileRef = bucket.file(
pdfName,
{
metadata: {
contentType: 'application/pdf'
}
}
);
// bucket.upload("report.pdf", { destination: "${teamId}/${incidentId}/report.pdf", public: true})
fileRef.save(result);
fileRef.makePublic().catch(console.error);
// Sending generated file as a response
// res.send(result);
functions.logger.log("File generated and saved.")
return { "response": result }
});
pdfDoc.on('error', function (err) {
res.status(501).send(err);
throw new functions.https.HttpsError('internal', err);
});
pdfDoc.end();
})
For quick reference, the main endpoint method is exports.generatePdf, and the pdfDoc.on handlers at the end should handle the saving, but that code appears to never fire: the logs in it are never logged, and the document is not always saved.
This is a function lifecycle issue: your function is killed before it completes its task because the last operation relies on an event handler instead of returning a Promise. The reason it sometimes works is that you got lucky. Once a function completes, it should have finished doing everything it needs to do.
So what you need to do is correctly pipe the data from the pdfDoc stream through to Cloud Storage, all wrapped in Promise that Cloud Functions can use to monitor progress and not kill your function before it finishes.
In its simplest form, it looks like this:
const stream = /* ... */;
const storageStream = bucket
.file(/* path */)
.createWriteStream(/* options */);
return new Promise((resolve, reject) => {
storageStream.once("finish", resolve); // resolve when written
storageStream.once("error", reject); // reject when either stream errors
stream.once("error", reject);
stream.pipe(storageStream); // pipe the data
});
Note: The Google Cloud Storage Node SDK is not the same as the Firebase Client's Cloud Storage SDK!
return new Promise((resolve, reject) => {
const pdfDoc = printer.createPdfKitDocument(docDefinition);
const pdfName = `${teamId}/${incidentId}/report.pdf`;
// Reference to Cloud Storage upload location
const fileRef = bucket.file(pdfName);
const pdfReadStream = pdfDoc;
const storageWriteStream = fileRef.createWriteStream({
predefinedAcl: 'publicRead', // saves calling makePublic()
contentType: 'application/pdf'
});
// connect errors from the PDF
pdfReadStream.on('error', (err) => {
console.error("PDF stream error: ", err);
reject(new functions.https.HttpsError('internal', err));
});
// connect errors from Cloud Storage
storageWriteStream.on('error', (err) => {
console.error("Storage stream error: ", err);
reject(new functions.https.HttpsError('internal', err));
});
// connect upload is complete event.
storageWriteStream.on('finish', () => {
functions.logger.log("File generated and saved to Cloud Storage.");
resolve({ "uploaded": true });
});
// pipe data through to Cloud Storage
pdfReadStream.pipe(storageWriteStream);
// finish the document
pdfDoc.end();
});