Getting 0 for the x, y and bbox values of SVG elements, and setting the values manually is not working - svg

I am using the svg.js library as follows:
const fs = require('fs');
const { createSVGWindow } = require('svgdom');
const window = createSVGWindow();
const document = window.document;
const { SVG, registerWindow } = require('@svgdotjs/svg.js');
const setSizeInfoToID = require('./utils');

const svgFromFile = (path) => {
  try {
    const svgString = fs.readFileSync(path, 'utf8');
    return svgString;
  } catch (err) {
    console.error(err);
  }
};

// register window and document
registerWindow(window, document);

const svgFromString = (string) => {
  const draw = SVG(document.documentElement);
  var svg = draw.svg(string);
  return svg;
};
// reading svg string from file
var svgstring = svgFromFile('svgs/test svgs/bib1.svg')
// create SVG document from the strings
var svg = svgFromString(svgstring);
let selector = `[id*="P8_5_" i]`;
const element = svg.findOne(selector);
console.log(element.x());
console.log(element.y());
element.x(1000);
element.y(1000);
console.log(element.x());
console.log(element.y());
The output I am getting is:
0
0
0
0
Here is the svg file I'm working with.
I am getting 0 for the x, y and bbox values, and setting the values is not working either.
This is true for every <g> element of the svg, although the same methods work for getting and setting x, y and bbox values on similar svg files.
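One possible explanation (not verified against your file): a <g> element has no x/y attributes of its own, so svg.js derives its position from the bounding box of its children, and svgdom can only compute that box for geometry it knows how to measure. When the bbox cannot be computed, x(), y() and bbox() come back 0, and setting x/y (which works by shifting the bbox) does nothing. A string-level workaround is to position the group with a transform instead; `translateGroupInString` below is a hypothetical helper, not the svg.js API, and assumes the matched <g> tag has no existing transform attribute:

```javascript
// Hypothetical string-level workaround (not the svg.js API): a <g> has no
// x/y attributes, so "moving" it means adding a transform. This injects a
// translate() into the first <g> whose id matches, assuming the tag has no
// existing transform attribute.
const translateGroupInString = (svgString, id, dx, dy) =>
  svgString.replace(
    new RegExp(`(<g\\b[^>]*\\bid="${id}"[^>]*)>`),
    `$1 transform="translate(${dx}, ${dy})">`
  );
```

With svg.js itself, the equivalent would be element.translate(1000, 1000), which does not depend on a computed bounding box.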

Related

Error on provided shape, expected (1200000) and actual (400000) tensor values are different

Is there any way to have the input image values match the expected values? I know for a fact that the input image is 800px wide, 500px high, and colored (meaning 3 channels), so the shape should be [800,500,3].
However, I receive the error below:
The error: Error: Based on the provided shape, [800,500,3], the tensor should have 1200000 values but has 400000
Is there any way to give the image 1,200,000 values, or a way to adjust the shape to match 400,000 values while still retaining the 800x500 size with 3 channels?
The code:
var tf = require('@tensorflow/tfjs-node');
const fs = require('fs');
const Jimp = require('jimp');

const image = './1-1.png';
const imageWidth = 800;
const imageHeight = 500;
const imageChannels = 3;

const getData = async function (path) {
  const data = [];
  const image = await Jimp.read(path);
  image.scan(0, 0, imageWidth, imageHeight, (x, y, idx) => {
    let v = image.bitmap.data[idx + 0];
    data.push(v / 255);
  });
  return data;
};

const createImage = async (data) => {
  const imTen = tf.tensor(data, [imageWidth, imageHeight, 3]);
  const inTen = imTen.expandDims();
  return inTen;
};

const main = async () => {
  const model = await tf.loadLayersModel('file:///retake/savedmodels/model.json');
  model.summary();
  const a = await getData(image);
  const b = await createImage(a);
  const tfImage = b;
  console.log(tfImage);
  const prediction = model.predict(tfImage);
  prediction.print();
};

main();
The input image I know for a fact is 800px in width, 500px in height, and is colored (meaning 3 channels)
Check the return value of your getData method: it appears to be returning a grayscale image (one value per pixel), which matches the 400,000 values instead of the expected 1,200,000.
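The arithmetic backs this up: 800 × 500 × 3 = 1,200,000, while 800 × 500 × 1 = 400,000, exactly the count in the error. getData pushes only data[idx + 0], so it keeps one channel per pixel. A sketch of a fix, using a hypothetical helper that operates on an RGBA array like Jimp's bitmap.data:

```javascript
// 800 * 500 * 3 = 1,200,000 values, while 800 * 500 * 1 = 400,000, which is
// exactly the mismatch in the error. Jimp stores RGBA (4 bytes per pixel), so
// pushing only data[idx] keeps one channel. This keeps all three:
const toNormalizedRGB = (rgbaData, width, height) => {
  const out = [];
  for (let i = 0; i < width * height; i++) {
    const idx = i * 4; // 4 bytes per pixel; skip the alpha byte
    out.push(rgbaData[idx] / 255, rgbaData[idx + 1] / 255, rgbaData[idx + 2] / 255);
  }
  return out;
};
```

For a 2×1 image this yields 6 values (2 × 1 × 3), matching the shape the tensor expects.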

mask after finding shape's contour

I'm trying to write an ANPR (automatic number-plate recognition) project in Node.js.
I managed to draw the contours around the license plate, but now I want to mask it.
This is what I have now (just without the text):
I want to mask it like so (and even crop it afterwards):
This is my code so far:
import cv from 'opencv4nodejs';

// read image, grayscale
const img = cv.imread('./images/img2.jpg');
const grayImg = img.bgrToGray();
const bfilter = grayImg.bilateralFilter(11, 17, 17);
const edged = bfilter.canny(30, 200);

const getContour = (handMask) => {
  const mode = cv.RETR_TREE;
  const method = cv.CHAIN_APPROX_SIMPLE;
  const contours = handMask.findContours(mode, method);
  return contours
    .sort((c0, c1) => c1.area - c0.area)
    .slice(0, 10)
    .find((contour) => {
      const x = contour.approxPolyDP(10, true);
      return x.length === 4;
    });
};

let edgeContour = getContour(edged);
let mask = new cv.Mat(grayImg.rows, grayImg.cols, 0, 0);
let x = img.drawContours([edgeContour.getPoints()], 0, new cv.Vec3(0, 255, 0), {
  thickness: 5,
});
x = img.bitwiseAnd(img);
cv.imshow('drawContours', x);
cv.waitKey();
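Since opencv4nodejs cannot run here, a conceptual sketch with plain arrays standing in for cv.Mat shows what the intended masking does: keep pixels where the mask is non-zero and zero out the rest. That is what combining the image with a filled-contour mask via a bitwise AND achieves; note that in the code above mask is created but never used, and img.bitwiseAnd(img) just ANDs the image with itself:

```javascript
// Conceptual sketch with plain arrays standing in for cv.Mat: masking keeps
// pixels where the mask is non-zero and zeroes the rest. To get this effect
// with OpenCV, the contour would be drawn *filled* into the mask first, and
// the mask combined with the image (rather than the image with itself).
const applyMask = (pixels, mask) => pixels.map((v, i) => (mask[i] ? v : 0));

// applyMask([10, 20, 30], [1, 0, 1]) → [10, 0, 30]
```

Cropping afterwards would then amount to taking the bounding rectangle of the contour and slicing that region out of the masked image.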

Creating a 2D Snapshot using three JS library inside Node JS

I'm using the code below to create a 3D model using the Three.js library. The code runs successfully in Node.js, but when I try to save an image, the output is a blank image.
I do have a ThreeUtil JS file which holds the constants of my shader materials and a couple of functions. There is no problem with this utility file.
Observation: I see that my buffer data has all element values as zero. Can someone let me know where exactly I'm going wrong with my implementation?
Please find the code at the sandbox URL below:
https://codesandbox.io/s/boring-lewin-7msxm
A sample input file is attached as sampleinput.json; it comes from an endpoint and is passed to the main TypeScript class below, which performs the creation and captures the screenshot.
import Jimp from 'jimp';
const XMLHttpRequest = require("xmlhttprequest").XMLHttpRequest;
const THREE = require('three');
const THREESTLLoader = require('three-stl-loader');
const THREEObjLoader = require('three-obj-loader');
const jsdom = require('jsdom');
const gl = require('gl');
import { WebGLRenderTarget } from 'three';
const HOMUNCULUS_OBJ_URL = './assets/3d-homunculus.obj';
const Canvas = require('canvas');
export class ThreeDimensionalModel {
private prostateMeshData: ProstateMeshData;
private width: number = 660;
private height: number = 660;
private dimensions: ThreeCanvasDimensions;
private scenes: Scenes;
private camera: THREE.PerspectiveCamera;
private matrix: THREE.Matrix4;
private rotationMatrix: THREE.Matrix4;
private prostateMeshBoundingBox: THREE.Box3;
private renderer: THREE.WebGLRenderer;
constructor() {
const { JSDOM } = jsdom;
const { window } = new JSDOM();
const { document } = (new JSDOM('')).window;
// @ts-ignore
global.window = window;
// @ts-ignore
global.document = document;
// @ts-ignore
global.window.decodeURIComponent = global.decodeURIComponent;
// Adding XML HTTP Requests library
// @ts-ignore
global.XMLHttpRequest = XMLHttpRequest;
this.dimensions = this.calculateDimensions();
this.scenes = this.initializeScenes();
this.camera = this.initializeCamera();
this.decorateScenes();
this.loadModels(this.prostateMeshData);
this.renderer = this.createRenderer();
this.animate();
}
private calculateDimensions = (): ThreeCanvasDimensions => ({
width: this.width,
height: this.height,
aspect: this.width / this.height,
})
private initializeScenes = (): Scenes => ({
main: new THREE.Scene(),
homunculus: new THREE.Scene(),
})
private initializeCamera(): THREE.PerspectiveCamera {
const camera = new THREE.PerspectiveCamera(
70,
this.dimensions.aspect,
1,
10000);
camera.position.z = 100;
return camera;
}
private decorateScenes(): void {
const ambientLight = new THREE.AmbientLight(0x505050);
const spotLight = this.getSpotLight();
const plane = this.getPlane();
ThreeUtil.addObjectToAllScenes(this.scenes, spotLight);
ThreeUtil.addObjectToAllScenes(this.scenes, ambientLight);
ThreeUtil.addObjectToAllScenes(this.scenes, plane);
}
private loadModels(prostateMeshData: ProstateMeshData): void {
const loader = new THREESTLLoader(THREE);
const dataUri = `data:text/plain;base64,${prostateMeshData.prostateMeshUrl}`;
loader.prototype.load(dataUri, (geometry: THREE.Geometry) => {
this.setModelMatricesAndProstateMeshBoundings(geometry);
this.setupHomunculusMesh();
const prostateMesh = this.createProstateMesh(geometry);
let biopsyMeshes;
if (prostateMeshData.biopsies) {
biopsyMeshes = this.createBiopsyMeshes(prostateMeshData.biopsies);
}
if (prostateMeshData.rois) {
this.loadRoiMeshesOnScene(prostateMeshData.rois);
}
this.addToMainScene(prostateMesh);
if (prostateMeshData.biopsies) {
biopsyMeshes.forEach(biopsyMesh => this.addToMainScene(biopsyMesh));
}
this.scaleInView();
});
}
public saveAsImage() {
// after init of renderer
const renderTarget = new WebGLRenderTarget(this.width, this.height);
const target = this.renderer.getRenderTarget() || renderTarget;
const size = this.renderer.getSize();
// @ts-ignore
target.setSize(size.width, size.height);
this.renderer.setRenderTarget(renderTarget);
this.renderer.render( this.scenes.main, this.camera, target );
// retrieve buffer from render target
const buffer = new Uint8Array(this.width * this.height * 4);
this.renderer.readRenderTargetPixels(renderTarget, 0, 0, this.width, this.height, buffer);
// create jimp image
const image = new Jimp({
data: buffer,
width: this.width,
height: this.height
}, (err: any, images: any) => {
images.write('output.jpeg');
});
}
private createRenderer(): THREE.WebGLRenderer {
const canvasWidth = 1024;
const canvasHeight = 1024;
const canvas = new Canvas.Canvas(canvasWidth, canvasHeight);
canvas.addEventListener = () => {
// noop
}
canvas.style = {};
const renderer = new THREE.WebGLRenderer({ context: gl(this.width, this.height), canvas , preserveDrawingBuffer: true , alpha: true});
renderer.autoClear = false;
renderer.setClearColor(0xFFFFFF);
renderer.setPixelRatio(2);
renderer.setSize(this.dimensions.width, this.dimensions.height);
renderer.sortObjects = false;
renderer.shadowMap.enabled = true;
renderer.shadowMap.type = THREE.PCFShadowMap;
return renderer;
}
private getPlane = (): THREE.Mesh => new THREE.Mesh(
new THREE.PlaneBufferGeometry(2000, 2000, 8, 8),
new THREE.MeshBasicMaterial({ visible: false })
)
private animate = (): void => {
this.renderScenes();
}
private getSpotLight(): THREE.SpotLight {
const spotLight = new THREE.SpotLight(0xffffff, 1.5);
spotLight.position.set(0, 500, 2000);
spotLight.castShadow = true;
spotLight.shadow.camera.near = 200;
spotLight.shadow.camera.far = this.camera.far;
spotLight.shadow.camera.fov = 50;
spotLight.shadow.bias = -0.00022;
spotLight.shadow.mapSize.width = 2048;
spotLight.shadow.mapSize.height = 2048;
return spotLight;
}
private setModelMatricesAndProstateMeshBoundings(geometry: THREE.Geometry): void {
geometry.computeFaceNormals();
geometry.computeBoundingBox();
// A transformation matrix (matrix) is applied to prostate segmentation, target and biopsy,
// which contains a translation and a rotation
// The translation shifts the center of segmentation mesh to view center
this.prostateMeshBoundingBox = geometry.boundingBox.clone();
const center = this.prostateMeshBoundingBox.getCenter(new THREE.Vector3());
const translationMatrix = new THREE.Matrix4().makeTranslation(-center.x, -center.y, -center.z);
// The default view (DICOM coordinate) is patient head pointing towards user.
// We like patient foot pointing towards user.
// Therefore a 180 degree rotation along x (left-right) axis is applied.
const rotAxis = new THREE.Vector3(1, 0, 0);
this.rotationMatrix = new THREE.Matrix4().makeRotationAxis(rotAxis, Math.PI);
// matrix is the multiplication of rotation and translation
this.matrix = new THREE.Matrix4().multiplyMatrices(this.rotationMatrix, translationMatrix);
}
private createProstateMesh(geometry: THREE.Geometry): THREE.Mesh {
const shaderMaterial = ThreeUtil.createShaderMaterial(new THREE.Vector4(0.8, 0.6, 0.2, 0));
geometry.computeFaceNormals();
geometry.computeBoundingBox();
// A transformation matrix (matrix) is applied to prostate segmentation, target and biopsy,
// which contains a translation and a rotation
// The translation shifts the center of segmentation mesh to view center
this.prostateMeshBoundingBox = geometry.boundingBox.clone();
const mesh = new THREE.Mesh(geometry, shaderMaterial);
mesh.applyMatrix(this.matrix);
mesh.name = 'prostate';
return mesh;
}
private renderScenes(): void {
this.renderer.clear();
this.renderer.setViewport(-150, 0, 210, 75);
this.renderer.render(this.scenes.homunculus, this.camera);
this.renderer.clearDepth();
this.renderer.setViewport(0, 0, this.dimensions.width, this.dimensions.height);
this.renderer.render(this.scenes.main, this.camera);
}
private toCameraCoordinates(position: THREE.Vector3): THREE.Vector3 {
return position.applyMatrix4(this.camera.matrixWorldInverse);
}
private scaleInView(): void {
const point1 = this.toCameraCoordinates(this.prostateMeshBoundingBox.min);
const point2 = this.toCameraCoordinates(this.prostateMeshBoundingBox.max);
this.camera.fov = ThreeUtil.angleInDegree(point1, point2, this.camera) * 2;
this.camera.updateProjectionMatrix();
}
private createBiopsyMeshes(biopsiesMeshData: BiopsyMeshData[]): THREE.Mesh[] {
return biopsiesMeshData.map((biopsyMeshData: BiopsyMeshData, index: number) => {
const startPoint = new THREE.Vector3(
biopsyMeshData.proximalX,
biopsyMeshData.proximalY,
biopsyMeshData.proximalZ
);
const endPoint = new THREE.Vector3(biopsyMeshData.distalX, biopsyMeshData.distalY, biopsyMeshData.distalZ);
const color = biopsyMeshData.adenocarcinoma ?
0x0000ff: 0x002233;
const opacity = biopsyMeshData.adenocarcinoma ? 0.65 : 0.3;
const material = new THREE.MeshLambertMaterial({
transparent: true,
depthWrite: false,
depthTest: false,
emissive: new THREE.Color(color),
opacity,
});
const cylinder = ThreeUtil.getCylinderBetweenPoints(startPoint, endPoint, material);
cylinder.applyMatrix(this.matrix);
cylinder.name = `biopsy_${index}`;
return cylinder;
});
}
private loadRoiMeshesOnScene(roisMeshData: RoiMeshData[]): void {
roisMeshData.forEach((roiMeshData: RoiMeshData, index: number) => this.setupRoiMesh(roiMeshData, index));
}
private setupRoiMesh(roiMeshData: RoiMeshData, index: number): void {
const loader = new THREESTLLoader(THREE);
const dataUri = `data:text/plain;base64,${roiMeshData.ROIMeshUrl}`;
if (roiMeshData.tooltipConfiguration) {
if (typeof(roiMeshData.tooltipConfiguration) === 'string' ) {
roiMeshData.tooltipConfiguration = JSON.parse(roiMeshData.tooltipConfiguration);
}
}
loader.prototype.load(dataUri, (roiGeometry: THREE.Geometry) => {
const color = new THREE.Color(0xA396CB);
const colorVector = new THREE.Vector4(color.r, color.g, color.b, 0);
const shaderMaterial = ThreeUtil.createShaderMaterial(colorVector, roiMeshData.adenocarcinoma);
const mesh = new THREE.Mesh(roiGeometry, shaderMaterial);
mesh.name = `roi_${roiMeshData.ROIId}`;
roiGeometry.applyMatrix(this.matrix);
this.addToMainScene(mesh);
this.saveAsImage();
});
}
private setupHomunculusMesh(): void {
new THREEObjLoader(THREE);
new THREE.OBJLoader().load(HOMUNCULUS_OBJ_URL, (group: THREE.Group) => {
group.traverse((child) => {
if (child instanceof THREE.Mesh) {
// apply custom material
// @ts-ignore
child.material = new THREE.MeshPhongMaterial({
color: 0xb09b70,
specular: 0x050505,
shininess: 100,
flatShading: false
});
const homunculusMatrix = new THREE.Matrix4();
homunculusMatrix.makeScale(0.020, 0.020, 0.020);
homunculusMatrix.multiply(this.rotationMatrix);
child.applyMatrix(homunculusMatrix);
}
});
this.scenes.homunculus.add(group);
});
}
private addToMainScene(object3D: THREE.Object3D): void {
this.scenes.main.add(object3D);
// Adding Ambient Light for all scenes inside main scenes except Humanoid
const ambientLight = new THREE.AmbientLight(0x000000);
this.scenes.main.add(ambientLight);
}
}
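The matrix composition described in the comments of setModelMatricesAndProstateMeshBoundings (translate the bounding-box center to the origin, then rotate 180 degrees about the x axis) can be sanity-checked numerically. This is plain arithmetic with arrays standing in for THREE.Vector3 and THREE.Matrix4, not Three.js code, and the center value is illustrative:

```javascript
// Numeric check of the transform described in the class comments: translate
// the bounding-box center to the origin, then rotate 180° about the x axis.
// (Plain arrays stand in for THREE.Vector3 / THREE.Matrix4.)
const translateBy = ([tx, ty, tz]) => ([x, y, z]) => [x + tx, y + ty, z + tz];
const rotateX180 = ([x, y, z]) => [x, -y, -z]; // 180° about x negates y and z

const center = [10, 20, 30]; // illustrative bounding-box center
const toOrigin = translateBy([-center[0], -center[1], -center[2]]);
const transform = (p) => rotateX180(toOrigin(p));

// The mesh center lands at the view center (the origin), as the comment says,
// and points offset from the center are flipped in y and z.
```

If this invariant holds but the saved image is still blank, the problem is more likely in the render-target readback path of saveAsImage than in the model matrices.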

NodeJS - How to store image dimension in a variable

I am trying to store the image dimensions (width, height) in a variable in Node.js.
For reference:
https://github.com/lovell/sharp/issues/776
const metaReader = sharp();
metaReader
  .metadata()
  .then(info => {
    // info object contains the image dimensions. how to return this object?
    console.log(info);
  });
let metainfo = stream.pipe(metaReader);
Can you try this code instead?
let image = {};
const metaReader = sharp();
metaReader
  .metadata()
  .then(info => {
    // info object contains the image dimensions
    image.width = info.width;
    image.height = info.height;
    return image;
  });
let metainfo = stream.pipe(metaReader);
You could return the value dims and then interact with it in the next then,
or you could just interact with those dimensions in the current one.
const metaReader = sharp();
metaReader
  .metadata()
  .then(info => {
    const dims = { width: info.width, height: info.height };
    // Interact with dims here,
    // or return it and interact in the next then
    return dims;
  })
  .then(dims => {
    // or interact with the dimensions here
  });
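In all of these variants the key point is that metadata() is asynchronous: the dimensions only exist inside the then() callback (or after an await), so the function should return the promise and the caller should await it. A minimal sketch of that pattern, with a stand-in readMetadata() in place of sharp (which can't run here):

```javascript
// Sketch of the promise pattern; readMetadata() is a stand-in for sharp's
// metadata() and resolves to an info-like object with fabricated values.
const readMetadata = () =>
  Promise.resolve({ width: 800, height: 500, channels: 3 });

const getDims = async () => {
  const info = await readMetadata();
  return { width: info.width, height: info.height };
};

// Callers must await it too:
//   const { width, height } = await getDims();
```

Trying to read the dimensions synchronously right after stream.pipe(metaReader) will always fail, because the promise has not resolved yet at that point.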

How do I replace a string in a PDF file using NodeJS?

I have a template PDF file, and I want to replace some marker strings in it to generate new PDF files and save them. What's the best/simplest way to do this? I don't need to add graphics or anything fancy, just simple text replacement, so I don't want anything too complicated.
Thanks!
Edit: I just found HummusJS; I'll see if I can make progress and post it here.
I found this question by searching, so I think it deserves an answer. I found the solution by BrighTide here: https://github.com/galkahana/HummusJS/issues/71#issuecomment-275956347
Basically, there is this very powerful Hummus package, which uses a library written in C++ (cross-platform, of course). I think the answer given in that GitHub comment can be wrapped in a function like this:
var hummus = require('hummus');

/**
 * Returns a byte array from a string
 *
 * @param {string} str - input string
 */
function strToByteArray(str) {
  var myBuffer = [];
  var buffer = new Buffer(str);
  for (var i = 0; i < buffer.length; i++) {
    myBuffer.push(buffer[i]);
  }
  return myBuffer;
}

function replaceText(sourceFile, targetFile, pageNumber, findText, replaceText) {
  var writer = hummus.createWriterToModify(sourceFile, {
    modifiedFilePath: targetFile
  });
  var sourceParser = writer.createPDFCopyingContextForModifiedFile().getSourceDocumentParser();
  var pageObject = sourceParser.parsePage(pageNumber);
  var textObjectId = pageObject.getDictionary().toJSObject().Contents.getObjectID();
  var textStream = sourceParser.queryDictionaryObject(pageObject.getDictionary(), 'Contents');

  // read the original block of text data
  var data = [];
  var readStream = sourceParser.startReadingFromStream(textStream);
  while (readStream.notEnded()) {
    Array.prototype.push.apply(data, readStream.read(10000));
  }
  var string = new Buffer(data).toString().replace(findText, replaceText);

  // Create and write our new text object
  var objectsContext = writer.getObjectsContext();
  objectsContext.startModifiedIndirectObject(textObjectId);
  var stream = objectsContext.startUnfilteredPDFStream();
  stream.getWriteStream().write(strToByteArray(string));
  objectsContext.endPDFStream(stream);
  objectsContext.endIndirectObject();
  writer.end();
}

// replaceText('source.pdf', 'output.pdf', 0, /REPLACEME/g, 'My New Custom Text');
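As a small aside (not part of the original answer): in modern Node, Buffer.from replaces the deprecated new Buffer, and Buffer is iterable, so the strToByteArray helper can be a one-liner:

```javascript
// Buffer is iterable in modern Node, so the byte-array helper collapses to:
const strToByteArray = (str) => [...Buffer.from(str)];

// strToByteArray('AB') → [65, 66]
```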
UPDATE:
The version used at the time of writing this example was 1.0.83; things might have changed since.
UPDATE 2:
Recently I ran into an issue with another PDF file which had a different font. For some reason the text got split into small chunks, i.e. the string QWERTYUIOPASDFGHJKLZXCVBNM1234567890- was represented as -286(Q)9(WER)24(T)-8(YUIOP)116(ASDF)19(GHJKLZX)15(CVBNM1234567890-)
I had no idea what else to do other than make up a regex. So instead of this one line:
var string = new Buffer(data).toString().replace(findText, replaceText);
I have something like this now:
var string = Buffer.from(data).toString();
var characters = REPLACE_ME;
var match = [];
for (var a = 0; a < characters.length; a++) {
  match.push('(-?[0-9]+)?(\\()?' + characters[a] + '(\\))?');
}
string = string.replace(new RegExp(match.join('')), function(m, m1) {
  // m1 holds the first capture group, the leading kerning number
  return m1 + '( ' + REPLACE_WITH_THIS + ')';
});
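To see the chunk-matching regex in action, here it is applied to the sample string from above, with illustrative values filled in for REPLACE_ME / REPLACE_WITH_THIS:

```javascript
// Demo of the chunk-matching approach on the sample string from above.
const split = '-286(Q)9(WER)24(T)-8(YUIOP)116(ASDF)19(GHJKLZX)15(CVBNM1234567890-)';
const characters = 'QWERTYUIOPASDFGHJKLZXCVBNM1234567890-'; // the text to find

// One pattern unit per character: optional kerning number, optional parens.
const match = [];
for (let a = 0; a < characters.length; a++) {
  match.push('(-?[0-9]+)?(\\()?' + characters[a] + '(\\))?');
}

const replaced = split.replace(new RegExp(match.join('')), function (m, m1) {
  // m1 holds the first capture group, the leading kerning number ('-286')
  return m1 + '( NEW TEXT)';
});
// replaced → '-286( NEW TEXT)'
```

The whole chunked run collapses into a single text-showing operand, with the original leading kerning number preserved.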
Building on Alex's (and others') solutions, I noticed an issue where some non-text data was becoming corrupted. I tracked this down to the PDF text being encoded/decoded as UTF-8 instead of as a binary string. Anyway, here's a modified solution that:
Avoids corrupting non-text data
Uses streams instead of files
Allows multiple patterns/replacements
Uses the MuhammaraJS package which is a maintained fork of HummusJS (should be able to swap in HummusJS just fine as well)
Is written in TypeScript (feel free to remove the types for JS)
import muhammara from "muhammara";

interface Pattern {
  searchValue: RegExp | string;
  replaceValue: string;
}

/**
 * Modify a PDF by replacing text in it
 */
const modifyPdf = ({
  sourceStream,
  targetStream,
  patterns,
}: {
  sourceStream: muhammara.ReadStream;
  targetStream: muhammara.WriteStream;
  patterns: Pattern[];
}): void => {
  const modPdfWriter = muhammara.createWriterToModify(sourceStream, targetStream, { compress: false });
  const numPages = modPdfWriter
    .createPDFCopyingContextForModifiedFile()
    .getSourceDocumentParser()
    .getPagesCount();

  for (let page = 0; page < numPages; page++) {
    const copyingContext = modPdfWriter.createPDFCopyingContextForModifiedFile();
    const objectsContext = modPdfWriter.getObjectsContext();

    const pageObject = copyingContext.getSourceDocumentParser().parsePage(page);
    const textStream = copyingContext
      .getSourceDocumentParser()
      .queryDictionaryObject(pageObject.getDictionary(), "Contents");
    const textObjectID = pageObject.getDictionary().toJSObject().Contents.getObjectID();

    let data: number[] = [];
    const readStream = copyingContext.getSourceDocumentParser().startReadingFromStream(textStream);
    while (readStream.notEnded()) {
      const readData = readStream.read(10000);
      data = data.concat(readData);
    }
    const pdfPageAsString = Buffer.from(data).toString("binary"); // key change 1

    let modifiedPdfPageAsString = pdfPageAsString;
    for (const pattern of patterns) {
      modifiedPdfPageAsString = modifiedPdfPageAsString.replaceAll(pattern.searchValue, pattern.replaceValue);
    }

    // Create what will become our new text object
    objectsContext.startModifiedIndirectObject(textObjectID);
    const stream = objectsContext.startUnfilteredPDFStream();
    stream.getWriteStream().write(strToByteArray(modifiedPdfPageAsString));
    objectsContext.endPDFStream(stream);
    objectsContext.endIndirectObject();
  }

  modPdfWriter.end();
};

/**
 * Create a byte array from a string, as muhammara expects
 */
const strToByteArray = (str: string): number[] => {
  const myBuffer = [];
  const buffer = Buffer.from(str, "binary"); // key change 2
  for (let i = 0; i < buffer.length; i++) {
    myBuffer.push(buffer[i]);
  }
  return myBuffer;
};
And then to use it:
/**
 * Fill a PDF with template data
 */
export const fillPdf = async (sourceBuffer: Buffer): Promise<Buffer> => {
  const sourceStream = new muhammara.PDFRStreamForBuffer(sourceBuffer);
  const targetStream = new muhammara.PDFWStreamForBuffer();

  modifyPdf({
    sourceStream,
    targetStream,
    patterns: [{ searchValue: "home", replaceValue: "emoh" }], // TODO use actual patterns
  });

  return targetStream.buffer;
};
There is another Node.js package, asposepdfcloud, the Aspose.PDF Cloud SDK for Node.js. You can use it to replace text in your PDF document conveniently. Its free plan offers 150 credits monthly. Here is sample code to replace text in a PDF document; don't forget to install asposepdfcloud first.
const { PdfApi } = require("asposepdfcloud");
const { TextReplaceListRequest }= require("asposepdfcloud/src/models/textReplaceListRequest");
const { TextReplace }= require("asposepdfcloud/src/models/textReplace");
// Get App key and App SID from https://aspose.cloud
const pdfApi = new PdfApi("xxxxx-xxxxx-xxxx-xxxxxxxxxxx", "xxxxxxxxxxxxxxxxxxxxxb");
var fs = require('fs');

const name = "02_pages.pdf";
const remoteTempFolder = "Temp";
//const localTestDataFolder = "C:\\Temp";
//const path = remoteTempFolder + "\\" + name;
//var data = fs.readFileSync(localTestDataFolder + "\\" + name);

const textReplace = new TextReplace();
textReplace.oldValue = "origami";
textReplace.newValue = "aspose";
textReplace.regex = false;

const textReplace1 = new TextReplace();
textReplace1.oldValue = "candy";
textReplace1.newValue = "biscuit";
textReplace1.regex = false;

const trr = new TextReplaceListRequest();
trr.textReplaces = [textReplace, textReplace1];

// Upload File
//pdfApi.uploadFile(path, data).then((result) => {
//  console.log("Uploaded File");
//}).catch(function(err) {
//  // Deal with an error
//  console.log(err);
//});

// Replace text
pdfApi.postDocumentTextReplace(name, trr, null, remoteTempFolder).then((result) => {
  console.log(result.body.code);
}).catch(function(err) {
  // Deal with an error
  console.log(err);
});
P.S.: I'm a developer evangelist at Aspose.