I'm creating a document with pdfMake. In my document, I have three tables: the first shows the three best equipments, the second shows the three worst equipments, and the third shows all the equipments. The first table prints fine, but when I try to render the next tables, they do not come out as expected (see this image).
The code is this:
const tableHeader = [
setTableHeader("Código"),
setTableHeader("OEE"),
setTableHeader("Disponibilidade"),
setTableHeader("Performance"),
setTableHeader("Qualidade"),
]
const equipmentsTable = [
tableHeader,
...sortedEquipsInfos.map((equip)=>[
equip.eECode,
equip.oee.toLocaleString('pt-BR', {maximumFractionDigits: 2}) + '%',
equip.availability.toLocaleString('pt-BR', {maximumFractionDigits: 2}) + '%',
equip.performance.toLocaleString('pt-BR', {maximumFractionDigits: 2}) + '%',
equip.quality.toLocaleString('pt-BR', {maximumFractionDigits: 2}) + '%',
])];
...
const contentAux = [];
contentAux.push({
table: {
body: [
equipmentsTable[0], //header
equipmentsTable[equipmentsTable.length - 3],
equipmentsTable[equipmentsTable.length - 2],
equipmentsTable[equipmentsTable.length - 1],
]
}
});
contentAux.push({
table: {
body: [
equipmentsTable[0],
equipmentsTable[1],
equipmentsTable[2],
equipmentsTable[3],
]
}
})
contentAux.push(
{
table: {
body: equipmentsTable
},
}
)
...
var pdfDoc = printer.createPdfKitDocument({
...
content: contentAux
If I comment out the first table, the second works fine. If I comment out the first and the second, the third works. Any idea what is happening?
This happens due to a known bug (or limitation) in pdfmake.
You cannot reuse the same object references (equipmentsTable and its row arrays, in your case) in the document definition, because pdfmake mutates the definition objects while laying out the document. See https://github.com/bpampuch/pdfmake/issues/465
A possible workaround is to copy the objects (in all 3 cases):
...
const equipmentsTableCopy1 = JSON.parse(JSON.stringify(equipmentsTable));
...
[
equipmentsTableCopy1[0],
equipmentsTableCopy1[1],
equipmentsTableCopy1[2],
equipmentsTableCopy1[3],
]
...
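For completeness, a minimal sketch of what the full workaround could look like, assuming equipmentsTable is built exactly as in the question (the copy names and the deepCopy helper are illustrative, not part of the original code):
const deepCopy = (rows) => JSON.parse(JSON.stringify(rows));
// one independent copy per table, so no row object is shared
const firstTableBody = deepCopy(equipmentsTable);
const secondTableBody = deepCopy(equipmentsTable);
const thirdTableBody = deepCopy(equipmentsTable);
const contentAux = [];
// header row plus the last three data rows
contentAux.push({ table: { body: [firstTableBody[0], ...firstTableBody.slice(-3)] } });
// header row plus the first three data rows
contentAux.push({ table: { body: secondTableBody.slice(0, 4) } });
// the full table
contentAux.push({ table: { body: thirdTableBody } });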
excel snippet
I am using Mule 4 and am trying to read an Excel file, convert it into JSON using DataWeave, and update the records in Salesforce.
Below is the payload I get when I read the Excel file. I need to convert it into the requested output payload.
The values are dynamic. There might be more objects.
Any ideas appreciated.
Thanks.
Input Payload:
{
"X":[
{
"A":"Key1",
"B":"Key2",
"C":"Key3",
"D":"value1",
"E":"value2"
},
{
"A":"",
"B":"",
"C":"Key4",
"D":"value3",
"E":"value4"
},
{
"A":"Key5",
"B":"Key6",
"C":"Key7",
"D":"Value5",
"E":"Value6"
},
{
"A":"",
"B":"",
"C":"Key8",
"D":"Value7",
"E":"Value8"
}
]
}
Output Payload:
[
{
"Key1":{
"Key2":{
"Key3":"value1",
"Key4":"value3"
}
},
"Key5":{
"Key6":{
"Key7":"Value5",
"Key8":"Value7"
}
}
},
{
"Key1":{
"Key2":{
"Key3":"value2",
"Key4":"value4"
}
},
"Key5":{
"Key6":{
"Key7":"Value6",
"Key8":"Value8"
}
}
}
]
The following seems to work.
This is JavaScript. I don't know what the underlying syntax or scripting language is for DataWeave, but between the C-family syntax and inline comments you can probably treat the JS like pseudo-code and read it well enough to recreate the logic.
// I assume you started with a table structure like this:
//
// A B C D E
// == == == == ==
// K1 K2 K3 v1 v2 <~ X[0]
// __ __ K4 v3 v4 <~ X[1]
// K5 K6 K7 v5 v6 <~ X[2]
// __ __ K8 v7 v8 <~ X[3]
//
// So I'm going to call A,B,C,D,E "column labels"
// and the elements in `X` "rows".
// Here's the original input you provided:
const input = {
"X": [
{
"A":"Key1",
"B":"Key2",
"C":"Key3",
"D":"value1",
"E":"value2"
},
{
"A":"",
"B":"",
"C":"Key4",
"D":"value3",
"E":"value4"
},
{
"A":"Key5",
"B":"Key6",
"C":"Key7",
"D":"Value5",
"E":"Value6"
},
{
"A":"",
"B":"",
"C":"Key8",
"D":"Value7",
"E":"Value8"
}
]
}
// First let's simplify the structure by filling in the missing keys at
// `X[1].A`, `X[1].B` etc. We could keep track of the last non-blank
// value while doing the processing below instead, but doing it now
// reduces the complexity of the final loop.
input.X.forEach((row, row_index) => {
(Object.keys(row)).forEach((col_label) => {
if (row[col_label].length == 0) {
row[col_label] = input.X[row_index - 1][col_label]
}
});
});
// Now X[1].A is "Key1", X[1].B is "Key2", etc.
// I'm not quite sure if there's a hard-and-fast rule that determines
// which values become keys and which become values, so I'm just going
// to explicitly describe the structure. If there's a pattern to follow
// you could compute this dynamically.
const key_column_labels = ["A","B","C"]
const val_column_labels = ["D","E"]
// this will be the root object we're building
var output_list = []
// since the value columns become output rows we need to invert the loop a bit,
// so the outermost thing we iterate over is the list of value column labels.
// our general strategy is to walk down the "tree" of key-columns and
// append the current value-column. we do that for each input row, and then
// repeat that whole cycle for each value column.
val_column_labels.forEach((vl) => {
// the current output row we're populating
var out_row = {}
output_list.push(out_row)
// for each input row
input.X.forEach((in_row) => {
// start at the root level of the output row
var cur_node = out_row
// for each of our key column labels
key_column_labels.forEach((kl, ki) => {
if (ki == (key_column_labels.length - 1)) {
// this is the last key column (C), the one that holds the values
// so set the current vl as one of the keys
cur_node[in_row[kl]] = in_row[vl]
} else if (cur_node[in_row[kl]] == null) {
// else if there's no map stored in the current node for this
// key value, let's create one
cur_node[in_row[kl]] = {}
// and "step down" into it for the next iteration of the loop
cur_node = cur_node[in_row[kl]]
} else {
// otherwise there's already a map stored in the current node for this
// key value, so just step down into the existing map
cur_node = cur_node[in_row[kl]]
}
});
});
});
console.log( JSON.stringify(output_list,null,2) )
// When I run this I get the data structure you're looking for:
//
// ```
// $ node json-transform.js
// [
// {
// "Key1": {
// "Key2": {
// "Key3": "value1",
// "Key4": "value3"
// }
// },
// "Key5": {
// "Key6": {
// "Key7": "Value5",
// "Key8": "Value7"
// }
// }
// },
// {
// "Key1": {
// "Key2": {
// "Key3": "value2",
// "Key4": "value4"
// }
// },
// "Key5": {
// "Key6": {
// "Key7": "Value6",
// "Key8": "Value8"
// }
// }
// }
// ]
// ```
Here's a JSFiddle that demonstrates this: https://jsfiddle.net/wcvmu0g9/
I'm not sure this captures the general form you're going for (because I'm not sure I fully understand that), but I think you should be able to abstract this basic principle.
This was a challenging one. I was able to at least get the output you expect with this DataWeave code. I'll put some comments in the code.
%dw 2.0
output application/json
fun getNonEmpty(key, previousKey) =
if(isEmpty(key)) previousKey else key
fun completeKey(item, previousItem) =
{
A: getNonEmpty(item.A,previousItem.A),
B: getNonEmpty(item.B,previousItem.B)
} ++ (item - "A" - "B")
// Here I'm filling in the A and B columns so each row has the complete path, using the previous item's values when they come in empty
var completedStructure =
payload.X reduce ((item, acc = []) ->
acc + completeKey(item, acc[-1])
)
// This takes a list, groups it by a field, and lets you pass in what you
// want to do with the grouped values.
fun groupByKey(structure, field, next) =
structure groupBy ((item, i) -> item[field]) mapObject ((v, k, i1) ->
{
(k): next(k,v)
}
)
// This one is just to avoid repeating the code for each value field
fun valuesForfield(structure, field) =
groupByKey(structure, "A", (key,value) ->
groupByKey(value, "B", (k,v) ->
groupByKey(value, "C", (k,v) -> v[0][field]))
)
var valueColumns = ["D","E"]
---
valueColumns map (value, index) -> valuesForfield(completedStructure,value)
EDIT: valueColumns is now dynamic.
Just took up TypeScript a couple of weeks ago without much knowledge in JavaScript.
I am trying to go through all the files in the specified directory, put each file name (string) and change time (number) into an array of arrays, and sort by change time.
It looks like this: [['natalie.jpg', 143], ['mike.jpg', 20], ['john.jpg', 176], ['Jackie.jpg', 6]]
Problem 1: I do not know how to specify the inner array's content, a string and a number. Type? Interface? Class? Tuple?
Problem 2: I do not know how to sort by change time in ascending order, so that the array changes to: [['Jackie.jpg', 6], ['mike.jpg', 20], ['natalie.jpg', 143], ['john.jpg', 176]]
import fs from 'fs'
const dirPath = '/home/me/Desktop/'
type imageFileDesc = [string, number] // tuple
const imageFileArray = [imageFileDesc] // ERROR HERE!
function readImageFiles (dirPath: string) {
try {
const dirObjectNames = fs.readdirSync(dirPath)
for (const dirObjectName of dirObjectNames) {
const dirObject = fs.lstatSync(dirPath + '/' + dirObjectName)
if (dirObject.isFile()) {
imageFileArray.push([dirObjectName, dirObject.ctimeMs]) // ERROR HERE!
}
}
imageFileArray.sort(function (a: number, b: number) {
return b[1] - a[1] // ERROR HERE! Can we do something like b.ctime - a.ctime?
})
} catch (error) {
console.error('Error in reading ' + dirPath)
}
}
readImageFiles(dirPath)
console.log(imageFileArray)
import * as fs from 'fs'
// Read files in folder
const files = fs.readdirSync( './files' )
// This will store the file name and their modification time
const imageFileArray: [ string, Date ][] = []
for ( const file of files ) {
// You can get a file's last modified time through fs.stat
const stats = fs.statSync( `./files/${file}` )
const modifiedTime = stats.mtime
// Push a tuple containing the filename and the last modified time
imageFileArray.push( [ file, modifiedTime ] )
}
// Sort from older to newer
const sorted = imageFileArray.sort( (a, b) => a[1].getTime() - b[1].getTime() )
console.log( sorted )
The modified time (mtime) is a Date object. If you want to sort from newer to older, just invert the subtraction in the sort function.
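If you would rather keep the change time as a number (ctimeMs), as in your original code, the tuple type from your question works fine. Here is a minimal sketch along those lines (same './files' directory as above; the names are illustrative):
import * as fs from 'fs'
// Tuple type: file name plus change time in milliseconds
type ImageFileDesc = [string, number]
const imageFileArray: ImageFileDesc[] = []
for (const name of fs.readdirSync('./files')) {
  const stats = fs.lstatSync(`./files/${name}`)
  if (stats.isFile()) {
    imageFileArray.push([name, stats.ctimeMs])
  }
}
// Ascending by change time; a[1] and b[1] are plain numbers here
imageFileArray.sort((a, b) => a[1] - b[1])
console.log(imageFileArray)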
I am grouping by "Id" and getting the sum of "Total_Weight" in Google Apps Script. This is the result of that computation.
res_1 output:
[ { Id: '400 ', Total_Weight: 484308 },
{ Id: '500W', Total_Weight: 13232 } ]
After this, I have an if-else clause that loops over the "Id" values in the above array and does some computation.
res_1.forEach((r2,i2)=>{
if (r2['Id']=="400") {
var cost = (r2['Total_Weight']/1000)*cost
NewArray.push([r2['Id'], cost]);
}
else if (r2['Id']=="400W") {
var cost = (r2['Total_Weight']/1000)*cost
NewArray.push([r2['Id'], cost ]);
}
});
My challenge is that in "res_1" the first Id is "400 " (400 followed by a space). Hence, when it comes to the loop, it does not go into the first if clause. I tried replacing spaces, but that doesn't work either.
Are there any ways that this could be resolved? Any leads would be great.
Try using .replace on each Id in the res_1 output, like so:
const res_1_trim = res_1.map((r) => ({ ...r, Id: r['Id'].replace(/\s/g, "") }));
res_1_trim.forEach((r2, i2) => {
  if (r2['Id'] == "400") {
    var cost = (r2['Total_Weight'] / 1000) * cost
    NewArray.push([r2['Id'], cost]);
  }
  else if (r2['Id'] == "400W") {
    var cost = (r2['Total_Weight'] / 1000) * cost
    NewArray.push([r2['Id'], cost]);
  }
});
\s matches whitespace and the g flag replaces every occurrence, not just the first (see the String.prototype.replace documentation). Note that .replace works on strings, so it has to be applied to each Id value, not to the array itself.
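Alternatively, if you only want to ignore the stray space at comparison time, you can trim each Id inside the loop instead of rewriting the array first. A sketch of just the comparison (keep whatever cost computation you already have):
res_1.forEach((r2, i2) => {
  const id = r2['Id'].trim(); // "400 " becomes "400"
  if (id == "400") {
    // ...same cost computation and NewArray.push as before, using `id`
  }
});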
I have a Bixby capsule in progress that lets users access both free and premium content "packs". Each pack is a file stored in a content/ directory. I want to loop over these files and read them into a variable called entitled_content.
I started from the facts capsule which uses a utility function to search a local file called content.js.
const CONTENT = []
const literature = require("../content/literature")
const enhanced = require("../content/enhanced")
const roosevelt = require("../content/roosevelt")
const ambition = require("../content/ambition")
const chaucer = require ("../content/chaucer")
//const GET_REMOTE = require('./lib/getRemoteContent.js')
var console = require('console')
console.log(roosevelt)
console.log(ambition)
console.log(chaucer)
const entitlements = ["roosevelt", "ambition", "chaucer"]
var entitled_content = []
entitlements.forEach(function (item) {
entitled_content = entitled_content.concat(item)
console.log(item); })
console.log(entitled_content)
What it does is this:
[ { tags: [ 'roosevelt' ],
text: 'Happiness is not a goal; it is a by-product. --Eleanor Roosevelt',
image: { url: 'images/' } } ]
[ { tags: [ 'ambition' ],
text: 'Ambition is but avarice on stilts, and masked. --Walter Savage Landor' } ]
[ { tags: [ 'literature' ],
text: 'A man was reading The Canterbury Tales one Saturday morning, when his wife asked What have you got there? Replied he, Just my cup and Chaucer.' },
{ tags: [ 'literature' ],
text: 'For years a secret shame destroyed my peace-- I\'d not read Eliot, Auden or MacNiece. But now I think a thought that brings me hope: Neither had Chaucer, Shakespeare, Milton, Pope. Source: Justin Richardson.' } ]
roosevelt
ambition
chaucer
[ 'roosevelt', 'ambition', 'chaucer' ]
What I want it to do is to assemble these three files roosevelt, ambition and chaucer into a single array variable entitled_content that will then be searched by the utility function. What's wrong is that this line entitled_content = entitled_content.concat(item) isn't doing what I want it to do, which is to get the entire contents of the file named "item".
Because you wrapped your variable names in quotation marks, the program reads them as plain strings rather than references to the modules you required.
Change it from
const entitlements = ["roosevelt", "ambition", "chaucer"]
to
const entitlements = [roosevelt, ambition, chaucer]
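If you need to keep the entitlements as strings (for example, because they come from a runtime entitlement check), another option is a lookup object that maps each name to the module you already required. A minimal sketch, assuming each content file exports an array of entries like the ones logged above:
// Map entitlement names (strings) to the required content modules
const packs = { roosevelt: roosevelt, ambition: ambition, chaucer: chaucer };
const entitlements = ["roosevelt", "ambition", "chaucer"];
var entitled_content = [];
entitlements.forEach(function (name) {
  // concat the module's array of entries, not the name string itself
  entitled_content = entitled_content.concat(packs[name]);
});
console.log(entitled_content);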
When I parse this little piece of JSON:
{ "value" : 9223372036854775807 }
This is what I get:
{ value: 9223372036854776000 }
Is there any way to parse it properly?
Not with the built-in JSON.parse. You'll need to parse it manually and treat large values as strings (if you want to do arithmetic with them, there is bignumber.js). You can use Douglas Crockford's JSON.js library as a base for your parser.
EDIT2 (7 years after the original answer): it might soon be possible to solve this using the standard JSON API. Have a look at this TC39 proposal to give the reviver function access to the source string: https://github.com/tc39/proposal-json-parse-with-source
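Under that proposal the reviver gets a third argument carrying the raw source text of each primitive value, so something along these lines should become possible (a sketch of the proposed API only; it may change and may not be available in your runtime yet):
const obj = JSON.parse('{ "value" : 9223372036854775807 }', (key, val, context) => {
  // context.source is the exact text of this value in the input
  if (typeof val === 'number' && !Number.isSafeInteger(val)) {
    return BigInt(context.source);
  }
  return val;
});
console.log(obj.value); // 9223372036854775807n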
EDIT1: I created a package for you :)
var JSONbig = require('json-bigint');
var json = '{ "value" : 9223372036854775807, "v2": 123 }';
console.log('Input:', json);
console.log('');
console.log('node.js built-in JSON:')
var r = JSON.parse(json);
console.log('JSON.parse(input).value : ', r.value.toString());
console.log('JSON.stringify(JSON.parse(input)):', JSON.stringify(r));
console.log('\n\nbig number JSON:');
var r1 = JSONbig.parse(json);
console.log('JSON.parse(input).value : ', r1.value.toString());
console.log('JSON.stringify(JSON.parse(input)):', JSONbig.stringify(r1));
Output:
Input: { "value" : 9223372036854775807, "v2": 123 }
node.js built-in JSON:
JSON.parse(input).value : 9223372036854776000
JSON.stringify(JSON.parse(input)): {"value":9223372036854776000,"v2":123}
big number JSON:
JSON.parse(input).value : 9223372036854775807
JSON.stringify(JSON.parse(input)): {"value":9223372036854775807,"v2":123}
After searching for something cleaner, and finding only libs like json-bigint, I just wrote my own solution. It's not the best, but it solves my problem. For those who are using Axios, you can use it in the transformResponse callback (this was my original problem: Axios parses the JSON and all big ints come out wrong).
const jsonStr = `{"myBigInt":6028792033986383748, "someStr":"hello guys", "someNumber":123}`
const result = JSON.parse(jsonStr, (key, value) => {
if (typeof value === 'number' && !Number.isSafeInteger(value)) {
let strBig = jsonStr.match(new RegExp(`(?:"${key}":)(.*?)(?:,)`))[1] // get the original value using regex expression
return strBig //should be BigInt(strBig) - BigInt function is not working in this snippet
}
return value
})
console.log({
"original": JSON.parse(jsonStr),
"handled": result
})
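For the Axios case mentioned above, wiring a custom parser into transformResponse could look roughly like this (a sketch; the URL and the myBigInt field are placeholders, and it uses the json-bigint package from the earlier answer):
const axios = require('axios');
const JSONbig = require('json-bigint');
axios.get('https://example.com/api/values', {
  // replace the default JSON.parse with a big-number-aware parser
  transformResponse: [(data) => JSONbig.parse(data)],
}).then((res) => {
  console.log(res.data.myBigInt.toString());
});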
A regular expression is difficult to get right for all cases.
Here is my attempt, but all I'm really giving you is some extra test cases, not a full solution. Likely you will want to target a very specific attribute with a more generic JSON parser (one that separates out the properties but leaves the numeric ones as strings), and then wrap that specific long number in quotes before continuing to parse into a JavaScript object.
let str = '{ "value" : -9223372036854775807, "value1" : "100", "strWNum": "Hi world: 42 is the answer", "arrayOfStrWNum": [":42, again.", "SOIs#1"], "arrayOfNum": [100,100,-9223372036854775807, 100, 42, 0, -1, 0.003] }'
let data = JSON.parse(str.replace(/([:][\s]*)(-?\d{1,90})([\s]*[\r\n,\}])/g, '$1"$2"$3'));
console.log(BigInt(data.value).toString());
console.log(data);
You can use this code to change big numbers to strings and later use BigInt(data.value):
let str = '{ "value" : -9223372036854775807, "value1" : "100" }'
let data = JSON.parse(str.replace(/([^"^\d])(-?\d{1,90})([^"^\d])/g, '$1"$2"$3'));
console.log(BigInt(data.value).toString());
console.log(data);