ArangoDB array reduce

Given the following:
LET replacements = [
["foo", "bar"],
["bar", "baz"]
]
LET title = "foo"
// JS CODE
// title = replacements.reduce((acc, r) => r.replace(acc[0], acc[1]), title);
// or
// for (const r of replacements) {
// title = title.replace(r[0], r[1]);
// }
RETURN title
How can the logic I described in the JS comments be implemented in AQL?
I can't seem to get FOR loops to work without returning something, and LET itself doesn't seem to allow reassignment.

This could be a case for a user function.
In arangosh:
127.0.0.1:8529@_system> require("@arangodb/aql/functions").register(
"MYFUNC::REPLACEEQ",
function (replacements, title) {
return replacements.reduce(
(t, r) => t.replace(r[0], r[1]),
title
);
}
);
The AQL-Query:
LET replacements = [
["foo", "bar"],
["bar", "baz"]
]
RETURN MYFUNC::REPLACEEQ(replacements, "foo")
// => ["baz"]

Related

Is there a way to map a JSON object for the below example?

(Excel snippet screenshot)
I am using Mule 4 and am trying to read an Excel file, convert it into JSON using DataWeave, and update the records in Salesforce.
Below is the payload I get when I read the Excel file. I need to convert it into the requested output payload.
The values are dynamic; there might be more objects.
Any ideas appreciated.
Thanks.
Input Payload:
{
"X":[
{
"A":"Key1",
"B":"Key2",
"C":"Key3",
"D":"value1",
"E":"value2"
},
{
"A":"",
"B":"",
"C":"Key4",
"D":"value3",
"E":"value4"
},
{
"A":"Key5",
"B":"Key6",
"C":"Key7",
"D":"Value5",
"E":"Value6"
},
{
"A":"",
"B":"",
"C":"Key8",
"D":"Value7",
"E":"Value8"
}
]
}
Output Payload:
[
{
"Key1":{
"Key2":{
"Key3":"value1",
"Key4":"value3"
}
},
"Key5":{
"Key6":{
"Key7":"Value5",
"Key8":"Value7"
}
}
},
{
"Key1":{
"Key2":{
"Key3":"value2",
"Key4":"value4"
}
},
"Key5":{
"Key6":{
"Key7":"Value6",
"Key8":"Value8"
}
}
}
]
The following seems to work.
This is JavaScript. I don't know what the underlying syntax or scripting language is for DataWeave, but between the C-family syntax and inline comments you can probably treat the JS like pseudo-code and read it well enough to recreate the logic.
// I assume you started with a table structure like this:
//
//   A    B    C    D    E
//   ==   ==   ==   ==   ==
//   K1   K2   K3   v1   v2   <~ X[0]
//   __   __   K4   v3   v4   <~ X[1]
//   K5   K6   K7   v5   v6   <~ X[2]
//   __   __   K8   v7   v8   <~ X[3]
//
// So I'm going to call A,B,C,D,E "column labels"
// and the elements in `X` "rows".
// Here's the original input you provided:
input = {
"X": [
{
"A":"Key1",
"B":"Key2",
"C":"Key3",
"D":"value1",
"E":"value2"
},
{
"A":"",
"B":"",
"C":"Key4",
"D":"value3",
"E":"value4"
},
{
"A":"Key5",
"B":"Key6",
"C":"Key7",
"D":"Value5",
"E":"Value6"
},
{
"A":"",
"B":"",
"C":"Key8",
"D":"Value7",
"E":"Value8"
}
]
}
// First let's simplify the structure by filling in the missing keys at
// `X[1].A`, `X[1].B` etc. We could keep track of the last non-blank
// value while doing the processing below instead, but doing it now
// reduces the complexity of the final loop.
input.X.forEach((row, row_index) => {
(Object.keys(row)).forEach((col_label) => {
if (row[col_label].length == 0) {
row[col_label] = input.X[row_index - 1][col_label]
}
});
});
// Now X[1].A is "Key1", X[1].B is "Key2", etc.
// I'm not quite sure if there's a hard-and-fast rule that determines
// which values become keys and which become values, so I'm just going
// to explicitly describe the structure. If there's a pattern to follow
// you could compute this dynamically.
const key_column_labels = ["A","B","C"]
const val_column_labels = ["D","E"]
// this will be the root object we're building
var output_list = []
// since the value columns become output rows we need to invert the loop a bit,
// so the outermost thing we iterate over is the list of value column labels.
// our general strategy is to walk down the "tree" of key-columns and
// append the current value-column. we do that for each input row, and then
// repeat that whole cycle for each value column.
val_column_labels.forEach((vl) => {
// the current output row we're populating
var out_row = {}
output_list.push(out_row)
// for each input row
input.X.forEach((in_row) => {
// start at the root level of the output row
var cur_node = out_row
// for each of our key column labels
key_column_labels.forEach((kl, ki) => {
if (ki == (key_column_labels.length - 1)) {
// this is the last key column (C), the one that holds the values
// so set the current vl as one of the keys
cur_node[in_row[kl]] = in_row[vl]
} else if (cur_node[in_row[kl]] == null) {
// else if there's no map stored in the current node for this
// key value, let's create one
cur_node[in_row[kl]] = {}
// and "step down" into it for the next iteration of the loop
cur_node = cur_node[in_row[kl]]
} else {
// otherwise there's already a map stored in the current node for
// this key value, so let's step down into the existing map
cur_node = cur_node[in_row[kl]]
}
});
});
});
console.log( JSON.stringify(output_list,null,2) )
// When I run this I get the data structure you're looking for:
//
// ```
// $ node json-transform.js
// [
// {
// "Key1": {
// "Key2": {
// "Key3": "value1",
// "Key4": "value3"
// }
// },
// "Key5": {
// "Key6": {
// "Key7": "Value5",
// "Key8": "Value7"
// }
// }
// },
// {
// "Key1": {
// "Key2": {
// "Key3": "value2",
// "Key4": "value4"
// }
// },
// "Key5": {
// "Key6": {
// "Key7": "Value6",
// "Key8": "Value8"
// }
// }
// }
// ]
// ```
Here's a JSFiddle that demonstrates this: https://jsfiddle.net/wcvmu0g9/
I'm not sure this captures the general form you're going for (because I'm not sure I fully understand that), but I think you should be able to abstract this basic principle.
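For what it's worth, here's a more compact sketch of the same idea. It assumes the input object and the key/value column split from the snippet above, and is only a condensed restatement of the logic, not a drop-in DataWeave solution:
// same assumptions as above: A,B,C are key columns, D,E are value columns
const keyColumnLabels = ["A", "B", "C"]
const valColumnLabels = ["D", "E"]
// 1. fill forward blank key cells, always carrying the previously *filled* row
const filledRows = input.X.reduce((acc, row) => {
  const prev = acc[acc.length - 1] || {}
  const fixed = {}
  Object.keys(row).forEach((col) => {
    fixed[col] = row[col] === "" ? prev[col] : row[col]
  })
  acc.push(fixed)
  return acc
}, [])
// 2. build one output object per value column, walking down the key columns
const output = valColumnLabels.map((vl) =>
  filledRows.reduce((out, row) => {
    let node = out
    keyColumnLabels.slice(0, -1).forEach((kl) => {
      node = node[row[kl]] = node[row[kl]] || {}
    })
    node[row[keyColumnLabels[keyColumnLabels.length - 1]]] = row[vl]
    return out
  }, {})
)
console.log(JSON.stringify(output, null, 2)) // same structure as printed above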
This was a challenging one. I was able to at least get the output you expect with this DataWeave code. I'll add some comments to the code.
%dw 2.0
output application/json
fun getNonEmpty(key, previousKey) =
if(isEmpty(key)) previousKey else key
fun completeKey(item, previousItem) =
{
A: getNonEmpty(item.A,previousItem.A),
B: getNonEmpty(item.B,previousItem.B)
} ++ (item - "A" - "B")
// Here I'm filling in the A and B columns to get the complete path, using the previous item's values when they come in empty
var completedStructure =
payload.X reduce ((item, acc = []) ->
acc + completeKey(item, acc[-1])
)
// This takes a list, groups it by a field, and also lets you pass what you
// want to do with the grouped values.
fun groupByKey(structure, field, next) =
structure groupBy ((item, i) -> item[field]) mapObject ((v, k, i1) ->
{
(k): next(k,v)
}
)
// This one is just to avoid repeating the code for each value field
fun valuesForfield(structure, field) =
groupByKey(structure, "A", (key,value) ->
groupByKey(value, "B", (k,v) ->
groupByKey(value, "C", (k,v) -> v[0][field]))
)
var valueColumns = ["D","E"]
---
valueColumns map (value, index) -> valuesForfield(completedStructure,value)
EDIT: valueColumns is now dynamic.

Svelte Store. Spread syntax is not merging - just adding

I am trying to add some broccoli to a basket in the Svelte store I have created. My code adds the broccoli to the basket but then duplicates the basket and adds a whole new basket to my store. Not sure if the problem is caused by my lack of understanding of JavaScript or Svelte.
Desired result:
Basket 1: Orange, Pineapple
Basket 2: Banana, Apple, Plum, walnut, hazelnut, nutmeg, broccoli
Actual result:
Basket 1: Orange, Pineapple
Basket 2: Banana, Apple, Plum, walnut, hazelnut, nutmeg
Basket 2: Banana, Apple, Plum, walnut, hazelnut, nutmeg, broccoli
Link to a Svelte REPL where you can view and run the code:
https://svelte.dev/repl/80d428000a3f425da798cec3450a59d4?version=3.46.2
If you click the button you will see that my basket is duplicating. I am just trying to add the broccoli to the basket.
code below
import { writable } from 'svelte/store';
export const storeBaskets = writable([
{
"name": "Basket 1",
"items": ["Orange", "Pineapple"]
},
{
"name": "Basket 2",
"items": ["Banana", "Apple","Plum","walnut","hazelnut","nutmeg"]
}
])
//Local functions
export const add = (item,basketIndex) => {
storeBaskets.update(val => {
const newItems = [...val[basketIndex].items, item]
const newBasket = {'name':val[basketIndex].name,'items':newItems}
val = [...val,newBasket]
return val
})
}
val = [...val,newBasket]
With this line you're copying the previous store value and adding the newBasket "on top". That's how spread syntax works with arrays:
let arr = [1,2,3]
let n = 4
let arr2 = [...arr, n]
console.log(arr2) // [ 1 , 2 , 3 , 4 ]
I wonder if you might have been thinking of the different behaviour when spreading an object, where an already existing entry is overridden if the key already exists:
let obj = {key: 'value'}
let key = 'newValue'
let obj2 = {...obj, key}
console.log(obj2) // { key: "newValue" }
To make your code work you could replace that line with val[basketIndex] = newBasket:
export const add = (item,basketIndex) => {
storeBaskets.update(val => {
const newItems = [...val[basketIndex].items, item]
const newBasket = {'name':val[basketIndex].name,'items':newItems}
val[basketIndex] = newBasket
return val
})
}
Or, instead of spreading, simply push the new value directly into the corresponding nested array in just one line:
export const add = (item,basketIndex) => {
storeBaskets.update(val => {
val[basketIndex].items.push(item)
return val
})
}
You might not want to spread the outer array: because it's an array, you're spreading its existing items and then adding the new basket to it. You can instead map and replace by basketIndex, like:
export const add = (item,basketIndex) => {
storeBaskets.update(val => {
const newItems = [...val[basketIndex].items, item]
const newBasket = {'name':val[basketIndex].name,'items':newItems}
return val.map((basket, i) => i === basketIndex ? newBasket : basket)
})
}
(Working example)
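If you want to convince yourself outside the REPL, here is a minimal sketch that exercises the map-based add with svelte/store directly in Node (assumes Node with ESM so the import works; get is the helper exported by svelte/store):
import { writable, get } from 'svelte/store'
// minimal check of the map-based add() outside a component
const storeBaskets = writable([
  { name: 'Basket 1', items: ['Orange', 'Pineapple'] },
  { name: 'Basket 2', items: ['Banana', 'Apple', 'Plum'] }
])
const add = (item, basketIndex) =>
  storeBaskets.update(val =>
    val.map((basket, i) =>
      i === basketIndex ? { ...basket, items: [...basket.items, item] } : basket
    )
  )
add('broccoli', 1)
console.log(get(storeBaskets).length)   // 2 -- still two baskets, no duplication
console.log(get(storeBaskets)[1].items) // [ 'Banana', 'Apple', 'Plum', 'broccoli' ]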

How to check if one of the arrays has a value using Ramda

Hi, I'm trying to check if one of the arrays has a value, and if so it needs to return true.
input1 = { "value": [
{
"props": {
"forest": []
}
},
{
"props": {
"forest": [
{
"items": "woods"
}
]
}
} ] }
input2 = { "value": [
{
"props": {
"forest": []
}
},
{
"props": {
"forest": []
}
} ] }
The code I tried uses R.anyPass to check if one of the values is true, and if yes it should return true:
const forestgotwoods = R.pipe(
R.path(['value']),
R.map(R.pipe(
R.path(['props','forest']),
R.isEmpty,
R.not,
)),
R.anyPass
);
console.log(forestgotwoods(input1)); // undefined
console.log(forestgotwoods(input2)); // undefined
and I also tried it this way:
const forestgotwoods = R.anyPass(
[R.pipe(
R.path(['value']),
R.map(R.pipe(
R.path(['props','forest']),
R.isEmpty,
R.not,
))
)]
);
console.log(forestgotwoods(input1)); // true
console.log(forestgotwoods(input2)); // true
The result for input1 needs to be true.
The result for input2 needs to be false.
Use R.any, which returns a boolean according to the predicate. R.anyPass accepts an array of predicates, which is not needed here, so you can remove it.
In addition, you need to check whether any props.forest array is not empty, so replace R.map with R.any.
const forestgotwoods = R.pipe(
R.prop('value'), // get the value array
R.any(R.pipe( // if predicate returns true break and return true, if not return false
R.path(['props', 'forest']), // get the forest array
R.isEmpty,
R.not,
))
)
const input1 = {"value":[{"props":{"forest":[]}},{"props":{"forest":[{"items":"woods"}]}}]}
const input2 = {"value":[{"props":{"forest":[]}},{"props":{"forest":[]}}]}
console.log(forestgotwoods(input1)); // true
console.log(forestgotwoods(input2)); // false
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.27.0/ramda.js"></script>
Another option is to use R.all to check if all are empty, and then use R.not to negate the result of R.all:
const forestgotwoods = R.pipe(
R.prop('value'), // get the value array
R.all(R.pipe( // if predicate returns true for all, return true, if at least one returns false, return false
R.path(['props', 'forest']), // get the forest array
R.isEmpty,
)),
R.not // negate the result of R.all
)
const input1 = {"value":[{"props":{"forest":[]}},{"props":{"forest":[{"items":"woods"}]}}]}
const input2 = {"value":[{"props":{"forest":[]}},{"props":{"forest":[]}}]}
console.log(forestgotwoods(input1)); // true
console.log(forestgotwoods(input2)); // false
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.27.0/ramda.js"></script>
Using my 2nd solution with R.pathSatisfies (suggested by @Hitmands) generates a short and readable solution:
const forestgotwoods = R.pipe(
R.prop('value'), // get the value array
R.all(R.pathSatisfies(R.isEmpty, ['props', 'forest'])), // if all are empty, return true, if at least one return is not, return false
R.not // negate the result of R.all
)
const input1 = {"value":[{"props":{"forest":[]}},{"props":{"forest":[{"items":"woods"}]}}]}
const input2 = {"value":[{"props":{"forest":[]}},{"props":{"forest":[]}}]}
console.log(forestgotwoods(input1)); // true
console.log(forestgotwoods(input2)); // false
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.27.0/ramda.js"></script>
I think this is a simpler, more readable Ramda approach:
const forestHasWoods = where ({
value: any (hasPath (['props', 'forest', 0]))
})
const input1 = {value: [{props: {forest: []}}, {props: {forest: [{items: "woods"}]}}]};
const input2 = {value: [{props: {forest: []}}, {props: {forest: []}}]};
console.log (forestHasWoods (input1))
console.log (forestHasWoods (input2))
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.27.0/ramda.js"></script>
<script> const {where, any, hasPath} = R </script>
where is used to turn a description of an object into a predicate, especially useful in filtering. any has been discussed in other answers, and is definitely what you want here rather than anyPass. And hasPath reports whether there is a value to be found at this path in the given object.
There is a potential issue here, I suppose. If you are dealing with sparse arrays -- and if you are, you're already in a state of sin -- then forest could have a value but not one at index 0. If that's the case, you might prefer a version like this:
const forestHasWoods = where ({
value: any (pathSatisfies (complement (isEmpty), ['props', 'forest']))
})
Similarly to the previous answer from @Ori Drori, you could also leverage R.useWith to create a function that operates on both arrays...
const whereForestNotEmpty = R.pathSatisfies(
R.complement(R.isEmpty),
['props', 'forest'],
);
const findForest = R.pipe(
R.propOr([], 'value'),
R.find(whereForestNotEmpty),
);
const find = R.useWith(R.or, [findForest, findForest]);
// ===
const a = {
"value": [
{"props":{"forest":[]}},
{"props":{"forest":[{"items":"woods"}]}}
]
}
const b = {
"value": [
{"props":{"forest":[]}},
{"props":{"forest":[]}}
]
}
console.log(find(a, b));
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.27.0/ramda.js"></script>
Note: use R.complement to negate predicates
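For completeness, a tiny sketch of what R.complement does (it assumes Ramda is loaded, e.g. via the same CDN script as above):
const isNonEmpty = R.complement(R.isEmpty)    // flips the predicate's boolean result
console.log(isNonEmpty([]))                   // false
console.log(isNonEmpty([{ items: 'woods' }])) // true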

Parse string (Node.js): find array of numbers in a string

\n54766392632990,178.32.243.13,wfsdsfsdfs23432,\n54766393632990,178.32.243.13,
Above you can see an example of the string I want to parse. I want to get an array of the numbers which appear between \n and ,178.32.243.13. In this example it would be something like:
[54766392632990, 54766393632990] - how can I do this?
Please run this script; it fulfills your requirement:
var ss = "\n54766392632990,178.32.243.13,wfsdsfsdfs23432,\n54766393632990,178.32.243.13,"
var ddd = ss.split(",")
console.log(ddd)
var dfd = []
ddd.forEach(function(res){
if(res.startsWith("\n"))
{
dfd.push(res.replace("\n",""))
}
})
console.log(dfd)
Result: [ '54766392632990', '54766393632990' ]
"\n54766392632990,178.32.243.13,wfsdsfsdfs23432,\n54766393632990,178.32.243.13,"
.split("\n")
.filter((n)=> n!== "")
.map(n=> parseInt(n.split(",")[0]))
You can do something like this to parse this string
let s = "\n54766392632990,178.32.243.13,wfsdsfsdfs23432,\n54766393632990,178.32.243.13,"
s = s.split("\n");
let array = [];
for(let i=0;i<s.length;i++) {
let v = s[i].split(",178.32.243.13,");
for(let j=0;j<v.length;j++) {
if(!isNaN(parseInt(v[j]))) {
array.push(v[j]);
}
}
}
console.log(array);
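If the IP address is always literally 178.32.243.13 and each wanted number directly follows a newline (an assumption based on the example string), a regex-based sketch also works:
const s = "\n54766392632990,178.32.243.13,wfsdsfsdfs23432,\n54766393632990,178.32.243.13,"
const numbers = [...s.matchAll(/\n(\d+),178\.32\.243\.13/g)].map(m => Number(m[1]))
console.log(numbers) // [ 54766392632990, 54766393632990 ]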

Groovier way to parse a TSV file into a map

I have a TSV file in the form "key \t value", and I need to read it into a map. Currently I do it like this:
referenceFile.eachLine { line ->
def (name, reference) = line.split(/\t/)
referencesMap[name.toLowerCase()] = reference
}
Is there a shorter/nicer way to do it?
It's already quite short. Two answers I can think of:
First one avoids the creation of a temporary map object:
referenceFile.inject([:]) { map, line ->
def (name, reference) = line.split(/\t/)
map[name.toLowerCase()] = reference
map
}
Second one is more functional:
referenceFile.collect { it.split(/\t/) }.inject([:]) { map, val -> map[val[0].toLowerCase()] = val[1]; map }
The only other way I can think of doing it would be with an Iterator like you'd find in Commons IO:
#Grab( 'commons-io:commons-io:2.4' )
import org.apache.commons.io.FileUtils
referencesMap = FileUtils.lineIterator( referenceFile, 'UTF-8' )
.collectEntries { line ->
line.tokenize( '\t' ).with { k, v ->
[ (k.toLowerCase()): v ]
}
}
Or with a CSV parser:
#Grab('com.xlson.groovycsv:groovycsv:1.0')
import static com.xlson.groovycsv.CsvParser.parseCsv
referencesMap = referenceFile.withReader { r ->
parseCsv( [ separator:'\t', readFirstLine:true ], r ).collectEntries {
[ (it[ 0 ].toLowerCase()): it[ 1 ] ]
}
}
But neither of them is shorter, and not necessarily nicer either...
Though I prefer option 2, as it can handle cases such as:
"key\twith\ttabs"\tvalue
since it deals with quoted strings.
This is the comment tim_yates added to melix's answer, and I think it's the shortest/clearest answer:
referenceFile.collect { it.tokenize( '\t' ) }.collectEntries { k, v -> [ k.toLowerCase(), v ] }
