This is my situation:
I have a string like: ABCDEF
and an array like: ['ABC','GBC','DE','DEF',...]
I need to find the substrings that compose the string ABCDEF.
I did this:
let info = data_.filter( v => { return action_.toLowerCase().includes(v[0].toLowerCase())});
But the result also returns DE. My string is known to be composed of exactly 2 substrings, so the only match must be:
ABCDEF
[
[ 'ABC', '0.06172000' ],
[ 'DEF', '675.1805' ]
]
not
ABCDEF
[
[ 'ABC', '0.06172000' ],
[ 'DE', '0.0537598600' ],
[ 'DEF', '675.1805' ]
]
I have to combine the substrings to find the right pair, maybe in the filter function, or after the filter result, or are there other solutions?
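For the two-substring case, a sketch of one approach, assuming (as in the question) that data_ holds the [symbol, value] pairs and action_ is the combined string: filter as before, then test every pair of candidates for one whose concatenation rebuilds the whole string.

// Keep only entries whose first element occurs somewhere in the target string.
let candidates = data_.filter(v => action_.toLowerCase().includes(v[0].toLowerCase()));

// Test every pair of candidates; keep the pair whose two
// substrings concatenate to the full string.
let match = null;
for (const a of candidates) {
    for (const b of candidates) {
        if ((a[0] + b[0]).toLowerCase() === action_.toLowerCase()) {
            match = [a, b];
        }
    }
}
// With action_ = 'ABCDEF', match becomes [['ABC', '0.06172000'], ['DEF', '675.1805']].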
I have an object list:
List<Person> personList = [
{name: "a" , age:20 },
{name: "b" , age:24 },
{name: "c" , age:25 },
{name: "d" , age:26 },
]
Now, what is the shortest way to remove age from each object?
Final list will be:
personList = [
{name: "a" },
{name: "b" },
{name: "c" },
{name: "d" },
]
With a bit of a syntax lift-up, your example works using findAll:
def x = [
[name: "a" , age:20 ],
[name: "b" , age:24 ],
[name: "c" , age:25 ],
[name: "d" , age:26 ]
]
println x.collect {it.findAll {it.key != 'age'}}
[[name:a], [name:b], [name:c], [name:d]]
First of all, you should not declare a List typed as Person (an unknown class) and then fill it with Maps without a cast.
With Maps you have at least two simple options.
Option 1 - create a new List:
personList = personList.collect{ [ name:it.name ] }
Option 2 - mutate the existing List:
personList*.remove( 'age' )
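Both options can be verified quickly; a minimal runnable sketch using the sample data:

def personList = [
    [name: "a", age: 20],
    [name: "b", age: 24],
    [name: "c", age: 25],
    [name: "d", age: 26],
]

// Option 1: collect builds a brand-new List of new Maps, leaving the original untouched.
def trimmed = personList.collect { [name: it.name] }
assert trimmed == [[name: "a"], [name: "b"], [name: "c"], [name: "d"]]

// Option 2: the spread-dot operator calls remove('age') on every Map in place.
personList*.remove('age')
assert personList == [[name: "a"], [name: "b"], [name: "c"], [name: "d"]]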
I'm trying to sort a numerical field; however, it seems to compare each character in turn, so 9 is 'higher' than 11 but lower than 91.
Is there a way to sort by the whole string?
Example data:
{
  "testing": [
    {"name": "01"},
    {"name": "3"},
    {"name": "9"},
    {"name": "91"},
    {"name": "11"},
    {"name": "2"}
  ]
}
Query:
reverse(sort_by(testing, &name))[*].[name]
result:
[
  [ "91" ],
  [ "9" ],
  [ "3" ],
  [ "2" ],
  [ "11" ],
  [ "01" ]
]
This can be tried at http://jmespath.org/
Edit:
So I can get the correct output by piping it to sort -V, but is there not an easier way?
Context
JMESPath, latest version as of 2020-09-12
Use-case
DevBobDylan wants to sort string items in a JSON ArrayOfObject table
Solution
Use the JMESPath pipe operator to chain expressions together.
Example
testing | [*].name | sort(@)
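Note that sort(@) is still a plain lexicographic string sort, so 11 still lands before 2. If a true numeric ordering is the goal, JMESPath's built-in to_number function can be used inside sort_by; a sketch, assuming every name parses as a number:

reverse(sort_by(testing, &to_number(name)))[*].name

With the example data this yields ["91", "11", "9", "3", "2", "01"].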
Write a function that takes a 2D list (i.e., a list of lists) of strings and returns a dictionary whose keys are the first elements of each row, where each key is mapped to the list of the remaining elements of that row.
I've been on GeeksforGeeks trying to figure this out. I understand how to reach the first list I want to pull from, but I don't know how to move on to each following list and put its first element into a new dictionary as a key with the remaining strings as the value.
def list2dict(list2d):
    new_dict = {}
    for i in range(list2d[0]):
        for j in range(2):
            new_dict.append[j] + ':' + list2d[j]
    return new_dict
list2d is a 2d list of strings
Input:
1. Let x1 be the following list of lists:
[ [ 'aa', 'bb', 'cc', 'dd' ],
[ 'ee', 'ff', 'gg', 'hh', 'ii', 'jj' ],
[ 'kk', 'll', 'mm', 'nn' ] ]
Output:
Then list2dict(x1) returns the dictionary
{ 'aa' : [ 'bb', 'cc', 'dd' ],
'ee' : [ 'ff', 'gg', 'hh', 'ii', 'jj' ],
'kk' : [ 'll', 'mm', 'nn' ]
}
Input
2. Let x2 be the following list of lists:
[ [ 'aa', 'bb' ],
[ 'cc', 'dd' ],
[ 'ee', 'ff' ],
[ 'gg', 'hh' ],
[ 'kk', 'll' ] ]
Output
Then list2dict(x2) returns the dictionary
{ 'aa' : [ 'bb' ],
'cc' : [ 'dd' ],
'ee' : [ 'ff' ],
'gg' : [ 'hh' ],
'kk' : [ 'll' ]
}
I think you are looking for something like this...
def list2dict(list2d):
    new_dict = {}
    for row in list2d:
        key = row[0]           # the first element of the row becomes the key
        new_dict[key] = row[1:]  # the rest of the row becomes the value
    return new_dict
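For reference, a dict comprehension does the same thing in one line (equivalent to the loop above):

def list2dict(list2d):
    # first element of each row -> list of the remaining elements
    return {row[0]: row[1:] for row in list2d}

x1 = [['aa', 'bb', 'cc', 'dd'],
      ['ee', 'ff', 'gg', 'hh', 'ii', 'jj'],
      ['kk', 'll', 'mm', 'nn']]
print(list2dict(x1))
# {'aa': ['bb', 'cc', 'dd'], 'ee': ['ff', 'gg', 'hh', 'ii', 'jj'], 'kk': ['ll', 'mm', 'nn']}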
I am using ELK (Elasticsearch, Kibana, Logstash, Filebeat) to collect logs. I have a log file with the following lines; every line contains a JSON object. My goal is to use Logstash Grok to extract the key/value pairs from the JSON and forward them to Elasticsearch.
2018-03-28 13:23:01 charge:{"oldbalance":5000,"managefee":0,"afterbalance":"5001","cardid":"123456789","txamt":1}
2018-03-28 13:23:01 manage:{"cuurentValue":5000,"payment":0,"newbalance":"5001","posid":"123456789","something":"new2","additionalFields":1}
I am using the Grok Debugger to build the pattern and check the result. My current pattern is:
%{TIMESTAMP_ISO8601} %{SPACE} %{WORD:$:data}:{%{QUOTEDSTRING:key1}:%{BASE10NUM:value1}[,}]%{QUOTEDSTRING:key2}:%{BASE10NUM:value2}[,}]%{QUOTEDSTRING:key3}:%{QUOTEDSTRING:value3}[,}]%{QUOTEDSTRING:key4}:%{QUOTEDSTRING:value4}[,}]%{QUOTEDSTRING:key5}:%{BASE10NUM:value5}[,}]
As one can see, it is hard-coded. Since the keys in the JSON in real logs could be any word, the values could be integers, doubles, or strings, and, what's more, the number of keys varies, my solution is not acceptable. My solution's result is shown below, just for reference; I am using Grok patterns.
My question is: is trying to extract the keys from the JSON wise at all, given that Elasticsearch uses JSON too? Second, if I do take the keys/values out of the JSON, is there a correct, concise Grok pattern for it?
The current Grok pattern gives the following output when parsing the first of the lines above:
{
  "TIMESTAMP_ISO8601": [["2018-03-28 13:23:01"]],
  "YEAR": [["2018"]],
  "MONTHNUM": [["03"]],
  "MONTHDAY": [["28"]],
  "HOUR": [["13", null]],
  "MINUTE": [["23", null]],
  "SECOND": [["01"]],
  "ISO8601_TIMEZONE": [[null]],
  "SPACE": [[""]],
  "WORD": [["charge"]],
  "key1": [[""oldbalance""]],
  "value1": [["5000"]],
  "key2": [[""managefee""]],
  "value2": [["0"]],
  "key3": [[""afterbalance""]],
  "value3": [[""5001""]],
  "key4": [[""cardid""]],
  "value4": [[""123456789""]],
  "key5": [[""txamt""]],
  "value5": [["1"]]
}
Second edit:
Is it possible to use the JSON filter of Logstash? In my case the JSON is only part of the line/event; the whole event is not JSON.
===========================================================
Third edit:
The updated solution does not parse the JSON well for me. My filter config is as follows:
filter {
  grok {
    match => {
      "message" => [
        "%{TIMESTAMP_ISO8601}%{SPACE}%{GREEDYDATA:json_data}"
      ]
    }
  }
}
filter {
  json {
    source => "json_data"
    target => "parsed_json"
  }
}
The result does not contain key/value pairs; instead, json_data is the message prefix plus the raw JSON string, and the JSON is never parsed.
The testing data and the resulting Logstash errors are below:
2018-03-28 13:23:01 manage:{"cuurentValue":5000,"payment":0,"newbalance":"5001","posid":"123456789","something":"new2","additionalFields":1}
2018-03-28 13:23:03 payment:{"cuurentValue":5001,"reload":0,"newbalance":"5002","posid":"987654321","something":"new3","additionalFields":2}
2018-03-28 13:24:07 management:{"cuurentValue":5002,"payment":0,"newbalance":"5001","posid":"123456789","something":"new2","additionalFields":1}
[2018-06-04T15:01:30,017][WARN ][logstash.filters.json ] Error parsing json {:source=>"json_data", :raw=>"manage:{\"cuurentValue\":5000,\"payment\":0,\"newbalance\":\"5001\",\"posid\":\"123456789\",\"something\":\"new2\",\"additionalFields\":1}", :exception=>#<LogStash::Json::ParserError: Unrecognized token 'manage': was expecting ('true', 'false' or 'null')
at [Source: (byte[])"manage:{"cuurentValue":5000,"payment":0,"newbalance":"5001","posid":"123456789","something":"new2","additionalFields":1}"; line: 1, column: 8]>}
[2018-06-04T15:01:30,017][WARN ][logstash.filters.json ] Error parsing json {:source=>"json_data", :raw=>"payment:{\"cuurentValue\":5001,\"reload\":0,\"newbalance\":\"5002\",\"posid\":\"987654321\",\"something\":\"new3\",\"additionalFields\":2}", :exception=>#<LogStash::Json::ParserError: Unrecognized token 'payment': was expecting ('true', 'false' or 'null')
at [Source: (byte[])"payment:{"cuurentValue":5001,"reload":0,"newbalance":"5002","posid":"987654321","something":"new3","additionalFields":2}"; line: 1, column: 9]>}
[2018-06-04T15:01:34,986][WARN ][logstash.filters.json ] Error parsing json {:source=>"json_data", :raw=>"management:{\"cuurentValue\":5002,\"payment\":0,\"newbalance\":\"5001\",\"posid\":\"123456789\",\"something\":\"new2\",\"additionalFields\":1}", :exception=>#<LogStash::Json::ParserError: Unrecognized token 'management': was expecting ('true', 'false' or 'null')
at [Source: (byte[])"management:{"cuurentValue":5002,"payment":0,"newbalance":"5001","posid":"123456789","something":"new2","additionalFields":1}"; line: 1, column: 12]>}
You can use GREEDYDATA to assign the entire block of JSON to a separate field like this:
%{TIMESTAMP_ISO8601}%{SPACE}%{GREEDYDATA:json_data}
This will create a separate field for your JSON data:
{
  "TIMESTAMP_ISO8601": [["2018-03-28 13:23:01"]],
  "json_data": [["charge:{"oldbalance":5000,"managefee":0,"afterbalance":"5001","cardid":"123456789","txamt":1}"]]
}
Then apply a json filter on the json_data field as follows:
json {
  source => "json_data"
  target => "parsed_json"
}
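Note that json_data captured this way still carries the charge:/manage: label, which is exactly why the json filter in the third edit fails. A sketch of a combined filter that strips the label off first (the field names timestamp and log_type are my own choices, not from the original logs):

filter {
  grok {
    match => {
      "message" => "%{TIMESTAMP_ISO8601:timestamp} %{WORD:log_type}:%{GREEDYDATA:json_data}"
    }
  }
  json {
    source => "json_data"
    target => "parsed_json"
  }
}

With this, json_data contains only the JSON object itself, so the json filter can parse it into parsed_json.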
I have two maps as below (these are from the logs of my output... sorry for the bad Groovy):
map1 = [
[ "name1":value1, "name2":value2, "name3":value3 ],
[ "name1":value1, "name2":value20, "name3":value30 ]
]
map2 = [
[ "name1":value1, "name2":value4, "name3":value5, "name4":value6 ],
[ "name1":value1, "name2":value7, "name3":value8, "name4":value9 ]
]
I need to set name2 and name3 of map2 to the name2 and name3 values from map1 when "name1":value1 matches in both maps.
Required output:
map2 = [
[ "name1":value1, "name2":value2, "name3":value3, "name4":value6 ],
[ "name1":value1, "name2":value20, "name3":value30, "name4":value9 ]
]
I tried looping through both of them, but there is an overwrite (as it is a map) and the result is as below:
map2 = [
[ "name1":value1, "name2":value20, "name3":value30, "name4":value9 ],
[ "name1":value1, "name2":value20, "name3":value30, "name4":value9 ]
]
First of all, they (map1 and map2) are lists, not maps.
Assuming the cardinality of both lists is the same, you can simply achieve it with:
list2.eachWithIndex { item, i ->
    if (item.name1 == list1[i].name1) {
        item.name2 = list1[i].name2
        item.name3 = list1[i].name3
    }
}
assert list2 == [
[ "name1":'value1', "name2":'value2', "name3":'value3', "name4":'value6' ],
[ "name1":'value1', "name2":'value20', "name3":'value30', "name4":'value9' ]
]
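If the two lists are guaranteed to stay index-aligned, transpose gives a slightly more idiomatic pairing; a sketch under the same same-cardinality assumption:

[list1, list2].transpose().each { src, dst ->
    // copy name2/name3 from the list1 entry into the matching list2 entry
    if (dst.name1 == src.name1) {
        dst.name2 = src.name2
        dst.name3 = src.name3
    }
}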