Haskell - Iterating through a list of lists of lists

I am currently trying to print the integers from a list of lists of lists and am struggling to find the most effective way of doing this.
An example of this list is as follows:
[ [ [ 2,3 ], [ 1,6 ] ]
, [ [ 5,9 ], [ 2,9 ] ]
, [ [ 6,2 ], [ 7,7 ] ] ]
My hope is to print a string such as "231659296277".
Any advice would be greatly appreciated.

Since it's a three-layered list, you can just concatenate it twice:
concat . concat $
[ [ [ 2,3 ], [ 1,6 ] ]
, [ [ 5,9 ], [ 2,9 ] ]
, [ [ 6,2 ], [ 7,7 ] ]
]
-- [2,3,1,6,5,9,2,9,6,2,7,7]
If you want to convert it to a string, then you can >>= show:
[2,3,1,6,5,9,2,9,6,2,7,7] >>= show
-- "231659296277"

Related

Logstash filter for ip

I need a Logstash filter for a client IP such as 12.34.56.78:1234.
I need to extract the client IP, i.e. only 12.34.56.78, not the part after the colon.
Try this:
GROK pattern:
%{IP:ip}:%{GREEDYDATA:others}
OUTPUT:
{
  "ip": [["12.34.56.78"]],
  "IPV6": [[null]],
  "IPV4": [["12.34.56.78"]],
  "others": [["1234"]]
}
This should work (I haven't tested it):
mutate {
gsub => ["ip_field_name", ":\d+", ""]
}
The :\d+ matches the : and all following digits, and the gsub option of mutate replaces the match with an empty string.
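For reference, wiring the grok pattern into a pipeline might look like this (an untested sketch; "message" stands in for whichever field actually holds the raw value):
filter {
  grok {
    match => { "message" => "%{IP:ip}:%{GREEDYDATA:others}" }
  }
}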

logstash grok, parse a line with json filter

I am using the ELK stack (Elasticsearch, Kibana, Logstash, Filebeat) to collect logs. I have a log file with the following lines; every line contains a JSON object, and my goal is to use Logstash grok to take the key/value pairs out of the JSON and forward them to Elasticsearch.
2018-03-28 13:23:01 charge:{"oldbalance":5000,"managefee":0,"afterbalance":"5001","cardid":"123456789","txamt":1}
2018-03-28 13:23:01 manage:{"cuurentValue":5000,"payment":0,"newbalance":"5001","posid":"123456789","something":"new2","additionalFields":1}
I am using the Grok Debugger to build the pattern and check the result. My current pattern is:
%{TIMESTAMP_ISO8601} %{SPACE} %{WORD:$:data}:{%{QUOTEDSTRING:key1}:%{BASE10NUM:value1}[,}]%{QUOTEDSTRING:key2}:%{BASE10NUM:value2}[,}]%{QUOTEDSTRING:key3}:%{QUOTEDSTRING:value3}[,}]%{QUOTEDSTRING:key4}:%{QUOTEDSTRING:value4}[,}]%{QUOTEDSTRING:key5}:%{BASE10NUM:value5}[,}]
As one can see, this is hard-coded, but the keys in the JSON of a real log could be any word, the values could be integers, doubles or strings, and, what's more, the number of keys varies, so my solution is not acceptable. The result of my pattern is shown below, just for reference; I am using grok patterns.
My question is: is trying to extract the keys from the JSON wise at all, given that Elasticsearch uses JSON too? And if I do take the keys/values out of the JSON, is there a correct, concise grok pattern for it?
The current pattern gives the following output when parsing the first of the lines above.
{
  "TIMESTAMP_ISO8601": [["2018-03-28 13:23:01"]],
  "YEAR": [["2018"]],
  "MONTHNUM": [["03"]],
  "MONTHDAY": [["28"]],
  "HOUR": [["13", null]],
  "MINUTE": [["23", null]],
  "SECOND": [["01"]],
  "ISO8601_TIMEZONE": [[null]],
  "SPACE": [[""]],
  "WORD": [["charge"]],
  "key1": [[""oldbalance""]],
  "value1": [["5000"]],
  "key2": [[""managefee""]],
  "value2": [["0"]],
  "key3": [[""afterbalance""]],
  "value3": [[""5001""]],
  "key4": [[""cardid""]],
  "value4": [[""123456789""]],
  "key5": [[""txamt""]],
  "value5": [["1"]]
}
Second edit
Is it possible to use Logstash's json filter? In my case the JSON is only part of the line/event; the whole event is not JSON.
Third edit
The updated solution does not parse the JSON well for me. My filter is as follows:
filter {
  grok {
    match => {
      "message" => [
        "%{TIMESTAMP_ISO8601}%{SPACE}%{GREEDYDATA:json_data}"
      ]
    }
  }
}
filter {
  json {
    source => "json_data"
    target => "parsed_json"
  }
}
The result does not contain key/value pairs; instead json_data holds the message prefix plus the JSON string, and the JSON is not parsed.
Testing data is as below:
2018-03-28 13:23:01 manage:{"cuurentValue":5000,"payment":0,"newbalance":"5001","posid":"123456789","something":"new2","additionalFields":1}
2018-03-28 13:23:03 payment:{"cuurentValue":5001,"reload":0,"newbalance":"5002","posid":"987654321","something":"new3","additionalFields":2}
2018-03-28 13:24:07 management:{"cuurentValue":5002,"payment":0,"newbalance":"5001","posid":"123456789","something":"new2","additionalFields":1}
[2018-06-04T15:01:30,017][WARN ][logstash.filters.json ] Error parsing json {:source=>"json_data", :raw=>"manage:{\"cuurentValue\":5000,\"payment\":0,\"newbalance\":\"5001\",\"posid\":\"123456789\",\"something\":\"new2\",\"additionalFields\":1}", :exception=>#<LogStash::Json::ParserError: Unrecognized token 'manage': was expecting ('true', 'false' or 'null')
at [Source: (byte[])"manage:{"cuurentValue":5000,"payment":0,"newbalance":"5001","posid":"123456789","something":"new2","additionalFields":1}"; line: 1, column: 8]>}
[2018-06-04T15:01:30,017][WARN ][logstash.filters.json ] Error parsing json {:source=>"json_data", :raw=>"payment:{\"cuurentValue\":5001,\"reload\":0,\"newbalance\":\"5002\",\"posid\":\"987654321\",\"something\":\"new3\",\"additionalFields\":2}", :exception=>#<LogStash::Json::ParserError: Unrecognized token 'payment': was expecting ('true', 'false' or 'null')
at [Source: (byte[])"payment:{"cuurentValue":5001,"reload":0,"newbalance":"5002","posid":"987654321","something":"new3","additionalFields":2}"; line: 1, column: 9]>}
[2018-06-04T15:01:34,986][WARN ][logstash.filters.json ] Error parsing json {:source=>"json_data", :raw=>"management:{\"cuurentValue\":5002,\"payment\":0,\"newbalance\":\"5001\",\"posid\":\"123456789\",\"something\":\"new2\",\"additionalFields\":1}", :exception=>#<LogStash::Json::ParserError: Unrecognized token 'management': was expecting ('true', 'false' or 'null')
at [Source: (byte[])"management:{"cuurentValue":5002,"payment":0,"newbalance":"5001","posid":"123456789","something":"new2","additionalFields":1}"; line: 1, column: 12]>}
Please check the result:
You can use GREEDYDATA to assign the entire JSON block to a separate field, like this:
%{TIMESTAMP_ISO8601}%{SPACE}%{GREEDYDATA:json_data}
This will create a separate field for your JSON data:
{
  "TIMESTAMP_ISO8601": [["2018-03-28 13:23:01"]],
  "json_data": [["charge:{"oldbalance":5000,"managefee":0,"afterbalance":"5001","cardid":"123456789","txamt":1}"]]
}
Then apply a json filter on the json_data field as follows:
json {
  source => "json_data"
  target => "parsed_json"
}
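Given the parser errors shown in the third edit, json_data still begins with a label such as manage:, which is not valid JSON. A possible variation (untested) is to let grok capture that label separately, so the json filter receives pure JSON:
filter {
  grok {
    match => {
      "message" => "%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{WORD:event_type}:%{GREEDYDATA:json_data}"
    }
  }
  json {
    source => "json_data"
    target => "parsed_json"
  }
}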

What is the GROK pattern for this log?

Can anyone please tell me the GROK pattern for this log? I am new to Logstash and any help is appreciated:
"ppsweb1 [ERROR] [JJN01234313887b4319ad0536bf6324j34h5469624340M] [913h56a5-e359-4a75-be9a-fae60d1a5ecb] 2016-07-28 13:14:58.848 [http-nio-8080-exec-4] PaymentAction - Net amount 149644"
I tried the following:
%{WORD:field1} \[%{LOGLEVEL:field2}\] \[%{NOTSPACE:field3}\] \[%{NOTSPACE:field4}\] %{TIMESTAMP_ISO8601:timestamp} \[%{NOTSPACE:field5}\] %{WORD:field6} - %{GREEDYDATA:field7} %{NUMBER:field8}
And I got the output as:
{
  "field1": [["ppsweb1"]],
  "field2": [["ERROR"]],
  "field3": [["JJN01234313887b4319ad0536bf6324j34h5469624340M"]],
  "field4": [["913h56a5-e359-4a75-be9a-fae60d1a5ecb"]],
  "timestamp": [["2016-07-28 13:14:58.848"]],
  "field5": [["http-nio-8080-exec-4"]],
  "field6": [["PaymentAction"]],
  "field7": [["Net amount"]],
  "field8": [["149644"]]
}
You can change the names of the fields as you want. You haven't mentioned anything about the expected output in your question, so this is just to give you a basic idea. For further modifications you can use http://grokdebug.herokuapp.com/ to verify your filter.
Note: I have used basic patterns, there are complex patterns available and you can play around with the debugger to suit your requirements.
Good luck!
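If it helps, dropping the pattern into a Logstash pipeline would look roughly like this (untested; field names as above):
filter {
  grok {
    match => {
      "message" => "%{WORD:field1} \[%{LOGLEVEL:field2}\] \[%{NOTSPACE:field3}\] \[%{NOTSPACE:field4}\] %{TIMESTAMP_ISO8601:timestamp} \[%{NOTSPACE:field5}\] %{WORD:field6} - %{GREEDYDATA:field7} %{NUMBER:field8}"
    }
  }
}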

comparing maps and setting the values of one map to another map

I have two maps as below (these are the logs of my output... sorry for the bad Groovy):
map1 = [
[ "name1":value1, "name2":value2, "name3":value3 ],
[ "name1":value1, "name2":value20, "name3":value30 ]
]
map2 = [
[ "name1":value1, "name2":value4, "name3":value5, "name4":value6 ],
[ "name1":value1, "name2":value7, "name3":value8, "name4":value9 ]
]
I need to set the values of name2 and name3 of map1 into name2 and name3 of map2 when "name1":value1 matches in both maps.
Required output:
map2 = [
[ "name1":value1, "name2":value2, "name3":value3, "name4":value6 ],
[ "name1":value1, "name2":value20, "name3":value30, "name4":value9 ]
]
I tried looping through both of them, but values get overwritten (as it is a map) and the result is as below:
map2 = [
[ "name1":value1, "name2":value20, "name3":value30, "name4":value9 ],
[ "name1":value1, "name2":value20, "name3":value30, "name4":value9 ]
]
First of all, they (map1 and map2) are lists, not maps.
Assuming the cardinality of both lists is the same, you can simply achieve this with:
list2.eachWithIndex { item, i ->
    if (list2[i].name1 == list1[i].name1) {
        list2[i].name2 = list1[i].name2
        list2[i].name3 = list1[i].name3
    }
}
assert list2 == [
[ "name1":'value1', "name2":'value2', "name3":'value3', "name4":'value6' ],
[ "name1":'value1', "name2":'value20', "name3":'value30', "name4":'value9' ]
]
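For completeness, here is the same thing as a self-contained script you can run directly, with the symbolic values written as strings (an assumption, since the originals are just placeholders from the logs):

def list1 = [
    [ name1: 'value1', name2: 'value2', name3: 'value3' ],
    [ name1: 'value1', name2: 'value20', name3: 'value30' ]
]
def list2 = [
    [ name1: 'value1', name2: 'value4', name3: 'value5', name4: 'value6' ],
    [ name1: 'value1', name2: 'value7', name3: 'value8', name4: 'value9' ]
]

// Copy name2/name3 across whenever name1 agrees at the same index
list2.eachWithIndex { item, i ->
    if (item.name1 == list1[i].name1) {
        item.name2 = list1[i].name2
        item.name3 = list1[i].name3
    }
}

assert list2 == [
    [ name1: 'value1', name2: 'value2', name3: 'value3', name4: 'value6' ],
    [ name1: 'value1', name2: 'value20', name3: 'value30', name4: 'value9' ]
]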

Export a uitable's data to an Excel spreadsheet in MATLAB

I have designed a GUI which has a uitable and a push button which, when pressed, exports the uitable's data to an Excel spreadsheet. My problem is that I want to add the uitable's headers to the matrix SelecY, which holds the numeric values. This matrix is used by the push button callback, as seen below:
htable = uitable(...);
...
SelecY = get(htable,'Data');
Callback of the pushbutton
function hExpExcelCallback(src,evt)
FileName = uiputfile('*.xls','Save as');
xlswrite(FileName,SelecY);
end
You can stack the headers and the data into one cell array and write that with xlswrite. Example:
headers = cellstr(num2str((1:5)','header %d'))';
data = rand(10,5);
A = [headers ; num2cell(data)];
xlswrite('file.xls',A)
The content:
>> A
A =
'header 1' 'header 2' 'header 3' 'header 4' 'header 5'
[ 0.34998] [ 0.28584] [ 0.12991] [ 0.60198] [ 0.82582]
[ 0.1966] [ 0.7572] [ 0.56882] [ 0.26297] [ 0.53834]
[ 0.25108] [ 0.75373] [ 0.46939] [ 0.65408] [ 0.99613]
[ 0.61604] [ 0.38045] [0.011902] [ 0.68921] [ 0.078176]
[ 0.47329] [ 0.56782] [ 0.33712] [ 0.74815] [ 0.44268]
[ 0.35166] [0.075854] [ 0.16218] [ 0.45054] [ 0.10665]
[ 0.83083] [ 0.05395] [ 0.79428] [0.083821] [ 0.9619]
[ 0.58526] [ 0.5308] [ 0.31122] [ 0.22898] [0.0046342]
[ 0.54972] [ 0.77917] [ 0.52853] [ 0.91334] [ 0.77491]
[ 0.91719] [ 0.93401] [ 0.16565] [ 0.15238] [ 0.8173]
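To tie this back to the push button, a sketch of the callback (untested; it assumes the headers are stored in the uitable's ColumnName property as a cell array of strings, and that htable is visible in the callback's scope):

function hExpExcelCallback(src,evt)
    % Grab the table body and its column headers
    data    = get(htable,'Data');
    headers = get(htable,'ColumnName')';   % column cell -> row cell

    % Stack the headers on top of the numeric values
    A = [headers ; num2cell(data)];

    % Ask for a destination and write the sheet
    [FileName,PathName] = uiputfile('*.xls','Save as');
    if ~isequal(FileName,0)                % user did not cancel
        xlswrite(fullfile(PathName,FileName),A);
    end
end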
