Why does Logstash put the wrong time zone in ~/.logstash_jdbc_last_run? - logstash

Logstash 5.2.1
The configuration below is OK and the partial updates are working. I just misunderstood the results and how the time zone is used by Logstash.
jdbc_default_timezone
Timezone conversion. SQL does not allow for timezone data in timestamp fields. This plugin will automatically convert your SQL timestamp fields to Logstash timestamps, in relative UTC time in ISO8601 format.
Using this setting will manually assign a specified timezone offset, instead of using the timezone setting of the local machine. You must use a canonical timezone, Europe/Rome, for example.
I want to index some data from PostgreSQL into Elasticsearch with the help of Logstash. The partial updates should work.
But in my case, Logstash puts the wrong time zone in ~/.logstash_jdbc_last_run.
$cat ~/.logstash_jdbc_last_run
--- 2017-03-08 09:29:00.259000000 Z
My PC/Server time:
$date
mer 8 mar 2017, 10.29.31, CET
$cat /etc/timezone
Europe/Rome
My Logstash configuration:
input {
  jdbc {
    # Postgres jdbc connection string to our database, mydb
    jdbc_connection_string => "jdbc:postgresql://localhost:5432/postgres"
    # The user we wish to execute our statement as
    jdbc_user => "logstash"
    jdbc_password => "logstashpass"
    # The path to our downloaded jdbc driver
    jdbc_driver_library => "/home/trex/Development/ship_to_elasticsearch/software/postgresql-42.0.0.jar"
    # The name of the driver class for Postgresql
    jdbc_driver_class => "org.postgresql.Driver"
    jdbc_default_timezone => "Europe/Rome"
    # our query
    statement => "SELECT * FROM contacts WHERE timestamp > :sql_last_value"
    # every 1 min
    schedule => "*/1 * * * *"
  }
}
output {
  stdout { codec => json_lines }
  elasticsearch {
    hosts => [ "localhost:9200" ]
    index => "database.%{+yyyy.MM.dd.HH}"
  }
}
Without jdbc_default_timezone the time zone is wrong too.
My PostgreSQL data:
postgres=# select * from "contacts";
 uid |         timestamp          |          email          | first_name | last_name
-----+----------------------------+-------------------------+------------+------------
   1 | 2017-03-07 18:09:25.358684 | jim@example.com         | Jim        | Smith
   2 | 2017-03-07 18:09:25.3756   |                         | John       | Smith
   3 | 2017-03-07 18:09:25.384053 | carol@example.com       | Carol      | Smith
   4 | 2017-03-07 18:09:25.869833 | sam@example.com         | Sam        |
   5 | 2017-03-08 10:04:26.39423  | trex@example.com        | T          | Rex
The DB data is imported like this:
INSERT INTO contacts(timestamp, email, first_name, last_name) VALUES(current_timestamp, 'sam@example.com', 'Sam', null);
Why does Logstash put the wrong time zone in ~/.logstash_jdbc_last_run? And how to fix it?

2017-03-08 09:29:00.259000000 Z means UTC; the value is correct.

It is defaulting to UTC time. If you would like to store it in a different timezone, you can convert the timestamp by adding a filter like so:
filter {
  mutate {
    add_field => {
      # Create a new field with the string value of the UTC event date
      "timestamp_extract" => "%{@timestamp}"
    }
  }
  date {
    # Parse the UTC string value and convert it to my timezone into a new field
    match => [ "timestamp_extract", "yyyy-MM-dd HH:mm:ss Z" ]
    timezone => "Europe/Rome"
    locale => "en"
    remove_field => [ "timestamp_extract" ]
    target => "timestamp_europe"
  }
}
This converts the time zone by first extracting the timestamp into a timestamp_extract field and then converting it into the Europe/Rome time zone. The converted timestamp is put in the timestamp_europe field.
Hope it's clearer now.
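A small caveat: %{@timestamp} renders as an ISO8601 string (for example 2017-03-08T09:29:00.259Z), so the yyyy-MM-dd HH:mm:ss Z pattern above may need to be replaced with ISO8601 in the date filter's match list. And if all you want is a human-readable local-time string, a ruby filter is another option. The sketch below is only an illustration: the timestamp_rome field name is made up, and the fixed +01:00 offset is an assumption (it is not DST-aware):
filter {
  ruby {
    # @timestamp is always stored in UTC; render a local-time copy as a plain string.
    # The "+01:00" offset is an assumption (CET without DST handling).
    code => 'event.set("timestamp_rome", event.get("@timestamp").time.localtime("+01:00").strftime("%Y-%m-%d %H:%M:%S"))'
  }
}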

Related

How to restart multiple conf files in logstash

I have 16 conf files and all of them are scheduled to run every day at 09:05 am. Today these files could not run at the intended time. After I fixed the problem I tried to restart Logstash, but the conf files are not able to generate indices.
Example dash_KPI_1.conf file:
input {
  jdbc {
    jdbc_driver_library => "/var/OJDBC-Full/ojdbc6.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    jdbc_connection_string => "jdbc:oracle:thin:@a/b"
    jdbc_user => "KIBANA"
    jdbc_password => "pass"
    statement => "
      SELECT /*+ PARALLEL(16) */
      * from
      dual"
    # jdbc_paging_enabled => "true"
    # jdbc_page_size => "50000"
    type => "dash_kpi_1"
    schedule => "05 09 * * *"
  }
}
output {
  if [type] == "dash_kpi_1" {
    # stdout { codec => rubydebug }
    elasticsearch {
      hosts => ["http://XX.XX.XX.XXX:9200","http://XX.XX.XX.XXX:9200","http://XX.XX.XX.XXX:9200"]
      index => "dash_kpi_1-%{+YYYY.ww}"
      user => "elastic"
      password => "pass2"
    }
  }
}
How I start and stop Logstash:
systemctl stop logstash.service
systemctl start logstash.service -r
What I have tried:
/usr/share/logstash/bin/logstash -f dash_KPI_1.conf
How can I restart these 16 conf files and make them generate indices as intended in the first place?
I see you are creating the index weekly. If you want to create it daily, you need to change the index pattern to "dash_kpi_1-%{+YYYY.MM.dd}".
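For reference, a minimal sketch of the adjusted output, reusing the placeholders from the question; only the index pattern changes from weekly to daily:
output {
  if [type] == "dash_kpi_1" {
    elasticsearch {
      hosts    => ["http://XX.XX.XX.XXX:9200"]
      # Daily index instead of the weekly %{+YYYY.ww}
      index    => "dash_kpi_1-%{+YYYY.MM.dd}"
      user     => "elastic"
      password => "pass2"
    }
  }
}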

Grokparsefailure and type problems in logstash configuration file

I have several problems with my configuration file. My goal is to parse three types of logs (for the moment). Here they are:
[29/05/2020 07:41:51.354] - ih912865 - 10.107.119.121 - 93 - Transaction 7635 COMPLETED 318 ms wait time 3183 ms
[29/05/2020 10:30:01.318] - Process status database sync - us1salx08167.corpnet2.com:8400(#52279) (load 0 grace period 5 minutes) : current date 2020/02/02 21:30:01 update date 2020/02/02 21:29:58 old state OK new state OK
31730 31626 464 10980020 52:25 /plw/modules/bin/Lx86_64/opx2-intranet.exe -I /plw/modules/bin/Lx86_64/opx2-intranet.dxl -H /plw/modules/bin/Lx86_64 -L /plw/PLW_PROD/modules/preload-intranet.ini -- plw-sysconsole -port 8400 -logdir /plw/PLW_PROD/httpdocs/admin/log/ -slaves 2
Two of these logs can be in slave files named intranet-2020-06-25-8401.log or intranet-2020-06-25-8400.log; the last one is in a master file named intranet-2020-06-25-8402.log.
For my tests I simplified the architecture of my log files, so I have a Log-test folder in which I put a slave file and a master file.
In these files I only put the corresponding logs and a different log to be able to see how to manage this case.
Here is the content of a "slave" :
[29/05/2020 07:41:51.354] - ih912865 - 10.107.199.125 - 93 - Transaction 7635 COMPLETED 318 ms wait time 3183 ms
[29/05/2020 10:30:01.318] - Process status database sync - us1salx08167.corpnet2.com:8400(#52279) (load 0 grace period 5 minutes) : current date 2020/02/02 21:30:01 update date 2020/02/02 21:29:58 old state OK new state OK
[29/05/2020 13:49:20.635] - Main process - Transaction SYSTEM 105238-12 SQL done 1 ms
Here is the content of a "master" :
31730 31626 464 10980020 52:25 /plw/modules/bin/Lx86_64/opx2-intranet.exe -I /plw/modules/bin/Lx86_64/opx2-intranet.dxl -H /plw/modules/bin/Lx86_64 -L /plw/PLW_PROD/modules/preload-intranet.ini -- plw-sysconsole -port 8400 -logdir /plw/PLW_PROD/httpdocs/admin/log/ -slaves 2
[26/06/2020 21:38:01.386] - Main process - Starting HTTP service on port 8402 (socket #<MULTIVALENT stream socket waiting for connection at */8402 # #x1022d2ddbb2>)
Now that you have a better understanding of my environment and my purpose, here's the problem. When I launch my Logstash configuration, I retrieve my data in Kibana, but Kibana shows me that every log has been treated as coming from a slave file, even though I also have a log coming from a master file, which should not get the same processing.
For a better understanding, here is my configuration file:
input {
  file {
    path => "/home/mathis/Documents/**/intranet*.log"
    exclude => "*8402.log"
    sincedb_path => '/dev/null'
    start_position => beginning
    type => "slave"
  }
  file {
    path => "/home/mathis/Documents/**/intranet*8402.log"
    sincedb_path => '/dev/null'
    type => "master"
  }
}
filter {
  if [type] == "slave" {
    grok {
      match => { "message" => ["\[%{DATESTAMP:eventtime}\] \- %{USERNAME:user} \- %{IPV4:clientip} \- %{NUMBER} \- %{WORD} %{NUMBER:exectime} %{WORD} %{NUMBER:time} %{GREEDYDATA:data} %{NUMBER:waittime}","\[%{DATESTAMP:eventtime}\] \- Process status database sync \- %{WORD}\.%{WORD}\.%{WORD}\:%{NUMBER:slavenumb}\(\#%{NUMBER}\) \(load %{NUMBER:nbutilisateur} grace period 5 minutes\) %{GREEDYDATA}"] }
      remove_field => "message"
    }
    date {
      match => [ "eventtime", "dd/MM/YYYY HH:mm:ss.SSS" ]
      target => "@timestamp"
    }
  }
  if [type] == "master" {
    grok {
      match => {"message" => ["%{NUMBER}%{SPACE}%{NUMBER}%{SPACE}%{NUMBER}%{SPACE}%{NUMBER}%{SPACE}(?<starttime>((?!<[0-9])%{HOUR}:)?%{MINUTE}(?::%{SECOND})(?![0-9]))"]}
      remove_field => "message"
    }
    date {
      match => [ "starttime", "HH:mm:ss","mm:ss" ]
    }
  }
}
output {
  elasticsearch {
    hosts => "127.0.0.1:9200"
    index => "logstash-local3-%{+YYYY.MM.dd}"
  }
}
And now this is what Kibana shows me:
As you can see, the type field is "slave" for all logs. We can also observe that the logs from the slave file "intranet-2020-06-25-8401.log" are correctly parsed, and that the extra log line that does not interest me has a tags field of _grokparsefailure (the middle line in the picture).
The other problem is that, according to Kibana, the other logs (the first two lines in the image) come from a slave file, which is not true, so I guess they are processed by my first grok, which would explain why they also have the _grokparsefailure tag.
So I guess there are several errors in my input and filter sections. I've been searching for a long time and doing a lot of testing; could you help me fix my config file, please?

"_grokparsefailure" even though the grok pattern works

I am trying to parse different log lines from two different types of file: slave and master. I tested my patterns in the Grok Debugger and they work fine, but the tags field in Kibana is _grokparsefailure.
Here is my config file:
input {
  file {
    type => "slave"
    path => "/home/mathis/Documents/**/intranet*.log"
    exclude => "*8402.log"
    sincedb_path => '/dev/null'
    start_position => beginning
  }
  file {
    type => "master"
    path => "/home/mathis/Documents/**/intranet*8402.log"
    sincedb_path => '/dev/null'
  }
}
filter {
  if [type] == "slave" {
    grok {
      match => { "message" => ["\[%{DATESTAMP:eventtime}\] \- %{USERNAME:user} \- %{IPV4:clientip} \- %{NUMBER} \- %{WORD} %{NUMBER:exectime} %{WORD} %{NUMBER:time} %{GREEDYDATA:data} %{NUMBER:waittime}","\[%{DATESTAMP:eventtime}\] \- Process status database sync \- %{WORD}\.%{WORD}\.%{WORD}\:%{NUMBER:slavenumb}\(\#%{NUMBER}\) \(load %{NUMBER:nbutilisateur} grace period 5 minutes\) %{GREEDYDATA}"] }
      remove_field => "message"
    }
    date {
      match => [ "eventtime", "dd/MM/YYYY HH:mm:ss.SSS" ]
      target => "@timestamp"
    }
  }
  if [type] == "master" {
    grok {
      match => {"message" => ["%{NUMBER}%{SPACE}%{NUMBER}%{SPACE}%{NUMBER}%{SPACE}%{NUMBER}%{SPACE}(?<starttime>((?!<[0-9])%{HOUR}:)?%{MINUTE}(?::%{SECOND})(?![0-9]))"]}
      remove_field => "message"
    }
    date {
      match => [ "starttime", "HH:mm:ss","mm:ss" ]
    }
  }
}
output {
  elasticsearch {
    hosts => "127.0.0.1:9200"
    index => "logstash-local3-%{+YYYY.MM.dd}"
  }
}
Here are the 3 log lines that I want to parse (they are in the same order as the groks in my conf file):
[24/06/2020 21:57:29.548] - Process status database sync - us1salx08167.corpnet2.com:8100(#53738) (load 0 grace period 5 minutes) : current date 2020/06/24 21:57:29 update date 2020/06/24 21:55:44 old state OK new state OK
[29/05/2020 07:41:51.354] - ih912865 - 10.104.149.128 - 93 - Transaction 7635 COMPLETED 318 ms wait time 3183 ms
31730 31626 464 10970020 52:25 /plw/modules/bin/Lx86_64/opx2-intranet.exe -I /plw/modules/bin/Lx86_64/opx2-intranet.dxl -H /plw/modules/bin/Lx86_64 -L /plw/PLW_PROD/modules/preload-intranet.ini -- plw-sysconsole -port 8400 -logdir /plw/PLW_PROD/httpdocs/admin/log/ -slaves 2
So, I don't know if you've already resolved this -- but below is something you could use.
N.B. I added a couple of extra fields, but you can easily remove those [https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html#plugins-filters-mutate-remove_field].
When trying the expressions you provided, one of them actually failed in the grok debugger, so I just took it upon myself to rewrite them all from scratch while still maintaining variable names.
I noticed there was a lot of data that you simply didn't glean. If you want more captured, let me know.
Line 1:
[24/06/2020 21:57:29.548] - Process status database sync - us1salx08167.corpnet2.com:8100(#53738) (load 0 grace period 5 minutes) : current date 2020/06/24 21:57:29 update date 2020/06/24 21:55:44 old state OK new state OK
Pattern 1:
\[(?<eventtime>%{DATESTAMP})\] - Process status database sync - (?<host>%{HOSTNAME}):(?<slavenumber>%{NUMBER})(?<zz>\(#[\d]+\)) \(load (?<nbutilisateur>%{NUMBER}) grace period 5 minutes\)%{GREEDYDATA}
Line 2:
[29/05/2020 07:41:51.354] - ih912865 - 10.104.149.128 - 93 - Transaction 7635 COMPLETED 318 ms wait time 3183 ms
Pattern 2:
\[(?<eventtime>%{DATESTAMP})\] - (?<user>%{USER}) - (?<clientip>%{IPV4}) - %{NUMBER} - %{WORD} (?<exectime>%{NUMBER}) %{WORD} (?<ctime>%{NUMBER}) (?<ctimeunits>%{WORD}) wait time (?<waittime>%{NUMBER}) (?<waittimeunits>%{WORD})
Line 3:
31730 31626 464 10970020 52:25 /plw/modules/bin/Lx86_64/opx2-intranet.exe -I /plw/modules/bin/Lx86_64/opx2-intranet.dxl -H /plw/modules/bin/Lx86_64 -L /plw/PLW_PROD/modules/preload-intranet.ini -- plw-sysconsole -port 8400 -logdir /plw/PLW_PROD/httpdocs/admin/log/ -slaves 2
Pattern 3:
%{GREEDYDATA}(?<starttime>(?<=[\s])([\d]+:[\d]+))%{GREEDYDATA}
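If it helps, here is a minimal sketch of how these patterns could be dropped into the filter section of the configuration above. The type conditionals simply mirror the question's config, and tag_on_failure is left at its default; nothing beyond that is implied:
filter {
  if [type] == "slave" {
    grok {
      # Patterns 1 and 2 from above; the first one that matches wins
      match => { "message" => [
        "\[(?<eventtime>%{DATESTAMP})\] - Process status database sync - (?<host>%{HOSTNAME}):(?<slavenumber>%{NUMBER})(?<zz>\(#[\d]+\)) \(load (?<nbutilisateur>%{NUMBER}) grace period 5 minutes\)%{GREEDYDATA}",
        "\[(?<eventtime>%{DATESTAMP})\] - (?<user>%{USER}) - (?<clientip>%{IPV4}) - %{NUMBER} - %{WORD} (?<exectime>%{NUMBER}) %{WORD} (?<ctime>%{NUMBER}) (?<ctimeunits>%{WORD}) wait time (?<waittime>%{NUMBER}) (?<waittimeunits>%{WORD})"
      ] }
    }
  }
  if [type] == "master" {
    grok {
      # Pattern 3 from above
      match => { "message" => "%{GREEDYDATA}(?<starttime>(?<=[\s])([\d]+:[\d]+))%{GREEDYDATA}" }
    }
  }
}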

logstash error: Error registering plugin, Pipeline aborted due to error (<TypeError: can't dup Fixnum>)

I'm a beginner with ELK, trying to load data from MySQL into Elasticsearch (as a next step I want to query it via the Java REST client), so I used logstash-6.2.4 and elasticsearch-6.2.4 and followed an example here.
When I run bin/logstash -f /path/to/my.conf, I get the error:
[2018-04-22T10:15:08,713][ERROR][logstash.pipeline ] Error registering plugin {:pipeline_id=>"main", :plugin=>"<LogStash::Inputs::Jdbc jdbc_connection_string=>\"jdbc:mysql://localhost:3306/testdb\", jdbc_user=>\"root\", jdbc_password=><password>, jdbc_driver_library=>\"/usr/local/logstash-6.2.4/config/mysql-connector-java-6.0.6.jar\", jdbc_driver_class=>\"com.mysql.jdbc.Driver\", statement=>\"SELECT * FROM testtable\", id=>\"7ff303d15d8fc2537248f48fae5f3925bca7649bbafc30d2cd52394ea9961797\", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>\"plain_f8d44c47-8421-4bb9-a6b9-0b34e0aceb13\", enable_metric=>true, charset=>\"UTF-8\">, jdbc_paging_enabled=>false, jdbc_page_size=>100000, jdbc_validate_connection=>false, jdbc_validation_timeout=>3600, jdbc_pool_timeout=>5, sql_log_level=>\"info\", connection_retry_attempts=>1, connection_retry_attempts_wait_time=>0.5, last_run_metadata_path=>\"/Users/chu/.logstash_jdbc_last_run\", use_column_value=>false, tracking_column_type=>\"numeric\", clean_run=>false, record_last_run=>true, lowercase_column_names=>true>", :error=>"can't dup Fixnum", :thread=>"#<Thread:0x3fae16e2 run>"}
[2018-04-22T10:15:09,256][ERROR][logstash.pipeline ] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#<TypeError: can't dup Fixnum>, :backtrace=>["org/jruby/RubyKernel.java:1882:in `dup'", "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/date/format.rb:838:in `_parse'", "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/date.rb:1830:in `parse'", "/usr/local/logstash-6.2.4/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/plugin_mixins/value_tracking.rb:87:in `set_value'", "/usr/local/logstash-6.2.4/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/plugin_mixins/value_tracking.rb:36:in `initialize'", "/usr/local/logstash-6.2.4/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/plugin_mixins/value_tracking.rb:29:in `build_last_value_tracker'", "/usr/local/logstash-6.2.4/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/inputs/jdbc.rb:216:in `register'", "/usr/local/logstash-6.2.4/logstash-core/lib/logstash/pipeline.rb:342:in `register_plugin'", "/usr/local/logstash-6.2.4/logstash-core/lib/logstash/pipeline.rb:353:in `block in register_plugins'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/local/logstash-6.2.4/logstash-core/lib/logstash/pipeline.rb:353:in `register_plugins'", "/usr/local/logstash-6.2.4/logstash-core/lib/logstash/pipeline.rb:500:in `start_inputs'", "/usr/local/logstash-6.2.4/logstash-core/lib/logstash/pipeline.rb:394:in `start_workers'", "/usr/local/logstash-6.2.4/logstash-core/lib/logstash/pipeline.rb:290:in `run'", "/usr/local/logstash-6.2.4/logstash-core/lib/logstash/pipeline.rb:250:in `block in start'"], :thread=>"#<Thread:0x3fae16e2 run>"}
[2018-04-22T10:15:09,314][ERROR][logstash.agent ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: LogStash::PipelineAction::Create/pipeline_id:main, action_result: false", :backtrace=>nil}
Here is the testdbinit.conf (UTF-8 encoding):
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/testdb"
    jdbc_user => "root"
    jdbc_password => "mypassword"
    jdbc_driver_library => "/usr/local/logstash-6.2.4/config/mysql-connector-java-6.0.6.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    statement => "SELECT * FROM testtable"
  }
}
output {
  stdout { codec => json_lines }
  elasticsearch {
    "hosts" => "localhost:9200"
    "index" => "testdemo"
    document_id => "%{personid}"
    "document_type" => "person"
  }
}
Here is the table (database: testdb, table: testtable):
mysql> select * from testtable;
+----------+----------+-----------+-----------+-------+
| PersonID | LastName | FirstName | City      | flag  |
+----------+----------+-----------+-----------+-------+
|     1003 | McWell   | Sharon    | Cape Town | exist |
|     1002 | Baron    | Richard   | Cape Town | exist |
|     1001 | Kallis   | Jaques    | Cape Town | exist |
|     1004 | Zhaosi   | Nicholas  | Iron Hill | exist |
+----------+----------+-----------+-----------+-------+
I tried to google the issue but still have no clue; I think maybe some type conversion error (TypeError: can't dup Fixnum) causes this issue, but what exactly is this "dup Fixnum", and how do I solve it?
One more thing that also confused me: I ran the same code yesterday and successfully loaded the data into Elasticsearch, and I could also search it via localhost:9200, but the next morning when I tried the same thing (on the same computer) I hit these issues. I have been at this a whole day; please help me with some hints.
I also asked the same question on the Logstash community forum, and with their help I think I found the solution to my issue:
The exception trace exception=>#<TypeError: can't dup Fixnum> means there is a type conversion error. sql_last_value is initialized as 0 for numeric values or 1970-01-01 for datetime values. I think the sql_last_value stored in my last_run_metadata_path was neither a numeric nor a datetime value, so I added clean_run => true to the conf file and ran Logstash again; no more errors occurred. After clean_run => true was added, the wrong sql_last_value was reset to 0 or 1970-01-01, the thread went on, and the data was indexed successfully.
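As a minimal sketch of where that option goes, using the jdbc input from the conf above (clean_run can be removed again once the bad last-run value has been reset):
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/testdb"
    jdbc_user => "root"
    jdbc_password => "mypassword"
    jdbc_driver_library => "/usr/local/logstash-6.2.4/config/mysql-connector-java-6.0.6.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    statement => "SELECT * FROM testtable"
    # Ignore and reset the persisted sql_last_value in ~/.logstash_jdbc_last_run
    clean_run => true
  }
}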

conditional matching with grok for logstash

I have a PHP log of this format:
[Day Mon DD HH:MM:SS YYYY] [Log-Type] [client <ipv4 ip address>] <some php error type>: <other msg with /path/of/a/php/script/file.php and something else>
[Day Mon DD HH:MM:SS YYYY] [Log-Type] [client <ipv4 ip address>] <some php error type>: <other msg without any file name in it>
[Day Mon DD HH:MM:SS YYYY] [Log-Type] [client <ipv4 ip address>] <some msg with out semicolon in it but /path/of/a/file inside the message>
I am trying to send this to Graylog2 after processing it through Logstash. Using this post here, I was able to get started. Now I would like to get some additional fields, so that my final version would look something like this:
{
  "message" => "<The entire error message goes here>",
  "@version" => "1",
  "@timestamp" => "converted timestamp from Day Mon DD HH:MM:SS YYYY",
  "host" => "<ipv4 ip address>",
  "logtime" => "Day Mon DD HH:MM:SS YYYY",
  "loglevel" => "Log-Type",
  "clientip" => "<ipv4 ip address>",
  "php_error_type" => "<some php error type>",
  "file_name_from_the_log" => "/path/of/a/file || /path/of/a/php/script/file.php",
  "errormsg" => "<the error message after first colon (:) found>"
}
I have the expressions for the individual lines, or at least I think they should parse, using the Grok Debugger. Something like this:
%{DATA:php_error_type}: %{DATA:message_part1}%{URIPATHPARAM:file_name}%{GREEDYDATA:errormsg}
%{DATA:php_error_type}: %{GREEDYDATA:errormsg}
%{DATA:message_part1}%{URIPATHPARAM:file_name}%{GREEDYDATA:errormsg}
But somehow I am finding it very difficult to make it work for the entire log file.
Any suggestions, please? Also, I am not sure whether other types of error messages will show up in the log file, but the intention is to get the same format for all. Any suggestions on how to tackle these logs to get the above-mentioned format?
The grok filter can be configured with multiple patterns:
grok {
  match => [
    "message", "%{DATA:php_error_type}: %{DATA:message_part1}%{URIPATHPARAM:file_name}%{GREEDYDATA:errormsg}",
    "message", "%{DATA:php_error_type}: %{GREEDYDATA:errormsg}",
    "message", "%{DATA:message_part1}%{URIPATHPARAM:file_name}%{GREEDYDATA:errormsg}"
  ]
}
(Instead of a single filter with multiple patterns you could have multiple grok filters, but then you'd probably want to disable the _grokparsefailure tagging with tag_on_failure => [].)
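For illustration, the multi-filter variant described in that note might look roughly like this sketch (only the first two patterns are shown):
grok {
  match => [ "message", "%{DATA:php_error_type}: %{DATA:message_part1}%{URIPATHPARAM:file_name}%{GREEDYDATA:errormsg}" ]
  # Don't tag events that this particular filter fails to match
  tag_on_failure => []
}
grok {
  match => [ "message", "%{DATA:php_error_type}: %{GREEDYDATA:errormsg}" ]
  tag_on_failure => []
}
Note that with this approach every filter still runs on every event, which is one reason a single grok with multiple patterns is usually simpler.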
If some part of your log line is sometimes missing, you can use the following syntax:
(?:%{PATTERN1}|%{PATTERN2})
or
(?:%{PATTERN1}|)
to allow PATTERN1 or '' (empty).
Using this, you have only one pattern to manage:
grok {
  match => [
    "message", "(?:%{DATA:php_error_type}: |)(?:%{DATA:message_part1}:)(?:%{URIPATHPARAM:file_name}|)%{GREEDYDATA:errormsg}"
  ]
}
If you have problems, maybe replace %{DATA} with a more restrictive pattern.
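As a sketch of what a more restrictive pattern could look like, the first %{DATA} might be replaced with an inline custom capture limited to letters and spaces (the character class here is an assumption for illustration, not a standard grok pattern):
grok {
  match => [
    # php_error_type restricted to letters and spaces instead of the very greedy DATA
    "message", "(?:(?<php_error_type>[A-Za-z ]+): |)(?:%{DATA:message_part1}:)(?:%{URIPATHPARAM:file_name}|)%{GREEDYDATA:errormsg}"
  ]
}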
You can also use this syntax (more regex-like):
(?:%{PATTERN1})?
To debug a complex grok pattern, I recommend:
https://grokconstructor.appspot.com/do/match (multiline option + multiple input lines at same time + others options)
https://grokdebug.herokuapp.com/ (simpler to use)
