Logstash with Filebeat error: Could not execute action - logstash

Hi, I'm trying to set up log analysis with Filebeat and Logstash.
Below are the changes I made in filebeat.yml:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - D:\elasticsearch-5.4.3\elasticsearch-5.4.3\logs\elasticsearch.log
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]
And here is my logstash configuration file.
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => { "message" => "%{plugins}" }
  }
  date {
    match => [ "timestamp" , "yyyy-MM-DD:HH:mm:ss" ]
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
While running the above, I see the below error:
[2019-10-22T06:07:32,915][ERROR][logstash.javapipeline ] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#<Grok::PatternError: pattern %{plugins} not defined>, :backtrace=>["D:/logstash-7.1.0/logstash-7.1.0/vendor/bundle/jruby/2.5.0/gems/jls-grok-0.11.5/lib/grok-pure.rb:123:in `block in compile'", "org/jruby/RubyKernel.java:1425:in `loop'", "D:/logstash-7.1.0/logstash-7.1.0/vendor/bundle/jruby/2.5.0/gems/jls-grok-0.11.5/lib/grok-pure.rb:93:in `compile'", "D:/logstash-7.1.0/logstash-7.1.0/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.0.4/lib/logstash/filters/grok.rb:281:in `block in register'", "org/jruby/RubyArray.java:1792:in `each'", "D:/logstash-7.1.0/logstash-7.1.0/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.0.4/lib/logstash/filters/grok.rb:275:in `block in register'", "org/jruby/RubyHash.java:1419:in `each'", "D:/logstash-7.1.0/logstash-7.1.0/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.0.4/lib/logstash/filters/grok.rb:270:in `register'", "org/logstash/config/ir/compiler/AbstractFilterDelegatorExt.java:56:in `register'", "D:/logstash-7.1.0/logstash-7.1.0/logstash-core/lib/logstash/java_pipeline.rb:191:in `block in register_plugins'", "org/jruby/RubyArray.java:1792:in `each'", "D:/logstash-7.1.0/logstash-7.1.0/logstash-core/lib/logstash/java_pipeline.rb:190:in `register_plugins'", "D:/logstash-7.1.0/logstash-7.1.0/logstash-core/lib/logstash/java_pipeline.rb:446:in `maybe_setup_out_plugins'", "D:/logstash-7.1.0/logstash-7.1.0/logstash-core/lib/logstash/java_pipeline.rb:203:in `start_workers'", "D:/logstash-7.1.0/logstash-7.1.0/logstash-core/lib/logstash/java_pipeline.rb:145:in `run'", "D:/logstash-7.1.0/logstash-7.1.0/logstash-core/lib/logstash/java_pipeline.rb:104:in `block in start'"], :thread=>"#<Thread:0x15997940 run>"}
[2019-10-22T06:07:32,970][ERROR][logstash.agent ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}
I'm rather new to this integration, and I'm not sure where I should look.
Please help me.

The problem looks to be with
grok {
  match => { "message" => "%{plugins}" }
}
What is %{plugins} here? It is NOT a pre-defined grok pattern. The list of grok patterns can be found here.
Also, per the documentation, the syntax for a grok pattern is %{SYNTAX:SEMANTIC}. You could do something like:
grok {
  match => { "message" => "%{GREEDYDATA:plugins}" }
}

Try giving a pattern type to %{plugins}:
filter {
  grok {
    match => { "message" => "%{WORD:plugins}" }
  }
}
You can find the pattern types here.
If this does not work, try removing the date filter and trying again.
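For intuition, %{WORD} and %{GREEDYDATA} differ only in how much text they consume. Rough Python equivalents of the two stock patterns (the sample message is made up):

```python
import re

# Rough regex equivalents of two stock grok patterns:
WORD = r"\b\w+\b"   # %{WORD}: a single word
GREEDYDATA = r".*"  # %{GREEDYDATA}: everything up to end of line

msg = "loaded module aggs-matrix-stats"
print(re.match(WORD, msg).group(0))        # -> "loaded" (first word only)
print(re.match(GREEDYDATA, msg).group(0))  # -> the whole message
```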

Apparently these kinds of errors can happen because of a regexp syntax error deep inside a config file. That's maddening.

Related

grok: where do I define custom grok tags

I need to parse an nginx log using Logstash, and I found this question:
Nginx grok pattern for logstash
I want to try the pattern from the question, so I created a configuration file with this content:
input {
  file {
    path => "/var/log/nginx/access.log"
    type => "access"
  }
}
filter {
  grok {
    match => {
      "message" => '%{IPORHOST:clientip} %{NGUSER:ident} %{NGUSER:auth} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|-)\"%{NUMBER:response} (?:%{NUMBER:bytes}|-) %{NOTSPACE:querystring} (?:"(?:%{URI:referrer}|-)"|%{QS:referrer})%{QS:agent} %{IPORHOST:forwardedfor} %{IPORHOST:host} %{NUMBER:upstreamresponse} (?:-|%{NUMBER:cache})'
    }
  }
}
output {
  stdout {}
}
But I get the error:
[2021-03-24T13:25:53,424][ERROR][logstash.javapipeline ][main] Pipeline error {:pipeline_id=>"main", :exception=>#<Grok::PatternError: pattern %{NGUSER:ident} not defined>, :backtrace=>["/home/ubuntu/logstash-7.12.0/vendor/bundle/jruby/2.5.0/gems/jls-grok-0.11.5/lib/grok-pure.rb:123:in `block in compile'", "org/jruby/RubyKernel.java:1442:in `loop'", "/home/ubuntu/logstash-7.12.0/vendor/bundle/jruby/2.5.0/gems/jls-grok-0.11.5/lib/grok-pure.rb:93:in `compile'", "/home/ubuntu/logstash-7.12.0/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.4.0/lib/logstash/filters/grok.rb:282:in `block in register'", "org/jruby/RubyArray.java:1809:in `each'", "/home/ubuntu/logstash-7.12.0/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.4.0/lib/logstash/filters/grok.rb:276:in `block in register'", "org/jruby/RubyHash.java:1415:in `each'", "/home/ubuntu/logstash-7.12.0/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.4.0/lib/logstash/filters/grok.rb:271:in `register'", "org/logstash/config/ir/compiler/AbstractFilterDelegatorExt.java:75:in `register'", "/home/ubuntu/logstash-7.12.0/logstash-core/lib/logstash/java_pipeline.rb:228:in `block in register_plugins'", "org/jruby/RubyArray.java:1809:in `each'", "/home/ubuntu/logstash-7.12.0/logstash-core/lib/logstash/java_pipeline.rb:227:in `register_plugins'", "/home/ubuntu/logstash-7.12.0/logstash-core/lib/logstash/java_pipeline.rb:586:in `maybe_setup_out_plugins'", "/home/ubuntu/logstash-7.12.0/logstash-core/lib/logstash/java_pipeline.rb:240:in `start_workers'", "/home/ubuntu/logstash-7.12.0/logstash-core/lib/logstash/java_pipeline.rb:185:in `run'", "/home/ubuntu/logstash-7.12.0/logstash-core/lib/logstash/java_pipeline.rb:137:in `block in start'"], "pipeline.sources"=>["/home/ubuntu/logstash-7.12.0/config/pipeline.conf"], :thread=>"#<Thread:0x175b972c run>"}
In this particular case, I want to make NGUSER equal to [a-zA-Z\.\#\-\+_%]+
So here comes my question: where do I define custom grok tags?
You can use the pattern_definitions option to the grok filter:
grok {
  pattern_definitions => {
    "NGUSER" => "[a-zA-Z\.\#\-\+_%]+"
  }
  match => { "message" => '%{IPORHOST:clientip} %{NGUSER:ident} %{NGUSER:auth} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|-)\"%{NUMBER:response} (?:%{NUMBER:bytes}|-) %{NOTSPACE:querystring} (?:"(?:%{URI:referrer}|-)"|%{QS:referrer})%{QS:agent} %{IPORHOST:forwardedfor} %{IPORHOST:host} %{NUMBER:upstreamresponse} (?:-|%{NUMBER:cache})' }
}
Or, if you think you might want to share the patterns across multiple instances, you can use the patterns_dir option:
grok {
  patterns_dir => [ "/home/user/patterns/" ]
  match => { "message" => '%{IPORHOST:clientip} %{NGUSER:ident} %{NGUSER:auth} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|-)\"%{NUMBER:response} (?:%{NUMBER:bytes}|-) %{NOTSPACE:querystring} (?:"(?:%{URI:referrer}|-)"|%{QS:referrer})%{QS:agent} %{IPORHOST:forwardedfor} %{IPORHOST:host} %{NUMBER:upstreamresponse} (?:-|%{NUMBER:cache})' }
}
and create a file in that directory that contains
NGUSER [a-zA-Z\.\#\-\+_%]+
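Before wiring a custom pattern into grok, it can help to sanity-check the character class on its own. A quick Python check of the NGUSER pattern above (the sample values are made up):

```python
import re

NGUSER = r"[a-zA-Z\.\#\-\+_%]+"  # the custom pattern from the answer

assert re.fullmatch(NGUSER, "remote_user")          # plain identifiers match
assert re.fullmatch(NGUSER, "user.name+tag%x")      # ., +, % are allowed
assert re.fullmatch(NGUSER, "user name") is None    # spaces are not allowed
print("NGUSER behaves as expected")
```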

Error in logstash while passing if statement

I am new to Logstash. When I try to put an if statement in my Logstash config file, it gives me an error.
The if statement used is:
if {await} > 10 {
  mutate {
    add_field => {"RULE_DATA" => "Value is above threshold"}
    add_field => {"ACTUAL_DATA" => "%{await}"}
  }
}
The error is given below:
[ERROR] 2018-07-20 16:52:21.327 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] agent - Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of #, => at line 18, column 10 (byte 729) after filter{\n grok {\n patterns_dir => [\"./patterns\"]\n match => { \"message\" => [\"%{TIME:time}%{SPACE}%{USERNAME:device}%{SPACE}%{USERNAME:tps}%{SPACE}%{SYSLOGPROG:rd_sec/s}%{SPACE}%{SYSLOGPROG:wr_sec/s}%{SPACE}%{SYSLOGPROG:avgrq-sz}%{SPACE}%{SYSLOGPROG:avgqu-sz}%{SPACE}%{NUMBER:await}%{SPACE}%{SYSLOGPROG:svctm}%{SPACE}%{SYSLOGPROG:%util}\"]\n }\n overwrite => [\"message\"]\n } \n if \"_grokparsefailure\" in [tags] {\n drop { }\n }\nif {await", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:42:in `compile_imperative'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:50:in `compile_graph'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:12:in `block in compile_sources'", "org/jruby/RubyArray.java:2486:in `map'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:11:in `compile_sources'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:51:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:169:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:40:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:315:in `block in converge_state'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:141:in `with_pipelines'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:312:in `block in converge_state'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:299:in `converge_state'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:166:in `block in converge_state_and_update'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:141:in `with_pipelines'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:164:in `converge_state_and_update'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:90:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:348:in `block in execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:24:in `block in initialize'"]}
Please suggest what has caused this error.
You have a syntax error. If you have a field named await (e.g. produced by a grok parse), reference it as [await]. Use the below:
if [await] > 10 {
  mutate {
    add_field => {"RULE_DATA" => "Value is above threshold"}
    add_field => {"ACTUAL_DATA" => "%{await}"}
  }
}
Field references in Logstash conditionals are enclosed in [], not {}. Have a look at the following example from the conditionals documentation:
filter {
  if [action] == "login" {
    mutate { remove_field => "secret" }
  }
}
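One related pitfall: grok captures are strings by default, so a numeric comparison like [await] > 10 is safest when the field is explicitly cast. Grok supports an optional third component for this; a sketch only, with the field name taken from the question:

```
filter {
  grok {
    # NUMBER:await:float asks grok to cast the capture to a float,
    # so the conditional below compares numbers rather than strings
    match => { "message" => "%{NUMBER:await:float}" }
  }
  if [await] > 10 {
    mutate { add_field => { "RULE_DATA" => "Value is above threshold" } }
  }
}
```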

Logstash - ArgumentError: Setting “” hasn’t been registered

I want to send an 8+ GB CSV file from my machine to my ES server.
I use Logstash to send the file with this conf:
input {
  file {
    path => "/Users/karnag/Downloads/siren201703.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  csv {
    separator => ","
    #Date,Open,High,Low,Close,Volume (BTC),Volume (Currency),Weighted Price
    columns => ["SIREN", "NIC", "L1_NORMALISEE", "L2_NORMALISEE", "L3_NORMALISEE", "L4_NORMALISEE", "L5_NORMALISEE", "L6_NORMALISEE", "L7_NORMALISEE", "L1_DECLAREE", "L2_DECLAREE", "L3_DECLAREE", "L4_DECLAREE", "L5_DECLAREE", "L6_DECLAREE", "L7_DECLAREE", "NUMVOIE", "INDREP", "TYPVOIE", "LIBVOIE", "CODPOS", "CEDEX", "RPET", "LIBREG", "DEPET", "ARRONET", "CTONET", "COMET", "LIBCOM", "DU", "TU", "UU", "EPCI", "TCD", "ZEMET", "SIEGE", "ENSEIGNE", "IND_PUBLIPO", "DIFFCOM", "AMINTRET", "NATETAB", "LIBNATETAB", "APET700", "LIBAPET", "DAPET", "TEFET", "LIBTEFET", "EFETCENT", "DEFET", "ORIGINE", "DCRET", "DDEBACT", "ACTIVNAT", "LIEUACT", "ACTISURF", "SAISONAT", "MODET", "PRODET", "PRODPART", "AUXILT", "NOMEN_LONG", "SIGLE", "NOM", "PRENOM", "CIVILITE", "RNA", "NICSIEGE", "RPEN", "DEPCOMEN", "ADR_MAIL", "NJ", "LIBNJ", "APEN700", "LIBAPEN", "DAPEN", "APRM", "ESS", "DATEESS", "TEFEN", "LIBTEFEN", "EFENCENT", "DEFEN", "CATEGORIE", "DCREN", "AMINTREN", "MONOACT", "MODEN", "PRODEN", "ESAANN", "TCA", "ESAAPEN", "ESASEC1N", "ESASEC2N", "ESASEC3N", "ESASEC4N", "VMAJ", "VMAJ1", "VMAJ2", "VMAJ3", "DATEMAJ"]
  }
}
output {
  elasticsearch {
    hosts => "http://192.168.10.19:8080/"
    index => "siren"
  }
  stdout {}
}
And I got this error:
[2017-03-15T10:23:04,628][FATAL][logstash.runner ] An unexpected error occurred! {:error=>#<ArgumentError: Setting "" hasn't been registered>, :backtrace=>["/Users/karnag/Documents/Epitech/ElasticStack/Logstash/logstash-core/lib/logstash/settings.rb:29:in `get_setting'", "/Users/karnag/Documents/Epitech/ElasticStack/Logstash/logstash-core/lib/logstash/settings.rb:61:in `set_value'", "/Users/karnag/Documents/Epitech/ElasticStack/Logstash/logstash-core/lib/logstash/settings.rb:80:in `merge'", "org/jruby/RubyHash.java:1342:in `each'", "/Users/karnag/Documents/Epitech/ElasticStack/Logstash/logstash-core/lib/logstash/settings.rb:80:in `merge'", "/Users/karnag/Documents/Epitech/ElasticStack/Logstash/logstash-core/lib/logstash/settings.rb:115:in `validate_all'", "/Users/karnag/Documents/Epitech/ElasticStack/Logstash/logstash-core/lib/logstash/runner.rb:210:in `execute'", "/Users/karnag/Documents/Epitech/ElasticStack/Logstash/vendor/bundle/jruby/1.9/gems/clamp-0.6.5/lib/clamp/command.rb:67:in `run'", "/Users/karnag/Documents/Epitech/ElasticStack/Logstash/logstash-core/lib/logstash/runner.rb:183:in `run'", "/Users/karnag/Documents/Epitech/ElasticStack/Logstash/vendor/bundle/jruby/1.9/gems/clamp-0.6.5/lib/clamp/command.rb:132:in `run'", "/Users/karnag/Documents/Epitech/ElasticStack/Logstash/lib/bootstrap/environment.rb:71:in `(root)'"]}
I can't find the typo in my conf file (clearly something is wrong here).
Thanks.

LogStash::ConfigurationError but Configuration OK

I verified Logstash config:
root@learn-elk:/etc/logstash/conf.d# /opt/logstash/bin/logstash -t /etc/logstash/conf.d/
Configuration OK
but I still get an error and the pipeline aborts:
==> /var/log/logstash/logstash.log <==
{:timestamp=>"2016-10-22T17:48:28.391000+0000", :message=>"Pipeline aborted due to error", :exception=>"LogStash::ConfigurationError", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/config/mixin.rb:88:in `config_init'", "org/jruby/RubyHash.java:1342:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/config/mixin.rb:72:in `config_init'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/outputs/base.rb:79:in `initialize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/output_delegator.rb:74:in `register'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:181:in `start_workers'", "org/jruby/RubyArray.java:1613:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:181:in `start_workers'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:136:in `run'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/agent.rb:491:in `start_pipeline'"], :level=>:error}
{:timestamp=>"2016-10-22T17:48:31.424000+0000", :message=>"stopping pipeline", :id=>"main"}
After running Logstash with '-v --debug --verbose' I got much more information:
starting agent {:level=>:info}
starting pipeline {:id=>"main", :level=>:info}
Settings: Default pipeline workers: 1
Registering file input {:path=>["/opt/logstash/GOOG.csv"], :level=>:info}
No sincedb_path set, generating one based on the file path {:sincedb_path=>"/root/.sincedb_0a3b7d0b4841f166ec450717c6ce4124", :path=>["/opt/logstash/GOOG.csv"], :level=>:info}
Pipeline aborted due to error {:exception=>"LogStash::ConfigurationError", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/config/mixin.rb:88:in `config_init'", "org/jruby/RubyHash.java:1342:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/config/mixin.rb:72:in `config_init'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/outputs/base.rb:79:in `initialize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/output_delegator.rb:74:in `register'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:181:in `start_workers'", "org/jruby/RubyArray.java:1613:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:181:in `start_workers'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:136:in `run'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/agent.rb:491:in `start_pipeline'"], :level=>:error}
stopping pipeline {:id=>"main"}
Closing inputs {:level=>:info}
Closed inputs {:level=>:info}
After fixing the { hosts => ["localhost"] } vs { host => localhost } issue, I consolidated the config into the single file below and used stdout instead of elasticsearch:
input {
  file {
    path => "/opt/logstash/GOOG.csv"
    start_position => "beginning"
    type => google
  }
}
filter {
  if [type] == "google" {
    csv {
      columns => ["date_of_record","open","high","low","close","volume","adj_close"]
      separator => ","
    }
    date {
      match => ["date_of_record","yyyy-MM-dd"]
    }
    mutate {
      convert => ["open","float"]
      convert => ["high","float"]
      convert => ["low","float"]
      convert => ["close","float"]
      convert => ["volume","integer"]
      convert => ["adj_close","float"]
    }
  }
output {
  stdout {
  }
}

logstash ArgumentError

I am having trouble getting the Logstash (2.4.0) tutorial to work on Windows 7.
This is working: bin\logstash.bat -f pipe.conf
# pipe.conf
input {
  stdin { }
}
output {
  stdout { }
}
When I enter text in the MS-DOS window, I get the expected log messages.
C:\Users\foo\Workspace\Reporting\Stack5.0 pipe.conf
Settings: Default pipeline workers: 4
Pipeline main started
configuration in a file
2016-10-10T14:32:13.506Z foopc configuration in a file
yehaaaa
2016-10-10T14:32:18.320Z foopc yehaaaa
Tweaking the configuration file to get closer to the tutorial does not work; I get the following error message:
{
:timestamp=>"2016-10-10T16:45:25.605000+0200",
:message=>"Pipeline aborted due to error",
:exception=>"ArgumentError",
:backtrace=>["C:/Users/foo/Workspace/Reporting/Stack5.0/logstash-2.4.0/vendor/bundle/jruby/1.9/gems/logstash-input-file-2.2.5/lib/logstash/inputs/file.rb:187:in `register'",
"org/jruby/RubyArray.java:1613:in `each'",
"C:/Users/foo/Workspace/Reporting/Stack5.0/logstash-2.4.0/vendor/bundle/jruby/1.9/gems/logstash-input-file-2.2.5/lib/logstash/inputs/file.rb:185:in `register'",
"C:/Users/foo/Workspace/Reporting/Stack5.0/logstash-2.4.0/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:330:in `start_inputs'",
"org/jruby/RubyArray.java:1613:in `each'",
"C:/Users/foo/Workspace/Reporting/Stack5.0/logstash-2.4.0/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:329:in `start_inputs'",
"C:/Users/foo/Workspace/Reporting/Stack5.0/logstash-2.4.0/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:180:in `start_workers'",
"C:/Users/foo/Workspace/Reporting/Stack5.0/logstash-2.4.0/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:136:in `run'",
"C:/Users/foo/Workspace/Reporting/Stack5.0/logstash-2.4.0/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/agent.rb:491:in `start_pipeline'"],
:level=>:error} {:timestamp=>"2016-10-10T16:45:28.608000+0200",
:message=>"stopping pipeline",
:id=>"main"
}
I call the script like before with: bin\logstash.bat -f pipe.conf
# pipe.conf
input {
  # stdin { }
  # https://www.elastic.co/guide/en/logstash/current/advanced-pipeline.html#configuring-file-input
  # logstash 2.4.0
  file {
    path => "logstash-tutorial-dataset"
    start_position => beginning
    ignore_older => 0
  }
}
# The filter part of this file is commented out to indicate that it is
# optional.
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
output {
  stdout { }
}
The logfile logstash-tutorial-dataset is available and accessible. I downloaded the file from the tutorial.
What did I miss and how do I get logstash to work with this configuration?
According to the doc:
Paths must be absolute and cannot be relative.
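So the file input needs an absolute path. A sketch of what that could look like here (the directory is an assumption based on the prompt shown in the question; forward slashes work fine in Logstash on Windows):

```
input {
  file {
    path => "C:/Users/foo/Workspace/Reporting/Stack5.0/logstash-tutorial-dataset"
    start_position => beginning
    ignore_older => 0
  }
}
```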
