I've installed Graylog v2.1.1 as a virtual appliance inside VirtualBox on a Windows 7 PC.
I'm trying to read a simple log file and forward it to Graylog by using logstash v5.0.0 with the logstash-output-gelf-3.1.1 plugin, as described here: https://stackoverflow.com/a/31054064/4863804.
I've set up the following logstash.conf:
input {
  file {...}
}

output {
  gelf {
    host => "199.99.99.179"
    port => 12203
  }
}
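For reference, a filled-in version of that configuration might look like the sketch below; the file path and start_position are hypothetical stand-ins, while the gelf settings are the ones from my setup:

input {
  file {
    # hypothetical path; the file input expects forward slashes, even on Windows
    path => "C:/SDKs/logs/simple.log"
    start_position => "beginning"
  }
}

output {
  gelf {
    host => "199.99.99.179"
    port => 12203
  }
}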
But after running logstash -f logstash.conf I get the following error:
[2016-10-28T14:52:17,756][INFO ][logstash.pipeline ] Pipeline main started
[2016-10-28T14:52:17,817][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2016-10-28T14:52:18,594][FATAL][logstash.runner ] An unexpected error occurred! {:error=>#<NameError: no method 'debug' for arguments (org.jruby.RubyArray,org.jruby.RubyHash) on Java::OrgApacheLoggingLog4jCore::Logger
available overloads:
(org.apache.logging.log4j.Marker,java.lang.String,java.lang.Object[])
(org.apache.logging.log4j.Marker,java.lang.String,org.apache.logging.log4j.util.Supplier[])
(java.lang.String,org.apache.logging.log4j.util.Supplier[])
(java.lang.String,java.lang.Object[])>, :backtrace=>["C:/SDKs/logstash-5.0.0/logstash-core/lib/logstash/logging/logger.rb:41:in `debug'", "C:/SDKs/logstash-5.0.0/vendor/bundle/jruby/1.9/gems/logstash-output-gelf-3.1.1/lib/logstash/outputs/gelf.rb:190:in `receive'", "C:/SDKs/logstash-5.0.0/logstash-core/lib/logstash/outputs/base.rb:92:in `multi_receive'", "org/jruby/RubyArray.java:1613:in `each'", "C:/SDKs/logstash-5.0.0/logstash-core/lib/logstash/outputs/base.rb:92:in `multi_receive'", "C:/SDKs/logstash-5.0.0/logstash-core/lib/logstash/output_delegator_strategies/legacy.rb:19:in `multi_receive'", "C:/SDKs/logstash-5.0.0/logstash-core/lib/logstash/output_delegator.rb:42:in `multi_receive'", "C:/SDKs/logstash-5.0.0/logstash-core/lib/logstash/pipeline.rb:297:in `output_batch'", "org/jruby/RubyHash.java:1342:in `each'", "C:/SDKs/logstash-5.0.0/logstash-core/lib/logstash/pipeline.rb:296:in `output_batch'", "C:/SDKs/logstash-5.0.0/logstash-core/lib/logstash/pipeline.rb:252:in `worker_loop'", "C:/SDKs/logstash-5.0.0/logstash-core/lib/logstash/pipeline.rb:225:in `start_workers'"]}
Update:
It seems to be caused by a version mismatch between Logstash and the logstash-output-gelf plugin, as the same configuration works fine with Logstash 2.4.0.
Perhaps the output plugin needs to be updated for 5.0.0.
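If that's the cause, updating the plugin in place may be enough; Logstash 5.x ships with a plugin manager, so something like bin/logstash-plugin update logstash-output-gelf should pull a build compatible with the new logging API, though I haven't verified this on my setup.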
Related
I'm new to the Elastic Stack and I'm trying to set it up with RabbitMQ using this guide (but in .NET):
https://piotrminkowski.com/2017/02/03/how-to-ship-logs-with-logstash-elasticsearch-and-rabbitmq/
When I start up Logstash I get the following errors:
[2020-11-14T09:51:50,997][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [0-9], [ \\t\\r\\n], \"#\", \"}\" at line 2, column 16 (byte 35) after input { rabbitmq {\nhost => 192.168", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:32:in `compile_imperative'", "org/logstash/execution/AbstractPipelineExt.java:184:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:69:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:47:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:52:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:365:in `block in converge_state'"]}
[2020-11-14T09:51:51,296][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2020-11-14T09:51:56,179][INFO ][logstash.runner ] Logstash shut down.
[2020-11-14T09:51:56,209][ERROR][org.logstash.Logstash ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
I don't know what is wrong, but I can see that the host is "192.168", which probably isn't right; my IP is 192.168.0.29.
I'm thankful for any help.
The host option for a rabbitmq input takes a string. A string should be surrounded by double (or single) quotes.
The configuration compiler is quite forgiving: in many places it will accept a "bareword" in place of a string, so it would accept localhost. But a "bareword" cannot contain punctuation, so example.com would result in an error; likewise, once the compiler sees the periods in the IP address, it throws an exception.
Try
host => "192.168.0.29"
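Putting it together, a minimal rabbitmq input might look like this (the queue name is a hypothetical placeholder; the quoted host is the actual fix):

input {
  rabbitmq {
    host  => "192.168.0.29"   # quoted string, not a bareword
    queue => "logstash"       # hypothetical queue name
  }
}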
I have recently taken over running a Logstash system on Debian 9. The previous owner had installed an older version of Logstash and left incomplete documentation for the project. I have successfully configured Logstash 7.2 locally on Windows 10 and have tried to transfer this across to the Debian system, replacing the necessary paths etc. I'm coming up against the following error and, despite hours of searching for a clue, I'm left scratching my head. Any pointers would be appreciated!
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.headius.backport9.modules.Modules (file:/home/user/logstash/logstash-7.2.0/logstash-core/lib/jars/jruby-complete-9.2.7.0.jar) to field java.io.FileDescriptor.fd
WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Thread.exclusive is deprecated, use Thread::Mutex
Sending Logstash logs to /home/user/logstash/logstash-7.2.0/ which is now configured via log4j2.properties
[2020-07-21T08:04:35,773][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-07-21T08:04:35,781][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.2.0"}
[2020-07-21T08:04:37,165][INFO ][logstash.outputs.jdbc ] JDBC - Starting up
[2020-07-21T08:04:37,195][INFO ][com.zaxxer.hikari.HikariDataSource] HikariPool-1 - Starting...
[2020-07-21T08:04:45,302][INFO ][com.zaxxer.hikari.HikariDataSource] HikariPool-1 - Start completed.
[2020-07-21T08:04:45,404][ERROR][logstash.javapipeline ] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#<ArgumentError: invalid byte sequence in UTF-8>, :backtrace=>["org/jruby/RubyRegexp.java:1113:in `=~'", "org/jruby/RubyString.java:1664:in `=~'", "/home/user/logstash/logstash-7.2.0/vendor/bundle/jruby/2.5.0/gems/jls-grok-0.11.5/lib/grok-pure.rb:72:in `block in add_patterns_from_file'", "org/jruby/RubyIO.java:3329:in `each'", "/home/user/logstash/logstash-7.2.0/vendor/bundle/jruby/2.5.0/gems/jls-grok-0.11.5/lib/grok-pure.rb:70:in `add_patterns_from_file'", "/home/user/logstash/logstash-7.2.0/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.0.4/lib/logstash/filters/grok.rb:403:in `block in add_patterns_from_files'", "org/jruby/RubyArray.java:1792:in `each'", "/home/user/logstash/logstash-7.2.0/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.0.4/lib/logstash/filters/grok.rb:399:in `add_patterns_from_files'", "/home/user/logstash/logstash-7.2.0/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.0.4/lib/logstash/filters/grok.rb:279:in `block in register'", "org/jruby/RubyArray.java:1792:in `each'", "/home/user/logstash/logstash-7.2.0/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.0.4/lib/logstash/filters/grok.rb:275:in `block in register'", "org/jruby/RubyHash.java:1419:in `each'", "/home/user/logstash/logstash-7.2.0/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.0.4/lib/logstash/filters/grok.rb:270:in `register'", "org/logstash/config/ir/compiler/AbstractFilterDelegatorExt.java:56:in `register'", "/home/user/logstash/logstash-7.2.0/logstash-core/lib/logstash/java_pipeline.rb:192:in `block in register_plugins'", "org/jruby/RubyArray.java:1792:in `each'", "/home/user/logstash/logstash-7.2.0/logstash-core/lib/logstash/java_pipeline.rb:191:in `register_plugins'", "/home/user/logstash/logstash-7.2.0/logstash-core/lib/logstash/java_pipeline.rb:463:in `maybe_setup_out_plugins'", "/home/user/logstash/logstash-7.2.0/logstash-core/lib/logstash/java_pipeline.rb:204:in `start_workers'", "/home/user/logstash/logstash-7.2.0/logstash-core/lib/logstash/java_pipeline.rb:146:in `run'", "/home/user/logstash/logstash-7.2.0/logstash-core/lib/logstash/java_pipeline.rb:105:in `block in start'"], :thread=>"#<Thread:0x1bda40f7 run>"}
[2020-07-21T08:04:45,422][ERROR][logstash.agent ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}
[2020-07-21T08:04:45,553][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2020-07-21T08:04:50,602][INFO ][logstash.runner ] Logstash shut down.
Solved by adding ":ISO-8859-1:UTF-8" to the File.new call at grok-pure.rb:72:
file = File.new(path, "r:ISO-8859-1:UTF-8")
I later noticed, via the command "file -bi file_name", that the encoding of the patterns file was text/plain; charset=us-ascii. Converting it to UTF-8 may also have had an impact.
The issue is that you should not have anything other than grok pattern files under patterns_dir. I had some RPMs in that folder, and that was what caused the issue.
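As a minimal sketch (with a hypothetical directory), patterns_dir should point at a directory containing nothing but grok pattern files:

filter {
  grok {
    # hypothetical directory; it must contain only grok pattern files
    patterns_dir => ["/etc/logstash/patterns"]
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}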
I am not able to get any output on the command prompt screen
E:\kibana\logstash-7.1.1\logstash-7.1.1>bin\logstash -f E:\kibana\logstash-7.1.1\logstash-7.1.1\config\pipeline.conf --config.reload.automatic
Java HotSpot(TM) 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.jruby.runtime.encoding.EncodingService (file:/E:/kibana/logstash-7.1.1/logstash-7.1.1/logstash-core/lib/jars/jruby-complete-9.2.7.0.jar) to field java.io.Console.cs
WARNING: Please consider reporting this to the maintainers of org.jruby.runtime.encoding.EncodingService
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Sending Logstash logs to E:/kibana/logstash-7.1.1/logstash-7.1.1/logs which is now configured via log4j2.properties
[2019-06-14T12:33:19,407][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-06-14T12:33:19,427][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.1.1"}
[2019-06-14T12:33:22,210][INFO ][logstash.javapipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, :thread=>"#<Thread:0x6177c4b4 run>"}
[2019-06-14T12:33:23,035][INFO ][logstash.inputs.file ] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"E:/kibana/logstash-7.1.1/logstash-7.1.1/data/plugins/inputs/file/.sincedb_039f8a57349afd1e3fb106bf0e1c330b", :path=>["/E/kibana/logstash-7.1.1/logstash-7.1.1/data/event-data/apache_access.log"]}
[2019-06-14T12:33:23,119][INFO ][logstash.javapipeline ] Pipeline started {"pipeline.id"=>"main"}
[2019-06-14T12:33:23,189][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-06-14T12:33:23,198][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections
[2019-06-14T12:33:23,479][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
I am only getting this and not the output. What could be going wrong?
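Not from the original post, but a common first debugging step for this symptom is to add a stdout output with the rubydebug codec, which prints every event the pipeline emits to the console, so you can tell whether events are being read from the file at all:

output {
  stdout { codec => rubydebug }
}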
Logstash is configured with Elasticsearch, which should store the data coming from Logstash. The configuration has been done properly, but it still isn't fetching anything.
input {
  file {
    path => "C:\Users\vishadub\Documents\elkstackTools\logs\error_log.log"
    type => "error_logs"
    start_position => beginning
    sincedb_path => "C:\Users\vishadub\Documents\elkstackTools\sincedb-access"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "error_log"
  }
}
This is what is written in my config file. The output is below:
C:\Users\vishadub\Documents\elkstackTools\logstash-6.4.2\bin>logstash -f logstash.conf
Sending Logstash logs to C:/Users/vishadub/Documents/elkstackTools/logstash-6.4.2/logs which is now configured via log4j2.properties
[2018-10-30T11:35:39,167][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-10-30T11:35:39,667][INFO ][logstash.runner] Starting Logstash {"logstash.version"=>"6.4.2"}
[2018-10-30T11:35:41,645][INFO ][logstash.pipeline] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-10-30T11:35:42,020][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-10-30T11:35:42,036][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-10-30T11:35:42,208][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-10-30T11:35:42,286][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-10-30T11:35:42,301][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-10-30T11:35:42,348][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2018-10-30T11:35:42,380][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-10-30T11:35:42,426][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"#timestamp"=>{"type"=>"date"}, "#version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-10-30T11:35:42,861][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x45c02cea run>"}
[2018-10-30T11:35:42,908][INFO ][logstash.agent] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-10-30T11:35:42,940][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections
[2018-10-30T11:35:43,221][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
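One thing worth checking (not mentioned in the original post): the Logstash file input expects forward slashes in path and sincedb_path, even on Windows, so the backslash paths above can silently match nothing. The same input rewritten with forward slashes:

input {
  file {
    path => "C:/Users/vishadub/Documents/elkstackTools/logs/error_log.log"
    type => "error_logs"
    start_position => "beginning"
    sincedb_path => "C:/Users/vishadub/Documents/elkstackTools/sincedb-access"
  }
}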
I recently upgraded from Elasticsearch 1.5 to Elasticsearch 5.5, and I can connect fine remotely, but not from localhost.
I updated elasticsearch.yml to be able to connect remotely:
network.host: 0.0.0.0
And the Elasticsearch logs look fine to me:
[2017-11-05T22:44:23,441][INFO ][o.e.n.Node ] [node1] starting ...
[2017-11-05T22:44:23,655][INFO ][o.e.t.TransportService ] [node1] publish_address {[ip address]:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}, {[ip address]:9300}
[2017-11-05T22:44:23,666][INFO ][o.e.b.BootstrapChecks ] [node1] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2017-11-05T22:44:26,712][INFO ][o.e.c.s.ClusterService ] [node1] new_master {node1}{s-J7aStjQFuwor-WY6bSCQ}{nv4GVIQ6SwScPiebRHBQBQ}{localhost}{[ip address]:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-11-05T22:44:26,733][INFO ][o.e.h.n.Netty4HttpServerTransport] [node1] publish_address {[ip address]:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}, {[ip address]:9200}
[2017-11-05T22:44:26,734][INFO ][o.e.n.Node ] [node1] started
I am wondering if it is related to the proxy, since when I try to run curl localhost:9200, I get the following:
...
<p>The following error was encountered while trying to retrieve the URL: http://localhost:9200/</p>
<blockquote id="error">
<p><b>Connection to 127.0.0.1 failed.</b></p>
</blockquote>
<p id="sysmsg">The system returned: <i>(111) Connection refused</i></p>
<p>The remote host or network may be down. Please try the request again.</p>
...
Any ideas or tips on how to narrow down the issue would be helpful.
It turned out an http_proxy bash variable was set, which was proxying all outbound traffic. Once this variable was unset, the curl command worked.
Before this fix was made, I also noticed that our Java applications were still able to connect to Elasticsearch locally; it was only the curl command that wasn't working.
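For anyone debugging the same discrepancy: curl honors the http_proxy environment variable, while Java applications generally do not (they use the http.proxyHost/http.proxyPort system properties instead), which explains why the Java clients kept working. Running unset http_proxy in the shell, or bypassing the proxy for a single request with curl --noproxy localhost http://localhost:9200, should confirm whether the proxy is the culprit.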