We have a Logstash pipeline in which numerous logstash-forwarders ship logs to a single Logstash instance. We have repeatedly observed Logstash hang with the error below:
[2016-07-22 03:01:12.619] WARN -- Concurrent::Condition: [DEPRECATED] Will be replaced with Synchronization::Object in v1.0.
called on: /opt/logstash-1.5.3/vendor/bundle/jruby/1.9/gems/logstash-input-lumberjack-1.0.2/lib/logstash/sized_queue_timeout.rb:16:in `initialize'
Exception in thread ">output" java.lang.UnsupportedOperationException
at java.lang.Thread.stop(Thread.java:869)
at org.jruby.RubyThread.exceptionRaised(RubyThread.java:1221)
at org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:112)
at java.lang.Thread.run(Thread.java:745)
Our Logstash config looks like this:
input {
  lumberjack {
    port => 6782
    codec => json {}
    ssl_certificate => "/opt/logstash-1.5.3/cert/logstash-forwarder.crt"
    ssl_key => "/opt/logstash-1.5.3/cert/logstash-forwarder.key"
    type => "lumberjack"
  }
}
filter {
  if [env] != "prod" and [env] != "common" {
    drop {}
  }
  if [message] =~ /^\s*$/ {
    drop {}
  }
}
output {
  if "_jsonparsefailure" in [tags] {
    file {
      path => "/var/log/shop/parse_error/%{env}/%{app}/%{app}_%{host}_%{+YYYY-MM-dd}.log"
    }
  } else {
    kafka {
      broker_list => ["kafka:9092"]
      topic_id => "logstash_logs2"
    }
  }
}
Restarting Logstash makes it work again. Can someone explain why this problem occurs, and how we can get around it without restarting Logstash every time?
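One knob worth checking (an assumption on my part, not something confirmed in this thread): the lumberjack input has a congestion_threshold setting that controls how long it blocks when the internal queue is full, which is exactly the sized-queue path in the trace above. A sketch reusing the config from this question:
input {
  lumberjack {
    port => 6782
    codec => json {}
    ssl_certificate => "/opt/logstash-1.5.3/cert/logstash-forwarder.crt"
    ssl_key => "/opt/logstash-1.5.3/cert/logstash-forwarder.key"
    type => "lumberjack"
    # assumption: wait longer before the input gives up on a full queue
    congestion_threshold => 30
  }
}
Whether this prevents the output thread from dying or merely delays it would need testing; upgrading the logstash-input-lumberjack plugin is the other obvious lever.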
I tried starting Logstash with the command below:
logstash-7.10.2\logstash -f logstash.conf
logstash.conf
input {
  file {
    path => "D://server.log"
    start_position => "beginning"
    type => "logs"
  }
}
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:logtime} \[%{NOTSPACE:thread}\] \[%{LOGLEVEL:loglevel}\] %{GREEDYDATA:line}" }
  }
}
output {
  if "ERROR" in [loglevel] {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "logstash"
    }
  }
}
The command prompt displayed the text below and Logstash did not start.
Using JAVA_HOME defined java: C:\Program Files\Java\jdk1.8.0_221;
WARNING, using JAVA_HOME while Logstash distribution comes with a bundled JDK
warning: ignoring JAVA_OPTS=-Xms64m -Xmx128m -XX:NewSize=64m -XX:MaxNewSize=64m -XX:PermSize=64m -XX:MaxPermSize=64m; pass JVM parameters via LS_JAVA_OPTS
No error logs were created.
Have you tried starting Logstash in debug mode?
--log.level DEBUG
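For example, launched from the extracted directory (assuming the stock Windows bin layout):
bin\logstash -f logstash.conf --log.level debug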
The pipeline looks okay. Can you try adding the output below to check whether the pattern actually matches your log data, just to rule out any _grokparsefailure events?
output {
  stdout { codec => rubydebug }
  if "ERROR" in [loglevel] {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "logstash"
    }
  }
}
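If the console output gets noisy, a narrower sketch that prints only events whose grok match failed, keyed off the default _grokparsefailure tag:
output {
  if "_grokparsefailure" in [tags] {
    # only events that grok could not parse reach this stdout
    stdout { codec => rubydebug }
  }
}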
I am using the ELK stack with Filebeat.
filebeat.conf
filebeat:
  prospectors:
    -
      paths:
        - /home/ubuntu/logs_*
      input_type: log
output:
  logstash:
    hosts: [${LOGSTASH_PORT_5044_TCP_ADDR}]
    index: filebeat
  console:
    pretty: true
This passes logs from the file logs_test.
A sample log line:
{"name":"test","statusCode":0,"deployment":"production","hostname":"ip-random-address","level":30,"jobName":"testJob","date":"2016-07-18T03:15:02.075Z","jobType":"script","msg":"","time":"2016-07-18T03:15:02.076Z","v":0}
I want to make an HTTP call to an external URL when the field statusCode is 1.
The entire log object is being passed to Logstash.
My Logstash config:
input {
  beats {
    port => 5044
    codec => "json"
  }
}
output {
  if ([statusCode] and [statusCode] == 1) {
    http {
      format => "message"
      http_method => "post"
      url => "http://www.example.com"
      message => '{"text": "%{some_pattern_matcher}"}'
    }
  }
}
[Question] What should "some_pattern_matcher" be so that all of the fields are sent in the HTTP request?
PS: %{message} does not work.
input {
  beats {
    port => 5044
    codec => "json"
  }
}
filter {
  grok {
    match => { "message" => "%{GREEDYDATA:data}" }
  }
}
output {
  if ([statusCode] and [statusCode] == 1) {
    http {
      format => "message"
      http_method => "post"
      url => "http://www.example.com"
      message => "%{data}"
    }
  }
}
I haven't tried it out, so give this a try and let me know whether it works. If not, please post the error(s) you get.
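An alternative worth trying if the grok route does not pan out (a sketch, untested here): the http output can serialize the whole event by itself when format is set to json, which sidesteps the pattern question entirely.
output {
  if ([statusCode] and [statusCode] == 1) {
    http {
      # "json" sends the entire event as the request body, no message template needed
      format => "json"
      http_method => "post"
      url => "http://www.example.com"
    }
  }
}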
In EC2 I have configured Logstash as below:
input {
  # beats {
  #   port => 5044
  # }
  file {
    type => "adjustlog"
    path => "/etc/logstash/conf.d/sample.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  if [type] == "adjustlog" {
    grok {
      match => {
        "message" => [
          "%{TIMESTAMP_ISO8601:timestamp},(%{USERNAME:userId})?,%{USERNAME:setlkey},%{USERNAME:uniqueId},%{NUMBER:providerId},%{USERNAME:itemCode},%{USERNAME:voucherCode},%{USERNAME:samsCode},(%{USERNAME:serviceType})?"
        ]
      }
    }
  } else {
    drop {}
  }
}
output {
  elasticsearch {
    hosts => ["search-*.es.amazonaws.com:80"]
    index => "test"
  }
  stdout { codec => rubydebug }
}
but Logstash cannot create an index in AWS Elasticsearch or send log data.
(However, curl and wget work fine; I can create an index using curl.)
The error logs are:
Attempted to send a bulk request to Elasticsearch configured at '["http://search-*.es.amazonaws.com/"]', but an error occurred and it failed! Are you sure you can reach elasticsearch from this machine using the configuration provided? {:error_message=>"search*.es.amazonaws.com:80 failed to respond", :error_class=>"Manticore::ClientProtocolException", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/response.rb:37:in `initialize'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/response.rb:79:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/response.rb:256:in `call_once'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/response.rb:153:in `code'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.17/lib/elasticsearch/transport/transport/http/manticore.rb:84:in `perform_request'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.17/lib/elasticsearch/transport/transport/base.rb:257:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.17/lib/elasticsearch/transport/transport/http/manticore.rb:67:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.17/lib/elasticsearch/transport/client.rb:128:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.17/lib/elasticsearch/api/actions/bulk.rb:88:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:53:in `non_threadsafe_bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:in `bulk'", "org/jruby/ext/thread/Mutex.java:149:in `synchronize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.0-java/lib/logstash/outputs/elasticsearch/common.rb:172:in `safe_bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.0-java/lib/logstash/outputs/elasticsearch/common.rb:101:in `submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.0-java/lib/logstash/outputs/elasticsearch/common.rb:86:in `retrying_submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.0-java/lib/logstash/outputs/elasticsearch/common.rb:29:in `multi_receive'", "org/jruby/RubyArray.java:1653:in `each_slice'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.0-java/lib/logstash/outputs/elasticsearch/common.rb:28:in `multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/output_delegator.rb:130:in `worker_multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/output_delegator.rb:114:in `multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/pipeline.rb:301:in `output_batch'", "org/jruby/RubyHash.java:1342:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/pipeline.rb:301:in `output_batch'", 
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/pipeline.rb:232:in `worker_loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/pipeline.rb:201:in `start_workers'"], :client_config=>{:hosts=>["http://search*.es.amazonaws.com/"], :ssl=>nil, :transport_options=>{:socket_timeout=>0, :request_timeout=>0, :proxy=>nil, :ssl=>{}}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::Manticore, :logger=>nil, :tracer=>nil, :reload_connections=>false, :retry_on_failure=>false, :reload_on_failure=>false, :randomize_hosts=>false, :http=>{:scheme=>"http", :user=>nil, :password=>nil, :port=>80}}, :level=>:error}
What should I check to debug this?
I found this while trying to fix a similar issue. AWS has changed how it implements Elasticsearch node discovery: everything works fine until Logstash tries to discover more hosts, at which point it breaks. Restarting Logstash fixes the issue temporarily but inconsistently, and curl and wget work fine throughout.
:message=>"Cannot get new connection from pool.", :class=>"Elasticsearch::Transport::Transport::Error", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/base.rb:193:in `perform_request'",
Elasticsearch would work for a bit and then stop ingesting data.
The old config, which failed:
output {
  elasticsearch {
    hosts => ["https://search-*.us-east-1.es.amazonaws.com"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
Logstash tries to get a list of hosts from Elasticsearch, but AWS's implementation has changed the format of the data returned. For the specifics, see https://forums.aws.amazon.com/thread.jspa?threadID=222600
https://discuss.elastic.co/t/elasitcsearch-ruby-raises-cannot-get-new-connection-from-pool-error/36252/11
The working config; the only change is removing sniffing => true, so Logstash sticks to the configured endpoint instead of trying to discover nodes:
output {
  elasticsearch {
    hosts => ["https://search-*.us-east-1.es.amazonaws.com"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
The config file:
# Input: the Kafka messages
input {
  kafka {
    topic_id => 'test2'
  }
}
# Try to match sensor info
filter {
  json { source => "message" }
}
# StatsD and stdout output
output {
  stdout {
    codec => line {
      format => "%{[testmessage][0][key]}"
    }
  }
  stdout { codec => rubydebug }
  statsd {
    host => "localhost"
    port => 8125
    increment => ["test.%{[testmessage][0][key]}"]
  }
}
Input kafka message:
{"testmessage":[{"key":"key-1234"}]}
Output:
key-1234
{
    "testmessage" => [
        [0] {
            "key" => "key-1234"
        }
    ],
    "@version" => "1",
    "@timestamp" => "2015-11-09T20:11:52.374Z"
}
Log:
{:timestamp=>"2015-11-09T20:29:03.562000+0000", :message=>"Done running kafka input", :level=>:info}
{:timestamp=>"2015-11-09T20:29:03.563000+0000", :message=>"Plugin is finished", :plugin=><LogStash::Outputs::Stdout codec=><LogStash::Codecs::Line format=>"%{[testmessage][0][key]}", charset=>"UTF-8">, workers=>1>, :level=>:info}
{:timestamp=>"2015-11-09T20:29:03.564000+0000", :message=>"Plugin is finished", :plugin=><LogStash::Outputs::Statsd increment=>["test1.test", "test.%{[testmessage][0][key]}"], codec=><LogStash::Codecs::Plain charset=>"UTF-8">, workers=>1, host=>"localhost", port=>8125, namespace=>"logstash", sender=>"%{host}", sample_rate=>1, debug=>false>, :level=>:info}
{:timestamp=>"2015-11-09T20:29:03.564000+0000", :message=>"Pipeline shutdown complete.", :level=>:info}
Very weird that statsd does not work in my Logstash. I have looked at lots of examples found through Google but have no idea why. Any suggestions are welcome. Thanks.
I found the reason: logstash-output-statsd uses UDP by default, but my statsd server is set to use TCP.
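For anyone hitting the same mismatch, the plugin's UDP datagrams are easy to reproduce by hand to confirm what the server accepts (a sketch assuming netcat is installed, using the host/port from the config above):
# statsd wire format is <metric>:<value>|<type>; -u sends UDP, which the plugin uses
echo "test.key-1234:1|c" | nc -u -w1 localhost 8125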
While trying to configure the logstash-forwarder on the central server, the error below is observed in logstash.log:
{:timestamp=>"2015-07-07T09:05:14.742000-0500", :message=>"Unknown setting 'timestamp' for date", :level=>:error}
{:timestamp=>"2015-07-07T09:05:14.744000-0500", :message=>"Error: Something is wrong with your configuration."}
Could someone please help to resolve this issue?
Here is the configuration file, /etc/logstash/conf.d/central.conf:
input {
  lumberjack {
    port => 6782
    ssl_certificate => "/etc/logstash/server.crt"
    ssl_key => "/etc/logstash/server.key"
    type => "lumberjack"
  }
}
output {
  stdout { }
  elasticsearch {
    cluster => "logstash"
  }
}
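For reference: in Logstash 1.5 the date filter no longer accepts per-field settings such as timestamp => ..., which is what the "Unknown setting 'timestamp' for date" error points at. Since no date filter appears in central.conf, the offending filter is probably in another file under /etc/logstash/conf.d (Logstash concatenates every file in that directory). A minimal sketch of the current syntax, with the field name and pattern as assumptions:
filter {
  date {
    # replaces the removed "timestamp => ..." style; field and pattern are examples
    match => ["timestamp", "ISO8601"]
  }
}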