Logstash throws an exception when running - logstash

I get this exception in the Logstash log when I run it:
[2018-01-14T15:42:00,912][ERROR][logstash.outputs.elasticsearch] Unknown setting 'host' for elasticsearch
[2018-01-14T15:42:00,921][ERROR][logstash.agent] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main,
:exception=>"LogStash::ConfigurationError",
:message=>"Something is wrong with your configuration.",
:backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/config/mixin.rb:89:in `config_init'",
"/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:63:in `initialize'",
"/usr/share/logstash/logstash-core/lib/logstash/output_delegator_strategies/shared.rb:3:in `initialize'",
"/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:25:in `initialize'",
"/usr/share/logstash/logstash-core/lib/logstash/plugins/plugin_factory.rb:86:in `plugin'",
"/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:114:in `plugin'",
"(eval):87:in `<eval>'",
"org/jruby/RubyKernel.java:994:in `eval'",
"/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:86:in `initialize'",
"/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:171:in `initialize'",
"/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:40:in `execute'",
"/usr/share/logstash/logstash-core/lib/logstash/agent.rb:335:in `block in converge_state'",
"/usr/share/logstash/logstash-core/lib/logstash/agent.rb:141:in `with_pipelines'",
"/usr/share/logstash/logstash-core/lib/logstash/agent.rb:332:in `block in converge_state'",
"org/jruby/RubyArray.java:1734:in `each'",
"/usr/share/logstash/logstash-core/lib/logstash/agent.rb:319:in `converge_state'",
"/usr/share/logstash/logstash-core/lib/logstash/agent.rb:166:in `block in converge_state_and_update'",
"/usr/share/logstash/logstash-core/lib/logstash/agent.rb:141:in `with_pipelines'",
"/usr/share/logstash/logstash-core/lib/logstash/agent.rb:164:in `converge_state_and_update'",
"/usr/share/logstash/logstash-core/lib/logstash/agent.rb:90:in `execute'",
"/usr/share/logstash/logstash-core/lib/logstash/runner.rb:343:in `block in execute'",
"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:24:in `block in initialize'"]}
This is my configuration:
input {
  lumberjack {
    port => 5044
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri {}
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
How can I solve it? Thank you.
I am using the latest version of the ELK stack.

If you check your elasticsearch output plugin, you are using a host parameter. The plugin needs a hosts parameter, which takes an array of strings:
https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html#plugins-outputs-elasticsearch-hosts
My logstash->elastic plugin looks like this:
elasticsearch {
  hosts => ["localhost:9200"]
  index => "logstash-%{+YYYY.MM.dd}"
}
You might need the index parameter set too.
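Applied to the question's configuration, the whole output section would then look something like this (a minimal sketch; the daily index name is my assumption and can be dropped to keep the default):

output {
  elasticsearch {
    hosts => ["localhost:9200"]          # 'hosts' (plural) takes an array of strings
    index => "logstash-%{+YYYY.MM.dd}"   # assumed daily index; optional
  }
  stdout { codec => rubydebug }
}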

Related

Grok pattern for creating separate section in Kibana dashboard

I have been trying for a long time to extract and tag data from my customized log using Logstash, but I am not getting anywhere. I have a customized HAProxy log like the one below:
Feb 22 21:17:32 ap haproxy[1235]: 10.172.80.45:32071 10.31.33.34:44541 10.31.33.34:32772 13.127.229.72:443 [22/Feb/2020:21:17:32.006] this_machine~ backend_test-tui/test-tui_32772 40/0/5/1/836 200 701381 - - ---- 0/0/0/0/0 0/0 {testtui.net} {cache_hit} "GET /ob/720/output00007.ts HTTP/1.1"
I want to extract and mark specific content from the log in the Kibana dashboard, like:
from the "40/0/5/1/836" section I want to mark only the last digit group (836) as "response_time"
"701381" as "response_bytes"
"/ob/720/output00007.ts" as "content_url"
I also want to use the timestamp from the log file, not the default one.
I have created a grok filter using https://grokdebug.herokuapp.com/ but whenever I apply it I see a "_grokparsefailure" message and the Kibana dashboard stops getting populated.
Below is the Logstash debug log:
{
      "@version" => "1",
       "message" => "Mar 8 13:53:59 ap haproxy[22158]: 10.172.80.45:30835 10.31.33.34:57886 10.31.33.34:32771 43.252.91.147:443 [08/Mar/2020:13:53:59.827] this_machine~ backend_noida/noida_32771 55/0/1/0/145 200 2146931 - - ---- 0/0/0/0/0 0/0 {testalef1.adcontentamtsolutions.} {cache_hit} \"GET /felaapp/virtual_videos/og/1080/output00006.ts HTTP/1.1\"",
    "@timestamp" => 2020-03-08T10:24:07.348Z,
          "path" => "/home/alef/haproxy.log",
          "host" => "com1",
          "tags" => [
        [0] "_grokparsefailure"
    ]
}
Below is the filter which I have created:
%{MONTH:[Month]} %{MONTHDAY:[date]} %{TIME:[time]} %{WORD:[source]} %{WORD:[app]}\[%{DATA:[class]}\]: %{IPORHOST:[UE_IP]}:%{NUMBER:[UE_Port]} %{IPORHOST:[NATTED_IP]}:%{NUMBER:[NATTED_Source_Port]} %{IPORHOST:[NATTED_IP]}:%{NUMBER:[NATTED_Destination_Port]} %{IPORHOST:[WAN_IP]}:%{NUMBER:[WAN_Port]} \[%{HAPROXYDATE:[accept_date]}\] %{NOTSPACE:[frontend_name]}~ %{NOTSPACE:[backend_name]} %{NOTSPACE:[ty_name]}/%{NUMBER:[response_time]} %{NUMBER:[http_status_code]} %{INT:[response_bytes]} - - ---- %{NOTSPACE:[df]} %{NOTSPACE:[df]} %{DATA:[domain_name]} %{DATA:[cache_status]} %{DATA:[domain_name]} %{NOTSPACE:[content]} HTTP/%{NUMBER:[http_version]}
Below is my logstash conf file:
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => { "message" => "%{MONTH:[Month]} %{MONTHDAY:[date]} %{TIME:[time]} %{WORD:[source]} %{WORD:[app]}\[%{DATA:[class]}\]: %{IPORHOST:[UE_IP]}:%{NUMBER:[UE_Port]} %{IPORHOST:[NATTED_IP]}:%{NUMBER:[NATTED_Source_Port]} %{IPORHOST:[NATTED_IP]}:%{NUMBER:[NATTED_Destination_Port]} %{IPORHOST:[WAN_IP]}:%{NUMBER:[WAN_Port]} \[%{HAPROXYDATE:[accept_date]}\] %{NOTSPACE:[frontend_name]}~ %{NOTSPACE:[backend_name]} %{NOTSPACE:[ty_name]}/%{NUMBER:[response_time]} %{NUMBER:[http_status_code]} %{INT:[response_bytes]} - - ---- %{NOTSPACE:[df]} %{NOTSPACE:[df]} %{DATA:[domain_name]} %{DATA:[cache_status]} %{DATA:[domain_name]} %{NOTSPACE:[content]} HTTP/%{NUMBER:[http_version]} " }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
}
Using the filter below resolved my issue; I had to do some debugging in Logstash itself to get the filter right:
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => { "message" => "%{MONTH:month} %{MONTHDAY:date} %{TIME:time} %{WORD:[source]} %{WORD:[app]}\[%{DATA:[class]}\]: %{IPORHOST:[UE_IP]}:%{NUMBER:[UE_Port]} %{IPORHOST:[NATTED_IP]}:%{NUMBER:[NATTED_Source_Port]} %{IPORHOST:[NATTED_IP]}:%{NUMBER:[NATTED_Destination_Port]} %{IPORHOST:[WAN_IP]}:%{NUMBER:[WAN_Port]} \[%{HAPROXYDATE:[accept_date]}\] %{NOTSPACE:[frontend_name]}~ %{NOTSPACE:[backend_name]} %{NOTSPACE:[ty_name]}/%{NUMBER:[response_time]:int} %{NUMBER:[http_status_code]} %{NUMBER:[response_bytes]:int} - - ---- %{NOTSPACE:[df]} %{NOTSPACE:[df]} %{DATA:[domain_name]} %{DATA:[cache_status]} %{DATA:[domain_name]} %{URIPATHPARAM:[content]} HTTP/%{NUMBER:[http_version]}" }
    add_tag => [ "response_time", "response_time" ]
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout {
    codec => rubydebug
  }
}
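One detail worth flagging: in both versions the date filter matches a field named "timestamp", which this grok pattern never creates, so @timestamp will still reflect ingest time rather than the log's own time (which the question asked for). A minimal sketch of one way to use the HAProxy accept date instead, assuming the [accept_date] capture from the pattern above:

date {
  # HAPROXYDATE looks like 08/Mar/2020:13:53:59.827
  match => [ "[accept_date]", "dd/MMM/yyyy:HH:mm:ss.SSS" ]
}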

Error in logstash while passing if statement

I am new to Logstash. When I try to put an if statement in the Logstash config file, it gives me an error.
The if statement used is:
if {await} > 10
{ mutate {add_field => {"RULE_DATA" => "Value is above threshold"}
add_field => {"ACTUAL_DATA" => "%{await}"}
}
}
The error faced is given below:
[ERROR] 2018-07-20 16:52:21.327 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] agent - Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of #, => at line 18, column 10 (byte 729) after filter{\n grok {\n patterns_dir => [\"./patterns\"]\n match => { \"message\" => [\"%{TIME:time}%{SPACE}%{USERNAME:device}%{SPACE}%{USERNAME:tps}%{SPACE}%{SYSLOGPROG:rd_sec/s}%{SPACE}%{SYSLOGPROG:wr_sec/s}%{SPACE}%{SYSLOGPROG:avgrq-sz}%{SPACE}%{SYSLOGPROG:avgqu-sz}%{SPACE}%{NUMBER:await}%{SPACE}%{SYSLOGPROG:svctm}%{SPACE}%{SYSLOGPROG:%util}\"]\n }\n overwrite => [\"message\"]\n } \n if \"_grokparsefailure\" in [tags] {\n drop { }\n }\nif {await", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:42:in `compile_imperative'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:50:in `compile_graph'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:12:in `block in compile_sources'", "org/jruby/RubyArray.java:2486:in `map'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:11:in `compile_sources'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:51:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:169:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:40:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:315:in `block in converge_state'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:141:in `with_pipelines'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:312:in `block in converge_state'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:299:in `converge_state'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:166:in `block in converge_state_and_update'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:141:in `with_pipelines'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:164:in `converge_state_and_update'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:90:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:348:in `block in execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:24:in `block in initialize'"]}
Please suggest what has caused this error.
You have a syntax error: if you have a field named await (for example, one produced by the grok parse), reference it with square brackets.
Use the below:
if [await] > 10 {
  mutate {
    add_field => { "RULE_DATA" => "Value is above threshold" }
    add_field => { "ACTUAL_DATA" => "%{await}" }
  }
}
Logstash conditional expressions are enclosed in [], not {}. Have a look at the following example from the conditionals documentation:
filter {
  if [action] == "login" {
    mutate { remove_field => "secret" }
  }
}
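One caveat to add (an assumption based on the pattern quoted in the error, which captures await with %{NUMBER:await}): grok captures are strings by default, so the numeric comparison [await] > 10 may not behave as expected until the field is converted. A minimal sketch:

filter {
  # convert the grok-captured string to an integer before comparing
  mutate { convert => { "await" => "integer" } }
  if [await] > 10 {
    mutate {
      add_field => { "RULE_DATA" => "Value is above threshold" }
      add_field => { "ACTUAL_DATA" => "%{await}" }
    }
  }
}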

ELK-logstash.conf is always wrong

I want to use Filebeat with Logstash, but the logstash.conf is wrong.
logstash.conf:
```
input {
  beats {
    port => "5044"
  }
}
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
  }
}
```
It responds with this:
Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of #, input, filter, output at line 3, column 1 (byte 76) after ", :backtrace=>["/opt/logstash/logstash-core/lib/logstash/compiler.rb:42:in `compile_imperative'", "/opt/logstash/logstash-core/lib/logstash/compiler.rb:50:in `compile_graph'", "/opt/logstash/logstash-core/lib/logstash/compiler.rb:12:in `block in compile_sources'", "org/jruby/RubyArray.java:2486:in `map'", "/opt/logstash/logstash-core/lib/logstash/compiler.rb:11:in `compile_sources'", "/opt/logstash/logstash-core/lib/logstash/pipeline.rb:51:in `initialize'", "/opt/logstash/logstash-core/lib/logstash/pipeline.rb:171:in `initialize'", "/opt/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:40:in `execute'", "/opt/logstash/logstash-core/lib/logstash/agent.rb:335:in `block in converge_state'", "/opt/logstash/logstash-core/lib/logstash/agent.rb:141:in `with_pipelines'", "/opt/logstash/logstash-core/lib/logstash/agent.rb:332:in `block in converge_state'", "org/jruby/RubyArray.java:1734:in `each'", "/opt/logstash/logstash-core/lib/logstash/agent.rb:319:in `converge_state'", "/opt/logstash/logstash-core/lib/logstash/agent.rb:166:in `block in converge_state_and_update'", "/opt/logstash/logstash-core/lib/logstash/agent.rb:141:in `with_pipelines'", "/opt/logstash/logstash-core/lib/logstash/agent.rb:164:in `converge_state_and_update'", "/opt/logstash/logstash-core/lib/logstash/agent.rb:90:in `execute'", "/opt/logstash/logstash-core/lib/logstash/runner.rb:343:in `block in execute'", "/opt/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:24:in `block in initialize'"]}
The beats input plugin configuration is wrong: the port should be a number, not a string.
https://www.elastic.co/guide/en/logstash/current/plugins-inputs-beats.html
Also, you have no filter section; maybe that will be necessary too:
input {
  beats {
    port => 5044
  }
}
filter {}
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
  }
}
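Whatever the root cause, it is worth validating the file before starting the pipeline. Logstash has a config-check flag (the config path here is an assumption; use your own):

bin/logstash -f /etc/logstash/conf.d/logstash.conf --config.test_and_exit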

Logstash - ArgumentError: Setting “” hasn’t been registered

I want to send an 8+ GB CSV file to my ES server from my machine.
I use Logstash to send the file with this conf:
input {
  file {
    path => "/Users/karnag/Downloads/siren201703.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  csv {
    separator => ","
    # Date,Open,High,Low,Close,Volume (BTC),Volume (Currency),Weighted Price
    columns => ["SIREN", "NIC", "L1_NORMALISEE", "L2_NORMALISEE", "L3_NORMALISEE", "L4_NORMALISEE", "L5_NORMALISEE", "L6_NORMALISEE", "L7_NORMALISEE", "L1_DECLAREE", "L2_DECLAREE", "L3_DECLAREE", "L4_DECLAREE", "L5_DECLAREE", "L6_DECLAREE", "L7_DECLAREE", "NUMVOIE", "INDREP", "TYPVOIE", "LIBVOIE", "CODPOS", "CEDEX", "RPET", "LIBREG", "DEPET", "ARRONET", "CTONET", "COMET", "LIBCOM", "DU", "TU", "UU", "EPCI", "TCD", "ZEMET", "SIEGE", "ENSEIGNE", "IND_PUBLIPO", "DIFFCOM", "AMINTRET", "NATETAB", "LIBNATETAB", "APET700", "LIBAPET", "DAPET", "TEFET", "LIBTEFET", "EFETCENT", "DEFET", "ORIGINE", "DCRET", "DDEBACT", "ACTIVNAT", "LIEUACT", "ACTISURF", "SAISONAT", "MODET", "PRODET", "PRODPART", "AUXILT", "NOMEN_LONG", "SIGLE", "NOM", "PRENOM", "CIVILITE", "RNA", "NICSIEGE", "RPEN", "DEPCOMEN", "ADR_MAIL", "NJ", "LIBNJ", "APEN700", "LIBAPEN", "DAPEN", "APRM", "ESS", "DATEESS", "TEFEN", "LIBTEFEN", "EFENCENT", "DEFEN", "CATEGORIE", "DCREN", "AMINTREN", "MONOACT", "MODEN", "PRODEN", "ESAANN", "TCA", "ESAAPEN", "ESASEC1N", "ESASEC2N", "ESASEC3N", "ESASEC4N", "VMAJ", "VMAJ1", "VMAJ2", "VMAJ3", "DATEMAJ"]
  }
}
output {
  elasticsearch {
    hosts => "http://192.168.10.19:8080/"
    index => "siren"
  }
  stdout {}
}
And I got this error:
[2017-03-15T10:23:04,628][FATAL][logstash.runner ] An unexpected error occurred! {:error=>#<ArgumentError: Setting "" hasn't been registered>, :backtrace=>["/Users/karnag/Documents/Epitech/ElasticStack/Logstash/logstash-core/lib/logstash/settings.rb:29:in `get_setting'", "/Users/karnag/Documents/Epitech/ElasticStack/Logstash/logstash-core/lib/logstash/settings.rb:61:in `set_value'", "/Users/karnag/Documents/Epitech/ElasticStack/Logstash/logstash-core/lib/logstash/settings.rb:80:in `merge'", "org/jruby/RubyHash.java:1342:in `each'", "/Users/karnag/Documents/Epitech/ElasticStack/Logstash/logstash-core/lib/logstash/settings.rb:80:in `merge'", "/Users/karnag/Documents/Epitech/ElasticStack/Logstash/logstash-core/lib/logstash/settings.rb:115:in `validate_all'", "/Users/karnag/Documents/Epitech/ElasticStack/Logstash/logstash-core/lib/logstash/runner.rb:210:in `execute'", "/Users/karnag/Documents/Epitech/ElasticStack/Logstash/vendor/bundle/jruby/1.9/gems/clamp-0.6.5/lib/clamp/command.rb:67:in `run'", "/Users/karnag/Documents/Epitech/ElasticStack/Logstash/logstash-core/lib/logstash/runner.rb:183:in `run'", "/Users/karnag/Documents/Epitech/ElasticStack/Logstash/vendor/bundle/jruby/1.9/gems/clamp-0.6.5/lib/clamp/command.rb:132:in `run'", "/Users/karnag/Documents/Epitech/ElasticStack/Logstash/lib/bootstrap/environment.rb:71:in `(root)'"]}
I can't find where the typo is in my conf file (clearly something is wrong here).
Thanks.

How to send only error logs via logstash shipper

I am using Logstash to output JSON messages to an API. I am reading logs from a log file. My configuration is working fine and it is also sending all the messages to the API.
Following is the sample log file:
Log File:
TID: [-1234] [] [2016-06-07 12:52:59,862] INFO {org.apache.synapse.core.axis2.ProxyService} - Successfully created the Axis2 service for Proxy service : TestServiceHttp {org.apache.synapse.core.axis2.ProxyService}
TID: [-1234] [] [2016-06-07 12:59:04,893] INFO {org.apache.synapse.mediators.builtin.LogMediator} - To: /services/TestServiceHttp.TestServiceHttpHttpSoap12Endpoint********* Sending Message to the Queue*****WSAction: urn:mediate********* Sending Message to the Queue*****SOAPAction: urn:mediate********* Sending Message to the Queue*****MessageID: urn:uuid:d1bbe24a-2ce3-497f-8224-d260b0632506********* Sending Message to the Queue*****Direction: request********* Sending Message to the Queue*****Envelope: <?xml version='1.0' encoding='utf-8'?><soapenv:Envelope xmlns:soapenv="http://www.w3.org/2003/05/soap-envelope"><soapenv:Body><name> Omer</name></soapenv:Body></soapenv:Envelope> {org.apache.synapse.mediators.builtin.LogMediator}
TID: [-1234] [] [2016-06-07 12:59:04,925] INFO {org.apache.synapse.core.axis2.TimeoutHandler} - This engine will expire all callbacks after : 120 seconds, irrespective of the timeout action, after the specified or optional timeout {org.apache.synapse.core.axis2.TimeoutHandler}
TID: [-1234] [] [2016-06-07 12:59:04,933] ERROR {org.apache.axis2.description.ClientUtils} - The system cannot infer the transport information from the jms:/Customer.01.Request.Queue.01?transport.jms.ConnectionFactoryJNDIName=QueueConnectionFactory&java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory&java.naming.provider.url=tcp://localhost:61616&transport.jms.DestinationType=queue URL. {org.apache.axis2.description.ClientUtils}
TID: [-1234] [] [2016-06-07 12:59:04,949] ERROR {org.apache.synapse.core.axis2.Axis2Sender} - Unexpected error during sending message out {org.apache.synapse.core.axis2.Axis2Sender}
org.apache.axis2.AxisFault: The system cannot infer the transport information from the jms:/Customer.01.Request.Queue.01?transport.jms.ConnectionFactoryJNDIName=QueueConnectionFactory&java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory&java.naming.provider.url=tcp://localhost:61616&transport.jms.DestinationType=queue URL.
at org.apache.axis2.description.ClientUtils.inferOutTransport(ClientUtils.java:81)
at org.apache.axis2.client.OperationClient.prepareMessageContext(OperationClient.java:288)
at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
TID: [-1234] [] [2016-06-07 12:59:05,009] INFO {org.apache.synapse.mediators.builtin.LogMediator} - To: /services/TestServiceHttp.TestServiceHttpHttpSoap12Endpoint, WSAction: urn:mediate, SOAPAction: urn:mediate, MessageID: urn:uuid:d1bbe24a-2ce3-497f-8224-d260b0632506, Direction: request, MESSAGE = Executing default 'fault' sequence, ERROR_CODE = 0, ERROR_MESSAGE = Unexpected error during sending message out, Envelope: <?xml version='1.0' encoding='utf-8'?><soapenv:Envelope xmlns:soapenv="http://www.w3.org/2003/05/soap-envelope"><soapenv:Body><name> Omer</name></soapenv:Body></soapenv:Envelope> {org.apache.synapse.mediators.builtin.LogMediator}
TID: [-1234] [] [2016-06-07 13:00:04,890] INFO {org.apache.axis2.transport.http.HTTPSender} - Unable to sendViaPost to url[http://Omer-PC:8280/services/TestServiceHttp.TestServiceHttpHttpSoap12Endpoint] {org.apache.axis2.transport.http.HTTPSender}
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
at org.apache.commons.httpclient.HttpParser.readRawLine(HttpParser.java:78)
at org.apache.commons.httpclient.HttpParser.readLine(HttpParser.java:106)
ent.ServiceClient.sendReceive(ServiceClient.java:530)
at org.apache.jsp.admin.jsp.WSRequestXSSproxy_005fajaxprocessor_jsp._jspService(WSRequestXSSproxy_005fajaxprocessor_jsp.java:294)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:432)
at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:395)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:339)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at org.wso2.carbon.ui.JspServlet.service(JspServlet.java:155)
at org.wso2.carbon.ui.TilesJspServlet.service(TilesJspServlet.java:80)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at org.eclipse.equinox.http.helper.ContextPathServletAdaptor.service(ContextPathServletAdaptor.java:37)
at org.eclipse.equinox.http.servlet.internal.ServletRegistration.service(ServletRegistration.java:61)
at org.eclipse.equinox.http.servlet.internal.ProxyServlet.processAlias(ProxyServlet.java:128)
at org.eclipse.equinox.http.servlet.internal.ProxyServlet.service(ProxyServlet.java:68)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at org.wso2.carbon.tomcat.ext.servlet.DelegationServlet.service(DelegationServlet.java:68)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:950)
at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:57)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:421)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1074)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:611)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1739)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1698)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:745)
TID: [-1234] [] [2016-06-07 13:01:40,447] INFO {org.wso2.carbon.core.init.CarbonServerManager} - Shutdown hook triggered.... {org.wso2.carbon.core.init.CarbonServerManager}
TID: [-1234] [] [2016-06-07 13:01:40,464] INFO {org.wso2.carbon.core.init.CarbonServerManager} - Gracefully shutting down WSO2 Enterprise Service Bus... {org.wso2.carbon.core.init.CarbonServerManager}
TID: [-1234] [] [2016-06-07 13:01:40,477] INFO {org.wso2.carbon.core.ServerManagement} - Starting to switch to maintenance mode... {org.wso2.carbon.core.ServerManagement}
TID: [-1234] [] [2016-06-07 13:01:40,481] INFO {org.apache.axis2.transport.jms.JMSListener} - JMS Listener Shutdown {org.apache.axis2.transport.jms.JMSListener}
Following is my configuration file:
Configuration File:
input {
  stdin {}
  file {
    path => "C:\WSO2Environment\wso2esb-4.9.0\repository\logs\wso2carbon.log"
    type => "wso2"
    start_position => "beginning"
    codec => multiline {
      pattern => "(^\s*at .+)|^(?!TID).*$"
      negate => false
      what => "previous"
    }
  }
}
filter {
  if [type] == "wso2" {
    grok {
      match => [ "message", "TID:%{SPACE}\[%{INT:SourceSystemId}\]%{SPACE}\[%{DATA:ProcessName}\]%{SPACE}\[%{TIMESTAMP_ISO8601:TimeStamp}\]%{SPACE}%{LOGLEVEL:MessageType}%{SPACE}{%{JAVACLASS:MessageTitle}}%{SPACE}-%{SPACE}%{GREEDYDATA:Message}" ]
      add_tag => [ "grokked" ]
    }
    mutate {
      gsub => [
        "TimeStamp", "\s", "T",
        "TimeStamp", ",", "."
      ]
    }
  }
  if !( "_grokparsefailure" in [tags] ) {
    grok {
      match => [ "message", "%{GREEDYDATA:StackTrace}" ]
      add_tag => [ "grokked" ]
    }
    date {
      match => [ "timestamp", "yyyy MMM dd HH:mm:ss:SSS" ]
      target => "TimeStamp"
      timezone => "UTC"
    }
  }
  if ( "multiline" in [tags] ) {
    grok {
      match => [ "message", "%{GREEDYDATA:StackTrace}" ]
      add_tag => [ "multiline" ]
      tag_on_failure => [ "multiline" ]
    }
    date {
      match => [ "timestamp", "yyyy MMM dd HH:mm:ss:SSS" ]
      target => "TimeStamp"
    }
  }
}
output {
  stdout { }
  http {
    url => "http://localhost:8086/messages"
    http_method => "post"
    format => "json"
    mapping => ["TimeStamp","%{TimeStamp}","MessageType","%{MessageType}","MessageTitle","%{MessageTitle}","Message","%{log_EventMessage}","SourceSystemId","%{SourceSystemId}","StackTrace","%{log_StackTrace}"]
  }
}
Problem Statement:
The configuration file is working correctly and sending all the log entries to the API, but I only want to send error logs. So I want to place a check on "MessageType", which holds the log level: only if its value is "ERROR" should the message be sent through to the API; otherwise Logstash should discard it.
In the filter section of your Logstash configuration you can add a tag based on your if condition. Then, in the output, add an if statement that sends the event only when the error tag is present and ignores it otherwise.
After the following if statement:
if [type] == "wso2" {
grok {
match => [ "message", "TID:%{SPACE}\[%{INT:SourceSystemId}\]%{SPACE}\[%{DATA:ProcessName}\]%{SPACE}\[%{TIMESTAMP_ISO8601:TimeStamp}\]%{SPACE}%{LOGLEVEL:MessageType}%{SPACE}{%{JAVACLASS:MessageTitle}}%{SPACE}-%{SPACE}%{GREEDYDATA:Message}" ]
add_tag => [ "grokked" ]
}
mutate {
gsub => [
"TimeStamp", "\s", "T",
"TimeStamp", ",", "."
]
}
}
Add the following statement in your filter:
if "grokked" in [tags] {
grok {
match => ["MessageType", "ERROR"]
add_tag => [ "loglevelerror" ]
}
}
Then in your output make following changes:
output {
  if "loglevelerror" in [tags] {
    stdout { }
    http {
      url => "http://localhost:8086/messages"
      http_method => "post"
      format => "json"
      mapping => ["TimeStamp","%{TimeStamp}","MessageType","%{MessageType}","MessageTitle","%{MessageTitle}","Message","%{log_EventMessage}","SourceSystemId","%{SourceSystemId}","StackTrace","%{log_StackTrace}"]
    }
  }
}
I tested it out on my machine using stdout. It works fine. Hope it helps!
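A simpler variant (my own sketch, not the tested answer above): since the first grok already stores the log level in MessageType, the extra tagging grok can be skipped by comparing the field directly in the output conditional:

output {
  if [MessageType] == "ERROR" {
    stdout { }
    http {
      url => "http://localhost:8086/messages"
      http_method => "post"
      format => "json"
      mapping => ["TimeStamp","%{TimeStamp}","MessageType","%{MessageType}","MessageTitle","%{MessageTitle}","Message","%{log_EventMessage}","SourceSystemId","%{SourceSystemId}","StackTrace","%{log_StackTrace}"]
    }
  }
}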
