How to send only error logs via logstash shipper - logstash

I am using Logstash to output JSON messages to an API, reading the logs from a log file. My configuration is working fine and sends all the messages to the API.
Following is the sample log file:
Log File:
TID: [-1234] [] [2016-06-07 12:52:59,862] INFO {org.apache.synapse.core.axis2.ProxyService} - Successfully created the Axis2 service for Proxy service : TestServiceHttp {org.apache.synapse.core.axis2.ProxyService}
TID: [-1234] [] [2016-06-07 12:59:04,893] INFO {org.apache.synapse.mediators.builtin.LogMediator} - To: /services/TestServiceHttp.TestServiceHttpHttpSoap12Endpoint********* Sending Message to the Queue*****WSAction: urn:mediate********* Sending Message to the Queue*****SOAPAction: urn:mediate********* Sending Message to the Queue*****MessageID: urn:uuid:d1bbe24a-2ce3-497f-8224-d260b0632506********* Sending Message to the Queue*****Direction: request********* Sending Message to the Queue*****Envelope: <?xml version='1.0' encoding='utf-8'?><soapenv:Envelope xmlns:soapenv="http://www.w3.org/2003/05/soap-envelope"><soapenv:Body><name> Omer</name></soapenv:Body></soapenv:Envelope> {org.apache.synapse.mediators.builtin.LogMediator}
TID: [-1234] [] [2016-06-07 12:59:04,925] INFO {org.apache.synapse.core.axis2.TimeoutHandler} - This engine will expire all callbacks after : 120 seconds, irrespective of the timeout action, after the specified or optional timeout {org.apache.synapse.core.axis2.TimeoutHandler}
TID: [-1234] [] [2016-06-07 12:59:04,933] ERROR {org.apache.axis2.description.ClientUtils} - The system cannot infer the transport information from the jms:/Customer.01.Request.Queue.01?transport.jms.ConnectionFactoryJNDIName=QueueConnectionFactory&java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory&java.naming.provider.url=tcp://localhost:61616&transport.jms.DestinationType=queue URL. {org.apache.axis2.description.ClientUtils}
TID: [-1234] [] [2016-06-07 12:59:04,949] ERROR {org.apache.synapse.core.axis2.Axis2Sender} - Unexpected error during sending message out {org.apache.synapse.core.axis2.Axis2Sender}
org.apache.axis2.AxisFault: The system cannot infer the transport information from the jms:/Customer.01.Request.Queue.01?transport.jms.ConnectionFactoryJNDIName=QueueConnectionFactory&java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory&java.naming.provider.url=tcp://localhost:61616&transport.jms.DestinationType=queue URL.
at org.apache.axis2.description.ClientUtils.inferOutTransport(ClientUtils.java:81)
at org.apache.axis2.client.OperationClient.prepareMessageContext(OperationClient.java:288)
at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
TID: [-1234] [] [2016-06-07 12:59:05,009] INFO {org.apache.synapse.mediators.builtin.LogMediator} - To: /services/TestServiceHttp.TestServiceHttpHttpSoap12Endpoint, WSAction: urn:mediate, SOAPAction: urn:mediate, MessageID: urn:uuid:d1bbe24a-2ce3-497f-8224-d260b0632506, Direction: request, MESSAGE = Executing default 'fault' sequence, ERROR_CODE = 0, ERROR_MESSAGE = Unexpected error during sending message out, Envelope: <?xml version='1.0' encoding='utf-8'?><soapenv:Envelope xmlns:soapenv="http://www.w3.org/2003/05/soap-envelope"><soapenv:Body><name> Omer</name></soapenv:Body></soapenv:Envelope> {org.apache.synapse.mediators.builtin.LogMediator}
TID: [-1234] [] [2016-06-07 13:00:04,890] INFO {org.apache.axis2.transport.http.HTTPSender} - Unable to sendViaPost to url[http://Omer-PC:8280/services/TestServiceHttp.TestServiceHttpHttpSoap12Endpoint] {org.apache.axis2.transport.http.HTTPSender}
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
at org.apache.commons.httpclient.HttpParser.readRawLine(HttpParser.java:78)
at org.apache.commons.httpclient.HttpParser.readLine(HttpParser.java:106)
ent.ServiceClient.sendReceive(ServiceClient.java:530)
at org.apache.jsp.admin.jsp.WSRequestXSSproxy_005fajaxprocessor_jsp._jspService(WSRequestXSSproxy_005fajaxprocessor_jsp.java:294)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:432)
at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:395)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:339)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at org.wso2.carbon.ui.JspServlet.service(JspServlet.java:155)
at org.wso2.carbon.ui.TilesJspServlet.service(TilesJspServlet.java:80)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at org.eclipse.equinox.http.helper.ContextPathServletAdaptor.service(ContextPathServletAdaptor.java:37)
at org.eclipse.equinox.http.servlet.internal.ServletRegistration.service(ServletRegistration.java:61)
at org.eclipse.equinox.http.servlet.internal.ProxyServlet.processAlias(ProxyServlet.java:128)
at org.eclipse.equinox.http.servlet.internal.ProxyServlet.service(ProxyServlet.java:68)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at org.wso2.carbon.tomcat.ext.servlet.DelegationServlet.service(DelegationServlet.java:68)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:950)
at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:57)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:421)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1074)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:611)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1739)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1698)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:745)
TID: [-1234] [] [2016-06-07 13:01:40,447] INFO {org.wso2.carbon.core.init.CarbonServerManager} - Shutdown hook triggered.... {org.wso2.carbon.core.init.CarbonServerManager}
TID: [-1234] [] [2016-06-07 13:01:40,464] INFO {org.wso2.carbon.core.init.CarbonServerManager} - Gracefully shutting down WSO2 Enterprise Service Bus... {org.wso2.carbon.core.init.CarbonServerManager}
TID: [-1234] [] [2016-06-07 13:01:40,477] INFO {org.wso2.carbon.core.ServerManagement} - Starting to switch to maintenance mode... {org.wso2.carbon.core.ServerManagement}
TID: [-1234] [] [2016-06-07 13:01:40,481] INFO {org.apache.axis2.transport.jms.JMSListener} - JMS Listener Shutdown {org.apache.axis2.transport.jms.JMSListener}
Following is my configuration file:
Configuration File:
input {
  stdin {}
  file {
    path => "C:\WSO2Environment\wso2esb-4.9.0\repository\logs\wso2carbon.log"
    type => "wso2"
    start_position => "beginning"
    codec => multiline {
      pattern => "(^\s*at .+)|^(?!TID).*$"
      negate => false
      what => "previous"
    }
  }
}
filter {
  if [type] == "wso2" {
    grok {
      match => [ "message", "TID:%{SPACE}\[%{INT:SourceSystemId}\]%{SPACE}\[%{DATA:ProcessName}\]%{SPACE}\[%{TIMESTAMP_ISO8601:TimeStamp}\]%{SPACE}%{LOGLEVEL:MessageType}%{SPACE}{%{JAVACLASS:MessageTitle}}%{SPACE}-%{SPACE}%{GREEDYDATA:Message}" ]
      add_tag => [ "grokked" ]
    }
    mutate {
      gsub => [
        "TimeStamp", "\s", "T",
        "TimeStamp", ",", "."
      ]
    }
  }
  if !( "_grokparsefailure" in [tags] ) {
    grok {
      match => [ "message", "%{GREEDYDATA:StackTrace}" ]
      add_tag => [ "grokked" ]
    }
    date {
      match => [ "timestamp", "yyyy MMM dd HH:mm:ss:SSS" ]
      target => "TimeStamp"
      timezone => "UTC"
    }
  }
  if ( "multiline" in [tags] ) {
    grok {
      match => [ "message", "%{GREEDYDATA:StackTrace}" ]
      add_tag => [ "multiline" ]
      tag_on_failure => [ "multiline" ]
    }
    date {
      match => [ "timestamp", "yyyy MMM dd HH:mm:ss:SSS" ]
      target => "TimeStamp"
    }
  }
}
output {
  stdout { }
  http {
    url => "http://localhost:8086/messages"
    http_method => "post"
    format => "json"
    mapping => ["TimeStamp","%{TimeStamp}","MessageType","%{MessageType}","MessageTitle","%{MessageTitle}","Message","%{log_EventMessage}","SourceSystemId","%{SourceSystemId}","StackTrace","%{log_StackTrace}"]
  }
}
Problem Statement:
The configuration file is working correctly and sending all the log entries to the API, but I only want to send error logs. So I want to add a check on "MessageType", which holds the log level: only if its value is "ERROR" should the message be sent through to the API; otherwise Logstash should discard it.

In your Logstash configuration, you can add a tag in the filter section based on your if condition. Then, in the output, add an if statement that sends the event only when the error tag is present and ignores it otherwise.
After the following if statement:
if [type] == "wso2" {
grok {
match => [ "message", "TID:%{SPACE}\[%{INT:SourceSystemId}\]%{SPACE}\[%{DATA:ProcessName}\]%{SPACE}\[%{TIMESTAMP_ISO8601:TimeStamp}\]%{SPACE}%{LOGLEVEL:MessageType}%{SPACE}{%{JAVACLASS:MessageTitle}}%{SPACE}-%{SPACE}%{GREEDYDATA:Message}" ]
add_tag => [ "grokked" ]
}
mutate {
gsub => [
"TimeStamp", "\s", "T",
"TimeStamp", ",", "."
]
}
}
Add the following statement in your filter:
if "grokked" in [tags] {
grok {
match => ["MessageType", "ERROR"]
add_tag => [ "loglevelerror" ]
}
}
Then make the following changes in your output:
output {
  if "loglevelerror" in [tags] {
    stdout { }
    http {
      url => "http://localhost:8086/messages"
      http_method => "post"
      format => "json"
      mapping => ["TimeStamp","%{TimeStamp}","MessageType","%{MessageType}","MessageTitle","%{MessageTitle}","Message","%{log_EventMessage}","SourceSystemId","%{SourceSystemId}","StackTrace","%{log_StackTrace}"]
    }
  }
}
I tested it out on my machine using stdout. It works fine. Hope it helps!
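As an aside (an alternative sketch, not part of the tested answer above): since the first grok already extracts the log level into MessageType, the extra grok-and-tag step can be skipped by comparing the field directly in the output conditional:
output {
  # Assumes MessageType was populated by the grok in the "wso2" branch of the filter
  if [MessageType] == "ERROR" {
    stdout { }
    http {
      url => "http://localhost:8086/messages"
      http_method => "post"
      format => "json"
      mapping => ["TimeStamp","%{TimeStamp}","MessageType","%{MessageType}","MessageTitle","%{MessageTitle}","Message","%{log_EventMessage}","SourceSystemId","%{SourceSystemId}","StackTrace","%{log_StackTrace}"]
    }
  }
}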

Related

How to parse the custom logs

I am new to Logstash. Can someone help me with a grok filter to parse data that contains multiple newline characters in the same log entry?
2018-10-08 13:38:34,280 [https-openssl-apr-0:0:0:0:0:0:0:0-8443-exec-424] INFO Rq:144839 ControllerInterceptor - afterCompletion()
url: GET::/system/data/connect/service
response: 200
elapsed: 10 ms
1. Using Grok
http://grokdebug.herokuapp.com/
[First Input Box] INPUT
2018-10-08 13:38:34,280 [https-openssl-apr-0:0:0:0:0:0:0:0-8443-exec-424] INFO Rq:144839 ControllerInterceptor - afterCompletion()
response: 200
elapsed: 10 ms
[Second Input Box] Grok pattern ==> %{UPTONEWLINE:Part1}%{UPTONEWLINE:Part2}
Check "Add custom patterns" and add the following line:
UPTONEWLINE (?:(.+?)(\n))
OUTPUT
{
  "Part1": [
    [
      "2018-10-08 13:38:34,280 [https-openssl-apr-0:0:0:0:0:0:0:0-8443-exec-424] INFO Rq:144839 ControllerInterceptor - afterCompletion()\n"
    ]
  ],
  "Part2": [
    [
      "response: 200\n"
    ]
  ]
}
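To use that custom pattern inside a Logstash pipeline rather than the debugger, the grok filter's pattern_definitions option can hold the definition inline (a minimal sketch, assuming the raw text arrives in the message field):
filter {
  grok {
    # Inline definition of the custom UPTONEWLINE pattern shown above
    pattern_definitions => { "UPTONEWLINE" => "(?:(.+?)(\n))" }
    match => { "message" => "%{UPTONEWLINE:Part1}%{UPTONEWLINE:Part2}" }
  }
}
Alternatively, the same pattern line can be placed in a file referenced by patterns_dir.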
2. Without using Grok filter - Logstash configuration file
INPUT
2018-10-08 13:38:34,280 [https-openssl-apr-0:0:0:0:0:0:0:0-8443-exec-424] INFO Rq:144839 ControllerInterceptor - afterCompletion()\nresponse: 200\nelapsed: 10 ms
Logstash Config File
input {
  http {
    port => 5043
    response_headers => {
      "Access-Control-Allow-Origin" => "*"
      "Content-Type" => "text/plain"
      "Access-Control-Allow-Headers" => "Origin, X-Requested-With, Content-Type, Accept"
    }
  }
}
filter {
  mutate {
    split => ['message','\n']
    add_field => {
      "Part1" => "%{[message][0]}"
      "Part2" => "%{[message][1]}"
      "Part3" => "%{[message][2]}"
    }
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
OUTPUT
{
  "host" => "0:0:0:0:0:0:0:1",
  "@version" => "1",
  "message" => [
    [0] "2018-10-08 13:38:34,280 [https-openssl-apr-0:0:0:0:0:0:0:0-8443-exec-424] INFO Rq:144839 ControllerInterceptor - afterCompletion()",
    [1] "response: 200",
    [2] "elapsed: 10 ms"
  ],
  "Part1" => "2018-10-08 13:38:34,280 [https-openssl-apr-0:0:0:0:0:0:0:0-8443-exec-424] INFO Rq:144839 ControllerInterceptor - afterCompletion()",
  "Part2" => "response: 200",
  "Part3" => "elapsed: 10 ms",
  "@timestamp" => 2018-10-09T05:27:41.695Z
}

Error in logstash while passing if statement

I am new to Logstash. When I try to put an if statement in the Logstash config file, it gives me an error.
The if statement used is:
if {await} > 10
{ mutate {add_field => {"RULE_DATA" => "Value is above threshold"}
add_field => {"ACTUAL_DATA" => "%{await}"}
}
}
The error faced is given below:
[ERROR] 2018-07-20 16:52:21.327 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] agent - Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of #, => at line 18, column 10 (byte 729) after filter{\n grok {\n patterns_dir => [\"./patterns\"]\n match => { \"message\" => [\"%{TIME:time}%{SPACE}%{USERNAME:device}%{SPACE}%{USERNAME:tps}%{SPACE}%{SYSLOGPROG:rd_sec/s}%{SPACE}%{SYSLOGPROG:wr_sec/s}%{SPACE}%{SYSLOGPROG:avgrq-sz}%{SPACE}%{SYSLOGPROG:avgqu-sz}%{SPACE}%{NUMBER:await}%{SPACE}%{SYSLOGPROG:svctm}%{SPACE}%{SYSLOGPROG:%util}\"]\n }\n overwrite => [\"message\"]\n } \n if \"_grokparsefailure\" in [tags] {\n drop { }\n }\nif {await", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:42:in compile_imperative'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:50:incompile_graph'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:12:in block in compile_sources'", "org/jruby/RubyArray.java:2486:inmap'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:11:in compile_sources'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:51:ininitialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:169:in initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:40:inexecute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:315:in block in converge_state'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:141:inwith_pipelines'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:312:in block in converge_state'", "org/jruby/RubyArray.java:1734:ineach'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:299:in converge_state'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:166:inblock in converge_state_and_update'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:141:in with_pipelines'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:164:inconverge_state_and_update'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:90:in execute'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:348:inblock in execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:24:in `block in initialize'"]}
Please suggest what has caused this error.
You have a syntax error. If you have a field named await (for example, one produced by your grok parse), it must be referenced with square brackets. Use the below:
if [await] > 10 {
  mutate {
    add_field => {"RULE_DATA" => "Value is above threshold"}
    add_field => {"ACTUAL_DATA" => "%{await}"}
  }
}
Logstash conditional expressions enclose field references in [], not {}. Have a look at the following example from the conditionals documentation:
filter {
  if [action] == "login" {
    mutate { remove_field => "secret" }
  }
}
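One more note (an addition, not from the answer above): grok captures such as %{NUMBER:await} are stored as strings unless a type suffix or a mutate convert is applied, so the numeric comparison may be more reliable after an explicit conversion. A sketch:
filter {
  # Ensure await is numeric before comparing it against 10
  mutate { convert => { "await" => "float" } }
  if [await] > 10 {
    mutate {
      add_field => {"RULE_DATA" => "Value is above threshold"}
      add_field => {"ACTUAL_DATA" => "%{await}"}
    }
  }
}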

Logstash: grok expression for multiline data

I am new to the ELK stack. I am trying to write a grok expression for the following log statement:
2017-10-26 19:20:28.538 ERROR --- [logAppenderService] [Serv01] [restartedMain] ns.pcs.log.appender.LogAppender : [1234] doStuff Some statement here - {}
java.lang.Exception: Hello World
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.boot.devtools.restart.RestartLauncher.run(RestartLauncher.java:49)
I have written the following logstash configuration:
input {
  kafka {
    type => "mylog"
    topic_id => 'mylog'
  }
}
filter {
  if [type] == "mylog" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} \[%{DATA:serviceName}] \[%{DATA:nodeName}] \[%{DATA:trName}] %{NOTSPACE:className} %{NOTSPACE:':'} \[%{DATA:refName}] %{GREEDYDATA:msg}" }
    }
  }
}
output {
  if [type] == "mylog" {
    elasticsearch {
      hosts => ["101.18.19.89:9200"]
      index => "logstash-%{+YYYY-MM-dd}"
    }
  }
  stdout {
    codec => rubydebug
  }
}
When I try to run it, I get a JSON parse exception. I am not sure if I am missing something. I am really stuck at this stage.
Your problem is that the input does not match your pattern.
Please try this:
%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} --- \[%{DATA:serviceName}\] \[%{DATA:nodeName}\] \[%{DATA:trName}\] %{NOTSPACE:className} %{NOTSPACE:':'} \[%{DATA:refName}] %{GREEDYDATA:msg}
You are missing the --- and the escaped ] in your pattern.
If it still doesn't match, please check whether multiline content is being sent into your log.
You can add a multiline codec to your input if needed:
codec => multiline {
  pattern => "^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}[\.,][0-9]{3,7} "
  negate => true
  what => "previous"
}
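For reference, a sketch of where that codec would sit in the kafka input from the question (only a sketch; multiline codecs can misbehave on inputs that consume multiple partitions or threads, so stack-trace lines may be better joined closer to the source):
input {
  kafka {
    type => "mylog"
    topic_id => 'mylog'
    codec => multiline {
      # Lines that do not start with a timestamp are appended to the previous event
      pattern => "^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}[\.,][0-9]{3,7} "
      negate => true
      what => "previous"
    }
  }
}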

Get JSON from file

Logstash 5.2.1
I can't read JSON documents from a local file using Logstash. No documents appear in stdout.
I run Logstash like this:
./logstash-5.2.1/bin/logstash -f logstash-5.2.1/config/shakespeare.conf --config.reload.automatic
Logstash config:
input {
  file {
    path => "/home/trex/Development/Shipping_Data_To_ES/shakespeare.json"
    codec => json {}
    start_position => "beginning"
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
Also, I tried with charset:
...
codec => json {
  charset => "UTF-8"
}
...
Also, I tried with/without json codec in the input and with filter:
...
filter {
  json {
    source => "message"
  }
}
...
Logstash console after start:
[2017-02-28T11:37:29,947][WARN ][logstash.agent ] fetched new config for pipeline. upgrading.. {:pipeline=>"main", :config=>"input {\n file {\n path => \"/home/trex/Development/Shipping_Data_To_ES/shakespeare.json\"\n codec => json {\n charset => \"UTF-8\"\n }\n start_position => \"beginning\"\n }\n}\n#filter {\n# json {\n# source => \"message\"\n# }\n#}\noutput {\n stdout {\n codec => rubydebug\n }\n}\n\n"}
[2017-02-28T11:37:29,951][WARN ][logstash.agent ] stopping pipeline {:id=>"main"}
[2017-02-28T11:37:30,434][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
[2017-02-28T11:37:30,446][INFO ][logstash.pipeline ] Pipeline main started
^C[2017-02-28T11:40:55,039][WARN ][logstash.runner ] SIGINT received. Shutting down the agent.
[2017-02-28T11:40:55,049][WARN ][logstash.agent ] stopping pipeline {:id=>"main"}
^C[2017-02-28T11:40:55,475][FATAL][logstash.runner ] SIGINT received. Terminating immediately..
The signal INT is in use by the JVM and will not work correctly on this platform
[trex@Latitude-E5510 Shipping_Data_To_ES]$ ./logstash-5.2.1/bin/logstash -f logstash-5.2.1/config/shakespeare.conf --config.test_and_exit
^C[trex@Latitude-E5510 Shipping_Data_To_ES]$ ./logstash-5.2.1/bin/logstash -f logstash-5.2.1/config/shakespeare.conf --confireload.automatic
^C[trex@Latitude-E5510 Shipping_Data_To_ES]$ ./logstash-5.2.1/bin/logstash -f logstash-5.2.1/config/shakespeare.conf --config.reload.aumatic
Sending Logstash's logs to /home/trex/Development/Shipping_Data_To_ES/logstash-5.2.1/logs which is now configured via log4j2.properties
[2017-02-28T11:45:48,752][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
[2017-02-28T11:45:48,785][INFO ][logstash.pipeline ] Pipeline main started
[2017-02-28T11:45:48,875][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
Why doesn't Logstash put my JSON documents in stdout?
Did you try including the file type within your file input:
input {
  file {
    path => "/home/trex/Development/Shipping_Data_To_ES/shakespeare.json"
    type => "json"          # <-- add this
    # codec => json {}      # <-- for the moment I'll comment this out
    start_position => "beginning"
  }
}
And then have your filter as such:
filter {
  json {
    source => "message"
  }
}
OR, if you're going with the codec plugin, make sure to have the synopsis as such within your input:
codec => "json"
OR you might want to try out the json_lines plugin as well. Hope this thread comes in handy.
It appears that sincedb_path is important for reading the JSON file here. The file input keeps a sincedb to remember how far it has read each file, so it can resume from that position if the import is interrupted; it also means a file Logstash thinks it has already read will not be re-read from the beginning. I was able to import the JSON only after adding this option. I don't need any position tracking, so I just set it to /dev/null and it works.
The basic working Logstash configuration:
input {
  file {
    path => ["/home/trex/Development/Shipping_Data_To_ES/shakespeare.json"]
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
output {
  stdout {
    codec => json_lines
  }
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "shakespeare"
  }
}
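If the file had already been read once before sincedb_path was set, the stale sincedb entry is another reason nothing shows up. A quick way to take file tracking out of the picture entirely while testing (a sketch; the config file name is made up, and it assumes one JSON document per line) is to pipe the file through a stdin input and parse each line with a json filter:
input {
  stdin { }
}
filter {
  # Each incoming line is a JSON document held in the message field
  json {
    source => "message"
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
Run it with something like:
cat /home/trex/Development/Shipping_Data_To_ES/shakespeare.json | ./logstash-5.2.1/bin/logstash -f test-stdin.conf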

statsd not working in my logstash

The config file:
# input are the kafka messages
input
{
  kafka
  {
    topic_id => 'test2'
  }
}
# Try to match sensor info
filter
{
  json { source => "message" }
}
# StatsD and stdout output
output
{
  stdout
  {
    codec => line
    {
      format => "%{[testmessage][0][key]}"
    }
  }
  stdout { codec => rubydebug }
  statsd
  {
    host => "localhost"
    port => 8125
    increment => ["test.%{[testmessage][0][key]}"]
  }
}
Input kafka message:
{"testmessage":[{"key":"key-1234"}]}
Output:
key-1234
{
  "testmessage" => [
    [0] {
      "key" => "key-1234"
    }
  ],
  "@version" => "1",
  "@timestamp" => "2015-11-09T20:11:52.374Z"
}
Log:
{:timestamp=>"2015-11-09T20:29:03.562000+0000", :message=>"Done running kafka input", :level=>:info}
{:timestamp=>"2015-11-09T20:29:03.563000+0000", :message=>"Plugin is finished", :plugin=><LogStash::Outputs::Stdout codec=><LogStash::Codecs::Line format=>"%{[testmessage][0][key]}", charset=>"UTF-8">, workers=>1>, :level=>:info}
{:timestamp=>"2015-11-09T20:29:03.564000+0000", :message=>"Plugin is finished", :plugin=><LogStash::Outputs::Statsd increment=>["test1.test", "test.%{[testmessage][0][key]}"], codec=><LogStash::Codecs::Plain charset=>"UTF-8">, workers=>1, host=>"localhost", port=>8125, namespace=>"logstash", sender=>"%{host}", sample_rate=>1, debug=>false>, :level=>:info}
{:timestamp=>"2015-11-09T20:29:03.564000+0000", :message=>"Pipeline shutdown complete.", :level=>:info}
Very weird that statsd does not work in my Logstash. I have looked at lots of examples via Google, but have no idea why. Any suggestions are welcome. Thanks.
I found the reason: logstash-output-statsd uses UDP by default, but my statsd server is set to use TCP.
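To confirm from the Logstash side that UDP statsd packets are actually being emitted, one option (a sketch, assuming the OpenBSD flavour of netcat is available) is to stop the statsd server briefly and listen on the port yourself:
# Listen for UDP datagrams on the statsd port; counter lines ending in |c
# should appear whenever events flow through the pipeline
nc -k -l -u 8125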
