statsd not working in my Logstash - logstash

The config file:
# input are the kafka messages
input
{
  kafka
  {
    topic_id => 'test2'
  }
}
# Try to match sensor info
filter
{
  json { source => "message" }
}
# StatsD and stdout output
output
{
  stdout
  {
    codec => line
    {
      format => "%{[testmessage][0][key]}"
    }
  }
  stdout { codec => rubydebug }
  statsd
  {
    host => "localhost"
    port => 8125
    increment => ["test.%{[testmessage][0][key]}"]
  }
}
Input kafka message:
{"testmessage":[{"key":"key-1234"}]}
Output:
key-1234
{
"testmessage" => [
[0] {
"key" => "key-1234"
}
],
"#version" => "1",
"#timestamp" => "2015-11-09T20:11:52.374Z"
}
Log:
{:timestamp=>"2015-11-09T20:29:03.562000+0000", :message=>"Done running kafka input", :level=>:info}
{:timestamp=>"2015-11-09T20:29:03.563000+0000", :message=>"Plugin is finished", :plugin=><LogStash::Outputs::Stdout codec=><LogStash::Codecs::Line format=>"%{[testmessage][0][key]}", charset=>"UTF-8">, workers=>1>, :level=>:info}
{:timestamp=>"2015-11-09T20:29:03.564000+0000", :message=>"Plugin is finished", :plugin=><LogStash::Outputs::Statsd increment=>["test1.test", "test.%{[testmessage][0][key]}"], codec=><LogStash::Codecs::Plain charset=>"UTF-8">, workers=>1, host=>"localhost", port=>8125, namespace=>"logstash", sender=>"%{host}", sample_rate=>1, debug=>false>, :level=>:info}
{:timestamp=>"2015-11-09T20:29:03.564000+0000", :message=>"Pipeline shutdown complete.", :level=>:info}
Very weird that statsd does not work in my Logstash. I have looked through lots of examples found via Google, but I still have no idea why. Any suggestions are welcome. Thanks.

I found the reason: logstash-output-statsd uses UDP by default, but my statsd server is set to use TCP.
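For anyone hitting the same thing, one quick way to confirm the transport mismatch (a minimal sketch, assuming the statsd server has been switched to listen for UDP on port 8125) is to push a counter increment by hand in the statsd line protocol and see whether the metric shows up; the "logstash." prefix matches the namespace shown in the log above:

# send a single counter increment over UDP to statsd
echo "logstash.test.key-1234:1|c" | nc -u -w1 localhost 8125

If the counter only arrives when statsd listens on UDP, the mismatch is confirmed and the fix belongs on the statsd side.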

Related

Logstash receives a strange "<133>" code at the start of a received TrendMicro log

My Logstash server is CentOS Linux release 8.1.1911.
logstash.version"=>"7.7.0"
I have a capture of what I received on UDP port 5514 with:
nc -lvu 5514 -o log.txt
The content of log.txt
<133>Jun 05 09:23:35 TMCM:EVT_URL_CONTENT_FILTERING Security product="OfficeScan" Security product node="N/A" Security product IP="xx.xx.xx.xx;xxxx::xxxx:xxxx:xxxx:4490" Event time="4/25/2020 11:46:01 PM (UTC)" URL="http://xxxxxxx.xxxxxxx.intranet/SMS_MP/.sms_pol?DEP-Z0120115-ScopeId_B14503FF-F7AA-49EC-A38C-F50D813EEC6E/Application_57a673e1-3e65-4f1c-8ce2-0f4cc1b38acc.SHA256:5EF20484EEC38EA203D7A885EAA48BE2DFDC4F130ED8BF5BEA333378875B2516" Source IP="" Destination IP="yyy.yyy.yyy.yyy" Policy rule="" Blocking type="Web reputation" Domain="xxxx-xxxxx" Event time (local)="4/25/2020 7:46:01 PM" Client host name="N/A" Reputation Score="81"`
myfilter.conf
input
{
udp
{
port => 5514
type => syslog
}
}
filter
{
grok
{
match =>
{ "message" => "(?<user_agent>[^>]*)(?<user_agent>[^:]*)%{POSINT}\s%{WORD:logfrom}\s%{WORD:logtag}\:\s%{NOTSPACE:eventname}\s([^=]*)\=%{QUOTEDSTRING:security_product} ([^=]*)\=%{QUOTEDSTRING:security_prod_node}\s([^=]*)\=\"%{IPV4:security_prod_ip}([^=]*)\=\"(?<agent_detected_time>%{MONTHNUM}\/%{MONTHDAY}\/%{YEAR} %{TIME}\s(?:AM|am|PM|pm)\s*\s\(%{TZ:tz}\)).*URL\=\"%{URI:url}\" ([^=]*)\=%{QUOTEDSTRING:src_ip}\s([^=]*)\=\"%{IPV4:dest_ipv4}\"\s([^=]*)\=%{QUOTEDSTRING:policy_rule} ([^=]*)\=%{QUOTEDSTRING:bloking_type} ([^=]*)\=%{QUOTEDSTRING:domain} ([^=]*)\=\"(?<server_alert_time>%{MONTHNUM}\/%{MONTHDAY}\/%{YEAR} %{TIME}\s(?:AM|am|PM|pm))\"\s([^=]*)\=%{QUOTEDSTRING:client_hostname} ([^=]*)\=\"%{BASE10NUM:reputation_score}/?"
}
}
}
output
{
stdout { codec => rubydebug }
}
The example of the output of logstash:
[2020-06-08T13:11:02,253][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/awesome_print-1.7.0/lib/awesome_print/formatters/base_formatter.rb:31: warning: constant ::Fixnum is deprecated
{
"type" => "syslog",
"#timestamp" => 2020-06-08T18:06:39.090Z,
"message" => "<133>Jun 08 14:06:38 TMCM:EVT_URL_CONTENT_FILTERING Security product=\"OfficeScan\" Security product node=\"N/A\" Security product IP=\"xx.xx.xx.xx;xxxx::xxxx:xxx:xxxx:4490\" Event time=\"4/26/2020 7:33:36 AM (UTC)\" URL=\"http://blabnlabla.bla-blabla.intranet/SMS_MP/.sms_pol?DEP-Z0120105-ScopeId_B14503FF-F7AA-49EC-A38C-F50D813EEC6E/Application_2be50193-9121-4239-a70f-ba06ad7bbfbd.SHA256:6FF12991BBA769F9C15F7E1FA3E3058E22B4D918F6C5659CF7B976059082510D\" Source IP=\"\" Destination IP=\"xxx.xx.xxx.xx\" Policy rule=\"\" Blocking type=\"Web reputation\" Domain=\"bla-blabla\" Event time (local)=\"4/26/2020 3:33:36 AM\" Client host name=\"N/A\" Reputation Score=\"81\"",
"#version" => "1",
"host" => "xx.xxx.xx.xx",
"tags" => [
[0] "_grokparsefailure"
]
}
I have also tried "\<133\>" but it still appears. I have no idea what this <133> is.
P.S. I have been learning this on my own for the last two weeks.
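For context, the leading <133> is the syslog priority (PRI) value that prefixes every syslog message (facility × 8 + severity; 133 works out to local0/notice), so it is expected data rather than noise, and the grok pattern has to consume it. A minimal sketch of stripping it before the rest of the parsing, using stock grok patterns (the field names syslog_pri and syslog_message are my own, not from the original config):

filter
{
  grok
  {
    # consume the "<133>" priority and keep the remainder for further parsing
    match => { "message" => "^<%{NONNEGINT:syslog_pri}>%{GREEDYDATA:syslog_message}" }
  }
}

If the facility and severity matter later, the syslog_pri filter plugin can decode the numeric value into separate fields.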

Parsing json using logstash (ELK stack)

I have created a simple JSON file like the one below:
[
  {
    "Name": "vishnu",
    "ID": 1
  },
  {
    "Name": "vishnu",
    "ID": 1
  }
]
I am holding these values in a file named simple.txt. Then I used Filebeat to watch the file and send new updates to port 5043; on the other side I started the Logstash service, which listens on this port in order to parse the JSON and pass it to Elasticsearch.
Logstash is not processing the JSON values; it hangs in the middle.
Logstash config:
input {
  beats {
    port => 5043
    host => "0.0.0.0"
    client_inactivity_timeout => 3600
  }
}
filter {
  json {
    source => "message"
  }
}
output {
  stdout { codec => rubydebug }
}
filebeat config:
filebeat.prospectors:
- input_type: log
  paths:
    - filepath
output.logstash:
  hosts: ["localhost:5043"]
Logstash output
Sending Logstash's logs to D:/elasticdb/logstash-5.6.3/logstash-5.6.3/logs which is now configured via log4j2.properties
[2017-10-31T19:01:17,574][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"D:/elasticdb/logstash-5.6.3/logstash-5.6.3/modules/fb_apache/configuration"}
[2017-10-31T19:01:17,578][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"D:/elasticdb/logstash-5.6.3/logstash-5.6.3/modules/netflow/configuration"}
[2017-10-31T19:01:18,301][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
[2017-10-31T19:01:18,388][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5043"}
[2017-10-31T19:01:18,573][INFO ][logstash.pipeline ] Pipeline main started
[2017-10-31T19:01:18,591][INFO ][org.logstash.beats.Server] Starting server on port: 5043
[2017-10-31T19:01:18,697][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
Every time, I run Logstash using the command
logstash -f logstash.conf
and since no JSON is being processed, I stop the service by pressing Ctrl+C.
Please help me find the solution. Thanks in advance.
I finally ended up with a config like this. It works for me.
input
{
  file
  {
    codec => multiline
    {
      pattern => '^\{'
      negate => true
      what => previous
    }
    path => "D:\elasticdb\logstash-tutorial.log\Test.txt"
    start_position => "beginning"
    sincedb_path => "D:\elasticdb\logstash-tutorial.log\null"
    exclude => "*.gz"
  }
}
filter {
  json {
    source => "message"
    remove_field => ["path","@timestamp","@version","host","message"]
  }
}
output {
  elasticsearch {
    hosts => ["localhost"]
    index => "logs"
    "document_type" => "json_from_logstash_attempt3"
  }
  stdout{}
}
Json format:
{"name":"sachin","ID":"1","TS":1351146569}
{"name":"sachin","ID":"1","TS":1351146569}
{"name":"sachin","ID":"1","TS":1351146569}

Get JSON from file

Logstash 5.2.1
I can't read JSON documents from a local file using Logstash. No documents appear on stdout.
I run Logstash like this:
./logstash-5.2.1/bin/logstash -f logstash-5.2.1/config/shakespeare.conf --config.reload.automatic
Logstash config:
input {
  file {
    path => "/home/trex/Development/Shipping_Data_To_ES/shakespeare.json"
    codec => json {}
    start_position => "beginning"
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
Also, I tried with charset:
...
codec => json {
  charset => "UTF-8"
}
...
Also, I tried with/without json codec in the input and with filter:
...
filter {
  json {
    source => "message"
  }
}
...
Logstash console after start:
[2017-02-28T11:37:29,947][WARN ][logstash.agent ] fetched new config for pipeline. upgrading.. {:pipeline=>"main", :config=>"input {\n file {\n path => \"/home/trex/Development/Shipping_Data_To_ES/shakespeare.json\"\n codec => json {\n charset => \"UTF-8\"\n }\n start_position => \"beginning\"\n }\n}\n#filter {\n# json {\n# source => \"message\"\n# }\n#}\noutput {\n stdout {\n codec => rubydebug\n }\n}\n\n"}
[2017-02-28T11:37:29,951][WARN ][logstash.agent ] stopping pipeline {:id=>"main"}
[2017-02-28T11:37:30,434][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
[2017-02-28T11:37:30,446][INFO ][logstash.pipeline ] Pipeline main started
^C[2017-02-28T11:40:55,039][WARN ][logstash.runner ] SIGINT received. Shutting down the agent.
[2017-02-28T11:40:55,049][WARN ][logstash.agent ] stopping pipeline {:id=>"main"}
^C[2017-02-28T11:40:55,475][FATAL][logstash.runner ] SIGINT received. Terminating immediately..
The signal INT is in use by the JVM and will not work correctly on this platform
[trex@Latitude-E5510 Shipping_Data_To_ES]$ ./logstash-5.2.1/bin/logstash -f logstash-5.2.1/config/shakespeare.conf --config.test_and_exit
^C[trex@Latitude-E5510 Shipping_Data_To_ES]$ ./logstash-5.2.1/bin/logstash -f logstash-5.2.1/config/shakespeare.conf --confireload.automatic
^C[trex@Latitude-E5510 Shipping_Data_To_ES]$ ./logstash-5.2.1/bin/logstash -f logstash-5.2.1/config/shakespeare.conf --config.reload.aumatic
Sending Logstash's logs to /home/trex/Development/Shipping_Data_To_ES/logstash-5.2.1/logs which is now configured via log4j2.properties
[2017-02-28T11:45:48,752][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
[2017-02-28T11:45:48,785][INFO ][logstash.pipeline ] Pipeline main started
[2017-02-28T11:45:48,875][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
Why doesn't Logstash put my JSON documents on stdout?
Did you try including the file type within your file input:
input {
  file {
    path => "/home/trex/Development/Shipping_Data_To_ES/shakespeare.json"
    type => "json"          # <-- add this
    # codec => json {}      # <-- for the moment I'll comment this out
    start_position => "beginning"
  }
}
And then have your filter as such:
filter {
  json {
    source => "message"
  }
}
Or, if you're going with the codec plugin, make sure the option is written like this within your input:
codec => "json"
Or you might want to try the json_lines plugin as well. Hope this thread comes in handy.
It appears that sincedb_path is important for reading JSON files: I was able to import the JSON only after adding this option. The sincedb keeps track of the current position in the file so that Logstash can resume from it if the import is interrupted. I don't need any position tracking, so I just set it to /dev/null and it works.
The basic working Logstash configuration:
input {
  file {
    path => ["/home/trex/Development/Shipping_Data_To_ES/shakespeare.json"]
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
output {
  stdout {
    codec => json_lines
  }
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "shakespeare"
  }
}

Logstash - Data from Kafka to ES

Using Logstash 5.0.0, taking a Kafka source as the input and producing the output in Elasticsearch (Elasticsearch version 5.0.0).
Logstash conf:
input {
  kafka {
    bootstrap_servers => "XXX.XXX.XX.XXX:9092","XXX.XXX.XX.XXX:9092","XXX.XXX.XX.XXX:9092"
    topics => ["a-data","f-data","n-data"]
    group_id => "sound"
    auto_offset_reset => "earliest"
    consumer_threads => 2
  }
}
filter {
  json {
    source => "message"
  }
}
output {
  elasticsearch {
    hosts => [ "XXX.XXX.XX.XXX:9200" ]
  }
}
When I run this configuration, I get the following error.
$ ./logstash -f sound.conf
Sending Logstash logs to /logstash-5.0.0/logs which is now configured via log4j2.properties.
[2017-01-17T10:53:29,273][ERROR][logstash.agent ] fetched an invalid config {:config=>"input{\nkafka{\nbootstrap_servers => \"XX.XXX.XXX.XX:9092\",\"XXX.XXX.XX.XXX:9092\",\"XXX.XXX.XX.XXX:9092\"\ntopics => [\"a-data\",\"f-data\",\"n-data\"]\ngroup_id => \"sound\"\nauto_offset_reset => \"earliest\"\nconsumer_threads => 2\n}\n}\nfilter{\njson{\nsource => \"message\"\n}\n}\noutput {\nelasticsearch {\nhosts => [ \"XX.XX.XXX.XX:9200\" ]\n}\n}\n\n", :reason=>"Expected one of #, {, } at line 3, column 40 (byte 54) after input{\nkafka{\nbootstrap_servers => \"XX.XX.XXX.XX:9092\""}
Can anyone help me with this configuration?
Shouldn't your topic be topics, which is an array, where you've inserted the values as a hash:
topics => ["a-data","f-data","n-data"] <-- try changing this line

logstash hangs with error sized_queue_timeout

We have a Logstash pipeline in which numerous logstash-forwarders forward logs to a single Logstash instance. Many times we have observed that Logstash hangs with the error below:
[2016-07-22 03:01:12.619] WARN -- Concurrent::Condition: [DEPRECATED] Will be replaced with Synchronization::Object in v1.0.
called on: /opt/logstash-1.5.3/vendor/bundle/jruby/1.9/gems/logstash-input-lumberjack-1.0.2/lib/logstash/sized_queue_timeout.rb:16:in `initialize'
Exception in thread ">output" java.lang.UnsupportedOperationException
at java.lang.Thread.stop(Thread.java:869)
at org.jruby.RubyThread.exceptionRaised(RubyThread.java:1221)
at org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:112)
at java.lang.Thread.run(Thread.java:745)
Our Logstash config looks like this:
input {
  lumberjack {
    port => 6782
    codec => json {}
    ssl_certificate => "/opt/logstash-1.5.3/cert/logstash-forwarder.crt"
    ssl_key => "/opt/logstash-1.5.3/cert/logstash-forwarder.key"
    type => "lumberjack"
  }
}
filter {
  if [env] != "prod" and [env] != "common" {
    drop {}
  }
  if [message] =~ /^\s*$/ {
    drop { }
  }
}
output {
  if "_jsonparsefailure" in [tags] {
    file {
      path => "/var/log/shop/parse_error/%{env}/%{app}/%{app}_%{host}_%{+YYYY-MM-dd}.log"
    }
  } else {
    kafka {
      broker_list => ["kafka:9092"]
      topic_id => "logstash_logs2"
    }
  }
}
On restarting Logstash it starts working again. Can someone let me know why this problem occurs and how we can get around it without restarting Logstash every time?
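Not an authoritative diagnosis, but a common cause of this symptom in Logstash 1.5 is back-pressure: when the kafka output blocks, the lumberjack input's internal sized queue fills up and the queue-timeout/circuit-breaker code seen in the stack trace trips, which can leave the pipeline wedged until a restart. A hedged sketch, assuming the installed logstash-input-lumberjack version exposes the congestion_threshold option (worth verifying for plugin 1.0.2), is to give the input more headroom before it trips:

input {
  lumberjack {
    port => 6782
    codec => json {}
    ssl_certificate => "/opt/logstash-1.5.3/cert/logstash-forwarder.crt"
    ssl_key => "/opt/logstash-1.5.3/cert/logstash-forwarder.key"
    type => "lumberjack"
    # assumption: seconds to tolerate a blocked pipeline before refusing new connections
    congestion_threshold => 30
  }
}

Fixing whatever makes the kafka output stall (broker availability, topic configuration) is still the underlying cure; the threshold only changes how the input reacts to the stall.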
