Logstash stopping {:plugin=>"LogStash::Inputs::Http"} - logstash

I'm trying to run Logstash in an EC2 Ubuntu instance,
but when I run:
logstash-5.2.0/bin/logstash -f logstash.conf --debug
I get:
Starting puma
Trying to start WebServer {:port=>9600}
start
Trying to start WebServer {:port=>9601}
[api-service] start
Successfully started Logstash API endpoint {:port=>9601}
PeriodicPoller: Stopping
stopping pipeline {:id=>"main"}
Closing inputs
stopping {:plugin=>"LogStash::Inputs::Http"}
Closed inputs
This is logstash.conf:
input {
  http {
    host => "127.0.0.1"
    port => 31311
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
  stdout {
    codec => rubydebug
  }
}
When I run
curl 'http://localhost:9200/?pretty'
I get:
{
  "name" : "QrRfI_U",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "LLdvAaAsQSCULTfl_b4xIA",
  "version" : {
    "number" : "5.2.0",
    "build_hash" : "24e05b9",
    "build_date" : "2017-01-24T19:52:35.800Z",
    "build_snapshot" : false,
    "lucene_version" : "6.4.0"
  },
  "tagline" : "You Know, for Search"
}
So elasticsearch is running fine.

What if you set your hosts as:
hosts => "localhost"
Also make sure that the http port you've configured above (31311) is not bound to any other process.
If that's not the case, just to make sure, run bin/logstash-plugin list and check whether the http input plugin exists.
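For example, a quick sketch of both checks from the shell (port number and Logstash path taken from the question above; adjust as needed):
# is anything already listening on the http input port?
sudo lsof -i :31311
# is the http input plugin installed?
logstash-5.2.0/bin/logstash-plugin list | grep logstash-input-http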

Related

Logstash receive strange "<133>" code at the start of receiving TrendMicro log

My Logstash server runs CentOS Linux release 8.1.1911.
"logstash.version"=>"7.7.0"
I have a capture of what I received on port UDP 5514 with :
nc -lvu 5514 -o log.txt
The content of log.txt
<133>Jun 05 09:23:35 TMCM:EVT_URL_CONTENT_FILTERING Security product="OfficeScan" Security product node="N/A" Security product IP="xx.xx.xx.xx;xxxx::xxxx:xxxx:xxxx:4490" Event time="4/25/2020 11:46:01 PM (UTC)" URL="http://xxxxxxx.xxxxxxx.intranet/SMS_MP/.sms_pol?DEP-Z0120115-ScopeId_B14503FF-F7AA-49EC-A38C-F50D813EEC6E/Application_57a673e1-3e65-4f1c-8ce2-0f4cc1b38acc.SHA256:5EF20484EEC38EA203D7A885EAA48BE2DFDC4F130ED8BF5BEA333378875B2516" Source IP="" Destination IP="yyy.yyy.yyy.yyy" Policy rule="" Blocking type="Web reputation" Domain="xxxx-xxxxx" Event time (local)="4/25/2020 7:46:01 PM" Client host name="N/A" Reputation Score="81"`
myfilter.conf
input {
  udp {
    port => 5514
    type => syslog
  }
}
filter {
  grok {
    match => { "message" => "(?<user_agent>[^>]*)(?<user_agent>[^:]*)%{POSINT}\s%{WORD:logfrom}\s%{WORD:logtag}\:\s%{NOTSPACE:eventname}\s([^=]*)\=%{QUOTEDSTRING:security_product} ([^=]*)\=%{QUOTEDSTRING:security_prod_node}\s([^=]*)\=\"%{IPV4:security_prod_ip}([^=]*)\=\"(?<agent_detected_time>%{MONTHNUM}\/%{MONTHDAY}\/%{YEAR} %{TIME}\s(?:AM|am|PM|pm)\s*\s\(%{TZ:tz}\)).*URL\=\"%{URI:url}\" ([^=]*)\=%{QUOTEDSTRING:src_ip}\s([^=]*)\=\"%{IPV4:dest_ipv4}\"\s([^=]*)\=%{QUOTEDSTRING:policy_rule} ([^=]*)\=%{QUOTEDSTRING:bloking_type} ([^=]*)\=%{QUOTEDSTRING:domain} ([^=]*)\=\"(?<server_alert_time>%{MONTHNUM}\/%{MONTHDAY}\/%{YEAR} %{TIME}\s(?:AM|am|PM|pm))\"\s([^=]*)\=%{QUOTEDSTRING:client_hostname} ([^=]*)\=\"%{BASE10NUM:reputation_score}/?" }
  }
}
output {
  stdout { codec => rubydebug }
}
The example of the output of logstash:
[2020-06-08T13:11:02,253][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/awesome_print-1.7.0/lib/awesome_print/formatters/base_formatter.rb:31: warning: constant ::Fixnum is deprecated
{
"type" => "syslog",
"#timestamp" => 2020-06-08T18:06:39.090Z,
"message" => "<133>Jun 08 14:06:38 TMCM:EVT_URL_CONTENT_FILTERING Security product=\"OfficeScan\" Security product node=\"N/A\" Security product IP=\"xx.xx.xx.xx;xxxx::xxxx:xxx:xxxx:4490\" Event time=\"4/26/2020 7:33:36 AM (UTC)\" URL=\"http://blabnlabla.bla-blabla.intranet/SMS_MP/.sms_pol?DEP-Z0120105-ScopeId_B14503FF-F7AA-49EC-A38C-F50D813EEC6E/Application_2be50193-9121-4239-a70f-ba06ad7bbfbd.SHA256:6FF12991BBA769F9C15F7E1FA3E3058E22B4D918F6C5659CF7B976059082510D\" Source IP=\"\" Destination IP=\"xxx.xx.xxx.xx\" Policy rule=\"\" Blocking type=\"Web reputation\" Domain=\"bla-blabla\" Event time (local)=\"4/26/2020 3:33:36 AM\" Client host name=\"N/A\" Reputation Score=\"81\"",
"#version" => "1",
"host" => "xx.xxx.xx.xx",
"tags" => [
[0] "_grokparsefailure"
]
}
I have also tried "\<133\>" but it still appears. I have no idea what this <133> is.
P.S. I've been learning this on my own for the last two weeks.
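For reference, a minimal sketch of one common approach (not from the question itself): the leading <133> looks like a syslog priority (PRI) header, and the stock grok pattern SYSLOG5424PRI matches exactly that kind of <number> prefix, so it can consume it before the rest of the message is parsed:
filter {
  grok {
    # peel the "<133>" PRI off into syslog5424_pri and keep the remainder for further parsing
    match => { "message" => "%{SYSLOG5424PRI}%{GREEDYDATA:syslog_message}" }
  }
}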

Two configs for logstash not working together

I have an ELK setup for processing haproxy and nginx logs, with a separate Logstash config file for each. The main data I want from the logs are the content URL and the response time. In haproxy the response time is in milliseconds (e.g. 1345) and in nginx it is in seconds (e.g. 1.23). To bring the response times into the same format, I convert the haproxy response time to seconds using the ruby plugin in Logstash. I get the desired results from both configs when they run individually, and in Kibana I set the response time field to a duration whose input and output are both in seconds. But when I run both configs together, the response time for the nginx logs comes back as 0.000 and I can see a "_grokparsefailure" tag in the JSON response; when I run the nginx config individually to debug it, everything works fine and the Kibana dashboard shows proper response time values.
Below is the config for my Nginx logstash Config:
input {
  beats {
    port => 5045
  }
}
filter {
  grok {
    match => { "message" => "%{IPORHOST:clientip} - - \[%{HTTPDATE:timestamp}\] \"%{WORD:verb} %{URIPATHPARAM:content} HTTP/%{NUMBER:httpversion}\" %{NUMBER:response} %{NUMBER:response_bytes:int} \"-\" \"%{GREEDYDATA:junk}\" %{NUMBER:response_time}"}
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
Below is my Haproxy logstash config:
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => { "message" => "%{MONTH:month} %{MONTHDAY:date} %{TIME:time} %{WORD:[source]} %{WORD:[app]}\[%{DATA:[class]}\]: %{IPORHOST:[UE_IP]}:%{NUMBER:[UE_Port]} %{IPORHOST:[NATTED_IP]}:%{NUMBER:[NATTED_Source_Port]} %{IPORHOST:[NATTED_IP]}:%{NUMBER:[NATTED_Destination_Port]} %{IPORHOST:[WAN_IP]}:%{NUMBER:[WAN_Port]} \[%{HAPROXYDATE:[timestamp]}\] %{NOTSPACE:[frontend_name]}~ %{NOTSPACE:[backend_name]} %{NOTSPACE:[ty_name]}/%{NUMBER:[response_time]} %{NUMBER:[http_status_code]} %{NUMBER:[response_bytes]:int} - - ---- %{NOTSPACE:[df]} %{NOTSPACE:[df]} %{DATA:[domain_name]} %{DATA:[cache_status]} %{DATA:[domain_name]} %{URIPATHPARAM:[content]} HTTP/%{NUMBER:[http_version]}" }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
  ruby {
    code => "event.set('response_time', event.get('response_time').to_f / 1000)"
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout {
    codec => rubydebug
  }
}
I suspect the response_time pattern, i.e. %{NUMBER:[response_time]}, shared by the haproxy and nginx configs is creating the problem, but I don't know what is causing this issue; I've tried every possible thing.
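One thing worth checking (a sketch, not from the original configs; the tag names are illustrative): when both files are loaded by the same Logstash instance into a single pipeline, they are concatenated, so events from both beats inputs pass through both filter blocks, including the ruby division by 1000. Tagging each input and wrapping each filter block in a conditional keeps them separate:
input {
  beats { port => 5045 tags => ["nginx"] }
  beats { port => 5044 tags => ["haproxy"] }
}
filter {
  if "nginx" in [tags] {
    # nginx grok + date filters from the nginx config above
  }
  if "haproxy" in [tags] {
    # haproxy grok + date + ruby filters from the haproxy config above
  }
}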

Parsing json using logstash (ELK stack)

I have created a simple JSON like the one below:
[
  {
    "Name": "vishnu",
    "ID": 1
  },
  {
    "Name": "vishnu",
    "ID": 1
  }
]
I am holding these values in a file named simple.txt. Then I used Filebeat to watch the file and send new updates to port 5043; on the other side I started the Logstash service, which listens on this port in order to parse the JSON and pass it to Elasticsearch.
Logstash is not processing the JSON values; it hangs in the middle.
logstash
input {
  beats {
    port => 5043
    host => "0.0.0.0"
    client_inactivity_timeout => 3600
  }
}
filter {
  json {
    source => "message"
  }
}
output {
  stdout { codec => rubydebug }
}
filebeat config:
filebeat.prospectors:
- input_type: log
  paths:
    - filepath
output.logstash:
  hosts: ["localhost:5043"]
Logstash output
Sending Logstash's logs to D:/elasticdb/logstash-5.6.3/logstash-5.6.3/logs which is now configured via log4j2.properties
[2017-10-31T19:01:17,574][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"D:/elasticdb/logstash-5.6.3/logstash-5.6.3/modules/fb_apache/configuration"}
[2017-10-31T19:01:17,578][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"D:/elasticdb/logstash-5.6.3/logstash-5.6.3/modules/netflow/configuration"}
[2017-10-31T19:01:18,301][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
[2017-10-31T19:01:18,388][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5043"}
[2017-10-31T19:01:18,573][INFO ][logstash.pipeline ] Pipeline main started
[2017-10-31T19:01:18,591][INFO ][org.logstash.beats.Server] Starting server on port: 5043
[2017-10-31T19:01:18,697][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
Every time, I run Logstash using the command
logstash -f logstash.conf
and since there is no processing of the JSON, I stop the service by pressing Ctrl + C.
Please help me find the solution. Thanks in advance.
I finally ended up with a config like this. It works for me.
input {
  file {
    codec => multiline {
      pattern => '^\{'
      negate => true
      what => previous
    }
    path => "D:\elasticdb\logstash-tutorial.log\Test.txt"
    start_position => "beginning"
    sincedb_path => "D:\elasticdb\logstash-tutorial.log\null"
    exclude => "*.gz"
  }
}
filter {
  json {
    source => "message"
    remove_field => ["path","@timestamp","@version","host","message"]
  }
}
output {
  elasticsearch {
    hosts => ["localhost"]
    index => "logs"
    "document_type" => "json_from_logstash_attempt3"
  }
  stdout {}
}
Json format:
{"name":"sachin","ID":"1","TS":1351146569}
{"name":"sachin","ID":"1","TS":1351146569}
{"name":"sachin","ID":"1","TS":1351146569}

ELK not passing metadata from filebeat into logstash

Installed an ELK server via: https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elk-stack-on-centos-7
It seems to work except for the filebeat connection; filebeat does not appear to be forwarding anything or at least I can't find anything in the logs to indicate anything is happening.
My filebeat configuration is as follows:
filebeat:
  prospectors:
    -
      paths:
        - /var/log/*.log
        - /var/log/messages
        - /var/log/secure
      encoding: utf-8
      input_type: log
      timeout: 30s
      idle_timeout: 30s
  registry_file: /var/lib/filebeat/registry
output:
  logstash:
    hosts: ["my_elk_fqdn:5044"]
    bulk_max_size: 1024
    compression_level: 3
    worker: 1
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
shipper:
logging:
  to_files: true
  files:
    path: /var/log/filebeat
    name: filebeat.log
    rotateeverybytes: 10485760 # = 10MB
    keepfiles: 7
  level: debug
The log file output I keep getting from filebeat is just not very helpful:
2016-07-14T17:32:21-04:00 DBG Start next scan
2016-07-14T17:32:31-04:00 DBG Start next scan
2016-07-14T17:32:41-04:00 DBG Start next scan
2016-07-14T17:32:46-04:00 DBG Flushing spooler because of timeout. Events flushed: 0
2016-07-14T17:32:51-04:00 DBG Start next scan
Is there anything wrong with my configuration file?
When I test on the ELK server to see if I am getting anything:
[root@my_elk_server ~]# curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'
{
  "took" : 1,
  "timed_out" : false,
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 0,
    "max_score" : 0.0,
    "hits" : [ ]
  }
}
Oh and my logstash configuration for filebeats:
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
UPDATE: It is not filebeat. Somewhat relieved that messages are indeed being passed, but I still have an issue I can't track down:
I discovered it wasn't filebeat that was causing the issue. It appears that the logstash configuration that sends to elasticsearch is not properly labeling the index (and the type) to make it searchable as shown in the question. Instead of putting filebeat in the index name, it gives a result like this:
"_index" : "%{[@metadata][beat]}-2016.07.14",
The elasticsearch output in the config file, which turned out to be the incorrect part, is:
output {
  elasticsearch {
    hosts => "my_elk_fqdn:9200"
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
Apparently this @metadata is not being passed in correctly. Has anyone been able to get the _index and _type fields to populate correctly?
This might be a bug with filebeat?
https://github.com/logstash-plugins/logstash-input-beats/issues/6
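One way to check whether @metadata is actually reaching Logstash (a debugging sketch, not from the question: by default the rubydebug codec hides @metadata, so it has to be enabled explicitly):
output {
  stdout {
    codec => rubydebug {
      metadata => true   # also print [@metadata] fields such as [@metadata][beat]
    }
  }
}
If [@metadata][beat] shows up in that output, the problem is on the elasticsearch output side; if it doesn't, the beat really isn't sending it.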

How to see the requests sent by LogStash to the elasticsearch output in Fiddler?

I have LS_JAVA_OPTS = -DproxySet=true -Dhttp.proxyHost=127.0.0.1 -Dhttp.proxyPort=8888
And yet, I see no traffic to my elasticsearch node from logstash in Fiddler.
I know my elasticsearch is up and running. When I curl it, Fiddler clearly shows the requests, so it is something about jruby that does not route requests through Fiddler.
I am not calling jruby directly. Rather I use the bin\logstash.bat script.
Appendix
My conf file:
input {
  file {
    path => 'c:/log/bje-Error.log'
    sincedb_path => "NUL"
    codec => plain {
      charset => "ISO-8859-1"
    }
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601} "
      negate => true
      what => previous
    }
    start_position => beginning
    ignore_older => 0
  }
}
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} \[%{BASE10NUM:thread:int}] %{WORD:machine}:%{WORD:service} \[%{BASE10NUM:localId:int}?:%{UUID:logId}?:(?<jobKind>[^:]+)?:%{BASE10NUM:jobDefinitionId:int}? %{WORD:namespace}?:%{WORD:job}?:(?<customCtx>[^\]]*)\] %{LOGLEVEL:level} %{NOTSPACE:logger} - (?<text>(?m:.*))" }
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    document_type => 'logs_bje'
    hosts => ["ncesearch01"]
  }
}
Testing in powershell:
PS E:\logstash-2.3.2\bin> (ConvertFrom-Json((Invoke-WebRequest "http://ncesearch01:9200/logstash-*/_count").Content)).count
24666
PS E:\logstash-2.3.2\bin> .\logstash.bat -f C:\dayforce\DayforceDEV\elk\logstach.conf
LS_JAVA_OPTS was set to [-DproxySet=true -Dhttp.proxyHost=127.0.0.1 -Dhttp.proxyPort=8888]. This will be appended to the JAVA_OPTS [ -XX:HeapDumpPath="$LS_HOME/heapdump.hprof"]
io/console not supported; tty will not be manipulated
Settings: Default pipeline workers: 12
Pipeline main started
{
"message" => "2016-05-02 16:00:05.7079 [111] CANWS212:MyBJE [2251:e2737eeb-40d6-4b0e-9608-75ee3de894d3:ScheduledInstance:16 DFUnitTest:BillingDataCollectionJob:] ERROR
SharpTop.Engine.BackgroundJobs.Billing.BillingDataCollectionJob - The client database version is not defined in DFDatabaseIdentification \r",
"#version" => "1",
"#timestamp" => "2016-05-03T03:40:50.531Z",
"path" => "c:/log/bje-Error.log",
"host" => "CANWS212",
"timestamp" => "2016-05-02 16:00:05.7079",
"thread" => 111,
"machine" => "CANWS212",
"service" => "MyBJE",
"localId" => 2251,
"logId" => "e2737eeb-40d6-4b0e-9608-75ee3de894d3",
"jobKind" => "ScheduledInstance",
"jobDefinitionId" => 16,
"namespace" => "DFUnitTest",
"job" => "BillingDataCollectionJob",
"level" => "ERROR",
"logger" => "SharpTop.Engine.BackgroundJobs.Billing.BillingDataCollectionJob",
"text" => "The client database version is not defined in DFDatabaseIdentification \r"
}
^CTerminate batch job (Y/N)? SIGINT received. Shutting down the agent. {:level=>:warn}
stopping pipeline {:id=>"main"}
Pipeline main has been shutdown
The signal HUP is in use by the JVM and will not work correctly on this platform
^CPS E:\logstash-2.3.2\bin> (ConvertFrom-Json((Invoke-WebRequest "http://ncesearch01:9200/logstash-*/_count").Content)).count
24667
PS E:\logstash-2.3.2\bin>
As you can see, http://ncesearch01:9200/logstash-*/_count returns an incremented count, so running logstash did send a request to elasticsearch. However, it bypassed Fiddler, despite the LS_JAVA_OPTS.
I can think of some possible reasons for this behaviour, although I have not tried them myself, so maybe this answer should be called a "discussion"; I'm sorry.
1. You may need a Linux OS instead of Windows. I am not sure whether this has been dealt with in the latest logstash version, but you may be interested in this: Make JAVA_OPTS and LS_JAVA_OPTS work consistently on Windows.
2. As we can see, the most likely cause is how the logstash elasticsearch output plugin sends messages: it has used HTTP since logstash 2.0. Maybe you are using the old version?
More info about the output plugin: logstash-output-plugin-elasticsearch.
If anyone has any other ideas, please share.
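Another thing that may be worth trying (a hedged sketch, not from the answer above; verify the option exists in your installed plugin version): the elasticsearch output plugin has a proxy setting that routes its own HTTP client through a forward proxy, independently of the JVM -Dhttp.proxy* flags. With the Fiddler address from the question:
output {
  elasticsearch {
    document_type => 'logs_bje'
    hosts => ["ncesearch01"]
    proxy => "http://127.0.0.1:8888"   # route the plugin's requests through Fiddler
  }
}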
