How to configure multiple files using a logstash.conf file - logstash

I want to configure Logstash to read all the log files present in one location (the D:\Logs folder).
The log files are:
1. Magna_Log4Net.log.20150623_bak
2. Magna_Log4Net.log.20150624_bak
3. Magna_Log4Net.log.20150625_bak
4. Magna_Log4Net.log.20150626_bak
My logstash.conf file:
input {
file {
path => ["C:\Test\Logs\Magna_Log4Net.log.*_bak"]
start_position => "beginning"
}
}
filter {
grok { match => [ "message", "%{HTTPDATE:[@metadata][timestamp]}" ] }
date { match => [ "[@metadata][timestamp]", "dd/MMM/yyyy:HH:mm:ss Z" ] }
}
output {
elasticsearch { host => localhost}
stdout { codec => rubydebug }
}
I am not able to load all the files into Elasticsearch, and I don't understand the problem here. Can anybody help me with how to parse multiple files in a Logstash config file?
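As a point of reference, here is a minimal sketch of the input block with the path option written with the => arrow; the forward slashes and the sincedb_path value are assumptions (forward slashes tend to be handled more reliably by the file input on Windows, and "NUL" is the usual Windows stand-in for /dev/null while testing):
input {
  file {
    # assumption: use forward slashes in the glob on Windows
    path => ["C:/Test/Logs/Magna_Log4Net.log.*_bak"]
    start_position => "beginning"
    # assumption: forget read positions between runs so existing files are re-read while testing
    sincedb_path => "NUL"
  }
}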

Related

Logstash only reads the file when I make manual edits to it

Currently, I have a program that writes a JSON array into a JSON file. The JSON file is initially blank. I also have an instance of Logstash running with the following config file.
input{
file{
path => "/Users/CP4/Downloads/gs-accessing-data-mysql-master/complete/example.json"
codec => "json"
start_position => "beginning"
ignore_older => 0
}
}
filter {
mutate {
gsub => [ "message","\[",""]
gsub => [ "message","\n",""]
gsub => [ "event","\},\{",","]
}
json { source => message }
}
output{
elasticsearch{
hosts => "localhost:9200"
index => "test123"
}
stdout { codec => rubydebug }
}
For some reason, Logstash will only read the file once I go in and make some changes manually, rather than when my program writes JSON to it. What could be causing this? Is the way my config file is set up wrong? Thanks.
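Not from the original thread, but as a comparison, a minimal sketch of the same input without ignore_older => 0: with a value of 0, files whose modification time is older than 0 seconds are skipped, which would explain why the file is only picked up right after a manual edit refreshes it. The sincedb_path value is an illustrative assumption for testing.
input {
  file {
    path => "/Users/CP4/Downloads/gs-accessing-data-mysql-master/complete/example.json"
    codec => "json"
    start_position => "beginning"
    # ignore_older => 0 removed: with 0, anything modified more than 0 seconds ago is ignored
    sincedb_path => "/dev/null"   # assumption: forget read positions between runs while testing
  }
}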

Data missed in Logstash?

A lot of data is missed in Logstash version 5.0.
Is it a serious bug? I have changed the config file many times, but it makes no difference; data loss happens again and again. How can I use Logstash to collect log events properly?
Any reply will be appreciated.
Logstash is all about reading logs from a specific location; based on the information you are interested in, you can create an index in Elasticsearch, and other outputs are possible as well.
Example of a Logstash conf:
input {
file {
# PLEASE SET APPROPRIATE PATH WHERE LOG FILE AVAILABLE
#type => "java"
type => "json-log"
path => "d:/vox/logs/logs/vox.json"
start_position => "beginning"
codec => json
}
}
filter {
if [type] == "json-log" {
grok {
match => { "message" => "UserName:%{JAVALOGMESSAGE:UserName} -DL_JobID:%{JAVALOGMESSAGE:DL_JobID} -DL_EntityID:%{JAVALOGMESSAGE:DL_EntityID} -BatchesPerJob:%{JAVALOGMESSAGE:BatchesPerJob} -RecordsInInputFile:%{JAVALOGMESSAGE:RecordsInInputFile} -TimeTakenToProcess:%{JAVALOGMESSAGE:TimeTakenToProcess} -DocsUpdatedInSOLR:%{JAVALOGMESSAGE:DocsUpdatedInSOLR} -Failed:%{JAVALOGMESSAGE:Failed} -RecordsSavedInDSE:%{JAVALOGMESSAGE:RecordsSavedInDSE} -FileLoadStartTime:%{JAVALOGMESSAGE:FileLoadStartTime} -FileLoadEndTime:%{JAVALOGMESSAGE:FileLoadEndTime}" }
add_field => ["STATS_TYPE", "FILE_LOADED"]
}
}
}
filter {
mutate {
# here converting data type
convert => { "FileLoadStartTime" => "integer" }
convert => { "RecordsInInputFile" => "integer" }
}
}
output {
elasticsearch {
# PLEASE CONFIGURE ES IP AND PORT WHERE LOG DOCs HAS TO PUSH
document_type => "json-log"
hosts => ["localhost:9200"]
# action => "index"
# host => "localhost"
index => "locallogstashdx_new"
# workers => 1
}
stdout { codec => rubydebug }
#stdout { debug => true }
}
To learn more, you can go through the many available resources, for example:
https://www.elastic.co/guide/en/logstash/current/first-event.html

Read logs for the last month with Logstash

I have configured my logstash config file to read apache access logs like this:
input {
file {
type => "apache_access"
path => "/etc/httpd/logs/access_log*"
start_position => beginning
sincedb_path => "/dev/null"
}
}
filter {
if [path] =~ "access" {
mutate { replace => { "type" => "apache_access" } }
grok {
match => { "message" => "%{IPORHOST:clientip} - %{DATA:username} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:response} (?:%{NUMBER:bytes}|-)" }
}
kv {
source => "request"
field_split => "&?"
prefix => "requestarg_"
}
}
date {
match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
}
}
output {
stdout {
codec => rubydebug
}
elasticsearch {
host => "10.13.10.18"
cluster => "awstutorialseries"
}
}
The files that I have in the directory /etc/httpd/logs are:
access_log
access_log-20161002
access_log-20161005
access_log-20161008
access_log-20161011
...
Accessing all the files matching access_log* can take a while when there is a significant number of archived files.
On the server we rotate logs every 3 days, archiving the access_log file as access_log-{date}, and Logstash, as the config says, reads all access_log files in that directory, archived ones included.
After a few months we end up with a lot of files that Logstash has to read, so reading them all can take a long time.
Q1: Is there a way to read all the logs once, and afterwards only the access_log file?
Q2: Is there a way, or a custom expression in the config file, to read only some of the log files depending on the date, rather than all of them?
I have tried plenty of combinations and filters in my config file based on the official documentation, but with no luck.
Your pattern "access_log*" will match all the old files too, but Logstash will ignore any files older than a day. See the ignore_older param in the file{} input. When catching up on old files, you can set this to a higher value.
Once you're caught up, I would release a new config that only looks at "access_log" (no wildcard, thus the latest file only), as sketched below.
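A minimal sketch of that two-phase approach; the ignore_older value (given in seconds) is only an illustrative assumption:
# Phase 1: catch-up run that also reads the archived files
input {
  file {
    path => "/etc/httpd/logs/access_log*"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    ignore_older => 2678400   # assumption: ~31 days, large enough to cover last month
  }
}
# Phase 2: once caught up, switch to a config that watches only the live file
input {
  file {
    path => "/etc/httpd/logs/access_log"
  }
}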

Logstash not printing anything

I am using Logstash for the first time and trying to set up a simple pipeline for just printing the nginx logs. Below is my config file:
input {
file {
path => "/var/log/nginx/*access*"
}
}
output {
stdout { codec => rubydebug }
}
I have saved the file as /opt/logstash/nginx_simple.conf
And trying to execute the following command
sudo /opt/logstash/bin/logstash -f /opt/logstash/nginx_simple.conf
However the only output I can see is:
Logstash startup completed
Logstash shutdown completed
The file is definitely not empty. As per my understanding, I should be seeing the output on my console. What am I doing wrong?
Make sure that the character encoding of your log file is UTF-8. If it is not, try to change it and restart Logstash.
Please try this code as your Logstash configuration in order to set up a simple pipeline for just printing the nginx logs.
input {
file {
path => "/var/log/nginx/*.log"
type => "nginx"
start_position => "beginning"
sincedb_path => "/dev/null"
}
}
filter {
if [type] == "nginx" {
grok {
patterns_dir => "/home/krishna/Downloads/logstash-2.1.0/pattern"
match => {
"message" => "%{NGINX_LOGPATTERN:data}"
}
}
}
date {
match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
}
}
output {
elasticsearch {
hosts => [ "127.0.0.1:9200" ]
}
stdout { codec => rubydebug }
}
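The config above relies on a custom NGINX_LOGPATTERN grok pattern loaded from patterns_dir, which is not shown in the answer. As an illustration only (the pattern name comes from the answer, but the layout below is an assumption matching nginx's default combined access log format), such a pattern file could look like this:
# hypothetical contents of a file in the patterns directory
NGINX_LOGPATTERN %{IPORHOST:remote_addr} - %{DATA:remote_user} \[%{HTTPDATE:timestamp}\] "%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?" %{NUMBER:status} %{NUMBER:body_bytes_sent} "%{DATA:http_referer}" "%{DATA:http_user_agent}"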

Retrieving RESTful GET parameters in logstash

I am trying to get Logstash to parse key-value pairs in an HTTP GET request from my ELB log files.
The request field looks like:
http://aaa.bbb/get?a=1&b=2
I'd like there to be a field for a and b in the log line above, and I am having trouble figuring it out.
My Logstash conf (formatted for clarity) is below; it does not load any additional key fields. I assume that I need to split off the address portion of the URI, but have not figured that out.
input {
file {
path => "/home/ubuntu/logs/**/*.log"
type => "elb"
start_position => "beginning"
sincedb_path => "log_sincedb"
}
}
filter {
if [type] == "elb" {
grok {
match => [ "message", "%{TIMESTAMP_ISO8601:timestamp}
%{NOTSPACE:loadbalancer} %{IP:client_ip}:%{NUMBER:client_port:int}
%{IP:backend_ip}:%{NUMBER:backend_port:int}
%{NUMBER:request_processing_time:float}
%{NUMBER:backend_processing_time:float}
%{NUMBER:response_processing_time:float}
%{NUMBER:elb_status_code:int}
%{NUMBER:backend_status_code:int}
%{NUMBER:received_bytes:int} %{NUMBER:sent_bytes:int}
%{QS:request}" ]
}
date {
match => [ "timestamp", "ISO8601" ]
}
kv {
field_split => "&?"
source => "request"
exclude_keys => ["callback"]
}
}
}
output {
elasticsearch { host => localhost }
}
kv will take a URL and split out the params. This config works:
input {
stdin { }
}
filter {
mutate {
add_field => { "request" => "http://aaa.bbb/get?a=1&b=2" }
}
kv {
field_split => "&?"
source => "request"
}
}
output {
stdout {
codec => rubydebug
}
}
stdout shows:
{
"request" => "http://aaa.bbb/get?a=1&b=2",
"a" => "1",
"b" => "2"
}
That said, I would encourage you to create your own versions of the default URI patterns so that they set fields. You can then pass the querystring field off to kv. It's cleaner that way.
UPDATE:
For "make your own patterns", I meant to take the existing ones and modify them as needed. In logstash 1.4, installing them was as easy as putting them in a new file the 'patterns' directory; I don't know about patterns for >1.4 yet.
MY_URIPATHPARAM %{URIPATH}(?:%{URIPARAM:myuriparams})?
MY_URI %{URIPROTO}://(?:%{USER}(?::[^@]*)?@)?(?:%{URIHOST})?(?:%{MY_URIPATHPARAM})?
Then you could use MY_URI in your grok{} pattern and it would create a field called myuriparams that you could feed to kv{}.
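Not part of the original answer, but a minimal sketch of how those custom patterns could be wired up, assuming they sit in a directory referenced by patterns_dir (the directory path and field names are illustrative):
filter {
  grok {
    patterns_dir => ["./patterns"]   # assumption: folder holding MY_URI and MY_URIPATHPARAM
    match => { "request" => "\"?%{MY_URI:uri}\"?" }
  }
  kv {
    source => "myuriparams"          # the querystring captured by MY_URIPATHPARAM
    field_split => "&?"
  }
}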
