How can Logstash identify and parse newly created logfiles?

I'm new to ELK and currently I'm facing the following issue.
I want Logstash to parse some server logfiles. Every day a new logfile is created with the following naming format: file160629.log (where 160629 is the current date).
Here's my input config:
input {
  file {
    path => "C:\LogFiles\u_ex%d.log"
    start_position => beginning
  }
}
But it doesn't seem to recognize the new logfiles.
Can someone tell me what I'm doing wrong?
Thank you in advance.

To read all the log files inside the LogFiles folder you can use:
input {
  file {
    path => "C:\LogFiles\*.log"
  }
}
It will tail files by default.
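If you only want the daily u_ex files rather than everything in the folder, a narrower glob should also work. A minimal sketch, assuming the u_ex prefix from the question's config (glob patterns in the file input tend to be more dependable with forward slashes, even on Windows):
input {
  file {
    # matches u_ex160629.log, u_ex160630.log, ... as they appear
    path => "C:/LogFiles/u_ex*.log"
    start_position => "beginning"
  }
}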

Related

Logstash configuration error and I think this is a dumb question

I am trying to get Logstash to work (well, I have gotten it to work, but I want to try growing my skill set) and this is my config file setup...
input {
  file {
    path => "C:/temp/Machine Learning/Dash.txt"
    start_position => "beginning"
    sincedb_path => "/tmp/since.txt"
  }
}
filter {
  json {
    source => "message"
    target => "message"
  }
}
output {
  file { path => "/tmp/OutPut.txt" }
}
What I want to do is parse out the message field and look at its constituent pieces, but this config doesn't work. I get this when I run it in debug...
Missing a required setting for the json filter plugin:
filter {
json {
source => # SETTING MISSING
...
}
}
[2019-12-19T10:32:44,655][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Something is wrong with your configuration.", :backtrace=>["c:/Logstash/logstash/logstash-core/lib/logstash/config/mixin.rb:86:in `config_init'", "c:/Logstash/logstash/logstash-core/lib/logstash/filters/base.rb:126:in `initialize'", "org/logstash/plugins/PluginFactoryExt.java:70:in `filter_delegator'", "org/logstash/plugins/PluginFactoryExt.java:244:in `plugin'", "org/logstash/plugins/PluginFactoryExt.java:181:in `plugin'", "c:/Logstash/logstash/logstash-core/lib/logstash/pipeline.rb:71:in `plugin'", "(eval):64:in `<eval>'", "org/jruby/RubyKernel.java:994:in `eval'", "c:/Logstash/logstash/logstash-core/lib/logstash/pipeline.rb:49:in `initialize'", "c:/Logstash/logstash/logstash-core/lib/logstash/pipeline.rb:90:in `initialize'", "c:/Logstash/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:42:in `block in execute'", "c:/Logstash/logstash/logstash-core/lib/logstash/agent.rb:92:in `block in exclusive'", "org/jruby/ext/thread/Mutex.java:148:in `synchronize'", "c:/Logstash/logstash/logstash-core/lib/logstash/agent.rb:92:in `exclusive'", "c:/Logstash/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:38:in `execute'", "c:/Logstash/logstash/logstash-core/lib/logstash/agent.rb:317:in `block in converge_state'"]}
And I am not sure what to do about that as it looks like I have set up the filter right according to this documentation:
https://www.elastic.co/guide/en/logstash/current/plugins-filters-json.html#plugins-filters-json-target
I am on Windows 10, which I think is important info.
Found the problem. Like the professional that I am, I made a backup config file to revert to in case something went awry; great idea, honestly. Then, like the idiot I am, I started making updates and changes to the backup file, which was not the actual config file I was testing against.
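For reference, a minimal json filter that satisfies the required source setting might look like the sketch below. The target name here is just an example; if target is omitted, the parsed keys are written to the root of the event instead.
filter {
  json {
    # parse the JSON string in "message" into a nested "parsed" field
    source => "message"
    target => "parsed"
  }
}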

Creating a custom grok pattern in Logstash

I'm trying to add a custom pattern to Logstash in order to capture data from this kind of log line:
[2017-11-27 12:08:22] production.INFO: {"upload duration":0.16923}
I followed the instructions in the Logstash grok guide and created a directory called patterns with a file in it called extra that contains:
POSTFIX_UPLOAD_DURATION upload duration
and added the path to the config file:
grok {
  patterns_dir => ["./patterns"]
  match => { "message" => "\[%{TIMESTAMP_ISO8601:timestamp}\] %{POSTFIX_UPLOAD_DURATION: upload_duration} %{DATA:log_env}\.%{LOGLEVEL:severity}: %{GREEDYDATA:log_message}" }
}
However, I'm getting this error message:
Pipeline aborted due to error {:exception=>#<Grok::PatternError: pattern %{POSTFIX_UPLOAD_DURATION: upload_duration} not defined>
Also, some log lines don't contain the 'upload duration' field; will this break the pipeline?
You are able to use relative directories, as long as they are relative to the current working directory where the process starts, not relative to the conf file or to Logstash itself.
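A minimal sketch of that idea, using an absolute patterns_dir so the lookup no longer depends on the working directory (the path is only an example, adjust it to your layout). The sketch also drops the space in the pattern reference, since grok's %{NAME:field} syntax does not allow one after the colon; whether the overall expression actually matches your log layout is a separate question.
grok {
  # absolute path to the directory containing the "extra" pattern file (example path)
  patterns_dir => ["/etc/logstash/patterns"]
  match => { "message" => "\[%{TIMESTAMP_ISO8601:timestamp}\] %{POSTFIX_UPLOAD_DURATION:upload_duration} %{DATA:log_env}\.%{LOGLEVEL:severity}: %{GREEDYDATA:log_message}" }
}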
I found out that there is a better and more efficient way to capture the data, using the json plugin.
I've added "log_payload:" to my logs and inserted the data I need to capture as a JSON object.
Then I've used this pipeline to capture it:
filter {
  if ("log_payload:" in [log_message]) {
    grok {
      match => { "log_message" => 'log_payload:%{DATA:json_object}}%{GREEDYDATA}' }
    }
    mutate {
      update => ["json_object", "%{[json_object]}}"]
    }
    json {
      source => "json_object"
    }
  }
  mutate {
    remove_field => ["log_message", "json_object"]
  }
}

gradle get relative resource path

When I iterate over the project's resources directory I do it like this:
def resourceDir = proj.sourceSets.main.output.resourcesDir
resourceDir.eachFileRecurse(groovy.io.FileType.FILES) { file -> // only files will be recognized
    def path = FilenameUtils.separatorsToUnix(file.toString())
    if (FilenameUtils.getExtension(file.toString()) in supportedResourceExt) {
        proj.logger.lifecycle("Reading file {}.", file)
        //.....
    }
}
In the log it writes this:
Reading file D:\PROJECT_FOLDER\project\subproject\subsubproject\build\resources\main\com\package\something\file.txt
How can I get only the part starting with com\package\something\file.txt, without explicitly cutting it out with something like file.substring(file.indexOf)?
Maybe it's possible to relativize it against the project path somehow?
It seems that:
proj.logger.lifecycle("Reading file {}.", file.absolutePath - resourceDir.absolutePath)
should work. Can't check it right now.
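An alternative sketch, assuming resourceDir and file are the java.io.File instances from the snippet above, is to relativize via java.nio paths:
// Path.relativize yields the path of file relative to resourceDir,
// e.g. com/package/something/file.txt
def relative = resourceDir.toPath().relativize(file.toPath())
proj.logger.lifecycle("Reading file {}.", relative)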

Logstash expected one of #

I'm currently trying to run Logstash with the following config file:
input {
  stdin { }
}
output {
  rabbitmq {
    exchange => "test_exchange"
    exchange_type => "fanout"
    host => "172.17.x.x"
  }
}
I do however get an error:
logstash agent --configtest -f -config.conf
gives me:
Error: Expected one of #, } at line 1, column 105 (byte 105) after output { rabbitmq { exchange => test_exchange exchange_type => fanout host => 172.17
It seems that Logstash has a problem when I put an IP-like address in the host field. What is wrong with my config?
The whole problem was in the method you used when you created the config.conf file.
You were using the following command:
echo "input {stdin{}} output{rabbitmq{exchange=>"test_exchange" exchange_type =>"fanout" host=>"172.17.x.x"}}"
Surrounding a string containing double quotes with double quotes isn't a good idea...
By using single quotes around the string, the problem is solved...
echo 'input {stdin{}} output{rabbitmq{exchange=>"test_exchange" exchange_type =>"fanout" host=>"172.17.x.x"}}'
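With the single-quoted version the inner double quotes survive into the file, so config.conf ends up containing the quoted values Logstash expects. Spread over multiple lines here only for readability, with the IP elided exactly as in the question:
input { stdin { } }
output {
  rabbitmq {
    exchange => "test_exchange"
    exchange_type => "fanout"
    host => "172.17.x.x"
  }
}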
The real problem is that Logstash doesn't report access problems with the configuration file correctly. Here is the issue on GitHub:
https://github.com/elastic/logstash/issues/2571
Simply check access permissions and you'll be set.

Using glob on logstash server machine?

We have a separate server for Logstash, and the logs are on a remote machine.
We ship these logs from the remote machine to the Logstash server using the lumberjack plugin for Logstash.
I tried this:
Client config (where logs are present):
input {
  file {
    path => "/home/Desktop/Logstash-Input/**/*_log"
  }
}
output {
  lumberjack {
    hosts => ["xx.xx.xx.xx"]
    port => 4545
    ssl_certificate => "./logstash.pub"
  }
}
I want to extract fields from my file input's path variable, so that different parsing patterns can be applied depending on the field values.
E.g. something like this:
grok {
  match => ["path", "/home/Desktop/Logstash-Input/(?<server>[^/]+)/(?<logtype>[^/]+)/(?<logdate>[\d]+.[\d]+.[\d]+)/(?<logfilename>.*)_log"]
}
Here server and logtype are directory names which I want as fields, so I can apply different parsing patterns, like:
filter {
  if [server] == "Server2" and [logtype] == "CronLog" {
    grok........
  }
  if [server] == "Server3" and [logtype] == "CronLog" {
    grok............
  }
}
How can I apply the above in my logstash-server config, given that the file input is on the client machine whose path I want to extract fields from?
Lumberjack successfully ships logs to the server.
I tried applying the grok on the client:
grok {
  match => ["path", "/home/Desktop/Logstash-Input/(?<server>[^/]+)/(?<logtype>[^/]+)/(?<logdate>[\d]+.[\d]+.[\d]+)/(?<logfilename>.*)_log"]
}
I checked on the client console and it adds fields like server and logtype to the logs, but on the logstash-server console the fields are not added.
How can I achieve this?
Two options:
1. Set the fields when they are originally shipped. Both the full Logstash shipper and logstash-forwarder (aka lumberjack) allow you to do this.
2. grok the information from the file path, which my documents have in a field called "file". Check your documents to find the actual field name (see the sketch below).
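A minimal sketch of the second option, to run on the Logstash server. The field name "file" is an assumption taken from the answer above, so inspect one of your shipped events to confirm which field actually carries the original path:
filter {
  # pull server / logtype / logdate / logfilename out of the shipped path
  grok {
    match => ["file", "/home/Desktop/Logstash-Input/(?<server>[^/]+)/(?<logtype>[^/]+)/(?<logdate>[\d]+.[\d]+.[\d]+)/(?<logfilename>.*)_log"]
  }
}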
