I have a few log files like the ones below:
data_log_01.log
data_log_02.log
data_log_03.log
data_log_04.log
Is there any way that I can parse these logs one by one using a single config file in logstash?
How about using the file input plugin with a wildcard?
An example configuration could look like this, assuming your log files are located in /home/of/your/logs/:
input {
  file {
    path => [
      "/home/of/your/logs/*.log"
    ]
  }
}
The path value has to be an absolute path! You might want to see the docs on using path.
If there are two separate config files in /etc/logrotate.d/ that are trying to manage the same log file in different ways, what happens? Is one preferred over the other? Would I have to delete one of the config files?
For example:
Two configurations - /etc/logrotate.d/A and /etc/logrotate.d/B
Contents of A:
/var/log/example.log {
    rotate 4
    weekly
}
Contents of B:
/var/log/example.log {
    rotate 3
    weekly
    compress
}
In the usual scenario, the files in /etc/logrotate.d are all read in by an include directive in the main /etc/logrotate.conf file. If you have multiple directives matching the same file in such a setup, you will get an error:
error: /etc/logrotate.conf:line_number duplicate log entry for logfile_path
error: found error in logfile_path, skipping
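For reference, the duplicate comes from both files being pulled in by an include line in the main configuration, roughly like the excerpt below (the exact defaults vary by distribution):
# excerpt from a typical /etc/logrotate.conf
weekly
rotate 4
# per-file rules such as A and B above are pulled in from here
include /etc/logrotate.d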
I wonder if you can configure logstash in the following way:
Background Info:
Every day I get an XML file pushed to my server, which should be parsed.
To indicate that the file transfer has completed, I afterwards get an empty .ctl file (a custom file) transferred to the same folder.
Both files follow the name schema 'feedback_{year}{yearday}_UTC{hoursminutesseconds}_51.{extension}' (e.g. feedback_16002_UTC235953_51.xml), so they share the same file name but one has the .xml extension and the other the .ctl extension.
Question:
Is there a way to configure logstash to wait to parse the XML file until the corresponding .ctl file is present?
EDIT:
Is there maybe a way to achieve that with filebeat?
EDIT2:
It would also be enough to be able to configure logstash so that it waits x minutes before starting to process a new file, if that is easier.
Thanks for any help in advance
Your problem is that you don't want to start parsing before the file transfer has completed. So why not push the data to a file (file-complete.xml) once you find your flag file (empty.ctl)?
Here is the possible logic for a script that runs via crontab:
if empty.ctl exists:
    clear file-complete.xml
    add the content of file.xml to file-complete.xml
    remove empty.ctl
This way, you'd only need to parse the data from file-complete.xml. I think it is simpler to debug and configure.
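A minimal shell sketch of that logic, using the generic names from the steps above (the real paths and the feedback_* naming would need to be filled in):
#!/bin/sh
# Sketch only: rebuild file-complete.xml once the flag file has arrived.
if [ -f empty.ctl ]; then
    # clear file-complete.xml and copy in the freshly transferred data
    cat file.xml > file-complete.xml
    # remove the flag so the next transfer starts clean
    rm empty.ctl
fi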
Hope it helps,
I have zip files which contain log files. In my scenario I need to extract the log files from the zip archives and then parse them with logstash. Could anyone guide me? That would be very helpful. Thanks for your valuable time.
Logstash itself doesn't have any input plugins to process log files inside zip archives, but as long as you extract the files yourself to a directory that you've configured Logstash to look for log files in, you'll be fine. The standard file input plugin accepts wildcards, so you could say
input {
  file {
    path => ["/some/path/*.log"]
  }
}
to have Logstash process all files in the /some/path directory (you'd probably want to assign a type to the messages too).
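For example, a type could be assigned like this (the value "ziplog" is just a placeholder):
input {
  file {
    path => ["/some/path/*.log"]
    type => "ziplog"
  }
}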
If the names of the files inside the zip archives aren't unique you'll have to make sure you don't overwrite existing files when you unzip the archives. Or, probably better, extract each archive into a directory of its own and have Logstash process the whole tree:
input {
  file {
    path => ["/some/path/*/*.log"]
  }
}
Logstash isn't capable of deleting processed logs or moving them out of the way, so that's something you'll have to take care of too.
(Logstash 1.4.2 on Windows)
In our system, a "product" is a high level grouping of related web applications. Each web application is configured to write a dedicated log file, named after the application name (e.g. MyProduct.ApplicationA.log and MyProduct.ApplicationB.log). All web applications for a given product write their log files to the same folder (c:\Logs\MyProduct\; c:\Logs\MyOtherProduct).
I need to set up logstash to monitor all log files for all applications for all products. I had hoped to use:
input {
  file {
    path => "c:/Logs/**/*.log"
    exclude => ["Info.*", "Warn.*", "Error.*"]
    sincedb_path => "c:/logstash/.sincedb"
    sincedb_write_interval => 1
  }
}
On first run, I can see lots of output going to stdout output, which I presume is what the docs refer to as "first contact".
Once all the log files (from more than one application) have been initially parsed, new log entries generated by the applications appear to be picked up and output. All is well.
However, if I restart logstash, ALL the log files seem to be parsed again, as if sincedb is not honoured. I have looked at the other SO questions describing similar experiences of duplicates and reparsing (e.g. logstash + elasticsearch : reloads the same data); however, I believe I have extra information that may indicate that I am actually using the file input incorrectly.
If I instead setup multiple file inputs like so:
file {
  path => "c:/Logs/MyProduct/MyProduct.ApplicationA.log"
  exclude => ["Info.*", "Warn.*", "Error.*"]
  sincedb_path => "c:/logstash/.sincedb_A"
  sincedb_write_interval => 1
}
file {
  path => "c:/Logs/MyProduct/MyProduct.ApplicationB.log"
  exclude => ["Info.*", "Warn.*", "Error.*"]
  sincedb_path => "c:/logstash/.sincedb_B"
  sincedb_write_interval => 1
}
Then restarts of logstash do not reparse existing files and do honour the sincedb for each logical grouping. This leads me to believe that perhaps I have been thinking about the file input in the wrong way: will I have to configure an individual file input for each application?
(Looking at the content of the sincedb file, there is only ever a single line, e.g.
0 0 2 661042
and it becomes obvious that multiple files cannot be tracked.)
Am I missing something that would allow me to have a single generic glob-style declaration, without needing individual per-application configuration?
Looks like you're running into a known sincedb bug on Windows.
Your workaround of adding a file {} block with separate sincedb_path for each file is probably the best solution until the bug is fixed.
Currently I am using the file input plugin to go over my log archive, but it is not the right solution for me because the file input plugin inherently treats a file as a stream of events rather than as a static file. This is causing a great deal of trouble because my log archive has 100,000+ log files, and logstash opens a handle on all of these files even though they are never going to change.
I am facing the following problems:
1) Logstash fails with the problem mentioned in the SO question.
2) With that many open file handles, the log archive storage is getting very slow.
Does anybody know a way to tell logstash to treat these files as static, or to stop keeping a file handle on a file once it has been processed?
In the logstash Jira issue I was told to write my own plugin, along with some other suggestions that won't help me much.
The Logstash file input can process static files. You need to add this configuration:
file {
  path => "/your/logs/path"
  start_position => "beginning"
}
After adding start_position, logstash reads the file from the beginning. Please refer here for more information. Remember that this option only modifies "first contact" situations where a file is new and not seen before; if a file has already been seen, this option has no effect. Otherwise, you have to set your sincedb_path to /dev/null.
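For example, a sketch combining the two (assuming it is acceptable that Logstash never remembers how far it has read):
file {
  path => "/your/logs/path"
  start_position => "beginning"
  # do not persist read offsets between restarts
  sincedb_path => "/dev/null"
}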
For the first problem, I have answered in the comment: please try to increase the maximum number of open files.
As for my suggestion, you can try to write a script that copies the log files into the path Logstash monitors and constantly moves them out again. You have to estimate the time Logstash takes to process a log file.
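A rough shell sketch of that idea; the archive path and the sleep are assumptions you would need to adjust:
#!/bin/sh
# Sketch only: feed archived logs to Logstash one at a time instead of all at once.
ARCHIVE=/log/archive        # hypothetical location of the static log archive
WATCH=/your/logs/path       # directory the Logstash file input monitors
for f in "$ARCHIVE"/*.log; do
    cp "$f" "$WATCH"/
    sleep 60                # estimated time for Logstash to process one file
    rm "$WATCH"/"$(basename "$f")"
done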
Also look out for this; turn on -v and --debug for logstash:
{:timestamp=>"2016-05-06T18:47:35.896000+0530",
:message=>"_discover_file: /datafiles/server.log:
**skipping because it was last modified more than 86400.0 seconds ago**",
:level=>:debug, :file=>"filewatch/watch.rb", :line=>"330",
:method=>"_discover_file"}
The solution is to touch the file or change the ignore_older setting.
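If you go the ignore_older route, a sketch could look like this; the value (in seconds) is an assumption and should be larger than the age of your oldest file:
file {
  path => "/datafiles/server.log"
  start_position => "beginning"
  # the debug output above shows a cutoff of 86400 seconds; raise it so older files are still picked up
  ignore_older => 2592000
}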