I'm configuring centralized logging with rsyslog.
I have to specify an input file using some kind of wildcard, but I can't find any examples of how to get it working; in the official documentation here, the link with an exact description seems to be broken.
I'm trying to log tomcat7 log files with names like localhost_access_log.2015-07-15.txt.
The date in the file name changes every day.
What I want to get is some kind of input(type="imfile" ...)
I tried it with:
input(type="imfile" tag="access_log" statefile="tomcat-access-log"
file="/var/log/tomcat7/localhost_access_log.*.txt")
but this is not working, and I can't see what I'm doing wrong.
Here is my full code:
$ModLoad imfile
$PrivDropToGroup adm
$WorkDirectory /var/spool/rsyslog
# catalina.log
$InputFileName /var/log/tomcat7/catalina.log
$InputFileTag catalina-log
$InputFileStateFile stat-catalina-log
$InputFileSeverity info
$InputRunFileMonitor
# localhost_access_log.YYYY.MM.DD.txt
input(type="imfile" tag="access_log" statefile="tomcat-access-log" file="/var/log/tomcat7/localhost_access_log.*.txt")
The catalina logs work as they are supposed to; however, I'm not getting any access logs in my output.
Any help would be appreciated. Please point out if I'm doing something completely wrong or if there is a better way to do this.
Wildcards in imfile only work as of rsyslog v8.5 or newer (not v7), and only when using inotify; see here for a presentation explaining the requirements.
I forced inotify (although it is the default) with:
module(load="imfile"
mode="inotify"
)
The input is defined like this:
input(type="imfile"
File="/file/path/*.log"
Tag="taskproject:"
Facility="local3"
)
After this, it should work.
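Applied to the question's Tomcat logs, a minimal sketch might look like this (assuming rsyslog v8.5+; the tag is illustrative, and state files are managed automatically for wildcard inputs in v8, so no statefile parameter is given):
module(load="imfile" mode="inotify")
# the wildcard picks up localhost_access_log.2015-07-15.txt and each day's successor
input(type="imfile"
      File="/var/log/tomcat7/localhost_access_log.*.txt"
      Tag="access_log:"
      Severity="info")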
Sorry for the newbie question, as I am very new to Linux. Please consider the Linux command below:
/opt/mongodb-mms-automation/bin/mongodb-mms-automation-agent
-f /etc/mongodb-mms/automation-agent.config
-pidfilepath /var/run/mongodb-mms-automation/mongodb-mms-automation-agent.pid
>> /var/log/mongodb-mms-automation/automation-agent-fatal.log 2>&1
According to my understanding, >> redirects standard output to the file, and 2>&1 means that standard error is redirected to the same location as standard output. So in the above case I expect both standard output and standard error to be redirected to /var/log/mongodb-mms-automation/automation-agent-fatal.log.
But obviously this is not the case: I can see that all info/error messages are being written to /var/log/mongodb-mms-automation/automation-agent.log instead. Can someone please explain what error I am making in reading this command?
Regards,
Meena
Standard output and standard error are just default destinations; the program could be doing any number of things that will sabotage attempts to capture its logs by redirecting them to a file:
It writes straight to the terminal output, such as /dev/pts/0.
It detects whether standard output/error are connected to a file or a terminal, and changes behaviour accordingly.
Anything else the application developer considered to be the most useful behaviour.
In other words, it's application-specific. You're probably better off finding the log-file configuration setting and changing that if you really need to. Usually I find it's easier and safer to leave the defaults in place (they may be there for good reasons, such as security sandboxing) and instead point whatever software needs to process that file at the default location.
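To illustrate the first point, a minimal sketch: even with both streams redirected, anything a command writes straight to the controlling terminal bypasses the redirection entirely:
{ echo "via stdout"; echo "via the terminal" > /dev/tty; } >> demo.log 2>&1
# demo.log now contains only "via stdout"; the second line still appears on screen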
Okay, so I've gone through many online instructions on how to add a log file, but none of them seem to log anything when I use the command:
logger hello world
I added a log file, local3.log, at
/var/log/local3.log
Now I want to log the entire local3 facility, at all severities, to it. Following what some sites suggested, I went into /etc/rsyslog.conf and added the line:
local3.* /var/log/local3.log
but whenever anything boots up, or with any logger command I give it, the file never updates with the time, date, and all that. I've already set up my logrotate file properly with weekly rotation over 8 weeks, plus create and dateext. I still can't get it to work, so I suspect I'm editing the wrong syslog file or giving it the wrong directive.
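For what it's worth, a minimal way to test that line (a sketch, assuming rsyslog runs under systemd): logger sends to the user facility by default, so the test message has to target local3 explicitly, and rsyslog has to be restarted to pick up the edited config:
sudo systemctl restart rsyslog          # reload the changed /etc/rsyslog.conf
logger -p local3.info "hello world"     # explicitly target the local3 facility
tail /var/log/local3.log                # the test message should appear here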
Currently I am using the file input plugin to go over my log archive, but it is not the right solution for me because the file input plugin inherently expects a file to be a stream of events, not a static file. This is causing a great deal of trouble, because my log archive has 100,000+ log files and logstash opens a handle on every one of them, even though they are never going to change.
I am facing the following problems:
1) Logstash fails with the problem mentioned in SO.
2) With that many open file handles, the log archive storage becomes very slow.
Does anybody know a way to tell logstash to treat files as static, or to stop keeping a file handle once a file has been processed?
In the logstash Jira bug, I was told to write my own plugin, along with some other suggestions that won't help me much.
The Logstash file input can process a static file; you need to add this configuration:
file {
    path => "/your/logs/path"
    start_position => "beginning"
}
After adding start_position, logstash reads the file from the beginning. Please refer here for more information. Remember that this option only modifies "first contact" situations, where a file is new and has not been seen before; if a file has already been seen, the option has no effect. Otherwise you have to set your sincedb_path to /dev/null.
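Putting the two settings together, a minimal sketch (the path is a placeholder):
file {
    path => "/your/logs/path/*.log"
    start_position => "beginning"   # read newly discovered files from the start
    sincedb_path => "/dev/null"     # never persist positions, so files always count as unseen
}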
For the first problem, I have an answer in the comments: try raising the maximum number of open files.
As a further suggestion, you can write a script that copies log files into the path logstash monitors and constantly moves them back out once processed; you will have to estimate how long logstash takes to process one log file.
Also look out for this; turn on -v and --debug for logstash:
{:timestamp=>"2016-05-06T18:47:35.896000+0530",
:message=>"_discover_file: /datafiles/server.log:
skipping because it was last modified more than 86400.0 seconds ago",
:level=>:debug, :file=>"filewatch/watch.rb", :line=>"330",
:method=>"_discover_file"}
The solution is to touch the file or to change the ignore_older setting.
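A sketch of both workarounds (86400 seconds is the one-day cutoff from the debug message above; the replacement value is just an example):
# option 1: refresh the file's modification time so discovery picks it up again
touch /datafiles/server.log
# option 2: raise the cutoff in the file input
file {
    path => "/datafiles/server.log"
    ignore_older => 864000   # ten days instead of the one-day default
}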
When I start logstash, the old logs are not imported into ES.
Only the new request logs are recorded in ES.
Now, I've seen this in the docs.
Even if I set the start_position=>"beginning", old logs are not inserted.
This only happens when I run logstash on Linux.
If I run it on Windows with the same config, the old logs are imported; I don't even need to set start_position=>"beginning" on Windows.
Any idea about this?
When Logstash reads an input log, it keeps a record of the position it has reached in the file; that record is called the sincedb. From the docs:
Where to write the sincedb database (keeps track of the current position of monitored log files).
The default will write sincedb files to some path matching "$HOME/.sincedb*"
So, if you want to import old log files, you must delete all the .sincedb* files under your $HOME.
Then, you need to set
start_position=>"beginning"
at your configuration file.
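Concretely, a minimal sketch (assuming the default sincedb location; the config file name is a placeholder):
rm -f "$HOME"/.sincedb*                 # forget all remembered read positions
bin/logstash -f your-logstash.conf      # old files are now read from the beginning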
Hope this can help you.
Please see this line also.
This option only modifies "first contact" situations where a file is new and not seen before. If a file has already been seen before, this option has no effect.
I have a number of applications that I want to log to Splunk. I will be sending the data in an XML format via a UDP listener. The data that is being sent looks like:
<log4j:event logger="ASP.global_asax" level="INFO" timestamp="1303830487907" thread="15">
<log4j:message>New session started</log4j:message>
<log4j:properties>
<log4j:data name="log4japp" value="4ef113dd-9-129483040292873753(4644)" />
<log4j:data name="log4jmachinename" value="W7-SUN-JSTANTON" />
</log4j:properties>
</log4j:event>
However, when it is processed by Splunk, it appears like:
Apr 26 16:18:09 127.0.0.1 <log4j:message>New session started</log4j:message><log4j:properties><log4j:data name="log4japp" value="4ef113dd-9-129483040292873753(4644)"/><log4j:data name="log4jmachinename" value="W7-SUN-JSTANTON"/></log4j:properties></log4j:event>
Basically it looks like Splunk has overwritten the opening node with the datetime at which it received the event, losing the log-level data as a result. The applications sending it use NLog with a log4j-type target (with a Log4JXmlEventLayout layout). I have configured the sourcetype as log4jxml (a custom name), but I think I need to tell Splunk not to do something with the date/time field in the props.conf file (I'm just not sure what that something is).
I am also using the Windows version of Splunk, so the file paths are slightly different from those in the online manuals.
Any help would be most welcome.
It turns out I was doing two things wrong (maybe more, but I have not found those yet).
In the inputs.conf file I needed to add the following to my input definition:
no_priority_stripping = true
no_appending_timestamp = true
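In context, the input stanza looks something like this (a sketch; the UDP port and sourcetype are from my setup and may differ in yours):
[udp://514]
sourcetype = log4jxml
no_priority_stripping = true
no_appending_timestamp = true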
The second thing I was doing wrong was to put these files in
C:\Program Files\Splunk\etc\system\local\
when they SHOULD have been put in
C:\Program Files\Splunk\etc\apps\search\local\
I hope that this helps somebody else out.