Using Log Parser Studio and UrlScan Logs (IIS)

I'm trying to view UrlScan 3.1 Logs in Log Parser Studio 2. I have tried selecting urlscanlog as the log type but no results are returned. I can use log parser directly from the command line and parse urlscan logs without any problem. Log Parser Studio is also working fine with IIS logs. Can anyone else open UrlScan Logs in Log Parser Studio? If so, what settings are being used?

First of all, select TSVLOG as the log type. Then click the gear symbol to the right of the log type. Change iSeparator to space, set nSkipLines to 4, and point the header file at a new file that contains only:
Date Time c-ip s-siteid cs-method cs-uri x-action x-reason x-context cs-data x-control
This tells Log Parser that the separator between fields is a space, that there are 4 lines at the start of the log file to skip, and that the header file lists the name of each field in the log file.
Once this was done I was able to query UrlScan logs in Log Parser Studio without a problem.
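For reference, once those settings are in place, a simple query such as the one below works in Log Parser Studio (a minimal sketch: the field names are the ones from the header file above, and '[LOGFILEPATH]' is Log Parser Studio's placeholder for the log files selected in the query):
SELECT Date, Time, c-ip, cs-uri, x-action, x-reason
FROM '[LOGFILEPATH]'
ORDER BY Date, Time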

Related

Generate auto increment sequence in logstash

I am pushing logs to Elasticsearch from Logstash and then need to get the logs back in the order they were written. Sorting by timestamp does not help because there can be multiple log statements with the same timestamp. I followed the solution in Include monotonically increasing value in logstash field? and it worked perfectly on my Windows system.
But when the code was moved to the Linux production environment, Logstash would not start up, failing with the error below:
reason=>"Couldn't find any filter plugin named 'seq'. Are you sure
this is correct? Trying to load the seq filter plugin resulted in this
error: no such file to load -- logstash/filters/seq", :level=>:error}
Check that the seq.rb file is in the filters folder.
Also check that the line endings of your seq.rb are Unix-style (LF). If you transferred the file from a Windows machine to a Linux one, the problem might come from there.
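As a quick sketch of how to check both points from the shell (the path below is just an example of wherever your plugin folder lives, and dos2unix may need to be installed separately):
file logstash/filters/seq.rb       # "with CRLF line terminators" means Windows line endings
dos2unix logstash/filters/seq.rb   # convert CRLF to LF (Unix) line endings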

How to add a log file and logging to it

Okay, so I've gone through many online instructions on how to add a log file, but none of them seem to log anything when I use the command:
logger hello world
I created the log file at:
/var/log/local3.log
Now I want to log everything from the local3 facility, at all severities, to that file. Following what some sites suggested, I edited /etc/rsyslog.conf and added the line:
local3.* /var/log/local3.log
But nothing gets written to it, either at boot or when I run logger commands; the file never updates with the date, time, and message. I've already set up my logrotate config properly with weekly rotation, 8 weeks of retention, create, and dateext. I still can't get it to work, so I'm wondering whether I'm editing the wrong syslog file or giving it the wrong command.
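For reference, a minimal way to test a rule like this is sketched below (assuming a systemd-based distro; the message text is arbitrary). Note that a bare logger hello world goes to the default user facility, not local3:
sudo systemctl restart rsyslog        # pick up the new rule in /etc/rsyslog.conf
logger -p local3.info "hello world"   # send a test message to the local3 facility
tail /var/log/local3.log              # the test message should show up here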

Best way to manually periodically import log files into Graylog using logstash

I'm currently using logstash to import dozens of log files from different webapps into Graylog. It works great: the files are tagged so I know which webapp they originate from.
I can't change the webapps, so I can't add a GELF appender to their log4j configuration. The idea is to periodically retrieve the log files, parse them, and import them into Graylog with logstash.
My problem is making sure I don't import a log event I've already imported.
For example, I have log files that follow a rotating pattern: log.1, log.2, etc. So a log event could be in log.1 the first time and, when I reimport two weeks later, it might have moved to log.3.
I'm afraid I can't handle that with logstash's file input "sincedb_path" and "start_position".
So here are a few options I've gathered, and I'd like your input on them if anyone has run into the same issue:
Use a logstash filter that drops all events before a certain date (sketched after this list); this requires keeping an index of the last log date of every file imported (potentially 50+) and a lot of configuration writing.
Use a Drools rule in Graylog to refuse logs with timestamps prior to the last log received for a given type.
Ask for the log pattern to be changed to something like log.date instead of a pattern that renames files (but I'd rather avoid this one).
Any other ideas?
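For the first option, a minimal sketch of such a filter might look like the following (assuming an older Logstash event API where event['field'] works; the cutoff date is hardcoded here, whereas in practice it would have to come from the per-file index of last imported dates mentioned above):
filter {
  ruby {
    # Drop any event older than the cutoff; Time.utc(2015, 1, 1) is a placeholder
    # for the last imported date of the current file.
    code => "event.cancel if event['@timestamp'].time < Time.utc(2015, 1, 1)"
  }
}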

Old logs are not imported into ES by logstash

When I start logstash, the old logs are not imported into ES.
Only the new request logs are recorded in ES.
Now I've seen this in the docs.
Even if I set the start_position=>"beginning", old logs are not inserted.
This only happens when I run logstash on Linux.
On Windows, with the same config, the old logs are imported, and I don't even need to set start_position=>"beginning".
Any idea about this?
When Logstash reads an input log file, it keeps a record of the position it has read up to in that file; this is called the sincedb.
Where to write the sincedb database (keeps track of the current position of monitored log files).
The default will write sincedb files to some path matching "$HOME/.sincedb*"
So, if you want to import old log files, you must delete all the .sincedb* files in your $HOME.
Then you need to set
start_position=>"beginning"
in your configuration file.
Hope this helps.
Please see this line also.
This option only modifies "first contact" situations where a file is new and not seen before. If a file has already been seen before, this option has no effect.
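Putting it together, a file input for re-importing old logs might look like this (a sketch: the path is a placeholder, and sincedb_path => "/dev/null" is an optional trick to make Logstash forget read positions entirely instead of deleting the $HOME/.sincedb* files by hand):
input {
  file {
    path => "/var/log/myapp/*.log"      # placeholder for your log files
    start_position => "beginning"       # only affects files not already in the sincedb
    sincedb_path => "/dev/null"         # optional: never remember read positions
  }
}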

Splunk rewrites xml input incorrectly

I have a number of applications that I want to log to Splunk. I will be sending the data in an XML format via a UDP listener. The data that is being sent looks like:
<log4j:event logger="ASP.global_asax" level="INFO" timestamp="1303830487907" thread="15">
<log4j:message>New session started</log4j:message>
<log4j:properties>
<log4j:data name="log4japp" value="4ef113dd-9-129483040292873753(4644)" />
<log4j:data name="log4jmachinename" value="W7-SUN-JSTANTON" />
</log4j:properties>
</log4j:event>
However when it is processed by Splunk it appears like:
Apr 26 16:18:09 127.0.0.1 <log4j:message>New session started</log4j:message><log4j:properties><log4j:data name="log4japp" value="4ef113dd-9-129483040292873753(4644)"/><log4j:data name="log4jmachinename" value="W7-SUN-JSTANTON"/></log4j:properties></log4j:event>
Basically it looks like Splunk has overwritten the opening node with the date/time at which it received the event, losing the log level data as a result. The applications sending the data use NLog with a log4j-type target (with a Log4JXmlEventLayout layout). I have configured the sourcetype as log4jxml (a custom name), but I think I need to tell Splunk not to do something with the date/time field in props.conf (I'm just not sure what that something is).
I am also using the Windows version of Splunk, so the file paths are slightly different from those in the online manuals.
Any help would be most welcome.
It turns out I was doing 2 things wrong (maybe more, but I have not found those yet).
In the inputs.conf file I needed to add the following to my input definition:
no_priority_stripping = true
no_appending_timestamp = true
The second thing I was doing wrong was to put these files in
C:\Program Files\Splunk\etc\system\local\
when they SHOULD have been put in
C:\Program Files\Splunk\etc\apps\search\local\
I hope that this helps somebody else out
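Putting both fixes together, the input definition ends up looking something like the sketch below (the UDP port is just an example; the sourcetype is the custom log4jxml name mentioned above):
# C:\Program Files\Splunk\etc\apps\search\local\inputs.conf
[udp://514]
sourcetype = log4jxml
no_priority_stripping = true
no_appending_timestamp = true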
