IIS Logs and Event Logs - logstash

First off, thank you for any advice and your time.
I recently set up an ELK stack for the company I just started working for. (This is my first experience using Logstash and NXLog.) What I would like to do is send both IIS logs and Windows Event Logs from the same web server to Logstash using NXLog.
I just don't understand how to send two types of logs from one source and have the logstash.conf filter this data correctly.
This is my nxlog.conf
## This is a sample configuration file. See the nxlog reference manual about the
## configuration options. It should be installed locally and is also available
## online at http://nxlog.org/nxlog-docs/en/nxlog-reference-manual.html
## Please set the ROOT to the folder your nxlog was installed into,
## otherwise it will not start.
#define ROOT C:\Program Files\nxlog
define ROOT C:\Program Files (x86)\nxlog
Moduledir %ROOT%\modules
CacheDir %ROOT%\data
Pidfile %ROOT%\data\nxlog.pid
SpoolDir %ROOT%\data
LogFile %ROOT%\data\nxlog.log
<Extension json>
Module xm_json
</Extension>
<Input iis_1>
Module im_file
File "F:\inetpub\logs\LogFiles\W3SVC1\u_ex*.log"
ReadFromLast True
SavePos True
Exec if $raw_event =~ /^#/ drop();
</Input>
<Input iis_2>
Module im_file
File "F:\inetpub\logs\LogFiles\W3SVC2\u_ex*.log"
ReadFromLast True
SavePos True
Exec if $raw_event =~ /^#/ drop();
</Input>
<Input iis_4>
Module im_file
File "F:\inetpub\logs\LogFiles\W3SVC4\u_ex*.log"
ReadFromLast True
SavePos True
Exec if $raw_event =~ /^#/ drop();
</Input>
<Input eventlog>
Module im_msvistalog
Exec $EventReceivedTime = integer($EventReceivedTime) / 1000000; to_json();
</Input>
<Output out_iis>
Module om_tcp
Host 10.191.132.86
Port 5555
OutputType LineBased
</Output>
<Route 1>
Path iis_1, iis_2, iis_4, eventlog => out_iis
</Route>
My current logstash.conf:
input {
tcp {
type => "iis"
port => 5555
host => "10.191.132.86"
}
}
filter {
if [type] == "iis" {
grok {
match => ["#message", "%{TIMESTAMP_ISO8601:timestamp} %{IPORHOST:hostip} %{WORD:method} %{URIPATH:page} %{NOTSPACE:query} %{NUMBER:port} %{NOTSPACE:username} %{IPORHOST:clientip} %{NOTSPACE:useragent} %{NOTSPACE:referrer} %{NUMBER:response} %{NUMBER:subresponse} %{NUMBER:scstatus} %{NUMBER:timetaken}"]
}
}
}
output {
elasticsearch {
protocol => "http"
host => "10.191.132.86"
port => "9200"
}
}
It looks like you can filter different data by setting the type and branching on it with if/else in the filter. But if the logs are coming from the same source, how do I assign different types?
:) Thanks!

NXLog sets the field SourceModuleName to the name of the input (iis_1, iis_2, etc.). You may want to use this instead.
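For example, a sketch of a Logstash filter keyed off that field. This assumes the NXLog side serializes events to JSON (e.g. Exec to_json(); in the output) and the Logstash side decodes them (json codec or filter), so SourceModuleName arrives as an event field:
filter {
  if [SourceModuleName] =~ /^iis_/ {
    # anything coming from an iis_* file input is IIS
    mutate { replace => { "type" => "iis" } }
  } else if [SourceModuleName] == "eventlog" {
    mutate { replace => { "type" => "eventlog" } }
  }
}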

A way to do this is to filter on a field that exists in one type of log but won't exist in the other, for example [cs_bytes]:
if [cs_bytes] {
mutate { replace => { "type" => "iis" } }
} else {
mutate { replace => { "type" => "eventlog" } }
}
I have written an IIS and Event Log agent that captures logs for Logit.io; it might already do everything you want.

Related

logstash custom patterns don't get resolved

I'm trying to set up an environment for grok debugging and built it with Docker.
Everything works fine until Logstash tries to resolve a custom pattern.
Here is my environment.
I start the Docker container with:
docker run -it --name logstash_debug -v
/home/cloud/docker-elk/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
-v /home/cloud/docker-elk/logstash/pipeline/:/usr/share/logstash/pipeline/
-v /home/cloud/docker-elk/logstash/patterns/:/usr/share/logstash/patterns
docker.elastic.co/logstash/logstash:7.2.0
As I said, Logstash starts up and loads the pipeline (debug.conf):
input { stdin {} }
filter {
grok {
patterns_dir => ["/usr/share/logstash/patterns"]
match => ["message", "%{YEAR1} \[%{LOGLEVEL:loglvl}\] %{GREEDYDATA:message}"]
}
date {
match => ["customer_time", "${YEAR1}"]
target => "#timestamp"
}
}
output { stdout { codec => rubydebug } }
and gives me this error:
Cannot evaluate ${YEAR1}. Replacement variable YEAR1 is not
defined in a Logstash secret store or as an Environment entry and
there is no default value given.
The patterns_dir contains a file "dateformats" which contains (stripped down to a minimum):
YEAR1 %{YEAR}
The logstash debug output gives me this:
[DEBUG][logstash.filters.grok ] config
LogStash::Filters::Grok/@patterns_dir =
["/usr/share/logstash/patterns"]
[DEBUG][logstash.filters.grok ] config
LogStash::Filters::Grok/@match = {"message"=>"%{YEAR1}
\[%{LOGLEVEL:loglvl}\] %{GREEDYDATA:message}"}
.....
[DEBUG][logstash.filters.grok ] config
LogStash::Filters::Grok/@patterns_files_glob = "*"
Normally Logstash should be able to grab this file (I even started the container with --user 0 to be sure that I have no permission problem), but it somehow can't.
Can anyone give me a hint as to what's going on?
Thanks and cheers,
Wurzelseppi

Logstash doesn't read from configured input file

I am trying to configure my Logstash to read from a specified log file. When I configure it to read from stdin it works as expected: my input results in a message from Logstash and displays in my Kibana UI.
$ cat /tmp/logstash-stdin.conf
input {
stdin {}
}
output {
elasticsearch { hosts => ["localhost:9200"] }
stdout { codec => rubydebug }
}
$ ./logstash -f /tmp/logstash-stdin.conf
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
The stdin plugin is now waiting for input:
hellloooo
{
"#version" => "1",
"host" => "myhost.com",
"#timestamp" => 2017-11-17T16:05:41.595Z,
"message" => "hellloooo"
}
However, when I run Logstash with a file input I get no indication that the file is loaded into Logstash, and it does not show in Kibana.
$ cat /tmp/logstash-simple.conf
input {
file {
path => "/tmp/test_log.txt"
type => "syslog"
}
}
output {
elasticsearch { hosts => ["localhost:9200"] }
stdout { codec => rubydebug }
}
$ ./logstash -f /tmp/logstash-simple.conf
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
Any suggestions on how I can troubleshoot why Logstash is not ingesting the configured file?
By default the file input plugin starts reading at the end of the file, so only lines added after Logstash starts will be processed. To read all existing lines on startup, add the option "start_position" => "beginning" to the configuration, as explained in the documentation.
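For instance, a sketch of the file input from the question with that option added (note that start_position only affects files Logstash has not seen before; it is ignored once a sincedb entry exists):
input {
  file {
    path => "/tmp/test_log.txt"
    type => "syslog"
    # read existing lines from the top instead of only new ones
    start_position => "beginning"
  }
}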

Can Fluentd send logs to Logstash?

I've been trying to do this all day. I want to send logs from Docker to Fluentd via the fluentd logging driver, and then have Fluentd forward those logs to Logstash for processing.
I keep getting this error from Logstash, though:
{:timestamp=>"2016-03-09T23:29:19.388000+0000",
:message=>"An error occurred. Closing connection",
:client=>"172.18.0.1:57259", :exception=>#<TypeError: can't convert String into Integer>,
:backtrace=>["org/jruby/RubyTime.java:1073:in `at'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-event-2.2.2-java/lib/logstash/timestamp.rb:27:in `at'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-codec-fluent-2.0.2-java/lib/logstash/codecs/fluent.rb:41:in `decode'",
"org/msgpack/jruby/MessagePackLibrary.java:195:in `each'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-codec-fluent-2.0.2-java/lib/logstash/codecs/fluent.rb:40:in `decode'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-tcp-3.0.2/lib/logstash/inputs/tcp.rb:153:in `handle_socket'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-tcp-3.0.2/lib/logstash/inputs/tcp.rb:143:in `server_connection_thread'"], :level=>:error}
fairly basic logstash config:
input {
tcp {
port => 4000
codec => "fluent"
}
}
output {
stdout {
}
}
fairly basic fluentd config:
<source>
@type forward
</source>
<match docker.json>
@type forward
send_timeout 60s
recover_wait 10s
heartbeat_type none
phi_threshold 16
hard_timeout 60s
<server>
name logstash
host 172.18.0.2
port 4000
weight 60
</server>
</match>
<match docker.**>
@type stdout
</match>
One would think this would work, but I've already found that:
Logstash won't work with fluentd's out_forward heartbeat configuration.
Logstash doesn't open a UDP port on the same port as the TCP one.
And there is the above error.
The above configuration does work if I craft Fluentd MessagePack messages in Ruby and send them manually. The key, though, is that I want Fluentd to manage the logs locally and send them to an external Logstash server that processes the messages correctly into JSON.
We found a way to make fluentd -> logstash work: set time_as_integer to true. A minimal configuration on the fluentd side would be:
<source>
@type http
@id input_http
port 8888
</source>
<match **>
@type forward
time_as_integer true
<server>
host localhost
port 24114
</server>
</match>
This is mentioned, quite hidden, in https://docs.fluentd.org/v0.12/articles/in_forward#i-got-messagepackunknownexttypeerror-error-why.
On the Logstash side, use a recent release (6.2.4), then simply configure a tcp input with the fluent codec, like this:
input {
tcp {
codec => fluent
port => 24114
}
}
filter {
}
output {
stdout { codec => rubydebug }
}
test with
curl -X POST -d 'json={"json":"message"}' http://localhost:8888/debug.test
as in the documentation. With the time_as_integer setting, the Logstash output will look nice, like this:
{
"port" => 32844,
"#version" => "1",
"host" => "localhost",
"json" => "message",
"#timestamp" => 2018-04-26T15:14:28.000Z,
"tags" => [
[0] "debug.test"
]
}
Without it, I get
[2018-04-26T15:16:00,115][ERROR][logstash.codecs.fluent ] Fluent parse error, original data now in message field {:error=>#<MessagePack::UnknownExtTypeError: unexpected extension type>, :data=>["fluent.info", "\x92\xD7\u0000Z\xE1\xEC\xF4\u0006$\x96傦worker\u0000\xA7message\xD9&fluentd worker is now running worker=0", {"size"=>1, "compressed"=>"text"}]}
{
"port" => 32972,
"#version" => "1",
"message" => [
[0] "fluent.info",
[1] "\x92\xD7\u0000Z\xE1\xEC\xF4\u0006$\x96傦worker\u0000\xA7message\xD9&fluentd worker is now running worker=0",
[2] {
"size" => 1,
"compressed" => "text"
}
],
"host" => "localhost",
"#timestamp" => 2018-04-26T15:16:00.116Z,
"tags" => [
[0] "_fluentparsefailure"
]
}
AFAIK, there's no ready-made way to transport data from Fluentd to Logstash. You would need to write a Fluentd output plugin to send data to Logstash, or a Logstash input plugin to receive data from Fluentd.
FYI: there are some plugins for the Logstash -> Fluentd direction:
fluent-plugin-beats (fluentd input plugin for Elastic beats protocol)
logstash-output-fluentd (logstash output plugin to send data to Fluentd)
You can forward it directly to a Logstash tcp input.
This open-source fluentd output plugin will send the data directly to a Logstash tcp input (or any other receiver) in JSON format (it also supports SSL/TLS).
Seen first at this question.
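A minimal sketch of what the Logstash side of such a JSON-over-TCP setup could look like (the port number here is arbitrary, and json_lines assumes the sender emits one JSON document per line):
input {
  tcp {
    port => 5170
    # decode newline-delimited JSON coming from the fluentd output plugin
    codec => json_lines
  }
}
output {
  stdout { codec => rubydebug }
}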

Logstash custom log parsing

I need your help with custom log parsing through Logstash.
Here is the log format that I am trying to parse:
2015-11-01 07:55:18,952 [abc.xyz.com] - /Enter, G, _null, 2702, 2, 2, 2, 2, PageTotal_1449647718950_1449647718952_2_App_e9c00521-eeec-4d47-bf5b-b842ec14a4ff_178.255.153.2___, , , NEW,
And my Logstash conf file looks like this:
input {
file {
path => [ "/tmp/access.log" ]
}
}
filter{
grok {
match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{GREEDYDATA:message}" }
}
date {
match => ["timestamp","yyyy-MM-dd HH:mm:ss,SSSS"]
}
}
For some reason, running the Logstash command with this conf file doesn't parse the logs; I'm not sure what's wrong with the config. Any help would be highly appreciated.
bin/logstash -f conf/access_log.conf
Settings: Default filter workers: 6
Logstash startup completed
I have checked your grok match filter and it is fine, verified with the
Grok Debugger
You don't have to use the date matcher because the grok matcher already correctly matches the TIMESTAMP_ISO8601 timestamp.
I think your problem is with the sincedb file.
Here is the documentation:
sincedb_path
In a few words, Logstash remembers whether a file has already been read and doesn't read it again. Logstash knows which files were already read because it writes their position to the sincedb database.
If you would like to test your filter by always re-reading the same file, you could try:
input {
file {
path => [ "/tmp/access.log" ]
sincedb_path => "/dev/null"
}
}
Regards

How to implement unit or integration tests for a Logstash configuration?

With Logstash 1.2.1 one can now use conditionals to do various things. Even an earlier version's conf file can get complicated if one is managing many log files and implementing metric extraction.
After looking at this comprehensive example, I really wondered: how can I detect any breakages in such a configuration?
Any ideas?
For a syntax check, there is --configtest:
java -jar logstash.jar agent --configtest --config <yourconfigfile>
To test the logic of the configuration you can write rspec tests. This is an example rspec file to test a haproxy log filter:
require "test_utils"
describe "haproxy logs" do
extend LogStash::RSpec
config <<-CONFIG
filter {
grok {
type => "haproxy"
add_tag => [ "HTTP_REQUEST" ]
pattern => "%{HAPROXYHTTP}"
}
date {
type => 'haproxy'
match => [ 'accept_date', 'dd/MMM/yyyy:HH:mm:ss.SSS' ]
}
}
CONFIG
sample({'@message' => '<150>Oct 8 08:46:47 syslog.host.net haproxy[13262]: 10.0.1.2:44799 [08/Oct/2013:08:46:44.256] frontend-name backend-name/server.host.net 0/0/0/1/2 404 1147 - - ---- 0/0/0/0/0 0/0 {client.host.net||||Apache-HttpClient/4.1.2 (java 1. 5)} {text/html;charset=utf-8|||} "GET /app/status HTTP/1.1"',
'@source_host' => '127.0.0.1',
'@type' => 'haproxy',
'@source' => 'tcp://127.0.0.1:60207/',
}) do
insist { subject["@fields"]["backend_name"] } == [ "backend-name" ]
insist { subject["@fields"]["http_request"] } == [ "/app/status" ]
insist { subject["tags"].include?("HTTP_REQUEST") }
insist { subject["@timestamp"] } == "2013-10-08T06:46:44.256Z"
reject { subject["@timestamp"] } == "2013-10-08T06:46:47Z"
end
end
This will, based on a given filter configuration, run input samples and test if the expected output is produced.
To run the test, save it as haproxy_spec.rb and run logstash rspec:
java -jar logstash.jar rspec haproxy_spec.rb
There are lots of spec examples in the Logstash source repository.
Since Logstash has been upgraded, the command is now something like this (point it at the config folder):
/opt/logstash/bin/logstash agent --configtest -f /etc/logstash/logstash-indexer/conf.d
If you see warnings but the error messages are mixed together and you can't tell which file has the issue, you have to check the files one by one.
