How to send selected logs to a logstash output - logstash

I am upgrading from logstash-1.1.3 to logstash-1.3.3.
The problem is that the tags and fields options for outputs that existed in 1.1.3 are deprecated in version 1.3.3. These allowed sending only those events to an output which had the given tags or contained the given fields.
I just want to know what replaces them in logstash-1.3.3. How do I get the same functionality of sending only selected events to an output? I don't want to send all the events to an output.

You can use an if statement (a conditional) to do this.
output {
  if [type] == "tech" {
    stdout {}
  }
}
This page has an introduction on how to configure conditionals.
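To reproduce the old tags/fields behaviour specifically, here is a minimal sketch (the tag name "tech_tag" and the field name "source_host" are only illustrative):
output {
  # only events that carry the tag AND contain the field reach this output
  if "tech_tag" in [tags] and [source_host] {
    stdout { codec => rubydebug }
  }
}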

Related

How to add bytes, session and source parameters in Kibana to visualise Suricata logs?

I redirected all the logs (Suricata logs here) to Logstash using rsyslog. I used the following rsyslog template:
template(name="json-template"
type="list") {
constant(value="{")
constant(value="\"#timestamp\":\"") property(name="timereported" dateFormat="rfc3339")
constant(value="\",\"#version\":\"1")
constant(value="\",\"message\":\"") property(name="msg" format="json")
constant(value="\",\"sysloghost\":\"") property(name="hostname")
constant(value="\",\"severity\":\"") property(name="syslogseverity-text")
constant(value="\",\"facility\":\"") property(name="syslogfacility-text")
constant(value="\",\"programname\":\"") property(name="programname")
constant(value="\",\"procid\":\"") property(name="procid")
constant(value="\"}\n")
}
For every incoming message, rsyslog interpolates the log properties into a JSON-formatted message and forwards it to Logstash, which listens on port 10514.
Reference link: https://devconnected.com/monitoring-linux-logs-with-kibana-and-rsyslog/
(I have also configured Logstash as mentioned in the reference link above.)
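For reference, the Logstash input from that tutorial looks roughly like this sketch (a UDP input with a JSON codec on port 10514; the host, codec and type values are the tutorial's defaults, not anything specific to my setup):
input {
  udp {
    host => "127.0.0.1"
    port => 10514
    codec => "json"
    type => "rsyslog"
  }
}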
I am getting all the columns in Kibana Discover (as defined in the rsyslog json-template), but I also need bytes, session and source columns in Kibana, which I am not getting.
Available fields (columns) in Kibana are:
#timestamp
#version
_type
facility
host
message
procid
programname
sysloghost
_id
_index
_score
severity
Please let me know how to add bytes, session and source to the available fields in Kibana. I need these parameters for further drill-down in Kibana.
EDIT: I have added below what my "/var/log/suricata/eve.json" looks like (which I need to visualize in Kibana).
For bytes, I will use (bytes_toserver + bytes_toclient), which are available inside flow.
Session I need to calculate.
Source_IP I will use as the source.
{"timestamp":"2020-05 04T14:16:55.000200+0530","flow_id":133378948976827,"event_type":"flow","src_ip":"0000:0000:0000:0000:0000:0000:0000:0000","dest_ip":"ff02:0000:0000:0000:0000:0001:ffe0:13f4","proto":"IPv6-ICMP","icmp_type":135,"icmp_code":0,"flow":{"pkts_toserver":1,"pkts_toclient":0,"bytes_toserver":87,"bytes_toclient":0,"start":"2020-05-04T14:16:23.184507+0530","end":"2020-05-04T14:16:23.184507+0530","age":0,"state":"new","reason":"timeout","alerted":false}}
Direct answer
Read the grok docs in detail.
Then head over to the grok debugger with some sample logs to figure out your expressions. (There is also a grok debugger built into Kibana's dev tools nowadays.)
This list of grok patterns might come in handy, too.
A better way
Use Suricata's JSON log instead of the syslog format, and use Filebeat instead of rsyslog. Filebeat has a Suricata module out of the box.
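If you go that route, the Suricata module configuration (modules.d/suricata.yml) would look roughly like this sketch, pointed at the eve.json file from the question:
- module: suricata
  eve:
    enabled: true
    var.paths: ["/var/log/suricata/eve.json"]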
Sidebar: Parsing JSON logs
In Logstash's filter config section:
filter {
  json {
    source => "message"
    # you probably don't need the "message" field if it parses OK
    #remove_field => "message"
  }
}
[Edit: added JSON parsing]
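Once the flow object is parsed, a minimal sketch for producing the bytes value asked about above (assuming the parsed counters end up in [flow][bytes_toserver] and [flow][bytes_toclient]; the target field name bytes is only illustrative):
filter {
  if [flow] {
    # sum the client and server byte counters into a single "bytes" field
    ruby {
      code => "event.set('bytes', event.get('[flow][bytes_toserver]').to_i + event.get('[flow][bytes_toclient]').to_i)"
    }
  }
}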

How to parse an Auditbeat "event.type" in Logstash?

I have an almost-default installation of Auditbeat on several of my hosts, which also audit changes to /etc and forward log data to a Logstash instance elsewhere. I want to generate a message based on these logs, because by default Auditbeat does not fill the message field with a value (it was moved to event.original, which is disabled anyway, and I want to stay as close to production as possible with my configs), so Kibana displays "failed to find message" when I try viewing logs from auditbeat-*. So I turned to parsing and adding fields to events with Logstash.
I have encountered an interesting issue: if I query anything that belongs to a custom tree under the JSON root other than event, my Logstash filters work, but as soon as I test [event][type], the result is always false. The odd part is that if I just put "%{[event][type]}" into my message, the value is there! I have tried if [event][type] == "info" {...}, if [type] == "info", and also if [event][action] == "change", to no avail, while a debug message built with "%{[event][type]} %{[event][action]}" shows both values present and equal to whatever I'm comparing against. Note that a filter on [event][module] actually works, so this behavior with [event][type] really baffles me.
So, how to filter based on [event][type] in Logstash, provided they are present in incoming data?
The answer was pretty simple: both event.type and event.action are arrays, not strings, so comparing an array to a string returned false. The proper way of filtering on them is to use in, like this:
if "info" in [event][type] {...}

Is it possible to pick only error entries from logfiles in logstash

I am using Logstash to monitor my production server logs, but it forwards everything from info to errors. What I want is for it to pick only the errors from the log file and show them in the Logstash/Kibana view.
After parsing your log using grok, you can use Logstash conditionals to check whether loglevel (or whatever your field name is) equals ERROR. If it does, forward the event to your output plugin:
output {
  if [loglevel] == "ERROR" {  # Send ERROR logs only
    elasticsearch {
      ...
    }
  }
}
If you are using Filebeat to ship logs, you can use processors to send only the logs that contain ERROR.
The contains condition checks if a value is part of a field. The field can be a string or an array of strings. The condition accepts only a string value.
For example, the following condition checks if an error is part of the transaction status:
contains:
  status: "Specific error"
Depending on your log format, you might be able to use one of the many conditions supported by Filebeat processors:
Each condition receives a field to compare. You can specify multiple fields under the same condition by using AND between the fields (for example, field1 AND field2).
For each field, you can specify a simple field name or a nested map, for example dns.question.name.
You can read more about Conditions here
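Putting it together, a minimal sketch of a Filebeat drop_event processor that keeps only lines containing ERROR (assuming the raw line ends up in the message field; adjust the field name to your log format):
processors:
  - drop_event:
      when:
        not:
          contains:
            message: "ERROR"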

Elasticsearch, Logstash, Kibana and Grok: How do I break apart the message?

I created a filter to break apart our log files and am having the following issue: I'm not able to figure out how to save the parts of the "message" to their own fields (or tags, or whatever you call them). I'm three days new to Logstash and have had zero luck finding someone here who knows it.
So, for example, let's say this is a line in your log file:
2017-12-05 [user:edjm1971] msg:This is a message from the system.
What you want to do is get the value of the user and set it into some index mapping so you can search for all logs by that user. You should also see the information from the message in its own fields in Kibana.
My pipeline.conf file for Logstash looks like this:
grok {
  match => {
    "message" => "%{TIMESTAMP_ISO8601:timestamp} [sid:%{USERNAME:sid} msg:%{DATA:message}"
  }
  add_tag => [ "foo_tag", "some_user_value_from_sid_above" ]
}
Now when I run the logger to create logs, the data gets over to ES and I can see it in Kibana, but I don't see foo_tag at all, nor the sid value.
How exactly do I use this to create the new tag that gets stored into ES, so I can see the data I want from the message?
Note: using regex tools the log format appears to parse fine, and the Logstash log does not spit out errors when processing.
Also, the Logstash mapping is using some auto-defined mapping, as the path value is nil.
I'm not clear on how to create a mapping for this either.
Guidance is greatly appreciated.
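For reference, a hedged sketch of a grok filter that does match the sample line above and puts each part into its own field (the square brackets need escaping, the sample line uses user: rather than sid:, and the field names timestamp, sid and log_message are only illustrative):
filter {
  grok {
    # the bare date needs a custom capture; TIMESTAMP_ISO8601 expects a time part too
    match => {
      "message" => "(?<timestamp>%{YEAR}-%{MONTHNUM}-%{MONTHDAY}) \[user:%{USERNAME:sid}\] msg:%{GREEDYDATA:log_message}"
    }
    # tags are plain labels; a per-event value like the sid belongs in a field, not a tag
    add_tag => [ "foo_tag" ]
  }
}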

logstash - add only first time value

Here's what I want; it's a bit the opposite of incremental data.
Some of the data are logs with a specific token, and I want to be able to keep (or to show in Elasticsearch) only the first submitted data, the oldest information for each token.
I want to ignore any new log with the same token.
How can I do that? Is it done in Logstash or in Elasticsearch?
Thanks
Updates 2016-05-31
I think we can look at this from a different perspective, but globally what I want is the table like in the picture, without the red lines: I want those lines to be ignored by Logstash, or not displayed in ES queries.
I know it could be done if I were able to add a flag to the lines I want to delete, but that's not possible; the only thing that tells us they can be removed is that a key first-AAA has already been logged before.
At logging time, we don't have this information.
You can achieve this using the elasticsearch filter. The filter checks in ES whether the record already exists, and if it does, we ask Logstash to simply drop the line.
Note that I'm making the assumption that the Id field (AAA) is used as the document _id and is also present in the document as the Id field. Feel free to change whatever needs to be changed, but this will work.
input {
  ...
}
filter {
  elasticsearch {
    hosts => ["localhost:9200"]
    query => "_type:your_type AND _id:%{[Id]}"
    fields => {"Id" => "found"}
  }
  if [found] {
    drop {}
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    ...
  }
}
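For the assumption above to hold, the elasticsearch output also has to write each event with its Id as the document _id; a minimal sketch (the index name is only illustrative):
output {
  elasticsearch {
    hosts       => ["localhost:9200"]
    index       => "tokens"        # illustrative index name
    document_id => "%{Id}"         # store the token (AAA) as the document _id
  }
}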
