logstash & kibana: setting the @message property

The @message property seems to be the core property when using logstash & kibana. My JSON logger sends the data with the message as
{"msg":"some one did something"}
If I change it so it's
{"@message":"someone did something"}
the logstash server picks it up as "@fields.@message".
I am a bit confused about how I can set this property so it renders correctly.

I suspect that the input is reading events as json and not json_event. The difference is that json will add any fields under the @fields namespace, while json_event expects the full logstash event serialized as JSON.
The functionality you have is probably what you want; you typically don't want to be sending the full json_event if you don't have to. You can overwrite the @message field in logstash with the mutate filter:
mutate {
  type    => 'json_logger'
  replace => ["@message", "%{msg}"]
  remove  => ["msg"]
}
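On newer Logstash versions (1.2 and later) the event schema changed, so @message became plain message and the per-filter type option was replaced by conditionals. A rough equivalent there, as a sketch that assumes the input still tags events with type json_logger, would be:
filter {
  if [type] == "json_logger" {
    mutate {
      # copy the custom msg field into the standard message field, then drop it
      replace      => { "message" => "%{msg}" }
      remove_field => ["msg"]
    }
  }
}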

Related

how to add bytes, session and source parameters in Kibana to visualise Suricata logs?

I redirected all the logs (Suricata logs here) to Logstash using rsyslog. I used the following template for rsyslog:
template(name="json-template"
  type="list") {
    constant(value="{")
    constant(value="\"@timestamp\":\"")     property(name="timereported" dateFormat="rfc3339")
    constant(value="\",\"@version\":\"1")
    constant(value="\",\"message\":\"")     property(name="msg" format="json")
    constant(value="\",\"sysloghost\":\"")  property(name="hostname")
    constant(value="\",\"severity\":\"")    property(name="syslogseverity-text")
    constant(value="\",\"facility\":\"")    property(name="syslogfacility-text")
    constant(value="\",\"programname\":\"") property(name="programname")
    constant(value="\",\"procid\":\"")      property(name="procid")
    constant(value="\"}\n")
}
For every incoming message, rsyslog will interpolate log properties into a JSON-formatted message and forward it to Logstash, listening on port 10514.
Reference link: https://devconnected.com/monitoring-linux-logs-with-kibana-and-rsyslog/
(I have also configured Logstash as mentioned in the reference link above.)
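For context, a matching Logstash pipeline for that setup might look roughly like the following sketch; the listening port comes from the text above, while the UDP transport and the Elasticsearch output settings are assumptions based on the linked tutorial:
input {
  udp {
    host  => "127.0.0.1"
    port  => 10514
    codec => "json"
    type  => "rsyslog"
  }
}

output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}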
I am getting all the columns in Kibana Discover (as defined in the rsyslog json-template), but I also need bytes, session and source columns in Kibana, which I am not getting here.
The available fields (columns) in Kibana are:
@timestamp
@version
_type
facility
host
message
procid
programname
sysloghost
_id
_index
_score
severity
Please let me know how to add bytes, session and source to the available fields in Kibana. I need these parameters for further drill-down in Kibana.
EDIT: I have added what my "/var/log/suricata/eve.json" looks like (which I need to visualize in Kibana).
For bytes, I will use (bytes_toserver + bytes_toclient), which are available inside flow.
Session I need to calculate.
Source_IP I will use as the source.
{"timestamp":"2020-05 04T14:16:55.000200+0530","flow_id":133378948976827,"event_type":"flow","src_ip":"0000:0000:0000:0000:0000:0000:0000:0000","dest_ip":"ff02:0000:0000:0000:0000:0001:ffe0:13f4","proto":"IPv6-ICMP","icmp_type":135,"icmp_code":0,"flow":{"pkts_toserver":1,"pkts_toclient":0,"bytes_toserver":87,"bytes_toclient":0,"start":"2020-05-04T14:16:23.184507+0530","end":"2020-05-04T14:16:23.184507+0530","age":0,"state":"new","reason":"timeout","alerted":false}}
Direct answer
Read the grok docs in detail.
Then head over to the grok debugger with some sample logs to figure out expressions. (There's also a grok debugger built into Kibana's Dev Tools nowadays.)
This list of grok patterns might come in handy, too.
A better way
Use Suricata's JSON log instead of the syslog format, and use Filebeat instead of rsyslog. Filebeat has a Suricata module out of the box.
Sidebar: Parsing JSON logs
In Logstash's filter config section:
filter {
  json {
    source => "message"
    # you probably don't need the "message" field if it parses OK
    #remove_field => "message"
  }
}
[Edit: added JSON parsing]
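Once the JSON is parsed, the nested Suricata fields become available, and the bytes value the question asks for can be derived from them. A sketch only: the flow field names come from the eve.json sample above, the new bytes field name is a placeholder, and the ruby filter syntax assumes Logstash 5+:
filter {
  if [flow] {
    ruby {
      # bytes = bytes_toserver + bytes_toclient, per the question
      code => "event.set('bytes', event.get('[flow][bytes_toserver]').to_i + event.get('[flow][bytes_toclient]').to_i)"
    }
  }
}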

Elasticsearch, Logstash, Kibana and Grok: How do I break apart the message?

I created a filter to break apart our log files and am having the following issue. I'm not able to figure out how to save the parts of the "message" to their own field or tag or whatever you call it. I'm three days new to Logstash and have had zero luck finding someone here who knows it.
So, for example, let's say this is your log line in a log file:
2017-12-05 [user:edjm1971] msg:This is a message from the system.
What you want to do is get the value of the user and set that into some index mapping so you can search for all logs by that user. You should also see the information from the message in its own fields in Kibana.
My pipeline.conf file for Logstash looks like:
grok {
  match => {
    "message" => "%{TIMESTAMP_ISO8601:timestamp} [sid:%{USERNAME:sid} msg:%{DATA:message}"
  }
  add_tag => [ "foo_tag", "some_user_value_from_sid_above" ]
}
Now when I run the logger to create logs, the data gets over to ES and I can see it in Kibana, but I don't see foo_tag at all with the sid value.
How exactly do I use this to create the new tag that gets stored into ES so I can see the data I want from the message?
Note: using regex tools it all appears to parse the log formats fine, and the Logstash log does not spit out errors when processing.
Also, for the Logstash mapping it is using some auto-defined mapping, as the path value is nil.
I'm not clear on how to create a mapping for this either.
Guidance is greatly appreciated.
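As an illustrative sketch only, not from the original thread: the sample line says user: while the pattern above expects sid:, and the literal brackets need escaping, so a pattern that actually lines up with the sample line could look like this, with log_date, user and msg_text as placeholder field names:
grok {
  match => {
    "message" => "(?<log_date>%{YEAR}-%{MONTHNUM}-%{MONTHDAY}) \[user:%{USERNAME:user}\] msg:%{GREEDYDATA:msg_text}"
  }
  # tags are only added when the match succeeds; %{user} interpolates the captured value
  add_tag => ["foo_tag", "%{user}"]
}
Since add_tag only fires on a successful match, a pattern that never matches would also explain why foo_tag never shows up.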

conversion fields kibana and logstash

I am trying to convert the field "tmp_reponse" to integer in the "conf" file with Logstash, as follows:
mutate {
  convert => {"TMP_REPONSE" => "integer"}
}
but in Kibana it shows me that it is still a string. I do not understand how to make the conversion so that I can use my field "tmp_response" as a metric field in Kibana.
Please help me; and if anyone can explain how I can get a handle on metrics in Kibana and use fields as metric fields, that would be appreciated.
mutate{} will change the type of the field in logstash. If you added a stdout{} output stanza, you would see that it's an integer at that point.
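For example, a quick way to check is a config like the sketch below. Note that Logstash field names are case-sensitive, so a convert on TMP_REPONSE will not touch a field actually named tmp_reponse; the field name here is just the one from the question:
filter {
  mutate {
    convert => { "tmp_reponse" => "integer" }
  }
}

output {
  # prints each event to the console so you can see the emitted types
  stdout { codec => rubydebug }
}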
How elasticsearch treats it is another problem entirely. Elasticsearch usually sets the type of a field based on the first input received, so if you sent documents in before you added the mutate to your logstash config, they would have been strings and the elasticsearch index will always consider that field to be a string.
The type may also have been defined in an elasticsearch template or mapping.
The good news is that your mutate will probably set the type when a new index is created. If you're using daily indexes (the default in logstash), you can just wait a day. Or you can delete the index (losing any data so far) and let a new one be created. Or you could rebuild the index.
Good luck.

Logstash JDBC - how to process json field?

I have PostgreSQL, which stores some data as JSON fields, e.g.:
{"adults":2,"children":{"total":0,"ages":[]}}
I'm using the logstash-input-jdbc plugin to process the data.
How do I parse the JSON from JDBC? From the logs I see that the fields arrive as a PGobject:
"travelers_json" => #<Java::OrgPostgresqlUtil::PGobject:0x278826b2>
which has value and type properties.
I've tried using the json filter, but I don't know how to access the value property to feed to it.
What I've tried:
source => "[travelers_json][value]"
source => "travelers_json.value"
source => "%{travelers_json.value}"
I must be missing something very obvious here.
OK, so the simplest way was to convert the JSON to text in PostgreSQL:
SELECT travelers_json::TEXT FROM xxx
but I still would like to know how to access that PGobject.
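Putting that workaround together, a sketch of the jdbc input plus a json filter could look like the following; the connection settings are omitted, and the column alias and the target field name are just placeholders:
input {
  jdbc {
    # jdbc_connection_string, jdbc_user, jdbc_driver_library, etc. omitted
    statement => "SELECT travelers_json::TEXT AS travelers_json FROM xxx"
  }
}

filter {
  json {
    source => "travelers_json"
    target => "travelers"
  }
}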

Is it possible to fetch data from an HTTP URL to Logstash?

As the title says, I need to feed data (CSV or JSON) directly into Logstash. What I want is to set up a filter, say CSV, which reads the content directly from some http://example.com/csv.php into Logstash without involving any middleman script.
If I understand you correctly, you are trying to call an http resource repeatedly and fetch data into logstash. So you are looking for an input rather than a filter.
Logstash has just released a new http poller input plugin for that purpose. After installing it using bin/plugin install logstash-input-http_poller you can set a config like this to call your resource:
input {
  http_poller {
    urls => {
      myresource => "http://example.com/csv.php"
    }
    request_timeout => 60
    interval => 60
    codec => "json" # set this if the response is json formatted
  }
}
If the response contains CSV you need to set a csv filter.
filter {
  csv { }
}
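If the CSV layout is known ahead of time, the columns can be named up front; the column names below are only placeholders:
filter {
  csv {
    separator => ","
    columns   => ["timestamp", "value", "status"]
  }
}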
There are also plugins which perform an http request within the filter section. However, these are supposed to enrich an existing event and that doesn't seem to be what you are looking for.
