Logstash index not assigned with json pattern

I am new to the ELK stack and trying to configure fields in the Kibana dashboard. Here is my logstash.conf:
input {
  tcp {
    port => 5000
  }
}
filter {
  json {
    source => "message"
    add_field => {
      "newfiled" => "static"
    }
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "test"
  }
}
But the index test is not present when I curl the Elasticsearch server. I am using python-logstash and have installed the json filter plugin. Can someone help me send the JSON to Elasticsearch so that I can view it on the Kibana dashboard?

Found the issue. There is a difference between sending a raw dictionary and sending its JSON encoding. If you are sending a dictionary in text format (or using python-logstash), make sure you JSON-encode it (for example with json.dumps) so that the json filter can parse the message field.
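On the Logstash side, a quick way to confirm that the encoded JSON is actually parsed into fields is to add a stdout output next to the elasticsearch one. This is a minimal sketch reusing the question's port, host, and index; the stdout block and the commented json_lines codec are optional additions for debugging:

input {
  tcp {
    port => 5000
    # codec => json_lines   # optional: decode newline-delimited JSON at the input instead of in the filter
  }
}
output {
  # prints each event to the Logstash log so you can see whether the JSON was parsed into fields
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "test"
  }
}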

Related

Logstash filter according to fields source declared in filebeat config

input {
  beats {
    port => 5042
  }
}
output {
  if [source] == "access" {
    elasticsearch {
      hosts => ["16.113.56.102:9200"]
      index => "logstsh-access-nginxlogs-%{+YYYY.MM.dd}"
    }
  }
  else if [source] == "error" {
    elasticsearch {
      hosts => ["16.113.56.102:9200"]
      index => "logstsh-error-nginxlogs-%{+YYYY.MM.dd}"
    }
  }
}
I would like to separate the log files using the source field declared in the Filebeat input so that, on the Kibana side, I can tell whether a log came from access or error. However, Logstash won't pass the logs to Elasticsearch. I'm wondering whether this is the right way to declare the source? When I use the absolute path in the input part instead, it works like a charm, so I think the issue is with the Filebeat input or with Logstash.
First of all, I recommend you use one output if you don't need to send the data to multiple elasticsearch clusters. For example:
output {
  elasticsearch {
    hosts => ["16.113.56.102:9200"]
    index => "logstash-%{source}-nginxlogs-%{+YYYY.MM.dd}"
  }
}
In Filebeat, you can add fields to each document with the add_fields processor:
processors:
  - add_fields:
      target: project
      fields:
        name: myproject
        id: '574734885120952459'
Ref: https://www.elastic.co/guide/en/beats/filebeat/current/add-fields.html
Note: to understand whether the problem is caused by Logstash, you can use the method described here:
Why are the logs not indexed in the elasticsearch logstash structure I designed?
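One quick check, separate from the linked method: temporarily print any event whose source field matches neither value, so you can see what Filebeat actually put in that field. This is a sketch that assumes the same Beats input as in the question:

output {
  # dump unmatched events to the Logstash log for inspection
  if [source] not in ["access", "error"] {
    stdout { codec => rubydebug }
  }
}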

logstash or filebeat to create multiple output files from tomcat log

I need to parse a tomcat log file and output it into several output files.
Each file is the result of a certain filter that will pick certain entries in the tomcat file that match a series of regexes or other transformation rule.
Currently I am doing this using a python script but it is not very flexible.
Is there a configurable tool for doing this?
I have looked into Filebeat and Logstash (neither of which I am very familiar with), but it is not clear whether it is possible to configure them to map a single input file into multiple output files, each with a different filter/grok set of expressions.
Is it possible to achieve this with filebeat/logstash?
If all the log files are on the same server, you don't need Filebeat; Logstash can do the work.
Here is an example of what your Logstash config can look like.
The input is your Tomcat log file, and there are multiple JSON outputs depending on the log level once the logs have been parsed.
The grok is also just an example; you must define your own grok pattern depending on your log format.
input {
  file {
    path => "/var/log/tomcat.log"
  }
}
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:loglevel} - %{POSTFIX_SESSIONID:sessionId}: %{GREEDYDATA:messageText}" }
  }
}
output {
  if [loglevel] == "info" {
    file {
      codec => "json"
      path => "/var/log/tomcat_info_parsed.log"
    }
  }
  if [loglevel] == "warning" {
    file {
      codec => "json"
      path => "/var/log/tomcat_warning_parsed.log"
    }
  }
}

How to create multiple indexes in logstash.conf file with common host

I am pretty new to logstash.
In our application we are creating multiple indexes; from the thread below I could understand how to resolve that:
How to create multiple indexes in logstash.conf file?
But that results in many duplicate lines in the conf file (for host, ssl, etc.), so I wanted to check whether there is a better way of doing it.
output {
  stdout { codec => rubydebug }
  if [type] == "trial" {
    elasticsearch {
      hosts => "localhost:9200"
      index => "trial_indexer"
    }
  } else {
    elasticsearch {
      hosts => "localhost:9200"
      index => "movie_indexer"
    }
  }
}
Instead of the above config, can I have something like the below?
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => "localhost:9200"
  }
  if [type] == "trial" {
    elasticsearch {
      index => "trial_indexer"
    }
  } else {
    elasticsearch {
      index => "movie_indexer"
    }
  }
}
What you are looking for is using environment variables in the Logstash pipeline. You define these once and can reuse the repeated values you mentioned, such as HOST, SSL, etc.
For more information, see Logstash Use Environmental Variables.
e.g.,
output {
  elasticsearch {
    hosts => ${ES_HOST}
    index => "%{type}-indexer"
  }
}
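If it helps, the ${VAR} reference can also carry a default value (documented on the same environment-variables page), so the pipeline still starts when the variable is not set; the localhost fallback below is just an example:

output {
  elasticsearch {
    # falls back to localhost (the plugin's default port is 9200) when ES_HOST is not exported
    hosts => ["${ES_HOST:localhost}"]
    index => "%{type}-indexer"
  }
}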
Let me know if that helps.

Parsing Laravel logs with ELK (Elasticsearch, Logstash, Kibana)

I have configured ELK successfully for a Laravel app, but we are facing an issue with the Laravel log. I have configured the Logstash pipeline with the code below, but I am receiving broken-up lines in Kibana. I have tried two different configurations, as per the details below.
20-laravel.conf
input {
  stdin {
    codec => multiline {
      pattern => "^\["
      what => "previous"
      negate => true
    }
  }
}
filter {
  grok {
    match => { "message" => "\[%{TIMESTAMP_ISO8601:timestamp}\] %{DATA:env}\.%{DATA:severity}: %{DATA:message}" }
  }
}
output {
  elasticsearch {
    document_type => "logs"
    hosts => ["127.0.0.1"]
    index => "laravel_logs"
  }
}
filter {
  # Laravel log files
  if [type] == "laravel" {
    grok {
      match => { "message" => "\[%{TIMESTAMP_ISO8601:timestamp}\] %{DATA:env}\.%{DATA:severity}: %{DATA:message} \[" }
    }
  }
}
A sample Laravel log is:
[2017-09-13 16:19:28] production.ERROR: Symfony\Component\Debug\Exception\FatalThrowableError: Parse error: syntax error, unexpected identifier (T_STRING), expecting ',' or ')' in /var/www/app/Http/Controllers/BrandsController.php:57
Stack trace:
#0 /var/www/vendor/composer/ClassLoader.php(322): Composer\Autoload\includeFile('/var/www/vendor...')
#1 [internal function]: Composer\Autoload\ClassLoader->loadClass('App\\Http\\Contro...')
#2 [internal function]: spl_autoload_call('App\\Http\\Contro...')
So my main issue is that we are receiving this log in Kibana line by line. For example, the log above is split into separate single-line messages, and we can't figure out which line belongs to which error.
The Kibana output for a single Laravel log entry is shown in the image below (kibana log-output).
An easy alternative is to use Laralog.
With Laralog it is possible to send Laravel logs directly to Elasticsearch without installing the full Logstash stack, so it is suitable for small and containerized environments.
Example of usage:
laralog https://elasticsearch:9200 --input=laravel.log
Laralog will parse and send the logs automatically.
You should create a new service provider to set up Monolog properly; try the following setup:
use Illuminate\Support\Facades\Log;
use Illuminate\Support\ServiceProvider;
use Monolog\Formatter\LogstashFormatter;
use Monolog\Handler\StreamHandler;
use Monolog\Logger;

class LogstashProvider extends ServiceProvider
{
    public function boot(): void
    {
        $stream = storage_path('logs/laravel.log');
        $name = env('APP_NAME');

        $formatter = new LogstashFormatter($name, null, null, 'ctxt_', LogstashFormatter::V1);
        $streamHandler = new StreamHandler($stream, Logger::DEBUG, false);
        $streamHandler->setFormatter($formatter);

        Log::getMonolog()->pushHandler($streamHandler);
    }
}
You should also configure Logstash to parse the JSON instead.
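A minimal sketch of that Logstash side, reusing the host and index from the question; the file path is an assumption about where storage_path('logs/laravel.log') resolves, and the json codec is the key part, since LogstashFormatter writes one JSON document per line:

input {
  file {
    # assumed path; adjust to wherever storage_path('logs/laravel.log') resolves on your server
    path => "/var/www/storage/logs/laravel.log"
    codec => "json"
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1"]
    index => "laravel_logs"
  }
}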

Logstash Grok Filter key/value pairs

Working on getting our ESET log files (json format) into elasticsearch. I'm shipping logs to our syslog server (syslog-ng), then to logstash, and elasticsearch. Everything is going as it should. My problem is in trying to process the logs in logstash...I cannot seem to separate the key/value pairs into separate fields.
Here's a sample log entry:
Jul 8 11:54:29 192.168.1.144 1 2016-07-08T15:55:09.629Z era.somecompany.local ERAServer 1755 Syslog {"event_type":"Threat_Event","ipv4":"192.168.1.118","source_uuid":"7ecab29a-7db3-4c79-96f5-3946de54cbbf","occured":"08-Jul-2016 15:54:54","severity":"Warning","threat_type":"trojan","threat_name":"HTML/Agent.V","scanner_id":"HTTP filter","scan_id":"virlog.dat","engine_version":"13773 (20160708)","object_type":"file","object_uri":"http://malware.wicar.org/data/java_jre17_exec.html","action_taken":"connection terminated","threat_handled":true,"need_restart":false,"username":"BATHSAVER\\sickes","processname":"C:\\Program Files (x86)\\Google\\Chrome\\Application\\chrome.exe"}
Here is my logstash conf:
input {
  udp {
    type => "esetlog"
    port => 5515
  }
  tcp {
    type => "esetlog"
    port => 5515
  }
}
filter {
  if [type] == "esetlog" {
    grok {
      match => { "message" => "%{DATA:timestamp}\ %{IPV4:clientip}\ <%{POSINT:num1}>%{POSINT:num2}\ %{DATA:syslogtimestamp}\ %{HOSTNAME}\ %{IPORHOST}\ %{POSINT:syslog_pid\ %{DATA:type}\ %{GREEDYDATA:msg}" }
    }
    kv {
      source => "msg"
      value_split => ":"
      target => "kv"
    }
  }
}
output {
  elasticsearch {
    hosts => ['192.168.1.116:9200']
    index => "eset-%{+YYY.MM.dd}"
  }
}
When the data is displayed in Kibana, other than the date and time everything is lumped together in the "message" field only, with no separate key/value pairs.
I've been reading and searching for a week now. I've done similar things with other log files with no problems at all, so I'm not sure what I'm missing. Any help/suggestions are greatly appreciated.
Can you try the below Logstash configuration?
grok {
  match => {
    "message" => ["%{CISCOTIMESTAMP:timestamp} %{IPV4:clientip} %{POSINT:num1} %{TIMESTAMP_ISO8601:syslogtimestamp} %{USERNAME:hostname} %{USERNAME:iporhost} %{NUMBER:syslog_pid} Syslog %{GREEDYDATA:msg}"]
  }
}
json {
  source => "msg"
}
It's working and tested in http://grokconstructor.appspot.com/do/match#result
Regards.
