I have a log file called "/var/log/commands.log" that I'm trying to separate into fields with Logstash and grok. I've got that part working. Now I'm trying to make Logstash apply the grok filter only to events from "/var/log/commands.log" and not to any other input, using an if statement on the file path, but something about the "if" statement seems wrong, because the filter is skipped entirely.
input {
  file {
    path => "/var/log/commands.log"
  }
  beats {
    port => 5044
  }
}
filter {
  if [log][file][path] == "/var/log/commands.log" {
    grok {
      match => { "message" => "*very long statement*" }
    }
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
}
If I remove the if statement, it works and the fields are visible in Kibana. I'm testing things locally. Does anyone know what's going on?
EDIT: SOLVED: In Logstash the condition has to be on [path] alone instead of [log][file][path]. The file input (at least in this version) records the source file in a top-level path field; [log][file][path] is only populated for events shipped by Beats.
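For reference, a minimal sketch of the working conditional; the grok pattern is a placeholder, since the real one was elided in the question:

```
filter {
  # The file input stores the source file in the top-level "path" field
  if [path] == "/var/log/commands.log" {
    grok {
      # Placeholder pattern - substitute the real "*very long statement*"
      match => { "message" => "%{GREEDYDATA:command}" }
    }
  }
}
```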
I am pretty new to Logstash.
In our application we create multiple indexes. From the thread below I could understand how to do that:
How to create multiple indexes in logstash.conf file?
But that results in many duplicated lines in the conf file (for host, ssl, etc.), so I wanted to check whether there is a better way of doing it.
output {
  stdout { codec => rubydebug }
  if [type] == "trial" {
    elasticsearch {
      hosts => "localhost:9200"
      index => "trial_indexer"
    }
  } else {
    elasticsearch {
      hosts => "localhost:9200"
      index => "movie_indexer"
    }
  }
}
Instead of the above config, can I have something like the below?
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => "localhost:9200"
  }
  if [type] == "trial" {
    elasticsearch {
      index => "trial_indexer"
    }
  } else {
    elasticsearch {
      index => "movie_indexer"
    }
  }
}
What you are looking for is using environment variables in the Logstash pipeline. You define a value once and can reuse it wherever the redundant settings you mentioned (HOST, SSL, etc.) appear.
For more information, see the Logstash documentation on using environment variables.
e.g.,
output {
  elasticsearch {
    hosts => "${ES_HOST}"
    index => "%{type}-indexer"
  }
}
Let me know, if that helps.
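As a further sketch (ES_HOST is just an example name), the variable can also be given a default value so the pipeline still starts when it is not exported:

```
output {
  elasticsearch {
    # Falls back to localhost:9200 when ES_HOST is unset
    hosts => "${ES_HOST:localhost:9200}"
    index => "%{type}-indexer"
  }
}
```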
I'm running an ELK stack on my local filesystem. I have the following configuration file set up:
input {
  file {
    path => "/var/log/rfc5424"
    type => "RFC"
  }
}
filter {
  grok {
    match => { "message" => "%{SYSLOG5424LINE}" }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
I have a kibana instance running as well. I write a line to /var/log/rfc5424:
$ echo '<11>1' "$(date +'%Y-%m-%dT%H:%M:%SZ')" 'test-machine test-tag f81d4fae-7dec-11d0-a765-00a0c91e6bf6 log [nsId orgID="12 \"hey\" 345" projectID="2345[hehe]6"] this is a test message' >> /var/log/rfc5424
And it shows up in Kibana. Great! However, weirdly, it shows up six times.
As far as I can tell everything about these messages is identical, and I only have one instance of Logstash/Kibana running, so I have no idea what could be causing the duplication.
Check whether there is a .swp or .tmp copy of your configuration file under the conf directory. Logstash loads every file in that directory as configuration, so a leftover editor or backup copy duplicates the output section and each event gets indexed once per copy.
Alternatively, add a document id to the documents, so reprocessed events overwrite the existing ones instead of being indexed again (uuid_field stands for whatever unique field your events carry):
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    document_id => "%{uuid_field}"
  }
}
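If the events do not already carry a unique field, one common approach (a sketch, not part of the original answer) is to derive a deterministic id from the message with the fingerprint filter:

```
filter {
  fingerprint {
    source => "message"
    target => "[@metadata][fingerprint]"
    method => "SHA1"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # Identical messages map to the same document id, so duplicates
    # overwrite each other instead of piling up
    document_id => "%{[@metadata][fingerprint]}"
  }
}
```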
Working on getting our ESET log files (JSON format) into Elasticsearch. I'm shipping logs to our syslog server (syslog-ng), then to Logstash, and on to Elasticsearch. Everything is going as it should, except that when processing the logs in Logstash I cannot seem to separate the key/value pairs into separate fields.
Here's a sample log entry:
Jul 8 11:54:29 192.168.1.144 1 2016-07-08T15:55:09.629Z era.somecompany.local ERAServer 1755 Syslog {"event_type":"Threat_Event","ipv4":"192.168.1.118","source_uuid":"7ecab29a-7db3-4c79-96f5-3946de54cbbf","occured":"08-Jul-2016 15:54:54","severity":"Warning","threat_type":"trojan","threat_name":"HTML/Agent.V","scanner_id":"HTTP filter","scan_id":"virlog.dat","engine_version":"13773 (20160708)","object_type":"file","object_uri":"http://malware.wicar.org/data/java_jre17_exec.html","action_taken":"connection terminated","threat_handled":true,"need_restart":false,"username":"BATHSAVER\\sickes","processname":"C:\\Program Files (x86)\\Google\\Chrome\\Application\\chrome.exe"}
Here is my logstash conf:
input {
  udp {
    type => "esetlog"
    port => 5515
  }
  tcp {
    type => "esetlog"
    port => 5515
  }
}
filter {
  if [type] == "esetlog" {
    grok {
      match => { "message" => "%{DATA:timestamp} %{IPV4:clientip} <%{POSINT:num1}>%{POSINT:num2} %{DATA:syslogtimestamp} %{HOSTNAME} %{IPORHOST} %{POSINT:syslog_pid} %{DATA:type} %{GREEDYDATA:msg}" }
    }
    kv {
      source => "msg"
      value_split => ":"
      target => "kv"
    }
  }
}
output {
  elasticsearch {
    hosts => ['192.168.1.116:9200']
    index => "eset-%{+YYYY.MM.dd}"
  }
}
When the data is displayed in Kibana, everything other than the date and time is lumped together in the "message" field, with no separate key/value pairs.
I've been reading and searching for a week now. I've done similar things with other log files with no problems at all, so I'm not sure what I'm missing. Any help/suggestions are greatly appreciated.
Can you try the below Logstash configuration?
grok {
  match => {
    "message" => ["%{CISCOTIMESTAMP:timestamp} %{IPV4:clientip} %{POSINT:num1} %{TIMESTAMP_ISO8601:syslogtimestamp} %{USERNAME:hostname} %{USERNAME:iporhost} %{NUMBER:syslog_pid} Syslog %{GREEDYDATA:msg}"]
  }
}
json {
  source => "msg"
}
It's working, and was tested at http://grokconstructor.appspot.com/do/match#result
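The json filter is the right tool here because the payload after "Syslog" is a JSON object; the kv filter in the question splits on every ":" and so mangles JSON. A sketch of a variant (the target name "eset" is just an example) that nests the parsed fields to avoid collisions:

```
json {
  source => "msg"
  # Nest parsed keys under [eset] so JSON keys such as "event_type"
  # cannot collide with existing top-level event fields
  target => "eset"
  # Drop the raw JSON string once it has been parsed
  remove_field => ["msg"]
}
```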
Regards.
I have been trying to send logs from Logstash to Elasticsearch. Suppose I am running a Logstash instance, and while it is running I make a change to the file the instance is monitoring. Then all the logs that were previously saved in Elasticsearch are saved again, so duplicates are created.
Also, when the Logstash instance is stopped and restarted, the logs get duplicated in Elasticsearch.
How do I counter this problem?
How do I send only the newest entries added to the file from Logstash to Elasticsearch?
My logstash instance command is the following:
bin/logstash -f logstash-complex.conf
and the configuration file is this:
input {
  file {
    path => "/home/amith/Desktop/logstash-1.4.2/accesslog1"
  }
}
filter {
  if [path] =~ "access" {
    mutate {
      replace => { "type" => "apache_access" }
    }
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
output {
  elasticsearch {
    host => "localhost"
    index => "feb9"
  }
  stdout { codec => rubydebug }
}
I found the solution.
I was opening the file, adding a record, and saving it. Because of that, Logstash treated the same file as a different file each time I saved it: the editor wrote the file under a new inode each time.
The solution is to append a line to the file without opening it in an editor, by running the following command:
echo "the string you want to add to the file" >> filename
[ELK stack]
I wanted some custom configs in /etc/logstash/conf.d/vagrant.conf, so the first step was to make a backup: /etc/logstash/conf.d/vagrant.conf.bk
This caused Logstash to add 2 entries in Elasticsearch for each entry in <file>.log, because Logstash loads every file in the conf.d directory, backup copies included. Likewise, when I had 3 files matching /etc/logstash/conf.d/*.conf.*, I had 8 entries in ES for each line in *.log.
As you mentioned in your question:
when the logstash instance is closed and is restarted again, the logs get duplicated in elasticsearch.
You have probably deleted the sincedb file, which is where the file input records how far it has read into each file.
Try specifying sincedb_path and start_position explicitly. For example:
input {
  file {
    path => "/home/amith/Desktop/logstash-1.4.2/accesslog1"
    start_position => "end"
    sincedb_path => "/home/amith/Desktop/sincedb"
  }
}
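Note that start_position only applies to files Logstash has never seen before; once a sincedb entry exists, reading resumes from the recorded offset. Conversely, a sketch of a testing-only setup (paths taken from the question) that re-reads the whole file on every run:

```
input {
  file {
    path => "/home/amith/Desktop/logstash-1.4.2/accesslog1"
    # Read from the top the first time the file is seen...
    start_position => "beginning"
    # ...and discard read-position state, so every restart re-imports
    # the file. Useful for testing; guarantees duplicates in production.
    sincedb_path => "/dev/null"
  }
}
```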
I am configuring Logstash to collect logs from multiple workers on multiple hosts. I'm currently adding a field for the host:
input {
  file {
    path => "/data/logs/box-1/worker-*.log"
    add_field => { "original_host" => "box-1" }
  }
  file {
    path => "/data/logs/box-2/worker-*.log"
    add_field => { "original_host" => "box-2" }
  }
}
However, I'd also like to add a field {'worker': 'A'} and so on. I have lots of workers, so I don't want to write a file { ... } block for every combination of host and worker.
Do I have any alternatives?
You should be able to do a path => "/data/logs/*/worker-*.log" and then add a grok filter to pull out what you need.
filter { grok { match => [ "path", "/(?<original_host>[^/]+)/worker-(?<worker>.*).log" ] } }
or something very close to that. You might want to surround it with if [path] =~ /worker/, depending on what else you have in your config file.
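Putting the two together, a sketch of a single input plus filter; the path layout /data/logs/<host>/worker-<id>.log comes from the question, and the grok pattern is the one above, lightly tightened:

```
input {
  file {
    # One glob covers every host directory and every worker file
    path => "/data/logs/*/worker-*.log"
  }
}
filter {
  if [path] =~ /worker/ {
    grok {
      # Recover the host directory and worker id from the file path itself
      match => { "path" => "/(?<original_host>[^/]+)/worker-(?<worker>[^/]+)\.log" }
    }
  }
}
```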