I am pretty new to Logstash.
In our application we create multiple indexes. From the thread below I could understand how to resolve that:
How to create multiple indexes in logstash.conf file?
but that results in many duplicate lines in the conf file (for host, ssl, etc.). So I wanted to check whether there is a better way of doing it.
output {
  stdout { codec => rubydebug }
  if [type] == "trial" {
    elasticsearch {
      hosts => "localhost:9200"
      index => "trial_indexer"
    }
  } else {
    elasticsearch {
      hosts => "localhost:9200"
      index => "movie_indexer"
    }
  }
}
Instead of the above config, can I have something like the below?
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => "localhost:9200"
  }
  if [type] == "trial" {
    elasticsearch {
      index => "trial_indexer"
    }
  } else {
    elasticsearch {
      index => "movie_indexer"
    }
  }
}
What you are looking for is using environment variables in the Logstash pipeline. You define a value once and can then reuse it wherever it would otherwise be repeated, e.g. for HOST, SSL, etc.
For more information, see Logstash Use Environmental Variables.
e.g.,
output {
  elasticsearch {
    hosts => "${ES_HOST}"
    index => "%{type}-indexer"
  }
}
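For example, the variable can be supplied when starting Logstash (the config file name here is just a placeholder):

ES_HOST=localhost:9200 bin/logstash -f pipeline.conf

Because index => "%{type}-indexer" is a sprintf reference to each event's type field, a single elasticsearch block can route events to different indexes without repeating the host settings.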
Let me know if that helps.
Related
I have a log file called "/var/log/commands.log" that I'm trying to separate into fields with Logstash and grok. I've got it working. Now I'm trying to make Logstash apply this only to the file "/var/log/commands.log" and not to any other input, by doing "if name = commands.log", but something with the "if" statement seems wrong because Logstash skips over it.
input{
file{
path => "/var/log/commands.log"
}
beats{
port => 5044
}
}
filter {
if [log][file][path] == "/var/log/commands.log" {
grok{
match => { "message" => "*very long statement*"
}
}
}
}
output{
elasticsearch { hosts => ["localhost:9200"]}
}
If I remove the if statement it works and the fields are visible in Kibana. I'm testing things locally. Does anyone know what's going on?
EDIT: SOLVED: In Logstash, the condition has to use just [path] instead of [log][file][path].
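For reference, a sketch of the corrected filter (the grok pattern is still elided, as above):

filter {
  # With this file input, the source file name is available in the "path" field
  if [path] == "/var/log/commands.log" {
    grok {
      match => { "message" => "*very long statement*" }
    }
  }
}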
ES 2.4.1
Logstash 2.4.0
I am sending data to Elasticsearch from my local machine to create an index "pica". I used the conf file below.
input {
file {
path => "C:\Output\Receive.txt"
start_position => "beginning"
codec => json_lines
}
}
output {
elasticsearch {
hosts => "http://localhost:9200/"
index => "pica"
}
stdout{
codec => rubydebug
}
}
I couldn't see any output in either the Logstash prompt or the Elasticsearch cluster.
When I looked at the .sincedb file it had the following content:
612384816-350504-4325376 0 0 3804
May I know what the problem is here?
Thanks
I guess you're missing the square brackets [] for the hosts value, since it's an array type as per the doc. Hence it should look like:
elasticsearch {
hosts => ["localhost:9200"]
index => "pica"
}
or:
hosts => ["127.0.0.1"] OR hosts => ["localhost"]
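Putting that fix back into the original output section (keeping stdout so events can also be verified on the console) would look something like:

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "pica"
  }
  stdout {
    codec => rubydebug
  }
}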
I'm running an ELK stack on my local filesystem. I have the following configuration file set up:
input {
file {
path => "/var/log/rfc5424"
type => "RFC"
}
}
filter {
grok {
match => { "message" => "%{SYSLOG5424LINE}" }
}
}
output {
elasticsearch {
hosts => ["localhost:9200"]
}
}
I have a kibana instance running as well. I write a line to /var/log/rfc5424:
$ echo '<11>1' "$(date +'%Y-%m-%dT%H:%M:%SZ')" 'test-machine test-tag f81d4fae-7dec-11d0-a765-00a0c91e6bf6 log [nsId orgID="12 \"hey\" 345" projectID="2345[hehe]6"] this is a test message' >> /var/log/rfc5424
And it shows up in Kibana. Great! However, weirdly, it shows up six times.
As far as I can tell everything about these messages is identical, and I only have one instance of Logstash/Kibana running, so I have no idea what could be causing this duplication.
Check whether there is a .swp or .tmp file for your configuration under the conf directory. Logstash reads every file in that directory, so a stray editor backup containing the same output block will cause each event to be indexed more than once.
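A quick way to check (the directory path here is an assumption; adjust it to wherever your configs live):

ls -a /etc/logstash/conf.d/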
Add a document id to the documents:
output {
elasticsearch {
hosts => ["localhost:9200"]
document_id => "%{uuid_field}"
}
}
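If the events do not already carry a unique field such as uuid_field, one common option (an assumption here, not part of the original answer) is to derive the id from the event content with the fingerprint filter, so re-processed duplicates overwrite the existing document instead of creating a new one:

filter {
  fingerprint {
    # Hash the raw message and stash the result in @metadata (not indexed)
    source => "message"
    target => "[@metadata][fingerprint]"
    method => "MURMUR3"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # Same content => same id, so a duplicate overwrites rather than adds a document
    document_id => "%{[@metadata][fingerprint]}"
  }
}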
Working on getting our ESET log files (JSON format) into Elasticsearch. I'm shipping logs to our syslog server (syslog-ng), then to Logstash, and then to Elasticsearch. Everything is going as it should. My problem is in processing the logs in Logstash: I cannot seem to separate the key/value pairs into separate fields.
Here's a sample log entry:
Jul 8 11:54:29 192.168.1.144 1 2016-07-08T15:55:09.629Z era.somecompany.local ERAServer 1755 Syslog {"event_type":"Threat_Event","ipv4":"192.168.1.118","source_uuid":"7ecab29a-7db3-4c79-96f5-3946de54cbbf","occured":"08-Jul-2016 15:54:54","severity":"Warning","threat_type":"trojan","threat_name":"HTML/Agent.V","scanner_id":"HTTP filter","scan_id":"virlog.dat","engine_version":"13773 (20160708)","object_type":"file","object_uri":"http://malware.wicar.org/data/java_jre17_exec.html","action_taken":"connection terminated","threat_handled":true,"need_restart":false,"username":"BATHSAVER\\sickes","processname":"C:\\Program Files (x86)\\Google\\Chrome\\Application\\chrome.exe"}
Here is my logstash conf:
input {
  udp {
    type => "esetlog"
    port => 5515
  }
  tcp {
    type => "esetlog"
    port => 5515
  }
}
filter {
if [type] == "esetlog" {
grok {
match => { "message" => "%{DATA:timestamp}\ %{IPV4:clientip}\ <%{POSINT:num1}>%{POSINT:num2}\ %{DATA:syslogtimestamp}\ %{HOSTNAME}\ %{IPORHOST}\ %{POSINT:syslog_pid\ %{DATA:type}\ %{GREEDYDATA:msg}" }
}
kv {
source => "msg"
value_split => ":"
target => "kv"
}
}
}
output {
elasticsearch {
hosts => ['192.168.1.116:9200']
index => "eset-%{+YYY.MM.dd}"
}
}
When the data is displayed in Kibana, other than the date and time everything is lumped together in the "message" field, with no separate key/value pairs.
I've been reading and searching for a week now. I've done similar things with other log files with no problems at all, so I'm not sure what I'm missing. Any help/suggestions are greatly appreciated.
Can you try the below Logstash configuration?
grok {
match => {
"message" =>["%{CISCOTIMESTAMP:timestamp} %{IPV4:clientip} %{POSINT:num1} %{TIMESTAMP_ISO8601:syslogtimestamp} %{USERNAME:hostname} %{USERNAME:iporhost} %{NUMBER:syslog_pid} Syslog %{GREEDYDATA:msg}"]
}
}
json {
source => "msg"
}
It works and was tested with http://grokconstructor.appspot.com/do/match#result
Regards.
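For context, a sketch of how the suggested grok and json filters might slot into the original config, keeping the question's type conditional:

filter {
  if [type] == "esetlog" {
    grok {
      match => {
        "message" => ["%{CISCOTIMESTAMP:timestamp} %{IPV4:clientip} %{POSINT:num1} %{TIMESTAMP_ISO8601:syslogtimestamp} %{USERNAME:hostname} %{USERNAME:iporhost} %{NUMBER:syslog_pid} Syslog %{GREEDYDATA:msg}"]
      }
    }
    # The json filter expands the JSON document captured in "msg" into separate fields
    json {
      source => "msg"
    }
  }
}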
I am configuring logstash to collect logs from multiple workers on multiple hosts. I'm currently adding fields for host:
input {
  file {
    path => "/data/logs/box-1/worker-*.log"
    add_field => {
      "original_host" => "box-1"
    }
  }
  file {
    path => "/data/logs/box-2/worker-*.log"
    add_field => {
      "original_host" => "box-2"
    }
  }
}
However, I'd also like to add a field {'worker': 'A'} and so on. I have lots of workers, so I don't want to write a file { ... } block for every combination of host and worker.
Do I have any alternatives?
You should be able to do a path => "/data/logs/*/worker-*.log" and then add a grok filter to pull out what you need.
filter { grok { match => [ "path", "/(?<original_host>[^/]+)/worker-(?<worker>.*).log" ] } }
Or something very close to that. You might want to surround it with if [path] =~ /worker/ depending on what else you have in your config file.
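Putting the suggestion together, a minimal sketch (the grok pattern is the one from the answer above):

input {
  file {
    # One input covers every host directory and worker log
    path => "/data/logs/*/worker-*.log"
  }
}

filter {
  # Only run the extraction on worker logs
  if [path] =~ /worker/ {
    grok {
      # Pull the host directory and the worker name out of the file path
      match => [ "path", "/(?<original_host>[^/]+)/worker-(?<worker>.*).log" ]
    }
  }
}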