How to configure Logstash with Elasticsearch on Windows?

I'm currently trying to install and run Logstash on Windows 7 following the guidelines on the Logstash website, but I am struggling to configure and use Logstash with Elasticsearch. I created logstash-simple.conf with the content below:
input { stdin { } }
output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
When I execute the command below:
D:\logstash-2.4.0\bin>logstash agent -f logstash-simple.conf
I get the following error. I have tried many things, but I keep getting the same error:
No config files found: D:/logstash-2.4.0/bin/logstash-simple.conf
Can you make sure this path is a logstash config file? {:level=>:error}
The signal HUP is in use by the JVM and will not work correctly on this platform
D:\logstash-2.4.0\bin>

Note the "No config files found" part of the error: Logstash can't find the logstash-simple.conf file.
So try
D:\logstash-2.4.0\bin>logstash agent -f [Direct path to the folder containing logstash-simple.conf]logstash-simple.conf

Also verify that the extension really is .conf and not something else like .txt (logstash-simple.conf.txt).
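For example, a minimal sketch (the absolute path here is an assumption; point it at wherever your file actually lives, and use dir to check the real extension, since Windows Explorer hides known extensions by default):

D:\logstash-2.4.0\bin>dir logstash-simple*
D:\logstash-2.4.0\bin>logstash agent -f D:\logstash-2.4.0\bin\logstash-simple.conf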

Related

Issue while sending file content from filebeat to logstash

I am new to ELK and I am trying to get some hands-on experience with the ELK stack. I am performing the following on Windows:
1. Installed Elasticsearch, confirmed with http://localhost:9200/
2. Installed Logstash, confirmed using http://localhost:9600/ after starting it with
logstash -f logstash.config
The logstash.config file looks like this:
input {
  beats {
    port => "5043"
  }
}
# The filter part of this file is commented out to indicate that it is
# optional.
# filter {
#
# }
output {
  elasticsearch { hosts => ["localhost:9200"] }
}
3. Installed Kibana, confirmed using http://localhost:5601
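For a quick sanity check of those endpoints from a terminal (a sketch; assumes curl is available, e.g. via Git Bash on Windows, and the default ports):

curl http://localhost:9200/
curl http://localhost:9600/?pretty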
Now I want to use Filebeat to pass a log file to Logstash, which parses and forwards it to Elasticsearch for indexing, and finally Kibana displays it.
In order to do that,
I made the following changes in filebeat.yml.
Change 1:
In the Filebeat prospectors section, I added
paths:
  # - /var/log/*.log
  - D:\KibanaInput\vinod.log
The contents of vinod.log: Hello World from FileBeat.
Change 2:
In the Outputs section:
#output.logstash:
  # The Logstash hosts
  hosts: ["localhost:9600"]
When I run the command below,
filebeat -c filebeat.yml -e
I get the following error:
ERR Connecting error publishing events (retrying): Failed to parse JSON response: json: cannot unmarshal string into Go value of type struct { Number string }
Please let me know what mistake I am making.
You are on the right path. Please confirm the following:
1. In your filebeat.yml, make sure the output.logstash: line is not commented out; that corresponds to your change number 2.
2. Make sure your messages are being grokked correctly. Add the following output to your Logstash pipeline config file:
output {
  stdout { codec => json }
}
3. Start your Logstash in debug mode.
4. If you are reading the same file with the same content, make sure you remove the Filebeat registry file ($filebeatHome/data/registry).
5. Read the log files.
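Regarding point 1: once output.logstash: is uncommented, it should point at the beats input port (5043 in this thread's pipeline config), not 9600, which is the Logstash HTTP API port. A minimal sketch of the corrected section:

output.logstash:
  # the beats input port from the Logstash pipeline config
  hosts: ["localhost:5043"]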

How to use environment variables in Logstash 5.X?

We've just switched from 2.X to 5.X, and I'm trying to find out how to use environment variables in the pipeline configuration files.
In 2.X, the following worked:
export HOSTNAME
Then start Logstash with the --allow-env command line flag. The pipeline configuration file looked like this:
filter {
  mutate {
    add_field => { "some_field" => "${HOSTNAME}" }
  }
}
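Put together, the 2.X workflow was roughly this (a sketch; the config filename is an assumption):

export HOSTNAME
bin/logstash --allow-env -f pipeline.conf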
The documentation says that the --allow-env flag is not required anymore.
I've tried to replace mutate with the environment filter, but with no luck.
I've tried editing the startup.options file: I added HOSTNAME as a regular environment variable, and also inside the read ... EOM block, but without any positive results.
What does seem to work is adding the following to the /usr/share/logstash/bin/logstash.lib.sh file, but I'm not sure I'm supposed to edit that:
HOSTNAME="the-name-of-the-host"
export HOSTNAME
So my question is: what have I overlooked? What is the proper way to allow the use of environment variables in Logstash's pipeline configuration files?
Note:
Like Alcanzar described, this works if I run Logstash manually. However, we would like to run it with systemctl, so it should be started automatically by the daemon on startup. In 2.X this worked fine with the /etc/init.d/logstash file, but as the documentation describes, that is no more. The files I'm supposed to edit are /etc/logstash/startup.options and /etc/logstash/logstash.yml.
The environment variables clearly work if starting manually:
With a test.conf of this:
input {
  stdin { codec => "json" }
}
filter {
  mutate {
    add_field => {
      "hostname" => "${HOSTNAME}"
    }
  }
}
output {
  stdout { codec => "rubydebug" }
}
We can run a test and verify it:
% export HOSTNAME="abc1234"
% echo '{"a":1}' | bin/logstash -f test.conf
Sending Logstash's logs to /Users/xxxxxx/logstash-5.1.1/logs which is now configured via log4j2.properties
[2017-02-16T09:04:33,442][INFO ][logstash.inputs.stdin ] Automatically switching from json to json_lines codec {:plugin=>"stdin"}
[2017-02-16T09:04:33,454][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>1000}
[2017-02-16T09:04:33,457][INFO ][logstash.pipeline ] Pipeline main started
[2017-02-16T09:04:33,492][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
{
  "a" => 1,
  "hostname" => "abc1234",
  "@timestamp" => 2017-02-16T15:04:33.497Z,
  "@version" => "1",
  "host" => "XXXXXX.local",
  "tags" => []
}
[2017-02-16T09:04:36,472][WARN ][logstash.agent ] stopping pipeline {:id=>"main"}
So the real question is why it isn't working in your scenario. If you are using some sort of /etc/init.d script to start Logstash, you can add a line like export HOSTNAME="whatever" to /etc/sysconfig/logstash.
We had the same issue with Logstash 5. The best way (that we found) is to add the env vars to /etc/systemd/system/logstash.service.d/override.conf, and in there have
[Service]
Environment="HOSTNAME=the-name-of-the-host"
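After creating the override file, systemd needs to pick it up (assuming the unit is named logstash):

sudo systemctl daemon-reload
sudo systemctl restart logstash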

INFO No non-zero metrics in the last 30s message in filebeat

I'm new to ELK and I'm running into issues with Logstash. I ran Logstash as described in the link below:
https://www.elastic.co/guide/en/logstash/current/advanced-pipeline.html
But when I run Filebeat and Logstash, it shows that Logstash successfully runs at port 9600, while Filebeat keeps printing this:
INFO No non-zero metrics in the last 30s
Logstash is not getting input from Filebeat. Please help.
The filebeat.yml is:
filebeat.prospectors:
- input_type: log
  paths:
    - /path/to/file/logstash-tutorial.log
output.logstash:
  hosts: ["localhost:5043"]
and I ran this command
sudo ./filebeat -e -c filebeat.yml -d "publish"
The config file (first-pipeline.conf) is:
input {
  beats {
    port => "5043"
  }
}
output {
  stdout { codec => rubydebug }
}
Then I ran these commands:
1) bin/logstash -f first-pipeline.conf --config.test_and_exit - this gave warnings
2) bin/logstash -f first-pipeline.conf --config.reload.automatic - this started Logstash on port 9600
I couldn't proceed past this point, since Filebeat keeps printing
INFO No non-zero metrics in the last 30s
The ELK version used is 5.1.2.
The registry file stores the state and location information that Filebeat uses to track where it was last reading.
So you can try moving or deleting the registry file:
cd /var/lib/filebeat
sudo mv registry registry.bak
sudo service filebeat restart
I also faced this issue and solved it with the commands above.
Filebeat may also be positioned at the end of your file, expecting new lines to be appended over time (as with a live log), so the existing content never gets shipped. Check the tail_files option: when it is set to true, Filebeat starts reading new files at the end instead of the beginning.
Also note the instructions there about re-processing a file, as that can come into play during testing.
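For testing, a prospector sketch that makes the read-from-the-beginning behaviour explicit (tail_files already defaults to false in Filebeat 5.x):

filebeat.prospectors:
- input_type: log
  paths:
    - /path/to/file/logstash-tutorial.log
  # false = read new files from the beginning; true = start at the end
  tail_files: false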

Elastic search logstash http input plugin config error

I've been looking all over the web for a configuration example of the Logstash http input plugin and tried to follow the ones I've found. I'm still running into problems with the following configuration:
input {
  http {
    host => "127.0.0.1"
    post => "31311"
    tags => "wpedit"
  }
}
output {
  elasticsearch { hosts => "localhost:9400" }
}
When running service logstash restart, it responds with Configuration error. Not restarting. Re-run with configtest parameter for details.
So I ran a configuration test (/opt/logstash/bin/logstash --configtest) and it says everything is fine.
So, my question is: how can I find what's wrong with the configuration? Can you see anything obviously incorrect? I'm fairly new to the world of Elasticsearch, if you could not tell...
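One thing worth double-checking (a guess): the http input's listen-port option is spelled port, not post, so the input block above may be exactly what the restart trips over. A corrected sketch:

input {
  http {
    host => "127.0.0.1"
    port => 31311
    tags => "wpedit"
  }
}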

How to set up Logstash to listen for SNMP traps?

I am new to Logstash and Elasticsearch. I want to collect logs from network devices via SNMP traps. I have a problem with Logstash.
logstash-snmptrap.conf:
input {
  snmptrap {
    community => "public"
    port => 160
    type => "snmp_trap"
  }
}
output {
  if [type] == "snmp_trap" {
    file {
      codec => "rubydebug"
      flush_interval => 1
      path => "/tmp/logstash-snmptrap.log"
    }
  }
}
I didn't get any error message when I executed the following command:
root@pc:~# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/logstash-snmptrap.conf
Settings: Default filter workers: 1
Logstash startup completed
but I can't find the file /tmp/logstash-snmptrap.log. What's wrong with my Logstash config?
I normally see snmptrap run on port 162. Are you sure that yours is on 160?
Also, don't run the process as root. Use a port forwarder (e.g. iptables) to send 162 to 1062 (the default port that the snmptrap input listens on), as sketched below.
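A minimal iptables sketch for that redirect (assumes traps arrive over UDP on the standard port 162 and the snmptrap input is on its default 1062):

sudo iptables -t nat -A PREROUTING -p udp --dport 162 -j REDIRECT --to-port 1062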
Remember that you will lose traps if Logstash is down. Previously, I had a small Logstash installation that would listen for snmptrap and syslog and write them to Redis, to be read by the main Logstash when it was up. I replaced this with snmptrapd writing to a local file and letting Logstash read from that file; it gave me more control over the input than Logstash did.
