I am using Logstash to put messages onto an AWS Kinesis stream. The output plugin requires authentication, and the credentials are read from environment variables or from a file; we don't set the access key and secret key in the Logstash config, because by default the plugin falls back to the environment or the credentials file. These credentials rotate, and when they change I have to reload the Logstash pipeline. With hot reload (auto reload) Logstash only watches for changes in the config files, but in my case the config does not change; only the environment variable or the credentials file does. How can I force Logstash to reload the pipeline in this case?
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.
input {
  file {
    path => "\xx\\elk.log"
  }
}
output {
  kinesis {
    stream_name => "acars-stream"
    region => "us-east-2"
  }
}
The plugin used - https://github.com/samcday/logstash-output-kinesis
Since you mentioned that credentials may be passed using environment variables, you can reference those variables in the output plugin and enable automatic config reloading in Logstash. Something along these lines:
output {
  kinesis {
    stream_name => "acars-stream"
    region => "us-east-2"
    access_key => "${AWS_ACCESS_KEY}"
    secret_key => "${AWS_SECRET_KEY}"
  }
}
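As a rough sketch, assuming Logstash is started from a shell where the variables are exported (if it runs as a service, they have to be set in the service's environment instead; the config file name here is a placeholder):
export AWS_ACCESS_KEY="..."   # set by your credential rotation process
export AWS_SECRET_KEY="..."
bin/logstash -f logstash.conf --config.reload.automatic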
If that is not an option, you can extend the process that updates the credentials file so that, whenever the credentials are rotated, it also reloads the Logstash config.
Refer to reload docs: https://www.elastic.co/guide/en/logstash/6.4/reloading-config.html
You would do something like:
kill -1 PID_OF_YOUR_LOGSTASH_PROCESS
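For example, assuming only one Logstash process is running on the host, the PID could be resolved with pgrep (adjust the match pattern to your setup):
kill -1 "$(pgrep -f logstash)"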
Related
I am new to ELK and I am trying to do some hands-on work with the ELK stack. I am performing the following on Windows:
1. Installed Elasticsearch, confirmed with http://localhost:9200/
2. Installed Logstash, confirmed using http://localhost:9600/, and started it with:
logstash -f logstash.config
The logstash.config file looks like this:
input {
  beats {
    port => "5043"
  }
}
# The filter part of this file is commented out to indicate that it is
# optional.
# filter {
#
# }
output {
  elasticsearch { hosts => ["localhost:9200"] }
}
3. Installed Kibana, confirmed using http://localhost:5601
Now I want to use Filebeat to pass a log file to Logstash, which parses it and forwards it to Elasticsearch for indexing, and finally Kibana displays it.
To do that, I made the following changes in filebeat.yml.
Change 1:
In the Filebeat prospectors section, I added:
paths:
  # - /var/log/*.log
  - D:\KibanaInput\vinod.log
Contents of vinod.log: Hello World from FileBeat.
Change 2:
In the Outputs section:
#output.logstash:
# The Logstash hosts
hosts: ["localhost:9600"]
When I run the command below,
filebeat -c filebeat.yml -e
I get the following error:
ERR Connecting error publishing events (retrying): Failed to parse JSON response: json: cannot unmarshal string into Go value of type struct { Number string }
Please let me know what mistake I am making.
You are on the right path.
Please confirm the following:
1. In your filebeat.yml, make sure the output.logstash: line is not commented out (this corresponds to your change number 2); a corrected output section is sketched after this list.
2. Make sure your messages are being grokked correctly. Add the following output to your Logstash pipeline config file:
output {
  stdout { codec => json }
}
3. Start your Logstash in debug mode.
4. If you are reading the same file with the same content, make sure you remove the Filebeat registry file ($filebeatHome/data/registry).
5. Read the log files.
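For reference (item 1 above), a minimal sketch of the output section in filebeat.yml, assuming Filebeat should connect to the beats input on port 5043 from your logstash.config rather than to port 9600 (which is Logstash's HTTP API):
output.logstash:
  # the Logstash beats input port, not the 9600 monitoring API port
  hosts: ["localhost:5043"]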
We've just switched from 2.X to 5.X, and I'm trying to find out how to use environment variables in the pipeline configuration files.
In 2.X, the following worked:
export HOSTNAME
Then start Logstash with the --allow-env command line flag. The pipeline configuration file looked like this:
filter {
  mutate {
    add_field => { "some_field" => "${HOSTNAME}" }
  }
}
The documentation says that the --allow-env flag is not required anymore.
I've tried to replace mutate with the environment filter, but with no luck.
I've tried to edit the startup.options file: I added HOSTNAME as a normal environment variable, and also added it inside the read-EOM part, but without any positive result.
What seems to work now is adding the following to the /usr/share/logstash/bin/logstash.lib.sh file, but I'm not sure I'm supposed to edit that:
HOSTNAME="the-name-of-the-host"
export HOSTNAME
So my question is: what have I overlooked? What is the proper way to allow the use of environment variables in Logstash's pipeline configuration files?
Note:
Like Alcanzar described, this works if I run Logstash manually. However, we would like to run it with systemctl, so it should be started automatically by the daemon at boot. In 2.X this worked fine with the /etc/init.d/logstash file, but as the documentation describes, that no longer exists. The files I'm supposed to edit are /etc/logstash/startup.options and /etc/logstash/logstash.yml.
The environment variables clearly work if starting manually:
With a test.conf of this:
input {
  stdin { codec => "json" }
}
filter {
  mutate {
    add_field => {
      "hostname" => "${HOSTNAME}"
    }
  }
}
output {
  stdout { codec => "rubydebug" }
}
We can run a test and verify it:
% export HOSTNAME="abc1234"
% echo '{"a":1}' | bin/logstash -f test.conf
Sending Logstash's logs to /Users/xxxxxx/logstash-5.1.1/logs which is now configured via log4j2.properties
[2017-02-16T09:04:33,442][INFO ][logstash.inputs.stdin ] Automatically switching from json to json_lines codec {:plugin=>"stdin"}
[2017-02-16T09:04:33,454][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>1000}
[2017-02-16T09:04:33,457][INFO ][logstash.pipeline ] Pipeline main started
[2017-02-16T09:04:33,492][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
{
"a" => 1,
"hostname" => "abc1234",
"#timestamp" => 2017-02-16T15:04:33.497Z,
"#version" => "1",
"host" => "XXXXXX.local",
"tags" => []
}
[2017-02-16T09:04:36,472][WARN ][logstash.agent ] stopping pipeline {:id=>"main"}
So the real question is why it isn't working in your scenario. If you are using some sort of /etc/init.d script to start Logstash, then you can add a line like export HOSTNAME="whatever" to /etc/sysconfig/logstash.
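As a sketch, assuming a RHEL-style layout where the init script sources /etc/sysconfig/logstash:
# /etc/sysconfig/logstash
export HOSTNAME="the-name-of-the-host"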
We had the same issue with Logstash 5. The best way (that we found) is to add the env vars in /etc/systemd/system/logstash.service.d/override.conf and in there have:
[Service]
Environment="HOSTNAME=the-name-of-the-host"
I'm currently trying to install and run Logstash on Windows 7 following the guidelines on the Logstash website. I am struggling to configure and use Logstash with Elasticsearch. I created logstash-simple.conf with the content below:
input { stdin { } }
output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
When I execute the command below:
D:\logstash-2.4.0\bin>logstash agent -f logstash-simple.conf
I get the following error; I have tried many things but I keep getting the same error:
No config files found: D:/logstash-2.4.0/bin/logstash-simple.conf
Can you make sure this path is a logstash config file? {:level=>:error}
The signal HUP is in use by the JVM and will not work correctly on this platform
D:\logstash-2.4.0\bin>
Read the "No config files found" part of the error: Logstash can't find the logstash-simple.conf file.
So try
D:\logstash-2.4.0\bin>logstash agent -f [Direct path to the folder containing logstash-simple.conf]logstash-simple.conf
Also verify whether the extension is really .conf or something else like .txt (logstash-simple.conf.txt).
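A quick way to check from the same prompt, since Windows Explorer hides known extensions by default (the wildcard shows the file's real name):
dir D:\logstash-2.4.0\bin\logstash-simple*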
I've been looking all over the web for a configuration example of the Logstash http input plugin and tried to follow the ones I've found. I'm still running into problems with the following configuration:
input {
  http {
    host => "127.0.0.1"
    post => "31311"
    tags => "wpedit"
  }
}
output {
  elasticsearch { hosts => "localhost:9400" }
}
When running service logstash restart, it responds with: Configuration error. Not restarting. Re-run with configtest parameter for details.
So I ran a configuration test (/opt/logstash/bin/logstash --configtest) and it says everything is fine.
So, my question is: how can I find what's wrong with the configuration? Can you see anything obviously incorrect? I'm fairly new to the world of Elasticsearch, if you could not tell...
I have this Puppet module (monit) in which I declare the monit service to be enabled (i.e. to be started when the machine boots):
class monit {
  $configdir = "/etc/monit.d"

  package {
    "monit": ensure => installed;
  }

  service { "monit":
    ensure   => running,
    enable   => true,
    require  => Package["monit"],
    provider => init;
  }

  file {
    '/etc/monit.d':
      ensure => directory;
    '/etc/monit.conf':
      content => template('monit/monitrc.erb'),
      mode    => 0600,
      group   => root,
      require => File['/etc/monit.d'],
      before  => Service[monit],
      notify  => Service[monit],
  }
}
I then included it with include monit inside the default node.
However, when I apply this configuration, Puppet does not set monit up as a startup service (chkconfig --list monit just shows 'off' across the runlevels).
However, if I run puppet apply -e 'service { "monit": enable => true, }' then monit is added to startup properly.
Am I doing any thing wrong here? (Puppet 2.7.6)
The full configuration can be view at https://github.com/phuongnd08/Giasu-puppet
The issue is probably the provider => init line, which is overriding the default provider for handling services. The init provider is a very simple provider that doesn't support the "enableable" feature, so it can't set a service to start on boot.
See http://docs.puppetlabs.com/references/2.7.6/type.html#service for its capabilities.
In your puppet apply example, you don't specify the provider so it picks the most appropriate for your system - in your case the "redhat" provider that uses chkconfig.
To fix this, remove the provider line from your service {} definition and it will again default to the most appropriate provider. You only need to specify a provider if Puppet picks incorrectly, and then it's best to set it as a global default.
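A minimal sketch of the corrected resource, unchanged from your module apart from dropping the provider line:
service { "monit":
  ensure  => running,
  enable  => true,
  require => Package["monit"],
}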