Is it possible to reference environment variables in the Logstash configuration?
In my case, I want to make the Elasticsearch address configurable via a value I have set in the environment.
With Logstash 2.3, you can reference environment variables in the Logstash plugin configuration using ${var} or $var.
https://www.elastic.co/guide/en/logstash/current/environment-variables.html
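For example, a minimal sketch (the variable name ES_HOST is illustrative; on the 2.3/2.4 releases this feature also had to be enabled with the --allow-env flag, if I remember correctly):

output {
  elasticsearch {
    hosts => ["${ES_HOST}"]
  }
}

Then export ES_HOST=myhost:9200 in the environment that starts Logstash.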
Before Logstash 2.3, you can use the "environment" filter plugin, which is community maintained.
Documentation at: https://www.elastic.co/guide/en/logstash/current/plugins-filters-environment.html#plugins-filters-environment-add_field_from_env
How to install this plugin:
$LOGSTASH_HOME/bin/plugin install logstash-filter-environment
Source code at: https://github.com/logstash-plugins/logstash-filter-environment
The main part is:
# encoding: utf-8
require "logstash/filters/base"
require "logstash/namespace"

# Set fields from environment variables
class LogStash::Filters::Environment < LogStash::Filters::Base

  config_name "environment"

  # Specify a hash of fields to the environment variable
  # A hash of matches of `field => environment` variable
  config :add_field_from_env, :validate => :hash, :default => {}

  public
  def register
    # Nothing
  end # def register

  public
  def filter(event)
    return unless filter?(event)
    @add_field_from_env.each do |field, env|
      event[field] = ENV[env]
    end
    filter_matched(event)
  end # def filter
end # class LogStash::Filters::Environment
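Once installed, a hedged usage sketch (the field name env_hostname and the HOSTNAME variable are just examples) copies an environment variable into an event field, which later filters can then reference like any other field (e.g. %{env_hostname}):

filter {
  environment {
    add_field_from_env => { "env_hostname" => "HOSTNAME" }
  }
}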
I can hardly believe that these are the only solutions left: hacking Logstash or using some kind of templating system to rewrite the config.
Actually, I do not want to touch or tweak the config for different deployment scenarios: all I want is to pass in some parameters to connect Logstash to the outside world (e.g. where Elasticsearch is located, usernames/credentials to connect to other systems). I have googled for an hour now and all I could find were these awkwardly complicated solutions for this very simple and common problem.
I sincerely hope that someone comes up with a better idea, like
%{ENV[ELASTICSEARCH_HOST]}
That's not directly supported, no.
However, if you're running a version later than 1.4.0, it would be pretty trivial to edit elasticsearch.rb to add this feature. Around line 183:
client_settings["network.host"] = @bind_host if @bind_host
You could tweak it to read an environment variable:
if ENV["ESHOST"].nil? then
client_settings["network.host"] = ENV["ESHOST"]
else
client_settings["network.host"] = #bind_host if #bind_host
end
If you prefer, you can run Logstash with the -e command-line option to pass config via STDIN. You could cat in some file with special tokens that you've replaced with your environment variable(s).
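A rough sketch of that approach (the template file name and the __ES_HOST__ token are made up for illustration):

# logstash.conf.template contains, e.g.:  host => "__ES_HOST__"
CONF=$(sed "s/__ES_HOST__/$ESHOST/g" logstash.conf.template)
bin/logstash -e "$CONF"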
The Logstash configuration as of this writing is just a configuration file; it is not a programming language. Thus it has a few reasonable "limitations": for example, it cannot reference environment variables, you cannot pass parameters to it, and it is hard to reuse other configuration files. Those limitations make the Logstash configuration file hard to maintain when it grows or when you want to adjust its behavior on the fly.
My approach is to use a template engine to generate the Logstash configuration file. I used Jinja2 in Python.
For example, the Elasticsearch output could be templated as
output {
  elasticsearch {
    index => "{{ es_index_name }}"
    host => "{{ es_hostname }}"
  }
}
Then I wrote a simple Python script using Jinja2 to generate the Logstash configuration file; the values of es_index_name and es_hostname can be passed as arguments to the script. See here for a Jinja2 tutorial: http://kagerato.net/articles/software/libraries/jinja-quickstart.html
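A minimal sketch of such a script (file names and argument order are my own illustration, not taken from the original):

import sys
from jinja2 import Template

# Usage: python render_config.py <es_hostname> <es_index_name>
es_hostname, es_index_name = sys.argv[1], sys.argv[2]

# Read the template, substitute the values, and write the final Logstash config.
with open("logstash.conf.j2") as f:
    template = Template(f.read())

with open("logstash.conf", "w") as f:
    f.write(template.render(es_hostname=es_hostname, es_index_name=es_index_name))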
In this way, a big Logstash configuration can be split into reusable pieces and its behavior can be adjusted on the fly.
As explained in logstash-issues
Connections are established at plugin registration time (during initialization, as they almost certainly should be), but field interpolation (like %{escluster}) is an event-processing-time operation. As such, host isn't really eligible for this behavior.
So unless an input or output plugin natively supports the %{foo} syntax, doing any environment variable evaluation at the event-filtering stage is too late for input and output plugins to take advantage of it.
The .conf file supports environment variables.
You just have to export the environment variable:
export EXAMPLE_VAR=123
and use it in the configuration file this way:
${EXAMPLE_VAR}
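For instance (a small sketch; the mutate filter and field name are just an illustration), the variable can be used anywhere a plugin option takes a value:

filter {
  mutate {
    add_field => { "my_var" => "${EXAMPLE_VAR}" }
  }
}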
I am using the Config crate in Rust, and would like to use environment variables to set keys inside a section of the config. The end goal is to override application settings from a Docker Compose file, or the docker command line, using the environment.
If my config were the following, could I use a specifically crafted environment variable to set database.echo?
(code blurb below is taken from this example)
debug = true
[database]
echo = true
The example code for configuring this via environment variables only illustrates setting keys at the top level. I am wondering how to extend this. The .set() method takes a hierarchical key, so I'm hopeful that there's a way to encode the path in the environment variable name.
Answering my own question.
I just noticed that the Environment code accepts a custom separator which will get replaced with . (dot).
So one can set the separator to something like _XX_ and that will get mapped to a ".". Setting DATABASE_XX_ECHO=true, for instance, would then change the database.echo key.
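A minimal sketch of that idea, assuming a recent release of the config crate (the Settings.toml file name and the builder-style API are my assumptions, not from the linked example):

use config::{Config, Environment, File};

fn main() {
    // With the separator set to "_XX_", DATABASE_XX_ECHO=true in the environment
    // overrides the database.echo key loaded from Settings.toml.
    let settings = Config::builder()
        .add_source(File::with_name("Settings"))
        .add_source(Environment::default().separator("_XX_"))
        .build()
        .expect("failed to load configuration");

    let echo = settings.get_bool("database.echo").unwrap_or(false);
    println!("database.echo = {}", echo);
}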
In my system, I use Logstash, Filebeat, and Elasticsearch.
Filebeat reads the logs, the required fields in the logs are filtered with Logstash, and the result is saved in Elasticsearch.
I have a customer requirement to switch saving some fields of the log on or off with a single config change made by the customer.
My planned approach was to keep the switch variable as an environment variable in "/etc/default/logstash" and let the customer change the variable with a file operation.
But I have found out that the Logstash config is not reloaded when we change that file, even if we set "config.reload.automatic: true". So I cannot continue with my planned approach.
Also, letting the customer edit the Logstash ".conf" files is not a good approach either, because the code is so complex.
Please advise on this issue.
Thanks,
I have found that it is not possible to reload the value of an environment variable without restarting Logstash, so I have used a file-read solution instead. The config block is below.
ruby {
  code => "event.set('variable1', IO.readlines('/etc/logstash/input.txt')[0])"
}
This has fixed my problem, but I would like to know whether there is a performance impact from executing a file operation for each event.
I'm new to configuration management, just FYI.
I'm trying to puppetize elasticsearch, and want to have a master list of elasticsearch nodes in a file (which can be used for multiple things, not just this purpose).
I would like to add elasticsearch.yml via an ERB template and expand the list of FQDNs into the discovery.zen.ping.unicast.hosts: [] param.
For example I have an external file called es_hosts in module/files that contains:
host1.domain.com
host2.domain.com
host3.domain.com
host4.domain.com
Then when puppet builds the ERB template have this in the param:
discovery.zen.ping.unicast.hosts: ["host1.domain.com", "host2.domain.com", "host3.domain.com", "host4.domain.com"]
I've tried a few things, but I can't get my head wrapped around it.
I would be using this list for other things like building firewall rules, etc., so I'd like to have one master list for reference that can be updated by my team.
Thanks for any help!
Rather than have a list in a file, it would be better to have it in Hiera, since defining lists and other external data is specifically what Hiera is for.
(If you have not used Hiera yet, you definitely should read up on it.)
So in Hiera you would have:
---
es_hosts:
  - host1.domain.com
  - host2.domain.com
  - host3.domain.com
  - host4.domain.com
In your manifest, you would read that in from Hiera using the hiera function:
$es_hosts = hiera('es_hosts')
(Note that instead of the hiera function, we often use Puppet's Automatic Parameter Lookup feature to read data into our manifests from Hiera, but your requirement here - a list of ES hosts to be used in multiple contexts - suggests you will want this list not to be bound to a specific class input. If this does not make sense to you right now, you will need to learn about Parameterised Classes and Automatic Parameter Lookup, but it's otherwise not relevant to this answer.)
Finally, in your ERB template you would have:
discovery.zen.ping.unicast.hosts: ["<%= @es_hosts.join('", "') %>"]
Pay attention to the fact that the $es_hosts variable from your manifest is accessed via the Ruby instance variable @es_hosts in your ERB template.
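To tie it together, a minimal sketch of a manifest (the class name, file path, and module name are illustrative) that reads the list from Hiera and renders the template:

class profile::elasticsearch {
  $es_hosts = hiera('es_hosts')

  file { '/etc/elasticsearch/elasticsearch.yml':
    ensure  => file,
    content => template('profile/elasticsearch.yml.erb'),
  }
}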
Finally, note that there is an Elasticsearch Puppet module available on the Puppet Forge. You may find that using that module is better than writing your own.
When configuring HTTPS for the Play framework, I have to use the following configuration when running the background task.
play -Dhttps.port=9443 -Dhttps.keyStore=keystore.jks -Dhttps.keyStorePassword=password run
I don't want to display the keystore password on the command line. It shouldn't be visible to all users on that machine.
The HTTPS configuration can be supplied either using system properties or in application.conf.
I recommend using a combination of environment variables and application.conf:
Put your sensitive information in environment variables.
Reference these environment variables from application.conf, like this:
https.keyStore = defaultvalue
https.keyStore = ${?MY_HTTPS_KEY_STORE_ENV}
The question mark means that if no value is found for MY_HTTPS_KEY_STORE_ENV, then the defaultvalue from above will be used.
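A fuller sketch along the same lines (the variable names are illustrative) that also keeps the keystore password off the command line:

https.keyStore = "keystore.jks"
https.keyStore = ${?MY_HTTPS_KEY_STORE_ENV}
https.keyStorePassword = ""
https.keyStorePassword = ${?MY_HTTPS_KEY_STORE_PASSWORD_ENV}

Export MY_HTTPS_KEY_STORE_PASSWORD_ENV in the environment before starting Play, and the -Dhttps.keyStorePassword flag is no longer needed on the command line.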
I have installed Java through some cookbook and have set some default variables; now I want to add some more (application-specific) variables through my cookbook. How can I do that through recipes in Chef? I tried to pass some variables in setenv.sh, but that overrides the default values; instead, I want to merge the variables and override existing variable values.
My code in setenv.sh:
export JAVA_OPTS="$JAVA_OPTS -Xmx2048m"
where $JAVA_OPTS holds the default variables
The first way to do it would be to update the attribute defining the base values in your application cookbook; as attributes are read before the recipes are evaluated, your file would end up with the correct values.
You didn't say which cookbook you're using, so I'll base the example on the tomcat cookbook.
This cookbook defines an attribute default['tomcat']['java_options'] = '-Xmx128M -Djava.awt.headless=true'
The easiest way is to complement this by using something like
default['tomcat']['java_options'] = "#{default['tomcat']['java_options']} -Xmx2048m"
The obvious problem is that you end up with two -Xmx values; usually the JVM will take the last one, but it becomes hard to track which option has which value when there's a lot of overwriting.
The second option is to take advantage of the jvmargs cookbook, which gives helper functions to define the Java options, and then use them in your setenv.sh template at the end.