file_line { '/etc/profile.d/setjvmparams.sh':
  path => '/etc/profile.d/setjvmparams.sh',
  line => "export JAVA_HOME=/usrdata/apps/java/${tomcat::jdkversion}\nexport JRE_HOME=/usrdata/apps/java/${tomcat::jdkversion}/jre\nexport PATH=\"/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:\$JAVA_HOME/bin\"",
}
This appends the data on every run. As far as I have read, file_line was designed to add data only if it does not already exist in the file. How can I make sure it gets added only when it is not present?
The file_line type should be used for single lines, but the line parameter you're passing contains three lines separated by \n. This should be split into three resources:
file_line { '/etc/profile.d/setjvmparams.sh JAVA_HOME':
  path => '/etc/profile.d/setjvmparams.sh',
  line => "export JAVA_HOME=/usrdata/apps/java/${tomcat::jdkversion}",
}

file_line { '/etc/profile.d/setjvmparams.sh JRE_HOME':
  path => '/etc/profile.d/setjvmparams.sh',
  line => "export JRE_HOME=/usrdata/apps/java/${tomcat::jdkversion}/jre",
}

file_line { '/etc/profile.d/setjvmparams.sh PATH':
  path => '/etc/profile.d/setjvmparams.sh',
  line => "export PATH=\"/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:\$JAVA_HOME/bin\"",
}
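One caveat, in case ${tomcat::jdkversion} ever changes: file_line only checks whether the exact line is already present, so the old export would stay in the file and the new one would be appended next to it. stdlib's file_line supports a match parameter that replaces an existing line instead; a sketch for the JAVA_HOME resource (the regexp is my own assumption, adjust as needed):

file_line { '/etc/profile.d/setjvmparams.sh JAVA_HOME':
  path  => '/etc/profile.d/setjvmparams.sh',
  line  => "export JAVA_HOME=/usrdata/apps/java/${tomcat::jdkversion}",
  match => '^export JAVA_HOME=',
}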
When I execute the configuration file using the command bin\logstash -f configfile.conf, there is no output on the console, just Logstash's own logs.
Here is the configuration file:
input {
  file {
    path => "F:\ELK\50_Startups.csv"
    start_position => "beginning"
  }
}

filter {
  csv {
    separator => ","
    columns => ["R&D","Administration","Marketing","State","Profit"]
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => ["Startups"]
  }
  stdout {}
}
Does the input file (50_Startups.csv) have fresh data written to it? If not, it might be that Logstash has already stored the read offset at the last line, and it will not re-read the file on future runs unless you delete the sincedb offset files, or just add the following config:
sincedb_path => "/dev/null"
That would force Logstash to re-parse the file.
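For clarity, a minimal sketch of where that option would go in the file input from the question (everything else unchanged; on Windows, "NUL" is commonly used instead of /dev/null):

input {
  file {
    path => "F:\ELK\50_Startups.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}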
See more info on file offsets here: https://www.elastic.co/guide/en/logstash/current/plugins-inputs-file.html#_tracking_of_current_position_in_watched_files
From it:
By default, the sincedb file is placed in the data directory of Logstash with a filename based on the filename patterns being watched
I created a configuration in Puppet for the Nagios agent (NRPE). Now I'm trying to set different file sources depending on which directories exist. First, I check whether a specific directory exists and then set the specific file content. My current config files look like this:
class nagios_client::file_nagios-check-Linux-stats {
  include nagios_client::check_location_lib-nagios

  file { '/usr/lib/nagios/plugins/check_linux_stats.pl':
    ensure  => file,
    owner   => root,
    group   => root,
    mode    => 755,
    content => template("nagios_client/check_linux_stats.pl.erb"),
    require => Exec["check_usr-lib_exists"],
  }

  file { '/usr/lib64/nagios/plugins/check_linux_stats.pl':
    ensure  => file,
    owner   => root,
    group   => root,
    mode    => 755,
    content => template("nagios_client/check_linux_stats.pl.erb"),
    require => Exec["check_usr-lib64_exists"],
  }

  file { '/usr/lib32/nagios/plugins/check_linux_stats.pl':
    ensure  => file,
    owner   => root,
    group   => root,
    mode    => 755,
    content => template("nagios_client/check_linux_stats.pl.erb"),
    require => Exec["check_usr-lib32_exists"],
  }
}
This works fine, but I have a problem with this:
class nagios_client::file_nrpe-cfg {
  # include nagios_client::check_location_lib-nagios

  file { '/etc/nagios/nrpe.cfg.def':
    path    => '/etc/nagios/nrpe.cfg',
    ensure  => file,
    owner   => root,
    group   => root,
    mode    => 644,
    content => template("nagios_client/nrpe-cfg.erb"),
    require => Exec["check_usr-lib_exists"],
  }

  file { '/etc/nagios/nrpe.cfg.32':
    path    => '/etc/nagios/nrpe.cfg',
    ensure  => file,
    owner   => root,
    group   => root,
    mode    => 644,
    content => template("nagios_client/nrpe-cfg-32.erb"),
    require => Exec["check_usr-lib32_exists"],
  }

  file { '/etc/nagios/nrpe.cfg.64':
    path    => '/etc/nagios/nrpe.cfg',
    ensure  => file,
    owner   => root,
    group   => root,
    mode    => 644,
    content => template("nagios_client/nrpe-cfg-64.erb"),
    require => Exec["check_usr-lib64_exists"],
  }
}
It looks like require => Exec[...] is ignored.
My check definition:
class nagios_client::check_location_lib-nagios {
  exec { 'check_usr-lib_exists':
    command => '/bin/true',
    onlyif  => '/usr/bin/test -d /usr/lib/nagios/plugins',
  }
  exec { 'check_usr-lib32_exists':
    command => '/bin/true',
    onlyif  => '/usr/bin/test -d /usr/lib32/nagios/plugins',
  }
  exec { 'check_usr-lib64_exists':
    command => '/bin/true',
    onlyif  => '/usr/bin/test -d /usr/lib64/nagios/plugins',
  }
}
I'm using Puppet 3.8.7. What is the right way to do this?
The problem with what you have is that you are using require, which only makes sure that the specified resource (in this case each exec) is applied before the file resource. The behavior you want corresponds more closely to notify relationships (which create a refresh event); however, file resources do not respond to refresh events. You can read more about refresh relationships here: https://puppet.com/docs/puppet/latest/lang_relationships.html#refreshing-and-notification.
There are two ways I can think of to fix this. The first would be to use an exec resource to manage the file instead of a file resource. This is definitely not optimal, since you lose all of the parameters of the file resource (I do not recommend this approach, but you could).
The other way would be to create a custom Ruby fact that checks whether the directories exist. The fact would look something like this:
Facter.add('nagios_directories') do
  confine kernel: 'Linux'
  setcode do
    paths_to_check = [
      '/usr/lib/nagios/plugins',
      '/usr/lib32/nagios/plugins',
      '/usr/lib64/nagios/plugins',
    ]
    paths_to_check.select { |d| File.directory?(d) }
  end
end
This fact would check all the directories listed in the paths_to_check array, and return an array containing the directories that do exist. If none of the directories exist, it would return an empty array.
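If the fact is shipped in a module (typically under the module's lib/facter directory so pluginsync distributes it to agents), you can sanity-check it on a node before wiring it into the manifest; the fact name below matches the code above:

facter -p nagios_directories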
Once you have that fact set up, you can then rewrite your code like this:
class nagios_client::file_nrpe-cfg {
  if member($facts['nagios_directories'], '/usr/lib/nagios/plugins') {
    file { '/etc/nagios/nrpe.cfg.def':
      path    => '/etc/nagios/nrpe.cfg',
      ensure  => file,
      owner   => root,
      group   => root,
      mode    => 644,
      content => template("nagios_client/nrpe-cfg.erb"),
    }
  }

  if member($facts['nagios_directories'], '/usr/lib32/nagios/plugins') {
    file { '/etc/nagios/nrpe.cfg.32':
      path    => '/etc/nagios/nrpe.cfg',
      ensure  => file,
      owner   => root,
      group   => root,
      mode    => 644,
      content => template("nagios_client/nrpe-cfg-32.erb"),
    }
  }

  if member($facts['nagios_directories'], '/usr/lib64/nagios/plugins') {
    file { '/etc/nagios/nrpe.cfg.64':
      path    => '/etc/nagios/nrpe.cfg',
      ensure  => file,
      owner   => root,
      group   => root,
      mode    => 644,
      content => template("nagios_client/nrpe-cfg-64.erb"),
    }
  }
}
Here is some additional documentation for custom facts: https://puppet.com/docs/facter/3.9/fact_overview.html.
Lastly, if you are using Puppet 6 (currently the latest release), you can write a custom Ruby function and make use of the new Deferred type. This type allows you to execute functions on the agent during the catalog run (before this release, all Puppet functions were executed on the Puppet master at compile time), which means you can use a function to check whether a file exists. I have not had a chance to try this feature, but you can view the documentation here: https://puppet.com/docs/puppet/6.0/integrating_secrets_and_retrieving_agent-side_data.html.
I have a filename in the format <key>:<value>-<key>:<value>.log, e.g. pr:64-author:mxinden-platform:aws.log, containing the logs of a test run.
I want to stream each line of the file to Elasticsearch via Logstash. Each line should be treated as a separate document, and each document should get fields derived from the filename. So, for the above example, the log line 17-12-07 foo something happened bar would get the fields pr with value 64, author with value mxinden, and platform with value aws.
At the time I write the Logstash configuration, I do not know the names of the fields.
How do I dynamically add fields to each line based on the fields contained in the filename?
The static approach so far is:
filter {
  mutate { add_field => { "file" => "%{[@metadata][s3][key]}" } }
  grok { match => { "file" => "pr:%{NUMBER:pr}-" } }
  grok { match => { "file" => "author:%{USERNAME:author}-" } }
  grok { match => { "file" => "platform:%{USERNAME:platform}-" } }
}
Changes to the filename structure are fine.
Answering my own question based on #dan-griffiths comment:
The solution for a file named like pr=64,author=mxinden,platform=aws.log is to use the Logstash kv filter, e.g.:
filter {
  kv {
    source      => "file"
    field_split => ","
  }
}
where file is a field extracted from the filename via the AWS S3 input plugin.
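Putting the two pieces of this thread together (the mutate from the static approach plus the kv filter), a rough sketch, assuming the filename has been switched to the key=value,key=value form and the S3 key really is available under [@metadata][s3][key]:

filter {
  mutate { add_field => { "file" => "%{[@metadata][s3][key]}" } }
  kv {
    source      => "file"
    field_split => ","
  }
}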
I'm trying to change a configuration file using Puppet.
This is my test.txt file that I want to change:
[default]
#puppet=no
abc=123
[nova]
#puppet=no
I want to change "#puppet=no" to "puppet=yes" only in the [default] section.
Here are my two attempts in test.pp:
file_line{"someline":
path => '/root/openstack-puppet/computenode/nova/test.txt',
match => '[default]\n#puppet',
line => 'puppet=ok'
}
This one failed to find the match pattern, so it just added "puppet=ok" at the end of the file.
file_line{"someline":
path => '/root/openstack-puppet/computenode/nova/test.txt',
match => '#puppet',
line => 'puppet=ok'
}
This one failed because the pattern matches multiple lines.
I also tried Augeas, but I couldn't find out how to uncomment a line using Augeas.
Somebody please help me with this problem!!
=========================================================================
I ran this code:
file_line { 'someline':
  path     => '/root/openstack-puppet/computenode/nova/test.txt',
  after    => '\[default\]',
  multiple => 'false',
  match    => '#puppet',
  line     => 'puppet=ok',
}
But when I run it with "puppet apply", it still produces the same error:
Error: More than one line in file '/root/openstack-puppet/computenode/nova/test.txt' matches pattern '#puppet'
Error: /Stage[main]/Main/File_line[someline]/ensure: change from absent to present failed: More than one line in file '/root/openstack-puppet/computenode/nova/test.txt' matches pattern '#puppet'
I think the 'after' attribute cannot be applied when the 'match' attribute is defined.
When I remove the 'match' attribute, it works, but it doesn't replace the original string ('#puppet=no').
It just adds a new line after [default], like this:
[default]
puppet=ok
#puppet=no
abc=123
dedd=0
[nova]
#puppet=no
So the issue still remains: how can I erase (or replace) the string '#puppet=no' only in the [default] section?
The after attribute will solve this problem for you. Taking your second resource and cleaning up some, we have:
file_line { 'someline':
  path     => '/root/openstack-puppet/computenode/nova/test.txt',
  match    => '#puppet',
  line     => 'puppet=ok',
  after    => '\[default\]',
  multiple => false,
}
Notice I also added the multiple attribute to safeguard against changing more than just the line you want to change.
The reason your first resource would have issues is threefold. First, file_line requires that your line attribute have a successful regexp match against the match attribute, which is not true in your case. Second, putting [default] in the match attribute means that [default] would be removed from your file if the resource succeeded as you wrote it. Third, you need to escape [] in your regexp, so it would look like \[default\] if you wanted to go that route (and you do not for the first two reasons given).
The file looks like it fits the INI file format, so a better solution would be to use the ini_setting resource type from the puppetlabs/inifile module: https://forge.puppet.com/modules/puppetlabs/inifile
ini_setting { "sample setting":
ensure => present,
path => '/root/openstack-puppet/computenode/nova/test.txt',
section => 'default',
setting => 'puppet',
value => 'yes',
}
Hi, you can try this:
include stdlib
file_line{"someline":
ensure => 'present',
after => 'default',
multiple => false,
path => '/root/openstack-puppet/computenode/nova/test.txt',
line => 'puppet=ok',
}
In Puppet, you can chown/chmod a single file by doing:
file { '/var/log/mylog/test.log':
  ensure => 'present',
  mode   => '0644',
  owner  => 'me',
}
Two questions on this:
ensure => 'present' is going to make sure '/var/log/mylog/test.log' exists, and if it doesn't, it creates it. Is there any way I can make it act only if the file exists, and if the file doesn't exist, not bother to create/delete it, just ignore it and carry on?
Let's say I have 3 files under /var/log/mylog/ and I want to chown/chmod them all in a batch instead of having 3 file resource sections in my Puppet code. Can I do something like below (of course, the code below doesn't exist, it's in my dream now ^_^):
files {
  '/var/log/mylog/*.log':
    ensure => 'present',
    mode   => '0644',
    owner  => 'me';
}
If you want to take a given action when the file exists and do nothing when it doesn't, you currently have no choice (to my knowledge) other than to use the exec resource with creates plus the onlyif or unless directives.
You could use, for instance (see the reference doc):
exec { "touch /var/log/mylog/test.log":
path => "/usr/bin:/usr/sbin:/bin",
user => "${yourmodule::params::user}",
group => "${yourmodule::params::group}",
creates => "/var/log/mylog/test.log",
unless => "test -f /var/log/mylog/test.log"
}
file { '/var/log/mylog/test.log':
ensure => 'present',
mode => "${${yourmodule::params::mode}",
owner => "${yourmodule::params::user}",
group => "${yourmodule::params::group}",
require => Exec["touch /var/log/mylog/test.log"]
}
No. Again, you'll have to use an exec resource.
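A rough sketch of what such an exec could look like for the layout in the question; the resource title and the onlyif guard are my own assumptions (the guard only checks ownership, to keep the resource from running on every catalog apply), and it relies on the shell provider so the glob expands:

exec { 'fix mylog ownership and mode':
  command  => 'chown me /var/log/mylog/*.log && chmod 0644 /var/log/mylog/*.log',
  path     => '/usr/bin:/bin',
  provider => shell,
  onlyif   => 'find /var/log/mylog -maxdepth 1 -name "*.log" ! -user me | grep -q .',
}

Alternatively, if every file under /var/log/mylog should get the same owner and mode, a file resource on the directory itself with recurse => true can manage them without an exec.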