Why isn't the "tidy" resource removing files on a new provision? I have the following:
package { 'apache2':
  ensure => present,
  before => [
    File["/etc/apache2/apache2.conf"],
    File["/etc/apache2/envvars"],
  ],
} ->
# Remove the conf files in the conf.d directory except the charset.
tidy { 'tidy_apache_conf':
  path    => '/etc/apache2/conf.d/',
  recurse => 1,
  backup  => true,
  matches => [
    'localized-error-pages',
    'other-vhosts-access-log',
    'security',
  ],
}
On provisioning, the files specified in the matches attribute aren't removed. However, by specifying a "file" resource instead, I see the desired results.
$unwanted_apache_conf = [
  '/etc/apache2/conf.d/localized-error-pages',
  '/etc/apache2/conf.d/other-vhosts-access-log',
  '/etc/apache2/conf.d/security',
]

package { 'apache2':
  ensure => present,
  before => [
    File["/etc/apache2/apache2.conf"],
    File["/etc/apache2/envvars"],
  ],
} ->
file { $unwanted_apache_conf:
  ensure => absent,
}
Why isn't the tidy resource removing the files? The tidy resource should be generating a file resource for each matched file. Am I missing an attribute in the tidy resource, or am I missing the concept entirely? Is there any way to see what the file resources generated by the tidy resource look like? Thanks for any input.
This is because the Tidy resource went about five years without supporting notify mechanisms; see:
https://projects.puppetlabs.com/issues/3924
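As an aside, a common alternative to tidy here is to manage the directory itself and let Puppet purge unmanaged files. A minimal sketch, assuming the charset file's exact path:

# Any file in conf.d not declared as a Puppet resource gets removed;
# declaring the charset file (path assumed) preserves it.
file { '/etc/apache2/conf.d':
  ensure  => directory,
  recurse => true,
  purge   => true,
  require => Package['apache2'],
}

file { '/etc/apache2/conf.d/charset':
  ensure => file,
}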
I want to add/replace a string in a file within a particular pattern. Please refer to the data below:
"dont_search_this" => {
-tag => "qwerty",
-abc_asd => [ "q/rg/dfg.txt",],
-dependent_fcv => ["me_lib", "you_lib",],
-vlog_opts => (($ENV{ABC_PROJECT}) eq "xuv")
? [ "-error=AMR", "-error=GHJ", "-error=TYU", "-error=IJK", ]
: [] ,
},
"search_this" => {
-tag => "qwerty",
-abc_asd => [ "q/rg/dfg.txt",],
-dependent_fcv => ["me_lib", "you_lib",],
-vlog_opts => (($ENV{ABC_PROJECT}) eq "xuv")
? [ "-error=AMR", "-error=GHJ", "-error=TYU", "-error=IJK", ]
:[],
},
In the above data, I want to add the string "-error=all" on the -vlog_opts line, in the search_this paragraph only. The modified data should be as follows:
"dont_search_this" => {
-tag => "qwerty",
-abc_asd => [ "q/rg/dfg.txt",],
-dependent_fcv => ["me_lib", "you_lib",],
-vlog_opts => (($ENV{ABC_PROJECT}) eq "xuv")
? [ "-error=AMR", "-error=GHJ", "-error=TYU", "-error=IJK", ]
:[],
},
"search_this" => {
-tag => "qwerty",
-abc_asd => [ "q/rg/dfg.txt",],
-dependent_fcv => ["me_lib", "you_lib",],
-vlog_opts => (($ENV{ABC_PROJECT}) eq "xuv")
? [ "-error=AMR", "-error=GHJ", "-error=TYU", "-error=IJK", "-error=all" ]
:[],
},
Please help me with this. Using Perl is also fine.
Thank you very much!
I can't help but think that there's got to be a better way than editing the source code...?
Read the whole script file into a string and then follow the trail to identify the place to change:
perl -0777 -wpe'
s/"search_this"\s+=>\s+\{.*?\-vlog_opts\s+=>\s+[^\]]+\K/ADD_THIS/s;
' file
(broken over lines for readability)
Notes
the -0777 switch unsets the input record separator, so the file is "slurped" whole, as one "line"
the /s modifier makes it so that . matches a newline as well
the \K makes it so that everything matched up to that point is kept out of the substitution, so it doesn't have to be (captured and) re-entered in the replacement part; we literally just add ADD_THIS
Good information about \K is under "Lookaround Assertions" in Extended Patterns in perlre, but keep in mind that it subtly differs from other lookarounds.
That looks like a Perl data structure.
Any reason why you can't just push "-error=all" into $hash{search_this}{-vlog_opts}->@*?
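If the data is live inside a Perl program rather than source text being edited, the push would look something like this (a minimal sketch; %config is a hypothetical stand-in for the real structure):

use strict;
use warnings;

# Hypothetical stand-in for the real data structure:
my %config = (
    search_this => {
        -tag       => 'qwerty',
        -vlog_opts => [ '-error=AMR', '-error=GHJ' ],
    },
);

# Append the flag to search_this only. On Perl 5.24+ the postfix form
#   push $config{search_this}{-vlog_opts}->@*, '-error=all';
# also works.
push @{ $config{search_this}{-vlog_opts} }, '-error=all';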
I have a strange problem with a Logstash filter that was working up until yesterday.
This is my .conf file:
input {
  beats {
    port => 5044
  }
}

filter {
  if "access.log" in [source] {
    grok {
      match => { "message" => "%{GREEDYDATA:messagebefore}\[%{HTTPDATE:real_date}\]\ %{GREEDYDATA:messageafter}" }
    }
    mutate {
      replace => { "[message]" => "%{messagebefore} %{messageafter}" }
      remove_field => [ "messagebefore", "messageafter" ]
    }
    date {
      match => [ "real_date", "dd/MMM/YYYY:HH:mm:ss Z" ]
    }
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
  }
}
The issue is that in the output, the derived fields %{messagebefore} and %{messageafter} come through as literal text rather than their contents.
Example:
source:/var/log/nginx/access.log message:%{messagebefore} %{messageafter}...
The strange thing is that this was working fine before yesterday afternoon. I also appreciate that this is probably not the best way to process nginx logs, but I'm using this one only as an example, as the problem is affecting all of my other configuration files as well.
My environment:
ELK stack running as a Docker container on CentOS 7, derived from docker.io/sebp/elk.
Filebeat running on a CentOS 7 client.
Any ideas?
Thanks.
Solved this myself, and posting here in case anyone gets the same issue.
When building the Docker container, I inadvertently left behind another .conf file that also contained a reference to access.log. The two .conf files were clashing, as Logstash was processing both. I deleted the erroneous file and everything started working.
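To illustrate: Logstash concatenates every .conf file in its configuration directory into a single pipeline, so a leftover file like this hypothetical one would also run against the same events and clobber fields set by the main filter:

# /etc/logstash/conf.d/99-leftover.conf -- hypothetical forgotten file
filter {
  if "access.log" in [source] {
    mutate {
      add_tag => [ "processed_by_leftover" ]
    }
  }
}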
I have a variable that I set at a very early stage. The variable is a boolean, and at some point I want to exec a command based on whether this variable is true or false, something like this:
exec { 'update-to-latest-core':
  command     => "do something",
  user        => 'root',
  refreshonly => true,
  path        => [ "/bin/bash", "sbin/", "/usr/bin/", "/usr/sbin/" ],
  onlyif      => 'test ${latest} = true',
  notify      => Exec["update-to-latest-database"],
}
So this command doesn't work (onlyif => ['test ${latest} = true']); I tried several other ways too, but they didn't work. Something so simple cannot be so hard to do. Can someone help me with this, and also explain the rules behind getting commands to execute inside the onlyif clause? (Also, I cannot use an if clause at a higher level because I have other dependencies.)
Since $latest is a Puppet variable, it is not useful to defer checking its value until the catalog is applied. Technically, in fact, you cannot do so: Puppet will interpolate the variable's value into the Exec resource's catalog representation, so there is no variable left by the time the catalog is applied, only a literal.
Since the resource in question is notified by another resource, you must not suppress it altogether. If the code presented in your question does not work, then it is likely because the interpolated value of $latest is different than you expect -- for example, '1' instead of 'true'. Running the agent in --debug mode should show you the details of the command being executed.
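One detail worth checking (a side note, not part of the explanation above): Puppet interpolates variables only inside double-quoted strings, so the single-quoted 'test ${latest} = true' from the question passes the literal text ${latest} through to the command. A minimal sketch with double quotes, where the catalog already contains the interpolated literal:

exec { 'update-to-latest-core':
  command     => "do something",
  user        => 'root',
  refreshonly => true,
  path        => [ '/bin/', '/usr/bin/' ],
  # Interpolated at compile time; the agent sees e.g. 'test true = true'.
  onlyif      => "test ${latest} = true",
  notify      => Exec['update-to-latest-database'],
}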
Alternatively, you could approach it a different way:
exec { 'update-to-latest-core':
  command     => $latest ? { true => "do something", default => '/bin/true' },
  user        => 'root',
  refreshonly => true,
  path        => [ "/bin/bash", "sbin/", "/usr/bin/", "/usr/sbin/" ],
  notify      => Exec["update-to-latest-database"],
}
That will execute (successfully, and with no external effect) whenever $latest is false, and it will run your do something command whenever $latest is true. The choice of command is made during catalog building.
I am running the following filter in a logstash config file:
filter {
  if [type] == "logstash" {
    grok {
      match => {
        "message" => [
          "\[%{DATA:timestamp}\]\[%{DATA:severity}\]\[%{DATA:instance}\]%{DATA:mymessage}, reason:%{GREEDYDATA:reason}",
          "\[%{DATA:timestamp}\]\[%{DATA:severity}\]\[%{DATA:instance}\]%{GREEDYDATA:mymessage}"
        ]
      }
    }
  }
}
It kind of works:
it does identify and carve out variables "timestamp", "severity", "instance", "mymessage", and "reason"
Really, what I wanted was for the text that is now %{mymessage} to become %{message}, but when I add any sort of mutate command to this grok it stops working. (By the way, should there be a log that tells me what is breaking? I didn't see it... ironic for a logging solution not to have verbose logging.)
Here's what I tried:
filter {
  if [type] == "logstash" {
    grok {
      match => {
        "message" => [
          "\[%{DATA:timestamp}\]\[%{DATA:severity}\]\[%{DATA:instance}\]%{DATA:mymessage}, reason:%{GREEDYDATA:reason}",
          "\[%{DATA:timestamp}\]\[%{DATA:severity}\]\[%{DATA:instance}\]%{GREEDYDATA:mymessage}"
        ]
      }
      mutate => {
        replace => [ "message", "%{mymessage}" ]
        remove => [ "mymessage" ]
      }
    }
  }
}
So in summary I'd like to understand:
Are there log files I can look at to see why/where a failure is happening?
Why would my mutate commands illustrated above not work?
I also thought that if I never used the mymessage variable, but instead just referred to message as the variable, maybe it would automatically truncate message to just the matched pattern; instead, that appeared to append the results. What is the correct behaviour?
Using the overwrite option is the best solution, but I thought I'd address a couple of your questions directly anyway.
It depends on how Logstash is started. Normally you'd run it via an init script that passes the -l or --log option. /var/log/logstash would be typical.
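For example (paths assumed; the agent subcommand applies to the 1.x series discussed here):

bin/logstash agent -f /etc/logstash/conf.d/ -l /var/log/logstash/logstash.log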
mutate is a filter of its own, not a part of grok. You could have done it like this (or used rename instead of replace + remove_field):
grok {
  ...
}
mutate {
  replace      => { "message" => "%{mymessage}" }
  remove_field => [ "mymessage" ]
}
I'd do it a different way. For what you're trying to do, the overwrite option might be more apt.
Something like this:
grok {
  overwrite => [ "message" ]
  match => {
    "message" => [
      "\[%{DATA:timestamp}\]\[%{DATA:severity}\]\[%{DATA:instance}\]%{DATA:message}, reason:%{GREEDYDATA:reason}",
      "\[%{DATA:timestamp}\]\[%{DATA:severity}\]\[%{DATA:instance}\]%{GREEDYDATA:message}"
    ]
  }
}
This'll replace 'message' with the 'grokked' bit.
I know that doesn't directly answer your question. About all I can say is that when you start Logstash, it writes to STDOUT (at least on the version I'm using), which I'm capturing and writing to a file. In there, it reports some of the errors.
There's a -l option to logstash that lets you specify a log file to use - this will usually show you what's going on in the parser, but bear in mind that if something doesn't match a rule, it won't necessarily tell you why it didn't.
I am receiving Log4j-generated log files from remote servers using Logstash Forwarder. The log event has fields, including one named "file", in the format /tomcat/logs/app.log, /tomcat/logs/app.log.1, etc. Of course, the file path /tomcat/logs is on the remote machine, and I would like Logstash to create files on the local file system using only the file name, not the remote file path.
Locally, I would like to create a file based on file name app.log, app.log.1, etc. How can one accomplish this?
I am unable to use grok, since it appears to work only with the "message" field and not others.
Example Log Event:
{"message":["11 Sep 2014 16:29:04,934 INFO LOG MESSAGE DETAILS HERE "],"#version":"1","#timestamp":"2014-09-15T05:44:43.472Z","file":["/tomcat/logs/app.log.1"],"host":"aus-002157","offset":"3116","type":"app.log"}
Logstash configuration - what do I use to write the filter section?
input {
  lumberjack {
    port => 48080
    ssl_certificate => "/tools/LogStash/logstash-1.4.2/ssl/logstash.crt"
    ssl_key => "/tools/LogStash/logstash-1.4.2/ssl/logstash.key"
  }
}

filter {
}

output {
  file {
    #message_format => "%{message}"
    flush_interval => 0
    path => "/tmp/%{host}.%{type}.%{filename}"
    max_size => "4M"
  }
}
Figured out the pattern to be as follows:
grok {
  match => [ "file", "^(/.*/)(?<filename>(.*))$" ]
}
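Applied to the sample event above, this should capture filename => "app.log.1" from the remote path, so the file output's path would interpolate to /tmp/aus-002157.app.log.app.log.1.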
Thanks for the help!
Logstash grok can parse all the fields in a log event, not only the message field.
For example, if you want to extract from the file field, you can do it like this:
filter {
  grok {
    match => [ "file", "your pattern" ]
  }
}