hiera CLI lookup fails but puppet agent -t works - puppet

[root@puppet puppet]# cat /etc/hiera.yaml
---
:backends:
  - yaml
:yaml:
  :datadir: '/etc/puppet/hieradata'
:hierarchy:
  - env/%{::environment}/fqdn/%{::fqdn}
  - hostgroup/%{::hostgroup1}
  - global
[root@puppet puppet]# cat hieradata/env/dev/fqdn/client00.itw.local.yaml
fruit::a:
  - 'DevFQDN-kiwi'
[root@puppet puppet]# cat environments/dev/modules/fruit/manifests/init.pp
class fruit(
  $a = hiera('fruit::a'),
  $b = hiera('fruit::b'),
  $c = hiera('fruit::c')
) {
  notify { 'foo':
    message => "a is: ${a}, b is: ${b}, c is : ${c}",
  }
}
fruit::a seems to resolve fine on client00.itw.local
[root@client00 ~]# puppet agent -t
Warning: Local environment: "production" doesn't match server specified node environment "dev", switching agent to "dev".
Info: Retrieving plugin
Info: Caching catalog for client00.itw.local
Info: Applying configuration version '1411407772'
Notice: a is: DevFQDN-kiwi, b is: HostgroupAll-orange, c is : global-lime-C
But CLI hiera does not return the correct value on the puppet master:
[root@puppet puppet]# hiera -d fruit::a ::fqdn=client00.itw.local ::hostgroup1=all
DEBUG: Mon Sep 22 13:57:16 -0400 2014: Hiera YAML backend starting
DEBUG: Mon Sep 22 13:57:16 -0400 2014: Looking up fruit::a in YAML backend
DEBUG: Mon Sep 22 13:57:16 -0400 2014: Looking for data source hostgroup/all
DEBUG: Mon Sep 22 13:57:16 -0400 2014: Looking for data source global
DEBUG: Mon Sep 22 13:57:16 -0400 2014: Found fruit::a in global
["global-lime-A"]
With mcollective, hiera -d fruit::a -m client00.itw.local, I got the same result.
Thanks for your help.

environment is a Puppet-specific fact, provided by the Puppet libraries during catalog compilation.
When using hiera on the command line you have to pass those facts yourself, otherwise data sources that interpolate them (such as env/%{::environment}/fqdn/%{::fqdn}) are silently skipped.
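For example (a minimal sketch; adjust the fact values to your setup), passing ::environment explicitly should let the env/... data source resolve:
hiera -d fruit::a ::environment=dev ::fqdn=client00.itw.local ::hostgroup1=all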

Related

Puppet Hiera lookup in manifest not working

I am learning Puppet and just read about Hiera.
Before coming to the issue, I'm providing some config settings below:
$ cat /etc/puppetlabs/puppet/puppet.conf
[master]
codedir = /etc/puppetlabs/code
[agent]
server = puppet.example.com
[master]
certname = puppet.example.com
[master]
vardir = /var/opt/puppetlabs/puppetserver
ssldir = $vardir/ssl
[main]
environmentpath = /etc/puppetlabs/code/environments
basemodulepath = /etc/puppetlabs/code/modules
$ cat /etc/puppetlabs/puppet/hiera.yaml
---
:backends:
  - yaml
:hierarchy:
  - "nodes/%{::trusted.certname}"
  - common
:yaml:
  # datadir is empty here, so hiera uses its defaults:
  # - /etc/puppetlabs/code/environments/%{environment}/hieradata on *nix
  # - %CommonAppData%\PuppetLabs\code\environments\%{environment}\hieradata on Windows
  # When specifying a datadir, make sure the directory exists.
  :datadir:
UPDATE: After going through Felix Frank's answer, I changed my default Hiera config file location. The config file has the following content:
$ cat /etc/puppetlabs/code/hiera.yaml
---
:backends:
  - yaml
:hierarchy:
  - "hostname/%{facts.hostname}"
  - "os/%{facts.osfamily}"
  - common
:yaml:
  :datadir: /etc/puppetlabs/code/hieradata
I created the hieradata dir under /etc/puppetlabs/code:
$ ll /etc/puppetlabs/code/
total 4
drwxr-xr-x. 4 root root 32 Dec 20 11:17 environments
drwxr-xr-x. 3 root root 39 Dec 21 12:22 hieradata
-rw-r--r--. 1 root root 153 Dec 20 16:51 hiera.yaml
drwxr-xr-x. 2 root root 6 Dec 21 11:02 modules
$ cat /etc/puppetlabs/code/hieradata/common.yaml
---
puppet::status: 'running'
puppet::enabled: true
I tried overriding the above values in my hostname YAML file as shown below:
$ cat /etc/puppetlabs/code/hieradata/hostname/delvmplgc1.yaml
---
puppet::status: 'stopped'
puppet::enabled: false
$ facter hostname
delvmplgc1
I created a sample manifest on the Puppet server to see whether I am able to perform a Hiera lookup in a manifest.
$ cat /etc/puppetlabs/code/environments/qa/manifests/hierasample.pp
notify { 'welcome':
  message => "Hello!",
}
$status = lookup({ name => 'puppet::status', default_value => 'running' })
$enabled = lookup({ name => 'puppet::enabled', default_value => true })
service { 'puppet':
  ensure => $status,
  enable => $enabled,
}
Now the problem is that when I try executing the manifest, I see no messages related to Hiera.
$ puppet apply /etc/puppetlabs/code/environments/qa/manifests/hierasample.pp
Notice: Compiled catalog for delvmplgc1.sapient.com in environment production in 0.99 seconds
Notice: Hello!
Notice: /Stage[main]/Main/Notify[welcome]/message: defined 'message' as 'Hello!'
Notice: Applied catalog in 0.10 seconds
Any help will be appreciated.
As the comment in the standard hiera.yaml states, the datadir is located in
/etc/puppetlabs/code/environments/%{environment}/hieradata
So instead of creating hieradata in /etc/puppetlabs/code directly, move it down two levels into /etc/puppetlabs/code/environments/qa and other environments.
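A minimal sketch of that move, using the paths from the question (and since the puppet apply output above says "in environment production", you may also want to pass --environment qa explicitly):
mkdir -p /etc/puppetlabs/code/environments/qa/hieradata/hostname
mv /etc/puppetlabs/code/hieradata/common.yaml /etc/puppetlabs/code/environments/qa/hieradata/
mv /etc/puppetlabs/code/hieradata/hostname/delvmplgc1.yaml /etc/puppetlabs/code/environments/qa/hieradata/hostname/
puppet apply --environment qa /etc/puppetlabs/code/environments/qa/manifests/hierasample.pp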

How to monitor newly created file in a directory with bash?

I have a log directory that consists of a bunch of log files; one log file is created each time a system event happens. I want to write a one-line bash script that constantly monitors the file list and displays the content of any newly created file on the terminal.
Currently, all I have is a loop that displays the content of the whole directory:
for f in *; do cat $f; done
It lacks the monitoring feature that I wanted. One limitation of my system is that I do not have the watch command. I also don't have any package manager to install fancy tools; raw BSD is all I have. I do have tail; I was thinking of something like tail -F $(ls), but this tails each file instead of the file list.
In summary, I want to modify my script such that I can monitor the content of all newly created files.
First approach - use a hidden file in your dir (in my example it is named .watch). Then your one-liner might look like:
for f in $(find . -type f -newer .watch); do cat $f; done; touch .watch
Second approach - use inotify-tools: https://unix.stackexchange.com/questions/273556/when-a-particular-file-arrives-then-execute-a-procedure-using-shell-script/273563#273563
You can cram it into a one-liner if you want, but I'd recommend just running the script in the background:
#!/bin/bash
[ ! -d "$1" ] && {
    printf "error: argument is not a valid directory to monitor.\n"
    exit 1
}
while :; fname="$1/$(inotifywait -q -e modify -e create --format '%f' "$1")"; do
    cat "$fname"
done
Which will watch the directory given as the first argument, and cat any new or changed file in that directory. Example:
$ bash watchdir.sh my_logdir &
Which will then cat new or changed files in my_logdir.
Using inotifywait in monitor mode
First this little demo:
Open one terminal and run this:
ext=(php css other)
while :; do
    subname=''
    ((RANDOM%10)) || printf -v subname -- "-%04x" $RANDOM
    date >/tmp/test$subname.${ext[RANDOM%3]}
    sleep 1
done
This will randomly create files named /tmp/test.php, /tmp/test.css and /tmp/test.other, but occasionally (approx. 1 time in 10) the name will be /tmp/test-XXXX.[css|php|other], where XXXX is a hexadecimal random number.
Open another terminal and run this:
waitPaths=(/{home,tmp})
while read file; do
    if [ "$file" ] &&
       ( [ -z "${file##*.php}" ] || [ -z "${file##*.css}" ] ); then
        (($(stat -c %Y-%X $file))) || echo -n new
        echo file: $file, content:
        cat $file
    fi
done < <(
    inotifywait -qme close_write --format %w%f ${waitPaths[*]}
)
This may produce something like:
file: /tmp/test.css, content:
Tue Apr 26 18:53:19 CEST 2016
file: /tmp/test.php, content:
Tue Apr 26 18:53:21 CEST 2016
file: /tmp/test.php, content:
Tue Apr 26 18:53:23 CEST 2016
file: /tmp/test.css, content:
Tue Apr 26 18:53:25 CEST 2016
file: /tmp/test.php, content:
Tue Apr 26 18:53:27 CEST 2016
newfile: /tmp/test-420b.php, content:
Tue Apr 26 18:53:28 CEST 2016
file: /tmp/test.php, content:
Tue Apr 26 18:53:29 CEST 2016
file: /tmp/test.php, content:
Tue Apr 26 18:53:30 CEST 2016
file: /tmp/test.php, content:
Tue Apr 26 18:53:31 CEST 2016
Some explanation:
waitPaths=(/{home,tmp}) could be written waitPaths=(/home /tmp), or for only one directory: waitPaths=/var/log
The if condition matches filenames ending in .php or .css.
(($(stat -c %Y-%X $file))) || echo -n new compares the modification and access times; when they are equal the file is treated as new, so new is prepended to the output.
inotifywait options:
-q to stay quiet (don't print more than required)
-m for monitor mode: the command doesn't terminate, but prints each matching event
-e close_write to react only to that kind of event
--format %w%f to output path/file
Another way:
Here is a more sophisticated sample:
Listening for two kinds of events (CLOSE_WRITE | CREATE)
Using a list of new-file flags to know which files are new when the CLOSE_WRITE event occurs.
In the second console, hit Ctrl+C, or in a new terminal, try this:
waitPaths=(/{home,tmp})
declare -A newFiles
while read path event file; do
    if [ "$file" ] && ( [ -z "${file##*.php}" ] || [ -z "${file##*.css}" ] ); then
        if [ "$event" ] && [ -z "${event//*CREATE*}" ]; then
            newFiles[$file]=1
        else
            if [ "${newFiles[$file]}" ]; then
                unset newFiles[$file]
                echo NewFile: $file, content:
                sed 's/^/>+ /' $path/$file
            else
                echo file: $file, content:
                sed 's/^/> /' $path/$file
            fi
        fi
    fi
done < <(inotifywait -qme close_write -e create ${waitPaths[*]})
May produce something like:
file: test.css, content:
> Tue Apr 26 22:16:02 CEST 2016
file: test.php, content:
> Tue Apr 26 22:16:03 CEST 2016
NewFile: test-349b.css, content:
>+ Tue Apr 26 22:16:05 CEST 2016
file: test.css, content:
> Tue Apr 26 22:16:08 CEST 2016
file: test.css, content:
> Tue Apr 26 22:16:10 CEST 2016
file: test.css, content:
> Tue Apr 26 22:16:13 CEST 2016
Watching for new files AND new lines in old files, using bash
There is another solution, using some bashisms like associative arrays:
Sample:
declare -A files   # bytes already printed, per filename
wpath=/var/log
while :; do
    while read -a crtfile; do
        if [ "${crtfile:0:1}" = "-" ] &&
           [ "${crtfile[8]##*.}" != "gz" ] &&
           [ "${files[${crtfile[8]}]:-0}" -lt ${crtfile[4]} ]; then
            printf "\e[47m## %-14s :- %(%a %d %b %y %T)T ##\e[0m\n" ${crtfile[8]} -1
            tail -c +$[1+${files[${crtfile[8]}]:-0}] $wpath/${crtfile[8]}
            files[${crtfile[8]}]=${crtfile[4]}
        fi
    done < <( /bin/ls -l $wpath )
    sleep 1
done
This will dump each file (whose name does not end in .gz) in /var/log, watch for modifications and new files, then dump only the new lines.
Demo:
In a first terminal console, hit:
ext=(php css other)
( while :; do
      subname=''
      ((RANDOM%10)) || printf -v subname -- "-%04x" $RANDOM
      name=test$subname.${ext[RANDOM%3]}
      printf "%-16s" $name
      {
          date +"%a %d %b %y %T" | tee /dev/fd/5
          fortune /usr/share/games/fortunes/bofh-excuses
      } >> /tmp/$name
      sleep 1
done ) 5>&1
You need to have fortune installed with the BOFH excuses library.
If you really don't have fortune, you could use this instead:
LANG=C ext=(php css other)
( while :; do
      subname=''
      ((RANDOM%10)) || printf -v subname -- "-%04x" $RANDOM
      name=test$subname.${ext[RANDOM%3]}
      printf "%-16s" $name
      {
          date +"%a %d %b %y %T" | tee /dev/fd/5
          for ((1; RANDOM%5; 1)); do
              printf -v str %$[RANDOM&12]s
              str=${str// /blah, }
              echo ${str%, }.
          done
      } >> /tmp/$name
      sleep 1
done ) 5>&1
This may output something like:
test.css Thu 28 Apr 16 12:00:02
test.php Thu 28 Apr 16 12:00:03
test.other Thu 28 Apr 16 12:00:04
test.css Thu 28 Apr 16 12:00:05
test.css Thu 28 Apr 16 12:00:06
test.other Thu 28 Apr 16 12:00:07
test.php Thu 28 Apr 16 12:00:08
test.css Thu 28 Apr 16 12:00:09
test.other Thu 28 Apr 16 12:00:10
test.other Thu 28 Apr 16 12:00:11
test.php Thu 28 Apr 16 12:00:12
test.other Thu 28 Apr 16 12:00:13
In a second terminal console, hit:
declare -A files
wpath=/tmp
while :; do
    while read -a crtfile; do
        if [ "${crtfile:0:1}" = "-" ] && [ "${crtfile[8]:0:4}" = "test" ] &&
           ( [ "${crtfile[8]##*.}" = "css" ] || [ "${crtfile[8]##*.}" = "php" ] ) &&
           [ "${files[${crtfile[8]}]:-0}" -lt ${crtfile[4]} ]; then
            printf "\e[47m## %-14s :- %(%a %d %b %y %T)T ##\e[0m\n" ${crtfile[8]} -1
            tail -c +$[1+${files[${crtfile[8]}]:-0}] $wpath/${crtfile[8]}
            files[${crtfile[8]}]=${crtfile[4]}
        fi
    done < <(/bin/ls -l $wpath)
    sleep 1
done
Every second, this will:
for all entries in the watched directory,
look for regular files (the first character of the ls line is -),
with filenames beginning with test,
and ending in css or php,
compare the already-printed size with the current file size,
and if the new size is greater,
print the new bytes using tail -c and
store the new already-printed size;
then sleep 1 second.
This may output something like:
## test.css :- Thu 28 Apr 16 12:00:09 ##
Thu 28 Apr 16 12:00:02
BOFH excuse #216:
What office are you in? Oh, that one. Did you know that your building was built over the universities first nuclear research site? And wow, aren't you the lucky one, your office is right over where the core is buried!
Thu 28 Apr 16 12:00:05
BOFH excuse #145:
Flat tire on station wagon with tapes. ("Never underestimate the bandwidth of a station wagon full of tapes hurling down the highway" Andrew S. Tannenbaum)
Thu 28 Apr 16 12:00:06
BOFH excuse #301:
appears to be a Slow/Narrow SCSI-0 Interface problem
## test.php :- Thu 28 Apr 16 12:00:09 ##
Thu 28 Apr 16 12:00:03
BOFH excuse #36:
dynamic software linking table corrupted
Thu 28 Apr 16 12:00:08
BOFH excuse #367:
Webmasters kidnapped by evil cult.
## test.css :- Thu 28 Apr 16 12:00:10 ##
Thu 28 Apr 16 12:00:09
BOFH excuse #25:
Decreasing electron flux
## test.php :- Thu 28 Apr 16 12:00:13 ##
Thu 28 Apr 16 12:00:12
BOFH excuse #3:
electromagnetic radiation from satellite debris
Note: if a file is modified more than once between two checks, all of its modifications will be printed on the next check.
Although not really nice, the following gives (and repeats) the last 50 lines of the newest file in the current directory:
while true; do tail -n 50 $(ls -Art | tail -n 1); sleep 5; done
You can refresh every minute using a cronjob:
$ crontab -e
* * * * * /home/script.sh
If you need to refresh in less than a minute, you can use the sleep command inside your script, as in the sketch below.
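For example, a minimal sketch of such a script (assuming the /home/script.sh path from the crontab line above is where you put it), re-running the newest-file one-liner every 5 seconds so a once-a-minute cron entry still refreshes more often:
#!/bin/bash
cd /path/to/your/logdir || exit 1   # hypothetical log directory; cron won't start in it
# 12 iterations x 5 seconds roughly covers one cron interval
for i in {1..12}; do
    tail -n 50 "$(ls -Art | tail -n 1)"
    sleep 5
done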

Sed to extract log between two given dates

I am trying to extract the logs between two given dates. The code works fine if I specify a date like Apr 02 15:21:28, i.e. if I know the time down to the exact minute and second, but it fails when I pass a value like
from Apr 02 15* to Apr 04 15* (here 15 is the hour). I want to make a script in which the user just needs to supply the day and the time in hours only (no minutes or seconds).
#!/bin/bash
read -p " enter the App name : " app
file="/logs/$app/$app.log"
read -p " Enter the Date in this Format --'10 Jan 20 or Jan 10 20' : " first
read -p " Enter the End time of logs : " end
if [ -f "$file" ]
then
    if grep -q "$first" "$file"; then
        final_first=$first
    fi
    if grep -q "$end" "$file"; then
        final_end=$end
    fi
    sed -n " /$final_first/,/$final_end/ "p $file >$app.txt
else
    echo "$app.log not found, Please check correct log name in deployer"
fi
Sample data:
Apr 07 12:39:15 DEBUG [http-0.0.0.0-8089-21] model.DSSAuthorizationModel - pathInfo : /about-ses
Apr 07 12:39:15 DEBUG [http-0.0.0.0-8089-21] servlet.CasperServlet - Request about to be serviced by model: com.ge.oilandgas.sts.model.SessionValidModel
I'd use a language with built-in datetime parsing or an easily included module. For example, perl:
first="Apr 07 12"
end="Apr 08 00"
perl -MTime::Piece -sane '
    BEGIN {
        $first_ts = Time::Piece->strptime($first, "%b %d %H")->epoch;
        $end_ts = Time::Piece->strptime($end, "%b %d %H")->epoch;
    }
    $ts = Time::Piece->strptime(join(" ", @F[0..2]), "%b %d %T")->epoch;
    print if $first_ts <= $ts and $ts <= $end_ts;
' -- -first="$first" -end="$end" <<END
Apr 07 11:39:15 DEBUG [http-0.0.0.0-8089-21] model.DSSAuthorizationModel - pathInfo : /about-ses
Apr 07 12:00:00 DEBUG [http-0.0.0.0-8089-21] model.DSSAuthorizationModel - pathInfo : /about-ses
Apr 07 12:39:15 DEBUG [http-0.0.0.0-8089-21] model.DSSAuthorizationModel - pathInfo : /about-ses
Apr 07 12:39:15 DEBUG [http-0.0.0.0-8089-21] servlet.CasperServlet - Request about to be serviced by model: com.ge.oilandgas.sts.model.SessionValidModel
Apr 07 23:59:59 DEBUG [http-0.0.0.0-8089-21] servlet.CasperServlet - Request about to be serviced by model: com.ge.oilandgas.sts.model.SessionValidModel
Apr 08 00:00:01 DEBUG [http-0.0.0.0-8089-21] servlet.CasperServlet - Request about to be serviced by model: com.ge.oilandgas.sts.model.SessionValidModel
END
outputs
Apr 07 12:00:00 DEBUG [http-0.0.0.0-8089-21] model.DSSAuthorizationModel - pathInfo : /about-ses
Apr 07 12:39:15 DEBUG [http-0.0.0.0-8089-21] model.DSSAuthorizationModel - pathInfo : /about-ses
Apr 07 12:39:15 DEBUG [http-0.0.0.0-8089-21] servlet.CasperServlet - Request about to be serviced by model: com.ge.oilandgas.sts.model.SessionValidModel
Apr 07 23:59:59 DEBUG [http-0.0.0.0-8089-21] servlet.CasperServlet - Request about to be serviced by model: com.ge.oilandgas.sts.model.SessionValidModel
Given your code, I would make the following change:
if ! [ -f "$file" ]; then
echo "$app.log not found, Please check correct log name in deployer"
exit 1
fi
grep -q "$first" "$file" && final_first="/$first/" || final_first='1'
grep -q "$end" "$file" && final_end="/$end/" || final_end='$'
sed -n "${final_first},${final_end}p" "$file" >"$app.txt"
That provides default addresses for the sed range: line 1 if the start string is not found, and the last line ($) if the end string is not found.
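For instance (a sketch; "Apr 09 00" is a made-up end value not present in the sample log), when only the start string matches, the generated command degrades to printing from the first match through the end of the file:
first="Apr 07 12"; end="Apr 09 00"
# final_first becomes /Apr 07 12/, final_end falls back to $
sed -n '/Apr 07 12/,$p' "$file"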

how to check if a string is a valid mailq mailid using bash?

I am stuck.
I am wondering how to check if a Q-ID is a valid mailq ID.
/var/spool/mqueue (3 requests)
-----Q-ID----- --Size-- -----Q-Time----- ------------Sender/Recipient-----------
m9TMLQHG012749 1103 Thu Oct 30 11:21 <apache@localhost.localdomain>
(host map: lookup (electrictoolbox.com): deferred)
<test@electrictoolbox.com>
m9TMLRB9012751 37113 Thu Oct 30 11:21 <apache@localhost.localdomain>
(host map: lookup (electrictoolbox.com): deferred)
<test@electrictoolbox.com>
m9TMLPcg012747 240451 Thu Oct 30 11:21 <apache@localhost.localdomain>
(host map: lookup (electrictoolbox.com): deferred)
<test@electrictoolbox.com>
Total requests: 3
Thanks for any hint.
Based on your last comment, I guess this is what would suffice:
randomMailQId=XXXXXXXX # get this populated
if mailq | grep "^$randomMailQId\s"; then
    echo Valid
else
    echo Invalid
fi
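For example, wrapped in a tiny helper (a sketch; is_valid_qid is just an illustrative name), checked against a Q-ID from the sample queue above:
is_valid_qid() { mailq | grep -q "^$1[[:space:]]"; }
is_valid_qid m9TMLQHG012749 && echo Valid || echo Invalid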

procmail disregards /etc/group?

sample procmailrc:
SHELL=/bin/bash
LOGFILE=$HOME/procmail.log
VERBOSE=yes
:0
* ^Subject: envdump please$
{
LOG="`id`"
:0
/dev/null
}
/etc/group file contains (note the other usernames are vain attempts to make this work):
someuser:x:504:
s3:x:505:someuser,someotheruser,postfix,postdrop,mail,root
If I run the id command as "someuser":
[someuser@lixyz-pqr ~]$ id
uid=504(someuser) gid=504(someuser) groups=504(someuser),505(s3)
However when I run procmail by sending an email with the subject "envdump please", the 505/s3 group disappears (this is in procmail.log):
procmail: [17618] Mon Dec 19 17:39:50 2011
procmail: Match on "^Subject: envdump please$"
procmail: Executing "id"
procmail: Assigning "LOG=uid=504(someuser) gid=504(someuser) groups=504(someuser)"
uid=504(someuser) gid=504(someuser) groups=504(someuser)procmail: Assigning "LASTFOLDER=/dev/null"
This server is running Fedora 14 with Postfix 2.7.5.
Procmail wasn't installed setuid.
For background, it should look like:
[root@li321-238 postfix]# ls -l /usr/bin/procmail
-rwsr-sr-x. 1 root mail 92816 Jul 28 2009 /usr/bin/procmail
which you can set up via:
chmod ug+s /usr/bin/procmail
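A sketch of verifying the fix (the mail -s command is just one assumed way of re-triggering the recipe; any delivery with that subject works):
ls -l /usr/bin/procmail                                   # should now show -rwsr-sr-x root mail
echo test | mail -s "envdump please" someuser
grep 'Assigning "LOG=' ~someuser/procmail.log | tail -1   # should now include 505(s3)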
