Using a Chef recipe to append multiple lines to a config file - linux

I'm trying to create a Chef recipe to append multiple lines (20-30) to a specific config file.
I'm aware the recommended pattern is to change entire config files rather than just appending to a file, but I dislike this approach for multiple reasons.
So far the only solution I found was to use a cookbook_file and then use a bash resource to do:
cat lines_to_append >> /path/configfile
Obviously this wouldn't work properly, as it'd append to the file over and over, each time chef-client runs. I'd have to create a small bash script to check for a specific string first and, if it's not found, append to the file.
But this seems to defeat the purpose of using Chef. There must be a better way.
One promising solution was the line cookbook from OpsCode Community. It aimed to solve this exact problem. Unfortunately the functionality is incomplete, buggy, and the code is just a quick hack. Far from being a solid solution.
Another option I evaluated was Augeas. It seems pretty powerful, but it'd add yet another layer of abstraction to the system. Overkill, in my case.
Given that this is one of the most obvious tasks for any sysadmin, is there any easy and beautiful solution with Chef that I'm not seeing?
EDIT: here's how I'm solving it so far:
cookbook_file "/tmp/parms_to_append.conf" do
source "parms_to_append.conf"
end
bash "append_to_config" do
user "root"
code <<-EOF
cat /tmp/parms_to_append.conf >> /etc/config
rm /tmp/parms_to_append.conf
EOF
not_if "grep -q MY_IDENTIFIER /etc/config"
end
It works, but not sure this is the recommended Chef pattern.
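For reference, the guard-plus-append that this recipe expresses boils down to the classic shell one-liner below (same marker and paths as above, minus the cleanup), which is handy for checking the idempotence by hand:

grep -q MY_IDENTIFIER /etc/config || cat /tmp/parms_to_append.conf >> /etc/config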

As you said yourself, the recommended Chef pattern is to manage the whole file.
If you're using Chef 11 you could probably make use of partials for what you're trying to achieve.
There's more info here and on this example cookbook.
As long as you have access to the original config template, just add <%= render "original_config.erb" %> at the top of your parms_to_append.conf template.

As said before, using templates and partials is the common way of doing this, but Chef also allows appending to files, and even changing (editing) file lines. Appending is performed using the following functions:
insert_line_after_match(regex, newline);
insert_line_if_no_match(regex, newline)
You may find an example here on Stack Overflow, and the full documentation on rubydoc.info.
Please use it with caution, and only when partials and templates are not appropriate.

I did something like this:
monit_overwrites/templates/default/monitrc_append.erb:
#---FLOWDOCK-START
set mail-format { from: monit@ourservice.com }
#---FLOWDOCK-END
In my recipe I did this:
monit_overwrites/recipes/default.rb:
execute "Clean up monitrc from earlier runs" do
user "root"
command "sed '/#---FLOWDOCK-START/,/#---FLOWDOCK-END/d' > /etc/monitrc"
end
template "/tmp/monitrc_append.conf" do
source "monitrc_append.erb"
end
execute "Setup monit to push notifications into flowdock" do
user "root"
command "cat /tmp/monitrc_append.conf >> /etc/monitrc"
end
execute "Remove monitrc_append" do
command "rm /tmp/monitrc_append.conf"
end

The easiest way to tackle this would be to create a string and pass it to content. Of course bash blocks work... but I think file resources are elegant.
lines = ""
File.open('input file') do |f|
f.lines.each do |line|
lines = lines + line + "\n"
end
end
file "file path" do
content line
end

Here is the example ruby block for inserting 2 new lines after match:
ruby_block "insert_lines" do
block do
file = Chef::Util::FileEdit.new("/etc/nginx/nginx.conf")
file.insert_line_after_match("worker_rlimit_nofile", "load_module 1")
file.insert_line_after_match("pid", "load_module 2")
file.write_file
end
end
insert_line_after_match searches for the regex/string and inserts the new line after the match.

Related

Check if same file exists in another directory using Bash

I'm new to bash and would like your help; couldn't find an answer for this case.
I'm trying to check if the files in one directory exist in another directory
Let's say I have the path /home/public/folder/ (here I have several files)
and I want to check if the files exist in /home/private/folder2
I tried this:
for file in $firstPath/*
do
    if [ -f $file ]; then
        (ask if to overwrite etc.. rest of the code)
And also:
for file in $firstPath/*
do
    if [ -f $file/$secondPath ]; then
        (ask if to overwrite etc.. rest of the code)
Both don't work; it seems that in the first case, it compares the files in the first path (so it always asks me if I want to overwrite, although the file doesn't exist in the second path).
And in the second case, it doesn't go inside the if statement.
How could I fix that?
When you have a construct like for file in $firstPath/*, the value of $file is going to include the value of $firstPath, which does not exist within $secondPath. You need to strip the path in order to get the bare filename.
In traditional POSIX shell, the canonical way to do this was with an external tool called basename. You can, however, achieve what is generally thought to be equivalent functionality using Parameter Expansion, thus:
for file in "$firstPath"/*; do
if [[ -f "$secondPath/${file##*/}" ]]; then
# file exists, do something
fi
done
The ${file##*/} bit is the important part here. Per the documentation linked above, this means "the $file variable, with everything up to the last / stripped out." The result should be the same as what basename produces.
As a general rule, you should quote your variables in bash. In addition, consider using [[ instead of [ unless you're actually writing POSIX shell scripts which need to be portable. You'll have a more extensive set of tests available to you, and more predictable handling of variables. There are other differences too.
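For comparison, the same check written with the external basename tool mentioned above is equivalent; it just spawns an extra process per file:

for file in "$firstPath"/*; do
    if [[ -f "$secondPath/$(basename "$file")" ]]; then
        : # file exists, do something
    fi
done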

how to download batch of data with linux command line?

For example I want to download data from:
http://nimbus.cos.uidaho.edu/DATA/OBS/
with the link:
http://nimbus.cos.uidaho.edu/DATA/OBS/pr_1979.nc
to
http://nimbus.cos.uidaho.edu/DATA/OBS/pr_2015.nc
How can I write a script to download all of them? With wget? And how do I loop over the links from 1979 to 2015?
wget can take a file as input which contains one URL per line.
wget -ci url_file
-i : input file
-c : resume functionality
So all you need to do is put the URLs in a file and use that file with wget.
A simple loop like Jeff Puckett II's answer will be sufficient for your particular case, but if you happen to deal with more complex situations (random urls), this method may come in handy.
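For the particular range in this question, the URL file itself can be generated with a small shell loop; a rough, untested sketch using the same URL pattern:

# write one URL per line for 1979..2015, then hand the list to wget
for year in {1979..2015}; do
    echo "http://nimbus.cos.uidaho.edu/DATA/OBS/pr_${year}.nc"
done > url_file
wget -ci url_file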
Probably something like a for loop iterating over a predefined series.
Untested code:
for i in {1979..2015}; do
    wget http://nimbus.cos.uidaho.edu/DATA/OBS/pr_$i.nc
done

Organize code in unix bash scripting

I am used to object oriented programming. Now, I have just started learning unix bash scripting via linux.
I have a unix script with me. I wanted to break it down into "modules" or preferably programs similar to "more", "ls", etc., and then use pipes to link all my programs together. E.g., "some input" myProg1 | myProg2 | myProg3.
I want to organize my code and make it look neater, instead of all in one script. Also, it will be easy to do testing and development.
Is it possible to do this, especially as a newbie ?
There are a few things you could take a look at, for example the use of aliases in bash, storing them either in .bashrc or in a separate file sourced by .bashrc; that will make running commands easier.
Take a look here for expanding commands into aliases (simple aliases are easy).
You can also look into using functions in your code; the home folder of the site linked above has lots of bash scripts that make good use of functions and has much better examples.
Take a look here for piping tail output into another script:
pipe tail output into another script
The thing with bash is its flexibility: if something starts to get too messy for bash, you can always write it in Perl, Java or any other language, call that from within your bash script, capture its output and do something else with it.
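For instance, here are a couple of throwaway illustrations of the alias and function ideas (the names are invented):

# in ~/.bashrc or a file sourced from it
alias ll='ls -lah'    # simple alias: shorthand for a long command

# small function: count occurrences of a pattern in a file
count_hits() {
    grep -c "$1" "$2"
}
# usage: count_hits ERROR /var/log/syslog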
I'm not sure why you need all the pipes anyway; here is something that may be of help:
./example.sh 20
function one starts with 20
In function 2 20 + 10 = 30
Function three returns 30 + 10 = 40
------------------------------------------------
------------------------------------------------
Local function variables global:
Result2: 30 - Result3: 40 - value2: 10 - value1: 20
The script:
example.sh
#!/bin/bash
input=$1;
source ./shared.sh
one
echo "------------------------------------------------"
echo "------------------------------------------------"
echo "Local function variables global:"
echo "Result2: $result2 - Result3: $result3 - value2: $value2 - value1: $value1"
shared.sh
function one() {
    value1=$input
    echo "function one starts with $value1"
    two
}

function two() {
    value2=10
    result2=$(expr $value1 + $value2)
    echo "In function 2 $value1 + $value2 = $result2"
    three
}

function three() {
    local value3=10
    result3=$(expr $result2 + $value3)
    echo "Function three returns $result2 + $value3 = $result3"
}
I think the pipes you have in mind can actually be functions, with each function calling the next; you give the script a value and it is passed through the functions.
Bash is pretty flexible about passing values around: as long as a variable has been set by an earlier function, the next function (or the main program) can reuse it.
I also split the functions out into a separate file so they can be sourced by another script that needs the same functionality.
Edit to add: thanks for the upvote; I have also decided to include this link:
http://tldp.org/LDP/abs/html/sample-bashrc.html
It contains an awesome .bashrc to reuse, with a lot of functions that also give some insight into how to simplify a lot of daily repetitive commands, such as ones that require piping; an alias can be written to do all of that for you.
You can do one thing.
Just as a C program can be divided into a header file and a source file for reducing complexity, you can divide your bash script into two scripts - a header and a main script but with some differences.
Header file - This will contain all the common variables and functions that will be used by your main script.
Your script - This will contain only function calls and other logic. Put source <header-file path> at the start of your script to make all the functions and variables declared in the header available to it.
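A minimal sketch of that split might look like this (the file names and contents are only illustrative):
header.sh
# common variables and functions shared with the main script
LOG_DIR=/var/log/myapp
log_msg() {
    echo "$(date '+%F %T') $*"
}
main.sh
#!/bin/bash
# pull in everything declared in the header
source ./header.sh
log_msg "starting up, logs go to $LOG_DIR"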
Shell scripts have standard input and output like any other program on Unix, so you can use them in pipes. Splitting your scripts is a good solution because you can later use them in pipes with other commands.
I organize my Bash projects in the following way :
Each command is put in its own file
Reusable functions are kept in a library file which is just a classic script with only functions
All files are in the same directory, so commands can find the library with $(dirname $0)/library
Configuration is stored in another file as environment variables
To keep things clear, you should not use global variables to communicate between functions and main program.
I prepare a template for scripts with the following parts prepared :
Header with name and copyright
Read configuration with source
Load library with source
Check parameters
Function to display help, which is called if asked for or if parameters are wrong
My best advice is: always write the help function, as the next person who will need it is... yourself!
To install your project you simply copy all files, and explain what to configure in the configuration file.
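To make that concrete, a stripped-down skeleton following those parts might look something like this (names and paths are purely illustrative):

#!/bin/bash
# mycommand - short description and copyright notice go here

# read configuration and load the function library from the script's directory
source "$(dirname "$0")/config.env"
source "$(dirname "$0")/library"

# help function, shown when asked for or when parameters are wrong
usage() {
    echo "Usage: $(basename "$0") <input-file>"
    exit 1
}

# check parameters
[ "$1" = "-h" ] && usage
[ $# -eq 1 ] || usage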

Passing data into perl script from command line

I have a perl script that creates a report based on an xml definition. Currently these definitions all exist as .xml files.
So I have the script run-report.pl, which can take a path to a definition file and create the report.
Now I want to create run-reports-from-db.pl, which will generate the report definition based on some database entries. I don't want to create temp files to pass to run-report.pl; I would just like to pass in the definition somehow.
So instead of saying:
run-report.pl -def=./path/to/def.xml
I want to be able to say:
run-report.pl --stream
And have the report definition available in <STDIN>
I am sure there is a pretty trivial way to do this?
If I understand your question correctly, all you need is one | (pipe).
./generate-xml-from-db.pl | ./run-report.pl --stream
Anything the first process in the pipeline prints to stdout will appear in the second process's stdin.
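Since the script reads the definition from standard input in this mode, you can also test run-report.pl on its own by redirecting a saved definition file into it, without the database step:

./run-report.pl --stream < ./path/to/def.xml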
As long as you read from STDIN, you have it available. Notice what happens when you take the code below, name it something like echo.pl, run it at the command line and paste in reams of text.
#!/usr/bin/perl -w
use 5.010;
use strict;
use warnings;
while ( <> ) {
    say;
}
<> is Perl shorthand for "read a line from the files named on the command line, or from STDIN if none are given".
As long as the method you're using to launch the process gives you a handle on its standard input and output, you can just write to that handle. You have to use whatever mechanism is available to you: in Java, for example, you'd get the process's input stream; in a batch command you'd pipe into it; at a GUI terminal you can cut and paste.

Perl program structure for parsing

I've got a question about program architecture.
Say you've got 100 different log files with different formats and you need to parse and put that info into an SQL database.
My view of it is like:
use general config file like:
program1->name1("apache",/var/log/apache.log) (modulename,path to logfile1)
program2->name2("exim",/var/log/exim.log) (modulename,path to logfile2)
....
sqldb->configuration
use something like a module (1 file per program) type1.module (regexp, logstructure(somevariables), sql(tables and functions))
fork or thread processes (don't know what is better on Linux now) for different programs.
So the question is: is my view of this correct? Should I use one module per program (web/MTA/iptables), or is there some better way? I think some regexps would be the same, like date/time/IP/URL. What should I do with those? Or what have I missed?
example: mta exim4 mainlog
2011-04-28 13:16:24 1QFOGm-0005nQ-Ig <= exim@mydomain.org.ua H=localhost (exim.mydomain.org.ua) [127.0.0.1]:51127 I=[127.0.0.1]:465 P=esmtpsa X=TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32 CV=no A=plain_server:spam S=763 id=1303985784.4db93e788cb5c@mydomain.org.ua T="test" from <exim@exim.mydomain.org.ua> for test@domain.ua
The highlighted fields are already parsed and will be put into the sqldb.incoming table. For now I have a structure in Perl to hold every parsed variable, like $exim->{timestamp} or $exim->{host}->{ip}.
My program will do something like tail -f /file and parse it line by line.
Flexibility: let's say I want to add support for the Apache server (just timestamp, user IP and file downloaded). All I need to know is which logfile to parse, what the regexp should be and what the SQL structure should be. So I'm planning to have this as a module: just fork or thread the main process with parameters (logfile, filetype). Maybe later I'd add some options for what not to parse (maybe some log level is low and you just don't see much there).
I would do it like this:
Create a config file that is formatted like this: appname:logpath:logformatname
Create a collection of Perl classes that inherit from a base parser class.
Write a script which loads the config file and then loops over its contents, passing each iteration to its appropriate handler object.
If you want an example of steps 1 and 2, we have one on our project. See MT::FileMgr and MT::FileMgr::* here.
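For illustration, such a config file might contain lines like these (the paths and format names are invented):

apache:/var/log/apache2/access.log:apache_access
exim:/var/log/exim4/mainlog:exim_main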
The log-monitoring tool wots could do a lot of the heavy lifting for you here. It runs as a daemon, watching as many log files as you could want, running any combination of perl regexes over them and executing something when matches are found.
I would be inclined to modify wots itself (which its licence freely allows) to support a database write method - have a look at its existing handle_* methods.
Most of the hard work has already been done for you, and you can tackle the interesting bits.
I think File::Tail is a nice fit.
You can make an array of File::Tail objects and poll them with select like this:
while (1) {
    ($nfound, $timeleft, @pending) =
        File::Tail::select(undef, undef, undef, $timeout, @files);
    unless ($nfound) {
        # timeout - do something else here, if you need to
    } else {
        foreach (@pending) {
            # here you can handle log messages depending on filename
            print $_->{"input"}." (".localtime(time).") ".$_->read;
        }
    }
}
(from perl File::Tail doc)
