Puppet - add text in desired place

I want to use Puppet to add text to an existing file at a particular place. The structure of the file is as follows:
[OPTION1]
aaa
bbb
ccc
I want to add text between aaa and bbb. So far I have only figured out how to add text at the end of the file:
file { '/home/file.txt':
  ensure => present,
} ->
file_line { 'Add text to /home/file.txt':
  path => '/home/file.txt',
  line => 'added_text',
}
Should I use awk or sed (I saw that suggested somewhere on Google), or is there another way?

file_line has an after parameter, which you can set to a regular expression matching the line after which the new text should be inserted:
file_line { 'Add text to /home/file.txt':
  path  => '/home/file.txt',
  line  => 'added_text',
  after => 'aaa',
}
See the file_line documentation for a full list of supported features.
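Note that after is matched as a regular expression, so if the file might also contain lines such as aaaa, anchoring the pattern keeps the match exact. A minimal sketch against the sample file above:
file_line { 'Add text to /home/file.txt':
  path  => '/home/file.txt',
  line  => 'added_text',
  after => '^aaa$',  # anchored so only the exact line 'aaa' matches
}
With this in place the section should read aaa, added_text, bbb, ccc.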

Related

Logstash: Reading multiline data from optional lines

I have a log file which contains lines which begin with a timestamp. An uncertain number of extra lines might follow each such timestamped line:
SOMETIMESTAMP some data
extra line 1 2
extra line 3 4
The extra lines would provide supplementary information for the timestamped line. I want to extract the 1, 2, 3, and 4 and save them as variables. I can parse the extra lines into variables if I know how many of them there are. For example, if I know there are two extra lines, the grok filter below will work. But what should I do if I don't know, in advance, how many extra lines will exist? Is there some way to parse these lines one-by-one, before applying the multiline filter? That might help.
Also, even if I know I will only have 2 extra lines, is the filter below the best way to access them?
filter {
  multiline {
    pattern => "^%{SOMETIMESTAMP}"
    negate => "true"
    what => "previous"
  }
  if "multiline" in [tags] {
    grok {
      match => { "message" => "(?m)^%{SOMETIMESTAMP} %{DATA:firstline}(?<newline>[\r\n]+)%{DATA:secondline}(?<newline>[\r\n]+)%{DATA:thirdline}$" }
    }
  }
  # After this would be grok filters to process the contents of
  # 'firstline', 'secondline', and 'thirdline'. I would then remove
  # these three temporary fields from the final output.
}
(I separated the lines into separate variables since this allows me to do additional pattern matching on the contents of the lines separately, without having to refer to the entire pattern all over again. For example, based on the contents of the first line, I might want to present branching behavior for the other lines.)
Why do you need this?
Are you going to be inserting one single event with all of the values, or are they really separate events that just need to share the same timestamp?
If they all need to appear in the same event, you'll likely need to resort to a ruby filter to separate the extra lines into fields on the event that you can then work on further.
For example:
if "multiline" in [tags] {
grok {
match => { "message" => "(?m)^%{SOMETIMESTAMP} %{DATA:firstline}(?<newline>[\r\n]+)" }
}
ruby {
code => '
event["lines"] = event["message"].scan(/[^\r\n]+[\r\n]*/);
'
}
}
If they are really separate events, you could use the memorize plugin for logstash 1.5 and later.
This has changed across versions of ELK: direct event field references (i.e. event['field']) have been disabled in favor of the event get and set methods (e.g. event.get('field')). The updated filter looks like this:
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:logtime} %{LOGLEVEL:level}%{DATA:firstline}" }
  }
  ruby { code => "event.set('message', event.get('message').scan(/[^\r\n]+[\r\n]*/))" }
}
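If each extra line then needs to land in its own field rather than in one array, the same scan can feed individual event.set calls, run against the original multiline message (i.e. before overwriting it as above). A sketch for the newer event API; the line0, line1, ... field names are made up for illustration:
ruby {
  code => '
    event.get("message").scan(/[^\r\n]+/).each_with_index do |l, i|
      event.set("line#{i}", l)  # produces fields line0, line1, ...
    end
  '
}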

logstash multiline filter for lines without date

I'm looking for a logstash multiline filter. Here are my requirements:
Any line that begins with whitespace or empty line belongs to the previous line
And any line that does not start with a date timestamp belongs to the previous line
The pattern below handles the first part (spaces and empty lines), but how do I add the second part?
multiline {
  pattern => "^\s|^$"
  what => "previous"
}
As far as I can tell you can ignore the first condition, since it describes a subset of the second: any line that begins with whitespace or is empty also fails to begin with a timestamp. You should therefore be able to get away with the following (adjust the pattern to match your timestamp format):
multiline {
  pattern => "^%{TIMESTAMP_ISO8601} "
  negate => true
  what => "previous"
}
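In later logstash releases the multiline filter was deprecated in favor of the multiline codec, which applies the same logic on the input side. An equivalent codec configuration might look like this (the file path is only a placeholder):
input {
  file {
    path => "/var/log/app.log"  # placeholder path
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601} "
      negate => true
      what => "previous"
    }
  }
}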

Reading text file and omitting line

Is there any method of reading from a text file and omitting certain lines from the output into a text box?
The text file will look like this:
Name=Test Name
Date=19/02/14
Message blurb spanning over several lines
The format will always be the same, and the Name and Date will always be the first two rows; these are the rows I want to omit, returning the rest of the message blurb to a text box.
I know how to use the ReadAllLines function and StreamReader, but I'm not sure how to start coding this.
Any pointers or directions to some relevant online documentation?
Thanks in advance
You can read the file line by line and just skip lines with given beginnings:
string[] startsToOmit = new string[] { "Name=", "Date=" };
var result = File.ReadLines(path)
    .Where(line => !startsToOmit.Any(start => line.StartsWith(start)));
and then you have an IEnumerable<string> as a result, which you can materialize with, for example, result.ToList().
Or just read the stream line by line, ignore the first two lines, and process the rest however you need:
using (StreamReader sr = new StreamReader(path))
{
    sr.ReadLine(); // skip the Name= line
    sr.ReadLine(); // skip the Date= line
    Console.WriteLine(sr.ReadToEnd()); // the remaining message blurb
}
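Since Name and Date are always the first two lines, you can also skip by position instead of by prefix. A one-line sketch, assuming a WinForms TextBox named textBox1 (hypothetical):
textBox1.Text = string.Join(
    Environment.NewLine,
    File.ReadLines(path).Skip(2)); // drop the first two lines, keep the blurb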

How do you set unique attributes for resources in Puppet when utilizing arrays

In Puppet you can use arrays when declaring resources as so:
file { ["/tmp/file1", "/tmp/file2"]:
ensure => file,
}
However, as far as I know both file1 and file2 must have the same attributes, content, etc... Is there a way to have file1 and file2 have differing attributes? Something like:
$my_content = { "/tmp/file1" => "foo", "/tmp/file2" => "bar" }
file { ["/tmp/file1", "/tmp/file2"]:
  ensure  => file,
  content => $my_content[$name],
}
So file1 contains foo and file2 contains bar? As far as I know this comes down to being able to tell if the resource is for file1 or file2, at which point options like hashes or inline templates should become viable, but I'm not sure if this is possible. Thanks!
No, that's not possible.
You could use Hiera with create_resources:
In your manifest:
create_resources('file', hiera('filez'))
And in the Hiera file:
---
filez:
  /tmp/file1:
    content: foo
  /tmp/file2:
    content: bar
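For what it's worth, on Puppet 4 and later (or 3.x with the future parser) you can also iterate over a hash with each and give every file its own attributes without Hiera. A minimal sketch:
$filez = {
  '/tmp/file1' => 'foo',
  '/tmp/file2' => 'bar',
}
$filez.each |$path, $file_content| {
  file { $path:
    ensure  => file,
    content => $file_content,
  }
}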

How can I import data from text files into Excel?

I have multiple folders, each containing multiple txt files. I need to extract data (just a single value: value --->554) from one particular type of txt file in each folder (individual_values.txt):
No 100 Value 555 level match 0.443 top level 0.443 bottom 4343
There will be many folders with the same txt file name but different values. Can all these values be copied into Excel, one below the other?
It's the same text file with the same name located inside different folders. All I want to do is extract this value from all the text files and paste them into Excel (or a txt file), one per row.
E.g. from the text file above I would get the value 555, and similarly the values from the other files:
555
666
666
776
Yes.
(You might want to clarify your question.)
Your question isn't very clear; I imagine you want to know how this can be done.
You probably need to write a script that traverses the folders, reads the individual files, parses them for the value you want, and generates a Comma Separated Values (CSV) file. CSV files can easily be imported into Excel.
There are two or three basic methods you can use to get data into an Excel spreadsheet.
1. You can use OLE wrappers to manipulate Excel directly.
2. You can write the file in a binary format that Excel reads natively.
3. You can use Excel's import methods to take delimited text in as a spreadsheet.
I chose the third approach because 1) it is the simplest, and 2) your problem, as stated, does not call for anything more complex. The solution below outputs a tab-delimited text file that Excel can easily open.
In Perl:
use IO::File;

my @field_names = split m|/|, 'No/Value/level match/top level/bottom';

my $input = IO::File->new( '<data.txt' );
die 'Could not open data.txt for input!' unless $input;

# Collect one hash of field => value per matching line.
my @data_rows;
while ( my $line = <$input> ) {
    my %fields = $line =~ /(level match|top level|bottom|Value|No)\s+(\d+\S*)/g;
    push @data_rows, \%fields if exists $fields{Value};
}
$input->close();

my $tab_file = IO::File->new( '>data.tab' );
die 'Could not open data.tab for output!' unless $tab_file;

# Header row, then one tab-delimited row per record.
$tab_file->print( join( "\t", @field_names ), "\n" );
foreach my $data_ref ( @data_rows ) {
    $tab_file->print( join( "\t", @$data_ref{@field_names} ), "\n" );
}
$tab_file->close();
NOTE: Excel's text processing is really quite neat. Try opening the text below (replacing the \t with actual tabs) -- or even copying and pasting it:
1\t2\t3\t=SUM(A1:C1)
I chose C# because I thought it would be fun to use a recursive lambda. This will create a csv file containing all matches of the regex pattern:
using System;
using System.IO;
using System.Linq;
using System.Text;
using System.Text.RegularExpressions;

string root_path = @"c:\Temp\test";
string match_filename = "test.txt";

// Recursively collect every regex match from files named match_filename.
Func<string, string, StringBuilder, StringBuilder> getdata = null;
getdata = (path, filename, content) => {
    Directory.GetFiles(path)
        .Where(f => Path.GetFileName(f)
            .Equals(filename, StringComparison.OrdinalIgnoreCase))
        .Select(f => File.ReadAllText(f))
        .Select(c => Regex.Match(c, @"value[\s\t]*(\d+)",
            RegexOptions.IgnoreCase))
        .Where(m => m.Success)
        .Select(m => m.Groups[1].Value)
        .ToList()
        .ForEach(m => content.AppendLine(m));
    Directory.GetDirectories(path)
        .ToList()
        .ForEach(d => getdata(d, filename, content));
    return content;
};

File.WriteAllText(
    Path.Combine(root_path, "data.csv"),
    getdata(root_path, match_filename, new StringBuilder()).ToString());
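Note that getdata has to be declared and set to null before the lambda is assigned to it: a C# lambda cannot reference the variable it is being assigned to until that variable has been definitely assigned, so the two-step declaration is what makes the recursive call legal.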
No.
Just making sure you have a 50/50 chance of getting the right answer (assuming it was a question answerable by Yes and No), hehehe.
File_not_found
Gotta have all three binary states for the response.
