IMAP folders diff? - linux

I'd like to "diff" two IMAP folders (on two different servers) to compare spam filters, I'd like to have a command line tool (linux) to get just the headers (not the whole dir, e.g. using 'isync' or similar), something like this:
$ imapget --subjects -p=password user@server
or this:
$ imapget --format "$DATE - $FROM - $SUBJ" -p=password user@server
('imapget' cmd is fictional)
What would you suggest?
Thank you

I would mirror the two IMAP folders to local Maildir folders using something like OfflineIMAP, imapsync, imapcopy, isync or mailsync.
Then I'd use something like mailutils to output lists of messages in both and diff them.
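For example (a rough sketch; the Maildir paths are placeholders), once both folders are mirrored you can extract and compare the Subject lines directly:
grep -h '^Subject:' ~/mail/serverA/INBOX/{cur,new}/* | sort > /tmp/a.subjects
grep -h '^Subject:' ~/mail/serverB/INBOX/{cur,new}/* | sort > /tmp/b.subjects
diff /tmp/a.subjects /tmp/b.subjects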

The easiest way is probably to get Perl and Mail::IMAPClient and use something like:
use Mail::IMAPClient;
my $imap = Mail::IMAPClient->new(
    Server => $imaphost, User => $login, Password => $pass, Uid => 1
);
$imap->select("demo_folder");
my $msgs = $imap->search("ALL");
# get the specified headers from every message in folder "demo_folder"
for my $h (
    values %{ $imap->parse_headers( $msgs, "Date", "From", "Subject" ) } )
{
    # $h is the value of each element in the hash ref returned
    # from parse_headers, and $h is also a reference to a hash.
    # We'll only print the first occurrence of each field because
    # we don't expect more than one particular header line per
    # message.
    print map { "$_:\t$h->{$_}[0]\n" } keys %$h;
}
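If you adapt that snippet into a small script that prints one line per message and run it once per server (the script name below is made up), you get files you can diff:
perl imap_headers.pl serverA user1 > a.txt
perl imap_headers.pl serverB user2 > b.txt
diff a.txt b.txt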

Perl script cron/environment issue

The following Perl script generates an .xls file from a text file. It runs great in our Linux test environment, but generates an empty spreadsheet (.xls) in our production environment when run via cron (cron works in test as well). Nothing jumps out at our sysadmins in terms of system-level settings that might account for this behavior. Towards the bottom of the script, in the import_data subroutine, the correct number of lines is reported, but nothing is written to the spreadsheet and no errors are returned at either the script or system level. I ran it through the Perl debugger, but my skills fell short of being able to interactively watch it populate the file. The cron entry looks like this:
cd <script directory>; cvs2xls input.txt output.xls 2>&1
Any debugging tips would be appreciated, as well as potential system settings that I can forward on to our sysadmins.
#!/usr/bin/perl
use strict;
use warnings;
use lib '/apps/tu01688/perl5/lib/perl5';
use Spreadsheet::WriteExcel;
use Text::CSV::Simple;
BEGIN {
unshift @INC, "/apps/tu01688/jobs/mayo-expert";
};
my $infile = shift;
usage() unless defined $infile && -f $infile;
my $parser = Text::CSV::Simple->new;
my @data = $parser->read_file($infile);
my $headers = shift @data;
my $outfile = shift || $infile . ".xls";
my $subject = shift || 'worksheet';
sub usage {
print "csv2xls infile [outfile] [subject]\n";
exit;
}
my $workbook = Spreadsheet::WriteExcel->new($outfile);
my $bold = $workbook->add_format();
$bold->set_bold(1);
import_data($workbook, $subject, $headers, \@data);
# Add a worksheet
sub import_data {
my $workbook = shift;
my $base_name = shift;
my $colums = shift;
my $data = shift;
my $limit = shift || 50_000;
my $start_row = shift || 1;
my $worksheet = $workbook->add_worksheet($base_name);
$worksheet->add_write_handler(qr[\w], \&store_string_widths);
#$worksheet->add_write_handler(qr[\w]| \&store_string_widths);
my $w = 1;
$worksheet->write('A' . $start_row, $colums, $bold);
my $i = $start_row;
my $qty = 0;
for my $row (@$data) {
$qty++;
if ($i > $limit) {
$i = $start_row;
$w++;
$worksheet = $workbook->add_worksheet("$base_name - $w");
$worksheet->write('A1', $colums,$bold);
}
$worksheet->write($i++, 0, $row);
}
autofit_columns($worksheet);
warn "Converted $qty rows.";
return $worksheet;
}
###############################################################################
###############################################################################
#
# Functions used for Autofit.
#
###############################################################################
#
# Adjust the column widths to fit the longest string in the column.
#
sub autofit_columns {
my $worksheet = shift;
my $col = 0;
for my $width (@{$worksheet->{__col_widths}}) {
$worksheet->set_column($col, $col, $width) if $width;
$col++;
}
}
###############################################################################
#
# The following function is a callback that was added via add_write_handler()
# above. It modifies the write() function so that it stores the maximum
# unwrapped width of a string in a column.
#
sub store_string_widths {
my $worksheet = shift;
my $col = $_[1];
my $token = $_[2];
# Ignore some tokens that we aren't interested in.
return if not defined $token; # Ignore undefs.
return if $token eq ''; # Ignore blank cells.
return if ref $token eq 'ARRAY'; # Ignore array refs.
return if $token =~ /^=/; # Ignore formula
# Ignore numbers
#return if $token =~ /^([+-]?)(?=\d|\.\d)\d*(\.\d*)?([Ee]([+-]?\d+))?$/;
# Ignore various internal and external hyperlinks. In a real scenario
# you may wish to track the length of the optional strings used with
# urls.
return if $token =~ m{^[fh]tt?ps?://};
return if $token =~ m{^mailto:};
return if $token =~ m{^(?:in|ex)ternal:};
# We store the string width as data in the Worksheet object. We use
# a double underscore key name to avoid conflicts with future names.
#
my $old_width = $worksheet->{__col_widths}->[$col];
my $string_width = string_width($token);
if (not defined $old_width or $string_width > $old_width) {
# You may wish to set a minimum column width as follows.
#return undef if $string_width < 10;
$worksheet->{__col_widths}->[$col] = $string_width;
}
# Return control to write();
return undef;
}
###############################################################################
#
# Very simple conversion between string length and string width for Arial 10.
# See below for a more sophisticated method.
#
sub string_width {
return length $_[0];
}
Uhmm... don't put chained commands in cron; use an external script instead (see the wrapper sketch at the end of this answer). Anyway, some suggestions that may help you:
Debugging cron commands
Check the mail! By default cron will mail any output from the command to the user it is running the command as. If there is no output there will be no mail. If you want cron to send mail to a different account then you can set the MAILTO environment variable in the crontab file e.g.
MAILTO=user@somehost.tld
1 2 * * * /path/to/your/command
Capture the output yourself
1 2 * * * /path/to/your/command &>/tmp/mycommand.log
which captures stdout and stderr to /tmp/mycommand.log
Look at the logs; cron logs its actions via syslog, which (depending on your setup) often go to /var/log/cron or /var/log/syslog.
If required you can filter the cron statements with e.g.
grep CRON /var/log/syslog
Now that we've gone over the basics of cron, where the files are and how to use them let's look at some common problems.
Check that cron is running
If cron isn't running then your commands won't be scheduled ...
ps -ef | grep cron | grep -v grep
should get you something like
root 1224 1 0 Nov16 ? 00:00:03 cron
or
root 2018 1 0 Nov14 ? 00:00:06 crond
If not restart it
/sbin/service cron start
or
/sbin/service crond start
There may be other methods; use what your distro provides.
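On systemd-based distros, for example, the equivalent would be something like
systemctl status cron
systemctl start cron
(the service is called crond on Red Hat style systems).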
cron runs your command in a restricted environment.
What environment variables are available is likely to be very limited. Typically, you'll only get a few variables defined, such as $LOGNAME, $HOME, and $PATH.
Of particular note, PATH is restricted to /bin:/usr/bin. The vast majority of "my cron script doesn't work" problems are caused by this restrictive path. If your command is in a different location you can solve this in a couple of ways:
Provide the full path to your command.
1 2 * * * /path/to/your/command
Provide a suitable PATH in the crontab file
PATH=/usr:/usr/bin:/path/to/something/else
1 2 * * * command
If your command requires other environment variables you can define them in the crontab file too.
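For example (MYAPP_HOME is a hypothetical variable your command needs; note that cron does not expand other variables inside these assignments):
PATH=/usr/local/bin:/usr/bin:/bin
MYAPP_HOME=/opt/myapp
1 2 * * * /path/to/your/command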
cron runs your command with cwd == $HOME
Regardless of where the program you execute resides on the filesystem, the current working directory of the program when cron runs it will be the user's home directory. If you access files in your program, you'll need to take this into account if you use relative paths, or (preferably) just use fully-qualified paths everywhere, and save everyone a whole lot of confusion.
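For example (a sketch), either of these entries avoids the relative-path trap:
1 2 * * * cd /path/to/data && /path/to/your/command
1 2 * * * /path/to/your/command /full/path/to/input.txt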
The last command in my crontab doesn't run
Cron generally requires that commands are terminated with a new line. Edit your crontab; go to the end of the line which contains the last command and insert a new line (press enter).
Check the crontab format
You can't use a user crontab formatted crontab for /etc/crontab or the fragments in /etc/cron.d and vice versa. A user formatted crontab does not include a username in the 6th position of a row, while a system formatted crontab includes the username and runs the command as that user.
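For example, the same job in both formats:
# user crontab (crontab -e): five time fields, then the command
1 2 * * * /path/to/your/command
# system crontab (/etc/crontab, /etc/cron.d/*): username in the sixth field
1 2 * * * root /path/to/your/command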
I put a file in /etc/cron.{hourly,daily,weekly,monthly} and it doesn't run
Check that the filename doesn't have an extension (see run-parts).
Ensure the file has execute permissions.
Tell the system what to use when executing your script (e.g. put #!/bin/sh at the top), as in the sketch below.
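Putting those three points together, a minimal /etc/cron.daily script might look like this (a sketch; the name and log path are made up):
#!/bin/sh
# /etc/cron.daily/mytask -- executable, and no file extension
echo "ran at $(date)" >> /var/log/mytask.log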
Cron date related bugs
If the date was recently changed (by a user, a system update, a timezone change, or similar), crontab can start behaving erratically and exhibit bizarre bugs, sometimes working, sometimes not. This is crontab's attempt to "do what you want" when the time changes out from underneath it. The "minute" field can become ineffective after the hour is changed; in this scenario, only asterisks are accepted. Restart cron and try it again without connecting to the internet (so the date doesn't have a chance to reset from one of the time servers).
Percent signs, again
To emphasise the advice about percent signs, here's an example of what cron does with them:
# cron entry
* * * * * cat >$HOME/cron.out%foo%bar%baz
will create the ~/cron.out file containing the 3 lines
foo
bar
baz
This is particularly intrusive when using the date command. Be sure to escape the percent signs
* * * * * /path/to/command --day "$(date "+\%Y\%m\%d")"
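Finally, the wrapper script promised above: rather than chaining commands in the crontab entry, move them into a script (a sketch; names and paths are made up) and call that:
#!/bin/sh
# /path/to/run_cvs2xls.sh -- run the conversion from its own directory
cd /path/to/script/dir || exit 1
./cvs2xls input.txt output.xls
with the single crontab entry:
1 2 * * * /path/to/run_cvs2xls.sh >>/tmp/cvs2xls.log 2>&1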
Thanks so much for the extensive feedback, everyone. I'm certainly taking a lot more away from this than I put into it. In any event, I ran across the answer. In my perl5 lib folder I found that somehow the IO and OLE libraries were missing on production. Copying those over from development resulted in everything working fine. The fact that I was unable to determine/capture this through conventional debugging efforts as opposed to merely comparing directory listings out of exasperation speaks to how much more I have to learn along these lines. But I'm confident that the great feedback I received will go a long ways towards getting me there. Thanks again, everyone.
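A directory comparison along these lines (a sketch; the lib path is the one used in the script above) is a quick way to spot such gaps:
cd /apps/tu01688/perl5/lib/perl5 && find . -name '*.pm' | sort > prod-modules.txt
# generate the same listing on the test box, copy it over, then:
diff test-modules.txt prod-modules.txt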

How to extract key value pairs from a file when values span multiple lines?

I'm a few weeks into bash scripting and I haven't advanced enough yet to get my head wrapped around this problem. Any help would be appreciated!
I have a "script.conf" file that contains the following:
key1=value1
key2=${HOME}/Folder
key3=( "k3v1" "k3 v2" "k3v3")
key4=( "k4v1"
"k4 v2"
"k4v3"
)
key5=value5
#key6="Do Not Include Me"
In a bash script, I want to read the contents of this script.conf file into an array. I've learned how to handle the scenarios for keys 1, 2, 3, and 5, but the key4 scenario throws a wrench into it with it spanning across multiple lines.
I've been exploring the use of sed -n '/=\s*[(]/,/[)]/{/' which does capture key4 and its value, but I can't figure out how to mix this so that the other keys are also captured in the matches. The range syntax is also new to me, so I haven't figured out how to separate the key/value. I feel like there is an easy regex that would accomplish what I want... in plain-text: "find and group the pattern ^(.*)= (for the key), then group everything after the '=' char until another ^(.*)= match is found, rinse and repeat". I guess if I do this, I need to change the while read line to not handle the key/value separation for me (I'll be looking into this while I'm waiting for a response). BTW, I think a solution where the value of key4 is flattened (new lines removed) would be acceptable; I know for key3 I have to store the value as a string and then convert it to an array later when I want to iterate over it since an array element apparently can't contain a list.
Am I on the right path with sed or is this a job for awk or some other tool? (I haven't ventured into awk yet). Is there an easier approach that I'm missing because I'm too deep into the forest (like changing the while read line in the LoadConfigFile function)?
Here is the code that I have so far in script.sh for processing and capturing the other pairs into the $config array:
__AppDir=$(dirname $0)
__AppName=${__ScriptName%.*}
typeset -A config #init config array
config=( #Setting Default Config values
[key1]="defaultValue1"
[key2]="${HOME}/defaultFolder"
[QuietMode]=0
[Verbose]=0 #Ex. Usage: [[ "${config[Verbose]}" -gt 0 ]] && echo ">>>Debug print"
)
function LoadConfigFile() {
local cfgFile="${1}"
shopt -s extglob #Needed to remove trailing spaces
if [ -f ${cfgFile} ]; then
while IFS='=' read -r key value; do
if [[ "${key:0:1}" == "#" ]]; then
#echo "Skipping Comment line: ${key}"
elif [ "${key:-EMPTY}" != "EMPTY" ]; then
value="${value%%\#*}" # Delete in-line, right comments
value="${value%%*( )}" # Delete trailing spaces
value="${value%%( )*}" # Delete leading spaces
#value="${value%\"*}" # Delete opening string quotes
#value="${value#\"*}" # Delete closing string quotes
#Manipulate any variables included in the value so that they can be expanded correctly
# - value must be stored in the format: "${var1}". `backticks`, "$var2", and "doubleQuotes" are left as is
value="${value//\"/\\\"}" # Escape double quotes for eval
value="${value//\`/\\\`}" # Escape backticks for eval
value="${value//\$/\\\$}" # Escape ALL '$' for eval
value="${value//\\\${/\${}" # Undo the protection of '$' if it was followed by a '{'
value=$(eval "printf '%s\n' \"${value}\"")
config[${key}]=${value} #Store the value into the config array at the specified key
echo " >>>DBG: Key = ${key}, Value = ${value}"
#else
# echo "Skipped Empty Key"
fi
done < "${cfgFile}"
fi
}
CONFIG_FILE=${__AppDir}/${__AppName}.conf
echo "Config File # ${CONFIG_FILE}"
LoadConfigFile ${CONFIG_FILE}
#Print elements of $config
echo "Script Config Values:"
echo "----------------------------"
for key in "${!config[#]}"; do #The '!' char gets an array of the keys, without it, we would get an array of the values
printf " %-20s = %s\n" "${key}" "${config[${key}]}"
done
echo "------ End Script Config ------"
#To convert to an array...
declare -a valAsArray=${config[RequiredAppPackages]} #Convert the value from a string to an array
echo "Count = ${#valAsArray[@]}"
for itemCfg in "${valAsArray[@]}"; do
echo " item = ${itemCfg}"
done
As I mentioned before, I'm just starting to learn bash and Linux scripting in general, so if you see that I'm doing some taboo things in other areas of my code too, please feel free to provide feedback in the comments... I don't want to start bad habits early on :-).
*If it matters, the OS is Ubuntu 14.04.
EDIT:
As requested: after reading the script.conf file, I would like the elements of "${config[@]}" to be equivalent to the following:
typeset -A config #init config array
config=(
[key1]="value1"
[key2]="${HOME}/Folder"
[key3]="( \"k3v1\" \"k3 v2\" \"k3v3\" )"
[key4]="( \"k4v1\" \"k4 v2\" \"k4v3\" )"
[key5]="value5"
)
I want to be able to convert the values of elements 'key4' and 'key3' into an array and iterate over them the same way as in the following code:
declare -a keyValAsArray=${config[keyN]} #Convert the value from a string to an array
echo "Count = ${#keyValAsArray[@]}"
for item in "${keyValAsArray[@]}"; do
echo " item = ${item}"
done
I don't think it matters if \n is preserved for key4's value or not... that depends on if declare has a problem with it.
A shell is an environment from which to call tools, with a language to sequence those calls. It is NOT a tool to manipulate text. The standard UNIX tool to manipulate text is awk. Trying to manipulate text in shell IS a bad habit; see why-is-using-a-shell-loop-to-process-text-considered-bad-practice for SOME of the reasons why.
You still didn't post the expected result of populating the config array so I'm not sure but I think this is what you wanted:
$ cat tst.sh
declare -A config="( $(awk '
{ gsub(/^[[:space:]]+|([[:space:]]+|#.*)$/,"") }
!NF { next }
/^[^="]+=/ {
name = gensub(/=.*/,"",1)
value = gensub(/^[^=]+=/,"",1)
n2v[name] = value
next
}
{ n2v[name] = n2v[name] OFS $0 }
END {
for (name in n2v) {
value = gensub(/"/,"\\\\&","g",n2v[name])
printf "[%s]=\"%s\"\n", name, value
}
}
' script.conf
) )"
declare -p config
$ ./tst.sh
declare -A config='([key5]="value5" [key4]="( \"k4v1\" \"k4 v2\" \"k4v3\" )" [key3]="( \"k3v1\" \"k3 v2\" \"k3v3\")" [key2]="/home/Ed/Folder" [key1]="value1" )'
The above uses GNU awk for gensub(), with other awks you'd use [g]sub() instead.
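For completeness, converting one of the stored values back into a real array could then look like this (a sketch; eval is tolerable here only because the strings were generated above, not taken from untrusted input):
eval "vals=${config[key4]}"    # the value is a quoted "( ... )" array literal
echo "Count = ${#vals[@]}"
for item in "${vals[@]}"; do
    echo " item = $item"
done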

CSV Bash loop Issue with Variables

I have a csv file which I'm trying to loop through to find out whether a user's input appears in the csv data. I wrote the following code, which sometimes works and sometimes doesn't. It always stops working when I try to compare a number with two or more digits: it works fine for numbers 1 through 9, but once you enter, say, 56, 99, or 100, it stops working.
The csv data is comma-delimited; I have about 300 lines, all just like these:
1,John Doe,Calculus I,5.0
1,John Doe,Calculus II,4.3
1,John Doe,Physics II,3.5
2,Mary Poppins,Calculus I,3.7
2,Mary Poppins,Calculus II,4.7
2,Mary Poppins,Physics I,3.7
The data is just like that all the way down to ID #100, for a total of 300 lines. Both the sh file and the csv file are in the same folder. I'm using a fresh installation of Ubuntu 12.04.3, with gedit as the text editor.
I tried echoing the variables inside the if conditionals, but it doesn't behave the way it should when testing for the same value. Could someone point me in the right direction? Thanks
Here's the code:
#!/bin/bash
echo "enter your user ID";
read user;
INPUT_FILE=notas.csv
while IFS="," read r- ID name asignature final;
do
if [$ID = $user]; then
userType=1;
else
userType=2;
fi
done < notas.csv
Well, your code as written has a few issues.
You have r- instead of -r on the read line - I assume that's a typo not present in your actual code or you wouldn't get very far.
Similarly, you need space around the [...] brackets: [$ID is a syntax error.
You need to quote the parameter expansions in your if clause, and/or switch bracket types. You probably want to make it a numeric comparison, as @imp25 suggested, which I would do by using ((...)).
You probably don't want to set userType to 2 in an else clause, because that will set it to 2 for everyone except whoever is listed last in the file (ID 100, presumably). You want to set it to 2 first, outside the loop. Then, inside the loop when you find a match, set it to 1 and break out of the loop:
userType=2
while IFS=, read -r ID name asignature final; do
if (( $ID == $user )); then
userType=1;
break
fi
done < notas.csv
You could also just use shell tools like awk:
userType=$(awk -F, -vtype=2 '($1=="'"$user"'") {type=1}; END {print type}' notas.csv)
or grep:
grep -q "^$user," notas.csv
userType=$(( $? + 1 ))
etc.
You should quote your variables in the if test statement. You should also perform a numeric test -eq rather than a string comparison =. So your if statement should look like:
if [[ "$ID" -eq "$user" ]]

bash - How to find string from file and get its position?

File services - contains many records like this one:
define service {
host_name\t\t\t\tHOSTNAME
...
...
}
File hosts - contains records:
define host {
host_name\t\t\t\tHOSTNAME
...
...
}
and I need to go through hosts, somehow get the HOSTNAME from the first record, then go to the services file and find all records with that HOSTNAME and put them into another file. Then do the same for every HOSTNAME in hosts.
What I don't know is primarily how to get the HOSTNAME from the hosts file, and then how to get a whole record from the services file into a variable. I have prepared a regex (hope it's right): ^define.*host_name\t\t\t\t$HOSTNAME.*}
Please give me a few pointers or examples of how to get the wanted result.
The files you provide look very much like nagios configuration files.
sed might be your friend here, as it allows you to slice the file into smaller parts, eg:
:t
/^define service {/,/}$/ { # For each line between these block markers..
/}$/!{ # If we are not at the /end/ marker
$!{ # nor the last line of the file,
N; # add the Next line to the pattern space
bt
} # branch (loop back) to the :t label.
} # This line matches the /end/ marker.
/host_name[ \t]\+HOSTNAME\b/!d; # delete the block if wrong host.
}
That example lifted from the sed faq 4.21, and adapted slightly. You could also look at question 4.22 which appears to address this directly:
http://sed.sourceforge.net/sedfaq4.html#s4.22
Like the previous answer, I'm also inclined to say you're probably better off using another scripting language. If you need a different interpreter to get this done anyway, might as well use something you know.
This task is a bit too complex for a bash script. I would use Perl:
#!/usr/bin/perl
use warnings;
use strict;
open my $SRV, '<', 'services' or die $!;
open my $HST, '<', 'hosts' or die $!;
my %services;
{ local $/ = "\n}";
while (my $service = <$SRV>) {
my ($hostname) = $service =~ /^\s*host_name\t+(.+?)\s*$/m;
push @{ $services{$hostname} }, $service if defined $hostname;
}
}
while (my $line = <$HST>) {
if (my ($host) = $line =~ /^\s*host_name\t+(.+?)\s*$/) {
if (exists $services{$host}) {
print "===== $host =====\n";
print "$_\n" for #{ $services{$host} };
} else {
warn "$host not found in services!\n";
}
}
}
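Save it as, say, extract_services.pl (the name is made up) in the directory containing the hosts and services files, then run:
chmod +x extract_services.pl
./extract_services.pl > services_by_host.txt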

Parameters in TCL using ns2

How can I send these values
24.215729
24.815729
25.055134
27.123499
27.159186
28.843474
28.877798
28.877798
to a Tcl script as input arguments?
As you know, we can't use a pipe, because Tcl doesn't accept input that way!
What can I do to store these numbers in the Tcl file? (The count of the numbers is variable and can be 0 to N; in this example it's 7.)
This is pretty easy to do in bash: dump the list of values into a file and then run:
tclsh myscript.tcl $(< datafilename)
And then the values are accessible in the script with the argument variables:
puts $argc; # This is a count of all values
puts $argv; # This is a list containing all the arguments
You can read data piped to stdin with commands like
set data [gets stdin]
or from temporary files, if you prefer. For example, the first part of the following program (an example from wiki.tcl.tk) reads some data from a file, and the second part then reads data from stdin. To test it, put the code into a file (e.g. reading.tcl), make it executable, create a small file somefile, and execute it via e.g.
./reading.tcl < somefile
#!/usr/bin/tclsh
# Slurp up a data file
set fsize [file size "somefile"]
set fp [open "somefile" r]
set data [read $fp $fsize]
close $fp
puts "Here is file contents:"
puts $data
puts "\nHere is from stdin:"
set momo [read stdin $fsize]
puts $momo
A technique I use when coding is to put data in my scripts as a literal:
set values {
24.215729
24.815729
25.055134
27.123499
27.159186
28.843474
28.877798
28.877798
}
Now I can just feed them into a command one at a time with foreach, or send them as a single argument:
# One argument
TheCommand $values
# Iterating
foreach v $values {
TheCommand $v
}
Once you've got your code working with a literal, switching it to pull the data from a file is pretty simple. You just replace the literal with code to read a file:
set f [open "the/data.txt"]
set values [read $f]
close $f
You can also pull the data from stdin:
set values [read stdin]
If there's a lot of values (more than, say, 10–20MB) then you might be better off processing the data one line at a time. Here's how to do that with reading from stdin…
while {[gets stdin v] >= 0} {
TheCommand $v
}
