Calculating the Mean from a Perl Script - statistics

I'm still in here. ;)
I've got this code from a very expert guy, and I'm too shy to ask him these basic questions... Anyway, this is my question now: this Perl script prints the median of a space-delimited column of numbers, and I added some stuff to get its size. Now I'm trying to get the sum of the same column. I did, but got no results. Did I not take the right column? I run it as: ./stats.pl 1 columns.txt
#!/usr/bin/perl
use strict;
use warnings;
my $index = shift;
my $filename = shift;
my $columns = [];
open (my $fh, "<", $filename) or die "Unable to open $filename for reading\n";
for my $row (<$fh>) {
    my @vals = split /\s+/, $row;
    push @{$columns->[$_]}, $vals[$_] for 0 .. $#vals;
}
close $fh;
my @column = sort {$a <=> $b} @{$columns->[$index]};
my $offset = int($#column / 2);
my $length = 2 - @column % 2;
my @medians = splice(@column, $offset, $length);
my $median;
$median += $_ for @medians;
$median /= @medians;
print "MEDIAN = $median\n";
################################################
my @elements = @{$columns->[$index]};
my $size = @elements;
print "SIZE = $size\n";
exit 0;
#################################################
my $sum = @{$columns->[$index]};
for (my $size = 0; $size < $sum; $size++) {
    my $mean = $sum / $size;
};
print "$mean\n";
thanks in advance.

OK, some pointers to get you going:
You can put all the numbers into an array:
my @result = split(m/\s+/, $line);
#average
use List::Util qw(sum);
my $sum = sum(@result);
You can then access individual columns with $result[$index], where $index is the number of the column you want to access.
Also note that:
$total = $line + $total;
$count = $count + 1;
can be rewritten as:
$total += $line;
$count += 1;
Finally, make sure that you are actually reading the file: put a "debugging" print into the while loop:
print $line, "\n";
This should get you going :)
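Putting those pointers together, here is a minimal, untested sketch of a stats script that prints the size, sum and mean of one column, invoked the same way as yours (./stats.pl 1 columns.txt):
#!/usr/bin/perl
use strict;
use warnings;
use List::Util qw(sum);

my ($index, $filename) = @ARGV;

open my $fh, '<', $filename or die "Unable to open $filename: $!\n";

my @column;
while (my $line = <$fh>) {
    my @vals = split /\s+/, $line;
    next unless defined $vals[$index];   # skip short or empty lines
    push @column, $vals[$index];         # keep only the requested column
}
close $fh;

my $size = @column;                      # an array in scalar context is its element count
my $sum  = @column ? sum(@column) : 0;
my $mean = $size ? $sum / $size : 0;

print "SIZE = $size\n";
print "SUM  = $sum\n";
print "MEAN = $mean\n";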

Related

Split string to fixed length chunks and write in separate line in Raku

I have a file test.txt:
Stringsplittingskills
I want to read this file and write to another file out.txt with three characters in each line like
Str
ing
spl
itt
ing
ski
lls
What I did
my $string = "test.txt".IO.slurp;
my $start = 0;
my $elements = $string.chars;
# open file in writing mode
my $file_handle = "out.txt".IO.open: :w;
while $start < $elements {
    my $line = $string.substr($start,3);
    if $line.chars == 3 {
        $file_handle.print("$line\n")
    } elsif $line.chars < 3 {
        $file_handle.print("$line")
    }
    $start = $start + 3;
}
# close file handle
$file_handle.close
This runs fine when the length of the string is not a multiple of 3. When the string length is a multiple of 3, it inserts an extra newline at the end of the output file. How can I avoid the trailing newline when the string length is a multiple of 3?
I tried another shorter approach,
my $string = "test.txt".IO.slurp;
my $file_handle = "out.txt".IO.open: :w;
for $string.comb(3) -> $line {
    $file_handle.print("$line\n")
}
It still suffers from the same issue.
I looked here and here but am still unable to solve it.
spurt "out.txt", "test.txt".IO.comb(3).join("\n")
Another approach using substr-rw.
subset PositiveInt of Int where * > 0;
sub break( Str $str is copy, PositiveInt $length )
{
    my $i = $length;
    while $i < $str.chars
    {
        $str.substr-rw( $i, 0 ) = "\n";
        $i += $length + 1;
    }
    $str;
}
say break("12345678", 3);
Output
123
456
78
The correct answer is of course to use .comb and .join.
That said, this is how you might fix your code.
You could change the if line to check if it is at the end, and use else.
if $start+3 < $elements {
    $file_handle.print("$line\n")
} else {
    $file_handle.print($line)
}
Personally I would change it so that only the addition of \n is conditional.
while $start < $elements {
    my $line = $string.substr($start,3);
    $file_handle.print( $line ~ ( "\n" x ($start+3 < $elements) ));
    $start += 3;
}
This works because < returns either True or False.
Since True == 1 and False == 0, the x operator repeats the \n at most once.
'abc' x 1; # 'abc'
'abc' x True; # 'abc'
'abc' x 0; # ''
'abc' x False; # ''
If you were very cautious you could use x+?.
(Which is actually 3 separate operators.)
'abc' x 3; # 'abcabcabc'
'abc' x+? 3; # 'abc'
infix:« x »( 'abc', prefix:« + »( prefix:« ? »( 3 ) ) );
I would probably use loop if I were going to structure it like this.
loop ( my $start = 0; $start < $elements ; $start += 3 ) {
    my $line = $string.substr($start,3);
    $file_handle.print( $line ~ ( "\n" x ($start+3 < $elements) ));
}
Or instead of adding a newline to the end of each line, you could add it to the beginning of every line except the first.
while $start < $elements {
    my $line = $string.substr($start,3);
    my $nl = "\n";
    # clear $nl the first time through
    once $nl = "";
    $file_handle.print($nl ~ $line);
    $start = $start + 3;
}
At the command line prompt, three one-liner solutions below.
Using comb and batch (retains incomplete set of 3 letters at end):
~$ echo 'StringsplittingskillsX' | perl6 -ne '.join.put for .comb.batch(3);'
Str
ing
spl
itt
ing
ski
lls
X
Simplifying (no batch, only comb):
~$ echo 'StringsplittingskillsX' | perl6 -ne '.put for .comb(3);'
Str
ing
spl
itt
ing
ski
lls
X
Alternatively, using comb and rotor (discards incomplete set of 3 letters at end):
~$ echo 'StringsplittingskillsX' | perl6 -ne '.join.put for .comb.rotor(3);'
Str
ing
spl
itt
ing
ski
lls

Perl daemon doesn't go through entire loop

I'm trying to make a program that works like a very simple server in Perl.
The program itself is meant to work as a library catalogue, giving the user options of searching for books by title or author, and borrowing or returning books. The list of books is provided in a separate file.
Basically, it's supposed to take requests (files) from "Requests" folder, process them, and then give answers (also files) in "Answers" folder. After the process is over, it deletes the old requests and repeats the process (answers are deleted by the client after they are accepted).
It's meant to run as a daemon, but for some reason only the loop responsible for deleting the request files works in the background - the requests are not processed into answers, but are just deleted. Whenever a new request appears, it's almost immediately deleted.
I'm learning to use daemons and tried to emulate what is in this thread.
#!/usr/bin/perl
use warnings;
use strict;
use Proc::Daemon;
#FUNCTIONS DEFINITIONS
sub FindAuthor
{
#try to find book by this author in the catalogue
}
sub FindTitle
{
#try to find book with this title in the catalogue
}
sub CheckIfCanBeReturned
{
#check if the book is borrowed and by whom
}
#attempt at daemonization
Proc::Daemon::Init;
my $continueWork = 1;
$SIG{TERM} = sub { $continueWork = 0 };
while ( $continueWork )
{
sleep(2);
my @RequestFilesArray = `ls /home/Ex5/Requests`;
#list all requests currently in the Request folder
for ( my $b = 0; $b < @RequestFilesArray; $b++)
{
my $cut = `printf "$RequestFilesArray[$b]" | wc -m`;
$cut = $cut - 1;
$RequestFilesArray[$b] = substr $RequestFilesArray[$b], 0, $cut;
}
#the requests are formatted in such way,
#that the first 2 letters indicate what the client wants to be done
#and the rest is taken parameters used in the processing
for (my $i = 0; $i < @RequestFilesArray; $i++)
{
my $UserRequest = `tail -1 Requests/$RequestFilesArray[$i]`;
my $fix = `printf "$UserRequest" | wc -m`;
$fix = $fix - 1;
$UserRequest = substr $UserRequest, 0, $fix;
my $RequestType = substr $UserRequest, 0, 2;
my $RequestedValue = substr $UserRequest, 3;
my $RequestNumber = $i;
if ($RequestType eq "fa")
{
#FIND BY AUTHOR
my @results = FindAuthor ($RequestedValue);
my $filename = "/home/Ex5/Answers/" . $RequestFilesArray[$RequestNumber];
open (my $answerFile, '>', $filename) or die "$!";
for (my $a = 0; $a < @results; $a++)
{
print $answerFile $results[$a],"\n";
}
close $answerFile;
}
elsif ($RequestType eq "ft")
{
#FIND BY TITLE
my @results = FindTitle ($RequestedValue);
my $filename = "/home/Ex5/Answers/" . $RequestFilesArray[$RequestNumber];
open ( my $answerFile, '>', $filename) or die "$!";
for (my $a = 0; $a < @results; $a++)
{
print $answerFile $results[$a],"\n";
}
close $answerFile;
}
elsif ($RequestType eq "br")
{
#BOOK RETURN
my $result = CheckIfCanBeReturned ($RequestedValue, $RequestFilesArray[$RequestNumber]);
my $filename = "/home/Ex5/Answers/" . $RequestFilesArray[$RequestNumber];
open ( my $answerFile, '>', $filename) or die "$!";
print $answerFile $result;
close $answerFile;
}
elsif ($RequestType eq "bb")
{
#BOOK BORROW
my $result = CheckIfCanBeBorrowed ($RequestedValue, $RequestFilesArray[$RequestNumber]);
my $filename = "/home/Ex5/Answers/" . $RequestFilesArray[$RequestNumber];
open ( my $answerFile, '>', $filename) or die "$!";
print $answerFile $result;
close $answerFile;
}
else
{
print "something went wrong with this request";
}
}
#deleting processed requests
for ( my $e = 0; $e < @RequestFilesArray; $e++)
{
my $removeReq = "/home/Ex5/Requests/" . $RequestFilesArray[$e];
unlink $removeReq;
}
#$continueWork =0;
}
You have written way too much code before attempting to test it. You have also started shell processes at every opportunity rather than learning the correct way to achieve things in Perl
The first mistake is to use ls to discover what jobs are waiting. ls prints multiple files per line, and you treat the whole of each line as a file name, using the bizarre printf "$RequestFilesArray[$b]" | wc -m instead of length $RequestFilesArray[$b]
Things only get worse after that
I suggest the following
Start again from scratch
Write your program in Perl. Perl isn't a shell language
Advance in very small increments, making sure that your code compiles and does what it is supposed to every three or four lines. It does wonders for the confidence to know that you're enhancing working code rather than creating a magical sequence of random characters
Learn how to debug. You appear to be staring at your code hoping for inspiration to strike in the manner of someone staring at their car engine in the hope of seeing why it won't start
Delete request files as part of processing the request, and only once the request has been processed and the answer file successfully written. It shouldn't be done in a separate loop
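As a hedged illustration of that advice (untested, and keeping the /home/Ex5/Requests and /home/Ex5/Answers layout from the question), the directory scan and the cleanup could look something like this, with no shelling out at all:
# list the waiting request files without running ls
opendir my $dh, '/home/Ex5/Requests' or die "Cannot open Requests: $!";
my @request_files = grep { -f "/home/Ex5/Requests/$_" } readdir $dh;
closedir $dh;

for my $name (@request_files) {
    my $request_path = "/home/Ex5/Requests/$name";
    my $answer_path  = "/home/Ex5/Answers/$name";

    # read the whole request in Perl instead of calling tail and wc
    open my $in, '<', $request_path or die "Cannot read $request_path: $!";
    my @lines = <$in>;
    close $in;

    chomp(my $user_request = $lines[-1]);      # last line, newline removed (no wc -m needed)
    my $request_type    = substr $user_request, 0, 2;
    my $requested_value = substr $user_request, 3;

    # ... dispatch on $request_type and write the answer to $answer_path here ...

    # delete the request only after its answer has been written successfully
    unlink $request_path or warn "Could not remove $request_path: $!";
}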
Taking what you provided, here's some pseudocode I've devised for you that you can use as somewhat of a template. This is by NO MEANS exhaustive. I think the advice @Borodin gave is sound and prudent.
This is all untested and much of the new stuff is pseudocode. However, hopefully, there are some breadcrumbs from which to learn. Also, as I stated above, your use of Proc::Daemon::Init is suspect. At the very least, it is so minimally used that it is gobbling up whatever error(s) is/are occurring and you've no idea what's wrong with the script.
#!/usr/bin/perl -wl
use strict;
use File::Basename;
use File::Spec;
use Proc::Daemon;
use Data::Dumper;
# turn off buffering
$|++;
#FUNCTIONS DEFINITIONS
sub FindAuthor
{
#try to find book by this author in the catalogue
}
sub FindTitle
{
#try to find book with this title in the catalogue
}
sub CheckIfCanBeReturned
{
#check if the book is borrowed and by whom
}
sub tail
{
my $file = shift;
# do work
}
sub find_by
{
my $file = shift;
my $val = shift;
my $by = shift;
my @results;
my $xt = 0;
# sanity check args
# do work
if ( $by eq 'author' )
{
my @results = FindByAuthor(blah);
}
elsif ( $by eq 'blah' )
{
@results = blah();
}
#...etc
# really should use File::Spec IE
my $filename = File::Spec->catfile('home', 'Ex5', 'Answers', $file);
# might be a good idea to either append or validate you're not clobbering
# an existent file here because this is currently clobbering.
open (my $answerFile, '>', $filename) or die "$!";
for ( @results )
{
print $answerFile $_,"\n";
}
close $answerFile;
# have some error checking in place and set $xt to 1 if an error occurs
return $xt;
}
#attempt at daemonization
# whatever this is is completely broken methinks.
#Proc::Daemon::Init;
my $continueWork = 1;
my $r_dir = '/home/user/Requests';
$SIG{TERM} = sub { $continueWork = 0 };
# going with pseudocode
while ( $continueWork )
{
#list all requests currently in the Request folder
my @RequestFilesArray = grep(/[^\.]/, <$r_dir/*>);
#the requests are formatted in such way,
#that the first 2 letters indicate what the client wants to be done
#and the rest is taken parameters used in the processing
for my $request_file ( @RequestFilesArray )
{
my $result = 0;
$request_file = basename($request_file);
my $cut = length($request_file) - 1;
my $work_on = substr $request_file, 0, $cut;
my $UserRequest = tail($request_file);
my $fix = length($UserRequest) - 1;
$UserRequest = substr $UserRequest, 0, $fix;
my $RequestType = substr $UserRequest, 0, 2;
my $RequestedValue = substr $UserRequest, 3;
if ($RequestType eq "fa")
{
#FIND BY AUTHOR
$result = find_by($request_file, $RequestedValue, 'author');
}
elsif ($RequestType eq "ft")
{
#FIND BY TITLE
$result = find_by($request_file, $RequestedValue, 'title');
}
elsif ($RequestType eq "br")
{
#BOOK RETURN
$result = CheckIfCanBeReturned ($RequestedValue, $request_file) or handle();
}
elsif ($RequestType eq "bb")
{
#BOOK BORROW
$result = CheckIfCanBeBorrowed ($RequestedValue, $request_file) or handle();
}
else
{
print STDERR "something went wrong with this request";
}
}
#deleting processed requests
if ( $result == 1 )
{
unlink $work_on;
}
sleep(2);
}
Take special note of my "mild" attempt at DRYing up your code by using the find_by subroutine. You had a LOT of duplicate code in your original script, which I moved into a single subroutine. DRY eq 'Don't Repeat Yourself'.

Keeping count with threads in perl

I'm trying to count whenever a thread is done in Perl and print the count, but this is not working. I keep getting either "0" or "1". I'm trying to add to the count and then print it right after the GET request is made.
use strict;
use threads;
use LWP::UserAgent;
our $MAX //= $ARGV[1];
my $list = $ARGV[0];
open my $handle, '<', $list;
chomp(my @array = <$handle>);
close $handle;
my $lines = `cat $list | wc -l`;
my $count = 0;
my @threads;
foreach $_ (@array) {
    push @threads, async {
        my @chars = ("a".."z");
        my $random = join '', map { @chars[rand @chars] } 1 .. 6;
        my $ua = LWP::UserAgent->new;
        my $url = $_ . '?session=' . $random;
        my $response = $ua->get($url);
        $count++;
        print $count;
    };
    sleep 1 while threads->list( threads::running ) > $MAX;
}
$_->join for @threads;
Just to summarise points in comments by @choroba and myself, and not leave the question without an answer.
You would need to include:
use threads::shared;
in your code, along with all the other use elements.
And to indicate that variable $count is shared:
my $count :shared = 0;
EDIT As per Ikegami's comment, you would have to lock the variable if you want to modify it, to avoid problems of concurrency.
{
    lock($count);
    $count++;
    print $count;
}
And that should be enough for the variable $count to be shared.
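Putting it all together, a minimal, self-contained sketch of the shared-counter pattern (with placeholder work instead of the LWP request) might look like this:
use strict;
use warnings;
use threads;
use threads::shared;

my $count :shared = 0;          # visible to every thread

my @threads = map {
    async {
        # ... the per-URL work (LWP request etc.) would go here ...
        {
            lock($count);       # serialize access so no increment is lost
            $count++;
            print "$count\n";
        }
    };
} 1 .. 5;

$_->join for @threads;
print "total: $count\n";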

How to turn an array of words into an array containing the characters of the words in order?

I have an array of unknown number of words, with an unknown max length.
I need to convert it to another array, basically turning it into a column word array.
so with this original array:
@original_array = ("first", "base", "Thelongest12", "justAWORD4");
The resulting array would be:
@result_array = ("fbTj","iahu","rses","selt","t oA"," nW"," gO"," eR"," sD"," t4"," 1 "," 2 ");
Actually I will have this:
fbTj
iahu
rses
selt
t oA
nW
gO
eR
sD
t4
1
2
I need to do it in order to make a table, and these words are the table's headers.
I hope I have made myself clear, and appreciate the help you are willing to give.
I tried it with the split function, but I keep messing it up...
EDIT:
Hi all, thanks for all the tips and suggestions! I learned quite a lot from all of you, hence the upvote. However, I found tchrist's answer more convenient, maybe because I come from a C/C# background... :)
use strict;
use warnings;
use 5.010;
use Algorithm::Loops 'MapCarU';
my @original_array = ("first", "base", "Thelongest12", "justAWORD4");
my @result_array = MapCarU { join '', map $_//' ', @_ } map [split //], @original_array;
I have an old program that does this. Maybe you can adapt it:
$ cat /tmp/a
first
base
Thelongest12
justAWORD4
$ rot90 /tmp/a
fbTj
iahu
rses
selt
t oA
nW
gO
eR
sD
t4
1
2
Here’s the source:
$ cat ~/scripts/rot90
#!/usr/bin/perl
# rot90 - tchrist@perl.com
$/ = '';
# uncomment for easier to read, but not reversible:
### @ARGV = map { "fmt -20 $_ |" } @ARGV;
while ( <> ) {
    chomp;
    @lines = split /\n/;
    $MAXCOLS = -1;
    for (@lines) { $MAXCOLS = length if $MAXCOLS < length; }
    @vlines = ( " " x @lines ) x $MAXCOLS;
    for ( $row = 0; $row < @lines; $row++ ) {
        for ( $col = 0; $col < $MAXCOLS; $col++ ) {
            $char = ( length($lines[$row]) > $col )
                  ? substr($lines[$row], $col, 1)
                  : ' ';
            substr($vlines[$col], $row, 1) = $char;
        }
    }
    for (@vlines) {
        # uncomment for easier to read, but again not reversible
        ### s/(.)/$1 /g;
        print $_, "\n";
    }
    print "\n";
}
use strict;
use warnings;
use List::Util qw(max);
my @original_array = ("first", "base", "Thelongest12", "justAWORD4");
my @new_array = transpose(@original_array);
sub transpose {
    my @a = map { [ split // ] } @_;
    my $max = max(map $#$_, @a);
    my @out;
    for my $n (0 .. $max) {
        $out[$n] .= defined $a[$_][$n] ? $a[$_][$n] : ' ' for 0 .. $#a;
    }
    return @out;
}
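To see the result of transpose, one could simply print the rows it returns (assuming the code above):
print "$_\n" for @new_array;    # prints fbTj, iahu, rses, ... one row per line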
It can be done easily by this simple one-liner:
perl -le'@l=map{chomp;$m=length if$m<length;$_}<>;for$i(0..$m-1){print map substr($_,$i,1)||" ",@l}'

Sorting files with unordered multi-part key

Using any combination of Linux tools (without going into any full-featured programming language), how can I sort this list
A,C 1
C,B 2
B,A 3
into
A,B 3
A,C 1
B,C 2
Not applying for any beauty contest, this seems to come close:
#!/bin/bash
while read one two; do
one=`echo $one | sed -e 's/,/\n/g' | sort | sed -e '
1 {h; d}
$! {H; d}
H; g; s/\n/,/g;
'`
echo $one $two
done | sort
Change the internal field separator, then compare the first two letters with ">":
(
    IFS=" ,";
    while read a b n; do
        if [ "$a" \> "$b" ]; then
            echo "$b,$a $n";
        else
            echo "$a,$b $n";
        fi;
    done;
) <<EOF | sort
A,C 1
C,B 2
B,A 3
EOF
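The same normalisation also fits in a Perl one-liner piped into sort (a sketch only; input.txt is a placeholder for wherever the list lives):
perl -lane 'my ($x, $y) = sort split /,/, $F[0]; print "$x,$y $F[1]"' input.txt | sort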
In case somebody is interested: I was not really satisfied with any of the suggestions, probably because I hoped for a few-lines solution and such a thing doesn't exist as far as I know.
Anyway, I wrote a utility called ljoin (for "left join", like in databases) which does exactly what I was asking for (of course :D)
#!/usr/bin/perl
=head1 NAME
ljoin.pl - Utility to left join files by specified key column(s)
=head1 SYNOPSIS
ljoin.pl [OPTIONS] <INFILE1>..<INFILEN> <OUTFILE>
To successfully join rows one must supply at least one input file and exactly one output file. Input files can be real file names or a pattern, like [ABC].txt or *.in etc.
=head1 DESCRIPTION
This utility merges multiple files into one using the specified column(s) as a key
=head2 OPTIONS
=item --field-separator=<separator>, -fs <separator>
Specifies what string should be used to separate columns in plain file. Default value for this option is tab symbol.
=item --no-sort-fields, -no-sf
Do not sort columns when creating a key for merging files
=item --complex-key-separator=<separator>, -ks <separator>
Specifies what string should be used to separate multiple values in multikey column. For example "A B" in one file can be presented as "B A" meaning that this application should somehow understand that this is the same key. Default value for this option is space symbol.
=item --no-sort-complex-keys, -no-sk
Do not sort complex column values when creating a key for merging files
=item --include-primary-field, -i
Specifies whether key which is used to find matching lines in multiple files should be included in the output file. First column in output file will be the key in any case, but in case of complex column the value of first column will be sorted. Default value for this option is false.
=item --primary-field-index=<index>, -f <index>
Specifies index of the column which should be used for matching lines. You can use multiple instances of this option to specify a multi-column key made of more than one column like this "-f 0 -f 1"
=item --help, -?
Get help and documentation
=cut
use strict;
use warnings;
use Getopt::Long;
use Pod::Usage;
my $fieldSeparator = "\t";
my $complexKeySeparator = " ";
my $includePrimaryField = 0;
my $containsTitles = 0;
my $sortFields = 1;
my $sortComplexKeys = 1;
my @primaryFieldIndexes;
GetOptions(
"field-separator|fs=s" => \$fieldSeparator,
"sort-fields|sf!" => \$sortFields,
"complex-key-separator|ks=s" => \$complexKeySeparator,
"sort-complex-keys|sk!" => \$sortComplexKeys,
"contains-titles|t!" => \$containsTitles,
"include-primary-field|i!" => \$includePrimaryField,
"primary-field-index|f=i#" => \#primaryFieldIndexes,
"help|?!" => sub { pod2usage(0) }
) or pod2usage(2);
pod2usage(0) if $#ARGV < 1;
push @primaryFieldIndexes, 0 if $#primaryFieldIndexes < 0;
my %primaryFieldIndexesHash;
for(my $i = 0; $i <= $#primaryFieldIndexes; $i++)
{
$primaryFieldIndexesHash{$i} = 1;
}
print "fieldSeparator = $fieldSeparator\n";
print "complexKeySeparator = $complexKeySeparator \n";
print "includePrimaryField = $includePrimaryField\n";
print "containsTitles = $containsTitles\n";
print "primaryFieldIndexes = #primaryFieldIndexes\n";
print "sortFields = $sortFields\n";
print "sortComplexKeys = $sortComplexKeys\n";
my $fieldsCount = 0;
my %keys_hash = ();
my %files = ();
my %titles = ();
# Read columns into a memory
foreach my $argnum (0 .. ($#ARGV - 1))
{
# Find files with specified pattern
my $filePattern = $ARGV[$argnum];
my @matchedFiles = < $filePattern >;
foreach my $inputPath (@matchedFiles)
{
open INPUT_FILE, $inputPath or die $!;
my %lines;
my $lineNumber = -1;
while (my $line = <INPUT_FILE>)
{
next if $containsTitles && $lineNumber == 0;
# Don't use chomp line. It doesn't handle unix input files on windows and vice versa
$line =~ s/[\r\n]+$//g;
# Skip lines that don't have columns
next if $line !~ m/($fieldSeparator)/;
# Split fields and count them (store maximum number of columns in files for later use)
my @fields = split($fieldSeparator, $line);
$fieldsCount = $#fields+1 if $#fields+1 > $fieldsCount;
# Sort complex key
my @multipleKey;
for(my $i = 0; $i <= $#primaryFieldIndexes; $i++)
{
my @complexKey = split ($complexKeySeparator, $fields[$primaryFieldIndexes[$i]]);
@complexKey = sort(@complexKey) if $sortFields;
push @multipleKey, join($complexKeySeparator, @complexKey)
}
# sort multiple keys and create key string
@multipleKey = sort(@multipleKey) if $sortFields;
my $fullKey = join $fieldSeparator, @multipleKey;
$lines{$fullKey} = \@fields;
$keys_hash{$fullKey} = 1;
}
close INPUT_FILE;
$files{$inputPath} = \%lines;
}
}
# Open output file
my $outputPath = $ARGV[$#ARGV];
open OUTPUT_FILE, ">" . $outputPath or die $!;
my @keys = sort keys(%keys_hash);
# Leave blank places for key columns
for(my $pf = 0; $pf <= $#primaryFieldIndexes; $pf++)
{
print OUTPUT_FILE $fieldSeparator;
}
# Print column headers
foreach my $argnum (0 .. ($#ARGV - 1))
{
my $filePattern = $ARGV[$argnum];
my @matchedFiles = < $filePattern >;
foreach my $inputPath (@matchedFiles)
{
print OUTPUT_FILE $inputPath;
for(my $f = 0; $f < $fieldsCount - $#primaryFieldIndexes - 1; $f++)
{
print OUTPUT_FILE $fieldSeparator;
}
}
}
# Print merged columns
print OUTPUT_FILE "\n";
foreach my $key ( @keys )
{
print OUTPUT_FILE $key;
foreach my $argnum (0 .. ($#ARGV - 1))
{
my $filePattern = $ARGV[$argnum];
my @matchedFiles = < $filePattern >;
foreach my $inputPath (@matchedFiles)
{
my $lines = $files{$inputPath};
for(my $i = 0; $i < $fieldsCount; $i++)
{
next if exists $primaryFieldIndexesHash{$i} && !$includePrimaryField;
print OUTPUT_FILE $fieldSeparator;
print OUTPUT_FILE $lines->{$key}->[$i] if exists $lines->{$key}->[$i];
}
}
}
print OUTPUT_FILE "\n";
}
close OUTPUT_FILE;
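A hypothetical invocation, using the options documented in the POD above to join two tab-separated files a.txt and b.txt on their first column (keeping the key in the output) and write the result to merged.txt, might look like:
./ljoin.pl -f 0 -i a.txt b.txt merged.txt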
