I'm running a Perl script that spawns 30 threads, each running a subroutine. For each thread I'm supplying 100 data items. In the subroutine, after the code does what it's supposed to, I store the output in a CSV file. However, on execution I find that some of the data in the CSV file overlaps. For example, I'm writing name, age, gender, and country to the CSV file this way:
print OUTPUT $name.",".$age.",".$gender.",".$country.",4\n";
The CSV file should contain rows like this:
Randy,35,M,USA,4
Tina,76,F,UK,4
etc.
However, in the CSV file I see that some columns have overlapped or have been written haphazardly, like this:
Randy,35,M,USA,4
TinaMike,76,UK
23,F,4
Is it because some threads are executing at the same time? What could I do to avoid this? I'm calling print only after I have the data. Any suggestions?
4 is the group id which will remain constant.
Below is the code snippet:
#!/usr/bin/perl
use DBI;
use strict;
use warnings;
use threads;
use threads::shared;
my $host = "1.1.1.1";
my $database = "somedb";
my $user = "someuser";
my $pw = "somepwd";
my @threads;
open(PUT,">/tmp/file1.csv") || die "can not open file";
open(OUTPUT,">/tmp/file2.csv") || die "can not open file";
my $dbh = DBI->connect("DBI:mysql:$database;host=$host", $user, $pw ,) || die "Could not connect to database: $DBI::errstr";
$dbh->{'mysql_auto_reconnect'} = 1;
my $sql = qq{
# some SQL to get primary keys
};
my $sth = $dbh->prepare($sql);
$sth->execute();
while(my @request = $sth->fetchrow_array())
{
#get other columns and print to file1.csv
print PUT $net.",".$sub.",4\n";
$i++; #this has been declared before
}
for ( my $count = 1; $count <= 30; $count++) {
my $t = threads->new(\&sub1, $count);
push(@threads,$t);
}
foreach (@threads) {
my $num = $_->join;
print "done with $num\n";
}
sub sub1 {
my $num = shift;
# calculate $start_num and $end_num based on an internal logic
for(my $x=$start_num; $x<=$end_num; $x++){
print OUTPUT $name.",".$age.",".$gender.",".$country.",4\n";
$j++; #this has been declared before
}
sleep(1);
return $num;
}
The problem is with file2, which is written through the OUTPUT filehandle.
You are multithreading and printing to a file from multiple threads. This will always end badly - print is not an 'atomic' operation, so different prints can interrupt each other.
What you need to do is serialize your output such that this cannot happen. The simplest way is to use a lock or a semaphore:
my $print_lock : shared;
{
lock $print_lock;
print OUTPUT $stuff,"\n";
}
When the lock goes out of scope, it is released.
Alternatively, have a separate thread that 'does' file IO, and use Thread::Queue to feed lines to it. Depends somewhat on whether you need any ordering/processing of the contents of 'OUTPUT'.
Something like:
use Thread::Queue;
my $output_q = Thread::Queue -> new();
sub output_thread {
open ( my $output_fh, ">", "output_filename.csv" ) or die $!;
while ( my $output_line = $output_q -> dequeue() ) {
print {$output_fh} $output_line,"\n";
}
close ( $output_fh );
}
sub doing_stuff_thread {
$output_q -> enqueue ( "something to output" ); #\n added by sub!
}
my $output_thread = threads -> create ( \&output_thread );
my $doing_stuff_thread = threads -> create ( \&doing_stuff_thread );
#wait for doing_stuff to finish - closing the queue will cause output_thread to flush/exit.
$doing_stuff_thread -> join();
$output_q -> end;
$output_thread -> join();
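Tying this back to the question: sub1 would then enqueue its finished CSV row instead of printing to OUTPUT directly. Roughly (variable names taken from the question, the calculation elided; untested):
sub sub1 {
    my $num = shift;
    # ... calculate $start_num, $end_num and the per-row fields as before ...
    for my $x ( $start_num .. $end_num ) {
        # the output thread adds the newline
        $output_q->enqueue( join( ",", $name, $age, $gender, $country, 4 ) );
    }
    return $num;
}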
Open the file handle globally, then use flock on it as demonstrated (LOCK_EX, LOCK_UN and SEEK_END come from Fcntl):
use Fcntl qw(:flock :seek);

sub log_write {
my $line = shift;
flock(OUTPUT, LOCK_EX) or die "can't lock: $!";
seek(OUTPUT, 0, SEEK_END) or die "can't fast forward: $!";
print OUTPUT $line;
flock(OUTPUT, LOCK_UN) or die "can't unlock: $!";
}
Other example:
perlfaq5 - I still don't get locking. I just want to increment the number in the file. How can I do this?
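That FAQ entry boils down to a lock-protected read-modify-write. A minimal sketch along those lines (untested; the counter file name is arbitrary and must already exist for the '+<' open mode):
use strict;
use warnings;
use Fcntl qw(:flock :seek);

sub increment_counter {
    my ($file) = @_;
    open my $fh, '+<', $file     or die "can't open $file: $!";
    flock $fh, LOCK_EX           or die "can't lock $file: $!";
    my $count = <$fh>;
    $count = 0 unless defined $count;
    seek $fh, 0, SEEK_SET        or die "can't rewind $file: $!";
    truncate $fh, 0              or die "can't truncate $file: $!";
    print {$fh} $count + 1, "\n" or die "can't write $file: $!";
    close $fh                    or die "can't close $file: $!"; # closing releases the lock
    return $count + 1;
}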
I am trying to take a portion of an existing script and have it run multiple nmap scans simultaneously to increase the speed of the script.
I initially tried using fork, but it was suggested to me that I should be using threads instead as I am doing this on a Windows box. I modified a code snippet I found online and it partially works.
I am using a list of 23 IP addresses. I have been able to open 10 threads and scan the first 10 addresses, but then the code exits. Ideally, the code would open a new thread each time one finishes, so that there are always 10 threads running until only the remainder is left (three addresses in this case), at which point only 3 threads would be open.
This entire code needs to be run inside a subroutine that I have in my original sequential code. I am using ping instead of the nmap command to test.
#!/usr/bin/perl
use strict;
use threads;
my $i = 0;
my @lines;
# Define the number of threads
my $num_of_threads = 10;
# use the initThreads subroutine to create an array of threads.
my @threads = initThreads();
my @files = glob( "./ping.txt" ) or die "Can't open CMS HostInventory$!"; # Open the CMS Host Inventory csv files for parsing
foreach my $file ( @files ) {
open (CONFIG, '<', $file) or die "Can't ip360_DNS File$!";
@lines = <CONFIG>;
chomp (@lines);
}
# Loop through the array:
foreach ( @threads ) {
# Tell each thread to perform our 'doOperation()' subroutine.
$_ = threads->create(\&doOperation);
}
# This tells the main program to keep running until all threads have finished.
foreach ( @threads ) {
$_->join();
}
print "\nProgram Done!\nPress Enter to exit";
$a = <>;
####################### SUBROUTINES ############################
sub initThreads{
my @initThreads;
for ( my $i = 1; $i <= $num_of_threads; $i++ ) {
push(@initThreads, $i);
}
return @initThreads;
}
sub doOperation{
# Get the thread id. Allows each thread to be identified.
my $id = threads->tid();
my $ip = ($id - 1);
system("ping $lines[$ip] >> ./input/$lines[$ip].txt");
print "Thread $id done!\n";
# Exit the thread
threads->exit();
}
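For what it's worth, the usual way to keep a fixed number of workers busy until the work runs out is a shared queue rather than one thread per item. A rough, untested sketch using Thread::Queue (the file name, output path and ping command are taken from the question; the rest is an assumption):
#!/usr/bin/perl
use strict;
use warnings;
use threads;
use Thread::Queue;

my $num_of_threads = 10;

# Read the IP list, as in the question
open my $config, '<', './ping.txt' or die "Can't open ping.txt: $!";
chomp( my @ips = <$config> );
close $config;

# Pre-load the queue; each worker pulls the next IP until the queue runs dry
my $q = Thread::Queue->new(@ips);
$q->end();   # nothing more will be queued

my @threads = map {
    threads->create( sub {
        while ( defined( my $ip = $q->dequeue() ) ) {
            system("ping $ip >> ./input/$ip.txt");
            print "Thread ", threads->tid(), " finished $ip\n";
        }
    } );
} 1 .. $num_of_threads;

$_->join() for @threads;
print "\nProgram Done!\n";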
I'm trying to make a program that works like a very simple server in Perl.
The program itself is meant to work as a library catalogue, giving the user options of searching for books by title or author, and borrowing or returning books. The list of books is provided in a separate file.
Basically, it's supposed to take requests (files) from "Requests" folder, process them, and then give answers (also files) in "Answers" folder. After the process is over, it deletes the old requests and repeats the process (answers are deleted by the client after they are accepted).
It's meant to run as a daemon, but for some reason only the loop responsible for deleting the request files works in the background - the requests are not processed into answers, but are just deleted. Whenever a new request appears, it's almost immediately deleted.
I'm learning to use daemons and tried to emulate what is in this thread.
#!/usr/bin/perl
use warnings;
use strict;
use Proc::Daemon;
#FUNCTION DEFINITIONS
sub FindAuthor
{
#try to find book by this author in the catalogue
}
sub FindTitle
{
#try to find book with this title in the catalogue
}
sub CheckIfCanBeReturned
{
#check if the book is borrowed and by whom
}
#attempt at daemonization
Proc::Daemon::Init;
my $continueWork = 1;
$SIG{TERM} = sub { $continueWork = 0 };
while ( $continueWork )
{
sleep(2);
my @RequestFilesArray = `ls /home/Ex5/Requests`;
#list all requests currently in the Request folder
for ( my $b = 0; $b < @RequestFilesArray; $b++)
{
my $cut = `printf "$RequestFilesArray[$b]" | wc -m`;
$cut = $cut - 1;
$RequestFilesArray[$b] = substr $RequestFilesArray[$b], 0, $cut;
}
#the requests are formatted in such way,
#that the first 2 letters indicate what the client wants to be done
#and the rest is taken parameters used in the processing
for (my $i = 0; $i < @RequestFilesArray; $i++)
{
my $UserRequest = `tail -1 Requests/$RequestFilesArray[$i]`;
my $fix = `printf "$UserRequest" | wc -m`;
$fix = $fix - 1;
$UserRequest = substr $UserRequest, 0, $fix;
my $RequestType = substr $UserRequest, 0, 2;
my $RequestedValue = substr $UserRequest, 3;
my $RequestNumber = $i;
if ($RequestType eq "fa")
{
#FIND BY AUTHOR
my @results = FindAuthor ($RequestedValue);
my $filename = "/home/Ex5/Answers/" . $RequestFilesArray[$RequestNumber];
open (my $answerFile, '>', $filename) or die "$!";
for (my $a = 0; $a < @results; $a++)
{
print $answerFile $results[$a],"\n";
}
close $answerFile;
}
elsif ($RequestType eq "ft")
{
#FIND BY TITLE
my @results = FindTitle ($RequestedValue);
my $filename = "/home/Ex5/Answers/" . $RequestFilesArray[$RequestNumber];
open ( my $answerFile, '>', $filename) or die "$!";
for (my $a = 0; $a < @results; $a++)
{
print $answerFile $results[$a],"\n";
}
close $answerFile;
}
elsif ($RequestType eq "br")
{
#BOOK RETURN
my $result = CheckIfCanBeReturned ($RequestedValue, $RequestFilesArray[$RequestNumber]);
my $filename = "/home/Ex5/Answers/" . $RequestFilesArray[$RequestNumber];
open ( my $answerFile, '>', $filename) or die "$!";
print $answerFile $result;
close $answerFile;
}
elsif ($RequestType eq "bb")
{
#BOOK BORROW
my $result = CheckIfCanBeBorrowed ($RequestedValue, $RequestFilesArray[$RequestNumber]);
my $filename = "/home/Ex5/Answers/" . $RequestFilesArray[$RequestNumber];
open ( my $answerFile, '>', $filename) or die "$!";
print $answerFile $result;
close $answerFile;
}
else
{
print "something went wrong with this request";
}
}
#deleting processed requests
for ( my $e = 0; $e < @RequestFilesArray; $e++)
{
my $removeReq = "/home/Ex5/Requests/" . $RequestFilesArray[$e];
unlink $removeReq;
}
#$continueWork =0;
}
You have written way too much code before attempting to test it. You have also started shell processes at every opportunity rather than learning the correct way to achieve things in Perl.
The first mistake is to use ls to discover what jobs are waiting. ls prints multiple files per line, and you treat the whole of each line as a file name, using the bizarre printf "$RequestFilesArray[$b]" | wc -m instead of length $RequestFilesArray[$b].
Things only get worse after that
I suggest the following
Start again from scratch
Write your program in Perl. Perl isn't a shell language
Advance in very small increments, making sure that your code compiles and does what it is supposed to every three or four lines. It does wonders for the confidence to know that you're enhancing working code rather than creating a magical sequence of random characters
Learn how to debug. You appear to be staring at your code hoping for inspiration to strike in the manner of someone staring at their car engine in the hope of seeing why it won't start
Delete request files as part of processing the request, and only once the request has been processed and the answer file successfully written. It shouldn't be done in a separate loop
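To illustrate the "Perl, not shell" point: the whole ls / printf | wc -m machinery reduces to readdir and length. A rough sketch only, using the paths from the question:
use strict;
use warnings;

my $request_dir = '/home/Ex5/Requests';

# List the waiting request files without shelling out to ls
opendir my $dh, $request_dir or die "Can't open $request_dir: $!";
my @request_files = grep { -f "$request_dir/$_" } readdir $dh;
closedir $dh;

for my $file (@request_files) {
    # length() does what `printf "$file" | wc -m` was doing, and readdir adds no newline to trim
    printf "%s (%d characters)\n", $file, length $file;
}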
Taking what you provided, here's some pseudocode I've devised for you that you can use as somewhat of a template. This is by NO MEANS exhaustive. I think the advice @Borodin gave is sound and prudent.
This is all untested and much of the new stuff is pseudocode. However, hopefully, there are some breadcrumbs from which to learn. Also, as I stated above, your use of Proc::Daemon::Init is suspect. At the very least, it is so minimally used that it is gobbling up whatever error(s) is/are occurring and you've no idea what's wrong with the script.
#!/usr/bin/perl -wl
use strict;
use File::Basename;
use File::Spec;
use Proc::Daemon;
use Data::Dumper;
# turn off buffering
$|++;
#FUNCTION DEFINITIONS
sub FindAuthor
{
#try to find book by this author in the catalogue
}
sub FindTitle
{
#try to find book with this title in the catalogue
}
sub CheckIfCanBeReturned
{
#check if the book is borrowed and by whom
}
sub tail
{
my $file = shift;
# do work
}
sub find_by
{
my $file = shift;
my $val = shift;
my $by = shift;
my @results;
my $xt = 0;
# sanity check args
# do work
if ( $by eq 'author' )
{
@results = FindByAuthor(blah);
}
elsif ( $by eq 'blah' )
{
@results = blah();
}
#...etc
# really should use File::Spec, i.e.
my $filename = File::Spec->catfile('/home', 'Ex5', 'Answers', $file);
# might be a good idea to either append or validate you're not clobbering
# an existent file here because this is currently clobbering.
open (my $answerFile, '>', $filename) or die "$!";
for ( @results )
{
print $answerFile $_,"\n";
}
close $answerFile;
# have some error checking in place and set $xt to 1 if an error occurs
return $xt;
}
#attempt at daemonization
# whatever this is is completely broken methinks.
#Proc::Daemon::Init;
my $continueWork = 1;
my $r_dir = '/home/Ex5/Requests';
$SIG{TERM} = sub { $continueWork = 0 };
# going with pseudocode
while ( $continueWork )
{
#list all requests currently in the Request folder
my @RequestFilesArray = grep(/[^\.]/, <$r_dir/*>);
#the requests are formatted in such way,
#that the first 2 letters indicate what the client wants to be done
#and the rest is taken parameters used in the processing
for my $request_file ( @RequestFilesArray )
{
my $result = 0;
$request_file = basename($request_file);
my $cut = length($request_file) - 1;
my $work_on = substr $request_file, 0, $cut;
my $UserRequest = tail($request_file);
my $fix = length($UserRequest) - 1;
$UserRequest = substr $UserRequest, 0, $fix;
my $RequestType = substr $UserRequest, 0, 2;
my $RequestedValue = substr $UserRequest, 3;
if ($RequestType eq "fa")
{
#FIND BY AUTHOR
$result = find_by($request_file, $RequestedValue, 'author');
}
elsif ($RequestType eq "ft")
{
#FIND BY TITLE
$result = find_by($request_file, $RequestedValue, 'title');
}
elsif ($RequestType eq "br")
{
#BOOK RETURN
$result = CheckIfCanBeReturned ($RequestedValue, $request_file) or handle();
}
elsif ($RequestType eq "bb")
{
#BOOK BORROW
$result = CheckIfCanBeBorrowed ($RequestedValue, $request_file) or handle();
}
else
{
print STDERR "something went wrong with this request";
}
#deleting processed requests; $result and $work_on are only in scope inside the loop,
#so do the cleanup here, after the request has been handled
if ( $result == 1 )
{
unlink File::Spec->catfile($r_dir, $work_on);
}
}
sleep(2);
}
Take special note of my "mild" attempt at DRYing up your code by using the find_by subroutine. You had a LOT of duplicate code in your original script, which I moved into a single subroutine. DRY eq 'Don't Repeat Yourself'.
Let's say we have a script which opens a file, then reads it line by line and prints each line to the terminal. We have a single-threaded and a multithreaded version.
The problem is that the resulting output of both scripts is almost the same, but not exactly. In the multithreaded version there are about ten lines which are missing the first 2 chars. I mean, if the real line is something like "Stackoverflow rocks", I obtain "ackoverflow rocks".
I think this is related to some race condition, since if I adjust the parameters to create a lot of little workers, I get more faults than if I use fewer, bigger workers.
The single-threaded version is like this:
$file = "some/file.txt";
open (INPUT, $file) or die "Error: $!\n";
while ($line = <INPUT>) {
print $line;
}
The multithreaded version makes use of Thread::Queue, and this implementation is based on the @ikegami approach:
use threads qw( async );
use Thread::Queue 3.01 qw( );
use constant NUM_WORKERS => 4;
use constant WORK_UNIT_SIZE => 100000;
sub worker {
my ($job) = @_;
for (@$job) {
print $_;
}
}
my $q = Thread::Queue->new();
async { while (defined( my $job = $q->dequeue() )) { worker($job); } }
for 1..NUM_WORKERS;
my $done = 0;
while (!$done) {
my @lines;
while (@lines < WORK_UNIT_SIZE) {
my $line = <>;
if (!defined($line)) {
$done = 1;
last;
}
push @lines, $line;
}
$q->enqueue(\@lines) if @lines;
}
$q->end();
$_->join for threads->list;
I tried your program and got similar (wrong) results. Instead of Thread::Semaphore I used lock from threads::shared around the print as it's simpler to use than T::S, i.e.:
use threads;
use threads::shared;
...
my $mtx : shared;
sub worker
{
my ($job) = @_;
for (@$job) {
lock($mtx); # (b)locks
print $_;
# autom. unlocked here
}
}
...
The global variable $mtx serves as a mutex. Its value doesn't matter, even undef (like here) is ok.
The call to lock blocks and returns only if no other threads currently holds the lock on that variable.
It automatically unlocks (and thus makes lock return) when it goes out of scope. In this sample that happens
after every single iteration of the for loop; there's no need for an extra {…} block.
Now we have synchronized the print calls…
But this didn't work either, because print does buffered I/O (well, only O). So I forced unbuffered output:
use threads;
use threads::shared;
...
my $mtx : shared;
$| = 1; # force unbuffered output
sub worker
{
# as above
}
...
and then it worked. To my surprise I could then remove the lock and it still worked. Perhaps by accident. Note that your script will run significantly slower without buffering.
My conclusion is: you're suffering from buffering.
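One side note, since $| only affects the currently selected handle (STDOUT here): if the workers ever print to some other handle, the per-handle equivalent is autoflush. A small sketch (the log file name is made up):
use strict;
use warnings;
use IO::Handle;   # provides autoflush(); recent perls load it on demand anyway

STDOUT->autoflush(1);   # same effect as $| = 1 for STDOUT

open my $log_fh, '>', 'out.log' or die "can't open out.log: $!";
$log_fh->autoflush(1);  # autoflush for just this handle
print {$log_fh} "this line is flushed immediately\n";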
My Expectation:
I have to use the following command to send the value of the first argument to all the files calling the perl.pl file:
./perl.pl 1
The 1 is read using the following file (perl.pl):
#!/usr/bin/perl
package Black;
use strict;
use warnings;
#subroutines
sub get_x();
#variables
our $XE = -1;
my ($param1, $param2, $param3) = @ARGV;
my $x = get_x();
sub get_x()
{
$XE = $param1;
return $XE;
}
exit;
Then I wrote another script which runs some code based on the input to perl.pl (0 or 1).
The file is ./per.pl and I invoke it from the Linux terminal like this: ./per.pl
Here is the code I wrote for it:
#!/usr/bin/perl
require "perl.pl";
my $xd = Black::get_x();
if ($xd ==1){
print $xd;}
else {
print "5";
}
exit;
But this is what I get when I write these commands:
./perl.pl 1
I tried to print it and it prints 1 (I removed that print line from the code in this case).
./per.pl
And now I get nothing. I would like the 1 to be printed out, but it isn't.
Thanks in Advance
Before we get started, you cannot possibly get the output you say you get because you tell the process to exit when the module is executed by require, so Black::get_x() is never reached. Change exit; to 1;.
Now on to your question. If I understand correctly, you want to pass a value to one process via its command line, and fetch that value by executing the same script without the parameter.
You did not even attempt to pass the variable from one process to another, so it shouldn't be a surprise that it doesn't work. Since the two processes don't even exist at the same time, you'll need to store the value somewhere such as the file system.
#!/usr/bin/perl
use strict;
use warnings;
my $conf_file = "$ENV{HOME}/.black";
my $default = -1;
sub store {
my ($val) = @_;
open(my $fh, '>', $conf_file) or die $!;
print $fh "$val\n";
return $val;
}
sub retrieve {
open(my $fh, '<', $conf_file)
or do {
return $default if $!{ENOENT};
die $!;
};
my $val = <$fh>;
chomp($val);
return $val;
}
my $xd = @ARGV ? store($ARGV[0]) : retrieve();
print("$xd\n");
I have a text file mapping of two integers, separated by commas:
123,456
789,555
...
It's 120Megs... so it's a very long file.
I keep having to search for the first column and return the second, e.g., look up 789 --returns--> 555, and I need to do it FAST, using regular Linux built-ins.
I'm doing this right now and it takes several seconds per look-up.
If I had a database I could index it. I guess I need an indexed text file!
Here is what I'm doing now:
my $lineFound=`awk -F, '/$COLUMN1/ { print $2 }' ../MyBigMappingFile.csv`;
Is there any easy way to pull this off with a performance improvement?
The hash suggestions are the natural way an experienced Perler would do this, but it may be suboptimal in this case. It scans the entire file and builds a large, flat data structure in linear time. Cruder methods can short-circuit with a worst case of linear time, and usually less in practice.
I first made a big mapping file:
my $LEN = shift;
for (1 .. $LEN) {
my $rnd = int rand( 999 );
print "$_,$rnd\n";
}
With $LEN passed on the command line as 10000000, the file came out to 113MB. Then I benchmarked three implementations. The first is the hash lookup method. The second slurps the file and scans it with a regex. The third reads line-by-line and stops when it matches. Complete implementation:
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw{timethese};
my $FILE = shift;
my $COUNT = 100;
my $ENTRY = 40;
slurp(); # Initial file slurp, to get it into the hard drive cache
timethese( $COUNT, {
'hash' => sub { hash_lookup( $ENTRY ) },
'scalar' => sub { scalar_lookup( $ENTRY ) },
'linebyline' => sub { line_lookup( $ENTRY ) },
});
sub slurp
{
open( my $fh, '<', $FILE ) or die "Can't open $FILE: $!\n";
local $/;
my $s = <$fh>;
close $fh;
return $s;
}
sub hash_lookup
{
my ($entry) = @_;
my %data;
open( my $fh, '<', $FILE ) or die "Can't open $FILE: $!\n";
while( <$fh> ) {
my ($name, $val) = split /,/;
$data{$name} = $val;
}
close $fh;
return $data{$entry};
}
sub scalar_lookup
{
my ($entry) = @_;
my $data = slurp();
my ($val) = $data =~ /^ $entry , (\d+) $/xm;
return $val;
}
sub line_lookup
{
my ($entry) = @_;
my $found;
open( my $fh, '<', $FILE ) or die "Can't open $FILE: $!\n";
while( <$fh> ) {
my ($name, $val) = split /,/;
if( $name == $entry ) {
$found = $val;
last;
}
}
close $fh;
return $found;
}
Results on my system:
Benchmark: timing 100 iterations of hash, linebyline, scalar...
hash: 47 wallclock secs (18.86 usr + 27.88 sys = 46.74 CPU) @ 2.14/s (n=100)
linebyline: 47 wallclock secs (18.86 usr + 27.80 sys = 46.66 CPU) @ 2.14/s (n=100)
scalar: 42 wallclock secs (16.80 usr + 24.37 sys = 41.17 CPU) @ 2.43/s (n=100)
(Note I'm running this off an SSD, so I/O is very fast, and perhaps makes that initial slurp() unnecessary. YMMV.)
Interestingly, the hash implementation is just as fast as linebyline, which isn't what I expected. By using slurping, scalar may end up being faster on a traditional hard drive.
However, by far the fastest is a simple call to grep:
$ time grep '^40,' int_map.txt
40,795
real 0m0.508s
user 0m0.374s
sys 0m0.046
Perl could easily read that output and split apart the comma in hardly any time at all.
Edit: Never mind about grep. I misread the numbers.
120 meg isn't that big. Assuming you've got at least 512MB of ram, you could easily read the whole file into a hash and then do all of your lookups against that.
use:
sed -n "/^$COLUMN1/{s/.*,//p;q}" file
This optimizes your code in three ways:
1) No needless splitting of each line in two on ",".
2) You stop processing the file after the first hit.
3) sed is faster than awk.
This should more than halve your search time.
HTH Chris
It all depends on how often the data change and how often in the course of a single script invocation you need to look up.
If there are many lookups during each script invocation, I would recommend parsing the file into a hash (or array if the range of keys is narrow enough).
If the file changes every day, creating a new SQLite database might or might not be worth your time.
If each script invocation needs to look up just one key, and if the data file changes often, you might get an improvement by slurping the entire file into a scalar (minimizing memory overhead) and doing a pattern match on that (instead of parsing each line).
#!/usr/bin/env perl
use warnings; use strict;
die "Need key\n" unless #ARGV;
my $lookup_file = 'lookup.txt';
my ($key) = #ARGV;
my $re = qr/^$key,([0-9]+)$/m;
open my $input, '<', $lookup_file
or die "Cannot open '$lookup_file': $!";
my $buffer = do { local $/; <$input> };
close $input;
if (my ($val) = ($buffer =~ $re)) {
print "$key => $val\n";
}
else {
print "$key not found\n";
}
On my old slow laptop, with a key towards the end of the file:
C:\Temp> dir lookup.txt
...
2011/10/14 10:05 AM 135,436,073 lookup.txt
C:\Temp> tail lookup.txt
4522701,5840
5439981,16075
7367284,649
8417130,14090
438297,20820
3567548,23410
2014461,10795
9640262,21171
5345399,31041
C:\Temp> timethis lookup.pl 5345399
5345399 => 31041
TimeThis : Elapsed Time : 00:00:03.343
This example loads the file into a hash (which takes about 20s for 120M on my system). Subsequent lookups are then nearly instantaneous. This assumes that each number in the left column is unique. If that's not the case then you would need to push numbers on the right with the same number on the left onto an array or something.
use strict;
use warnings;
my ($csv) = @ARGV;
my $start=time;
open(my $fh, $csv) or die("$csv: $!");
$|=1;
print("loading $csv... ");
my %numHash;
my $p=0;
while(<$fh>) { $p+=length; my($k,$v)=split(/,/); $numHash{$k}=$v }
print("\nprocessed $p bytes in ",time()-$start, " seconds\n");
while(1) { print("\nEnter number: "); chomp(my $i=<STDIN>); print($numHash{$i}) }
Example usage and output:
$ ./lookup.pl MyBigMappingFile.csv
loading MyBigMappingFile.csv...
processed 125829128 bytes in 19 seconds
Enter number: 123
322
Enter number: 456
93
Enter number:
Does it help if you cp the file to /dev/shm and then query the mapping there with awk/sed/perl/grep/ack/whatever?
Don't tell me you are working on a machine with 128MB of RAM. :)
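For example (the copy step and the lookup key 789 are just for illustration), a Perl one-liner against the in-memory copy:
cp MyBigMappingFile.csv /dev/shm/
perl -F, -lane 'if ($F[0] == 789) { print $F[1]; exit }' /dev/shm/MyBigMappingFile.csv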