How to save a table to a file from Lua - io

I'm having trouble printing a table to a file with Lua (and I'm new to Lua).
Here's some code I found here to print the table:
function print_r ( t )
    local print_r_cache={}
    local function sub_print_r(t,indent)
        if (print_r_cache[tostring(t)]) then
            print(indent.."*"..tostring(t))
        else
            print_r_cache[tostring(t)]=true
            if (type(t)=="table") then
                for pos,val in pairs(t) do
                    if (type(val)=="table") then
                        print(indent.."["..pos.."] => "..tostring(t).." {")
                        sub_print_r(val,indent..string.rep(" ",string.len(pos)+8))
                        print(indent..string.rep(" ",string.len(pos)+6).."}")
                    elseif (type(val)=="string") then
                        print(indent.."["..pos..'] => "'..val..'"')
                    else
                        print(indent.."["..pos.."] => "..tostring(val))
                    end
                end
            else
                print(indent..tostring(t))
            end
        end
    end
    if (type(t)=="table") then
        print(tostring(t).." {")
        sub_print_r(t," ")
        print("}")
    else
        sub_print_r(t," ")
    end
    print()
end
I have no idea where the 'print' command goes to; I'm running this Lua code from within another program. What I would like to do is save the table to a .txt file. Here's what I've tried:
function savetxt ( t )
    local file = assert(io.open("C:\temp\test.txt", "w"))
    file:write(t)
    file:close()
end
Then in the print_r function I've changed every place it says 'print' to 'savetxt'. This doesn't work; it doesn't seem to access the text file in any way. Can anyone suggest an alternative method?
I have a suspicion that this line is the problem:
local file = assert(io.open("C:\temp\test.txt", "w"))
Update;
I have tried the edit suggested by Diego Pino but still no success. I run this Lua script from another program (for which I don't have the source), so I'm not sure where the default directory of the output file might be (is there a method to get this programmatically?). Is it possible that, since this is called from another program, something is blocking the output?
Update #2;
It seems like the problem is with this line:
local file = assert(io.open("C:\test\test2.txt", "w"))
I've tried changing it to "C:\temp\test2.text", but that didn't work. I'm pretty confident this line is the error at this point. If I comment out any line after this (but leave this line in) then it still fails; if I comment out this line (and any following 'file' lines) then the code runs. What could be causing this error?

I have no idea where the 'print' command goes to,
print() normally writes to the process's standard output; io.write() writes to Lua's default output file, which you can change with io.output([file]). See the Lua manual for details on querying and changing the default output.
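A minimal sketch of redirecting the default output (the file name is illustrative):
local f = assert(io.open("dump.txt", "w"))
io.output(f)               -- make f the default output file
io.write("captured\n")     -- goes to dump.txt rather than the console
io.output(io.stdout)       -- restore the default
f:close()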
where do files get created if I don't specify the directory
Typically it will land in the current working directory.
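As for querying it programmatically: plain Lua has no built-in call for the working directory, but if the host program ships the LuaFileSystem module, a sketch:
local lfs = require("lfs")   -- LuaFileSystem, if the host provides it
print(lfs.currentdir())      -- absolute path of the current working directory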

Your print_r function prints a table to stdout. What you want is to send the output of print_r to a file instead. Change the print_r function so that instead of printing to stdout, it writes to a file handle. Perhaps the easiest way to do that is to pass a file handle to print_r and override the print function:
function print_r (t, fd)
    fd = fd or io.stdout
    local function print(str)
        str = str or ""
        fd:write(str.."\n")
    end
    ...
end
The rest of print_r doesn't need any changes.
Later, in savetxt, call print_r to print the table to a file:
function savetxt (t)
    -- note: backslashes in Lua string literals must be escaped ("\t" is a tab)
    local file = assert(io.open("C:\\temp\\test.txt", "w"))
    print_r(t, file)
    file:close()
end

require("json")
result = {
["ip"]="192.168.0.177",
["date"]="2018-1-21",
}
local test = assert(io.open("/tmp/abc.txt", "w"))
result = json.encode(result)
test:write(result)
test:close()
local test = io.open("/tmp/abc.txt", "r")
local readjson= test:read("*a")
local table =json.decode(readjson)
test:close()
print("ip: " .. table["ip"])
2. Another way: use the save/load functions from the lua-users wiki:
http://lua-users.org/wiki/SaveTableToFile
Save Table to File:
function table.save( tbl, filename )
Load Table from File:
function table.load( sfile )
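A hedged usage sketch, assuming the table.save/table.load definitions from that wiki page have been pasted into your script (per the page, save returns an error message on failure and nil on success; load returns the table, or nil plus an error):
local t = { ip = "192.168.0.177", date = "2018-1-21" }
local err = table.save(t, "/tmp/abc.lua")
assert(not err, err)
local loaded, err2 = table.load("/tmp/abc.lua")
assert(loaded, err2)
print(loaded.ip)   --> 192.168.0.177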

Related

perl untar single file

I'm running into an issue with my code here and I'm not sure what exactly I'm doing wrong. I pass it the two arguments; it searches for the file, but it always falls through to "does not exist".
I pass this to the file:
perl restore.cgi users_old_52715.tar.gz Ace_Walker
It's not finding the file. The file exists, I assure you.
#!/usr/bin/perl
use Archive::Tar;

my $tarPath = $ARGV[0];
my $playerfile = $ARGV[1].".ini";
my $tar = Archive::Tar->new($tarPath);
if ($tar->contains_file($playerfile)) {
    $tar->read($tarPath);
    $tar->extract_file($playerfile, './' );
    print "Successfully restored $playerfile to production environment\n";
    exit 0;
} else {
    print $playerfile." does not exist in this archive!\n";   # was misspelled $playefile, which printed an empty name
    exit 0;
}
Just writing Scott Hunter's comment as an answer:
Try using an absolute path instead of a relative one.
if( $tar->extract_file($playerfile, './'.$playerfile )){
    print "Successfully restored $playerfile to production environment\n";
}
exit 0;
man Archive::Tar:
$tar->extract_file( $file, [$extract_path] )
Write an entry, whose name is equivalent to the file name provided to disk. Optionally takes a second parameter, which is the full native path (including filename) the entry will be written to.

How to modify a perl script to read excel instead of Html files

My first question is:
Is it possible to do this? Right now I have a Perl script which reads an HTML file and extracts data to display on another HTML page.
If the answer to the question above is yes, my second question would be:
How do I do it?
Sorry to ask as bluntly as this, but since I'm so new to Perl and I have to take on this task, I'm here for useful advice or suggestions to guide me through it. I appreciate your help in advance.
Here's a part of the code, since the whole chunk is quite long:
$date=localtime();
($TWDAY, $TMTH, $TD1D, $TSE, $TYY) = split(/\s+/, $date);
$TSE =~ s/\://g;
$STAMP=_."$TD1D$TMTH$TYY";
@ServerInfo=();
#----------------------------------------------------------------------------------------------------------
# Read Directory
#----------------------------------------------------------------------------------------------------------
$myDir=getcwd;
#----------------------------------------------------------------------------------------------------------
# INITIALIZE HTML FORMAT
#----------------------------------------------------------------------------------------------------------
&HTML_FORMAT;
#----------------------------------------------------------------------------------------------------------
# REPORT
#----------------------------------------------------------------------------------------------------------
if (! -d "$myDir/report") { mkdir("$myDir/report"); }
$REPORTFILE="$myDir/report/checkpack".".htm";
open OUT,">$REPORTFILE" or die "\nCannot open out file $REPORTFILE\n\n";
print OUT "$Tag_Header";
#----------------------------------------------------------------------------------------------------------
sub numSort {
    if    ($b < $a)  { return -1; }
    elsif ($a == $b) { return 0;  }
    elsif ($b > $a)  { return 1;  }
}
@ArrayDir = sort numSort @DirArray;
#while (<@ArrayDir>) {
@OutputDir=grep { -f and -T } glob "$myDir/*.htm $myDir/*.html";
#}
#----------------------------------------------------------------------------------------------------------
@ReadLine3=();
$xyxycnt=0;
foreach $InputFile (@OutputDir) { #---- MAIN
    $filename=(split /\//, $InputFile)[-1]; print "-"x80; print "\nFilename\t:$filename\n";
    open IN, "<$InputFile" or die "Cannot open Input file $InputFile\n";
    @MyData=();
    $DataCnt=0;
    @MyLine=();
    $MyLineCnt=0;
    while (<IN>) {
        $LINE=$_;
        chomp($LINE);
        $LINE=~s/\<br\>/XYXY/ig;
        $LINE=~s/\<\/td\>/ \nXYZXYZ\n/ig;
        $LINE=~s/\<dirname\>/xxxdirnameyyy/ig;
        $LINE=linetrim3($LINE);
        $LINE=linetrim($LINE);
        $LINE=~s/XYXY/\<br\>/ig;
        $LINE=~s/xxxdirnameyyy/&lt dirname &gt/ig;
        $LINE=~s/^\s+//ig;
        print OUT2 "$LINE\n";
        if (defined($LINE)) { $MyData[$DataCnt]="$LINE"; $DataCnt++; }
    }
    close IN;
    foreach $ReadFile (@MyData) { #--- Mydata
        $MyLineCnt++;
        $MyLine[$MyLineCnt]="";
        #### FILENAME
        $ServerInfo[0]="$filename";
        #### IP ADDRESS
        if ($ReadFile =~ /Host\/Device Name\:/) {
            #print "$ReadFile\n"
            ($Hostname)=(split /\:|\s+/, $ReadFile)[3]; print "$Hostname\n";
            &myServerInfo("$Hostname","1");
        }
        if ($ReadFile =~ /IP Address\(es\)/) { @ListIP=(); $SwIP=1; $CntIP=0; }
        #### OPERATING SYSTEM & VERSION
        if ($ReadFile =~ /Operating System\:/) {
            $SwIP=0;
            $OS= (split /\:|\s+/, $ReadFile)[3];     &myServerInfo("$OS","3");    print "$OS\n";
            $OSVer= (split /\:|\s+/, $ReadFile)[-2]; &myServerInfo("$OSVer","4"); print "$OSVer\n";
        }
        #### GET IP VALUE
        if ($SwIP==1) {
            $ReadFile=(split /\:/,$ReadFile)[2];
            $ReadFile=~s/[a-z|A-Z]|\(|\)|\// /ig; print "$ReadFile\n";
            if ($CntIP==0) {
                #$ListIP[$CntIP]=(split /\s+/,$ReadFile) [1];
                @ListIP="$ReadFile";
            } elsif ($CntIP==1) { print "\n\t\t $ReadFile\n"; $ListIP[$CntIP]="\n$ReadFile";
            } else              { print "\t\t $ReadFile\n";   $ListIP[$CntIP]="\n$ReadFile"; }
            $CntIP++;
        }
I'm afraid if you don't understand what is going on in this program and you also don't understand how to approach a task like this at all, Stack Overflow might not be the right place to get help.
Let me try to show you the approach I would take with this. I'm assuming there is more code.
First, write down a list of everything you know:
What is the input format of the existing file
Where does the existing file come from now
What is the output format of the existing file
Where does the generated output file go afterwards
What does the new file look like
Where does the new file come from
Use perltidy to indent the inherited code so you can read it better. The default options should be enough.
Read the code, take notes about what pieces do what, add comments
Write a unit test for the desired output format. You can use Test::More. Another useful testing module here is Test::File.
Refactor the part that generated the output format to work with a certain data structure. Use your tests to make sure you don't break it.
Write code to parse the new file into the data structure from the point above. Now you can plug that in and get the expected output.
Refactor the part that takes the old input file from the existing file location to be a function, so you can later switch it for the new one.
Write code to get the new file from the new file location.
Document what you did so the next guy is not in the same situation. Remember that could be you in half a year.
Also add use strict and use warnings while you refactor to catch errors more easily. If stuff breaks because of that, make it work before you continue. Those pragmas tell you what's wrong. The most common one you will encounter is Global symbol "$foo" requires explicit package name. That means you need to put my in front of the first assignment, or declare the variable before.
If you have specific questions, ask them as a new question with a short example. Read how to ask to make sure you will get help on those.
Good luck!
After seeing your comment I am thinking you want a different input and a different output. In that case, disregard this, throw away the old code and start from scratch. If you don't know enough Perl, get a book like Curtis Poe's Beginning Perl if you already know programming. If not, check out Learning Perl by Randal L. Schwartz.

Lua script unable to detect/catch error while executing invalid linux command

I have the following function that works fine as long as I give it a valid command to execute. As soon as I give it a non-existent command, the script is interrupted with an error message.
#!/usr/bin/lua
function exec_com(com)
    local ok,res=pcall(function() return io.popen(com) end)
    if ok then
        local tmp=res:read('*a')
        res:close()
        return ok,tmp
    else
        return ok,res
    end
end

local st,val=exec_com('uptime')
print('Executed "uptime" with status:'..tostring(st)..' and value:'..val)
st,val=exec_com('zzzz')
print('Executed "zzzz" with status:'..tostring(st)..' and value:'..val)
When I run the script above I get the following output:
Executed "uptime" with status:true and value: 18:07:38 up 1 day, 23:00, 3 users, load average: 0.37, 0.20, 0.20
sh: zzzz: command not found
Executed "zzzz" with status:true and value:
You can clearly see above that the pcall() function still reported success when executing "zzzz", which is odd.
Can someone help me devise a way to catch an exception when executing a non-existent or ill-formed Linux command using Lua script? Thanks.
Edit: Restated my request after getting the clarification that pcall() works as expected, and the problem is due to popen() failing to throw an error.
I use a method which is similar to your "temporary workaround" but which gives you more information:
local cmd = "uptime"
local f = io.popen(cmd .. " 2>&1 || echo ::ERROR::", "r")
local text = f:read "*a"
if text:find "::ERROR::" then
-- something went wrong
print("error: " .. text)
else
-- all is fine!!
print(text)
end
If you look at io.popen(), you'll see that it'll always return a file handle.
Starts program prog in a separated process and returns a file handle
that you can use to read data from this program (if mode is "r", the
default) or to write data to this program (if mode is "w").
Since the returned file handle is a perfectly valid Lua value, the function inside your pcall returns it without raising anything, so no error is propagated; pcall therefore gives you a true status, and the failed command simply produces empty output.
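A related point, assuming Lua 5.2 or newer: closing a handle created by io.popen returns the command's exit status, so you can detect failure without inventing marker strings. A minimal sketch:
-- Lua 5.2+: close() on a popen handle reports the exit status
local function exec_com2(com)
    local f = io.popen(com .. ' 2>&1')   -- merge stderr so the message is captured
    local out = f:read('*a')
    local ok, how, code = f:close()      -- e.g. true, "exit", 0 on success
    return ok == true, out, code
end
print(exec_com2('zzzz'))   --> false   sh: zzzz: command not found   127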
I have come up with my own temporary workaround that pipes the error to /dev/null and determines the success or failure of the executed command based on the text received from the io.popen():read('*a') call.
Here is my new code:
#!/usr/bin/lua
function exec_com(com)
    local res=io.popen(com..' 2>/dev/null')
    local tmp=res:read('*a')
    res:close()
    if string.len(tmp)>0 then
        return true,tmp
    else
        return false,'Error executing command: '..com
    end
end

local st,val=exec_com('uptime')
print('Executed "uptime" with status:'..tostring(st)..' and value:'..val)
st,val=exec_com('cat /etc/shadow')
print('Executed "cat /etc/shadow" with status:'..tostring(st)..' and value:'..val)
And the corresponding output is now correct:
Executed "uptime" with status:true and value: 00:10:11 up 2 days, 5:02, 3 users, load average: 0.01, 0.05, 0.19
Executed "cat /etc/shadow" with status:false and value:Error executing command: cat /etc/shadow
In my example above I am creating a "generic" error description. This is an intermediate fix, and I am still interested in seeing alternative solutions that can return a more meaningful error message describing why the command failed to execute.
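One hedged alternative, for POSIX shells only: merge stderr into the captured output and append the exit code on a marker line, then parse the marker back off. A sketch:
function exec_com(com)
    local f = io.popen(com .. ' 2>&1; echo "EXIT:$?"')
    local out = f:read('*a')
    f:close()
    local code = tonumber(out:match('EXIT:(%d+)%s*$'))
    out = out:gsub('EXIT:%d+%s*$', '')   -- strip the marker line
    if code == 0 then
        return true, out
    else
        return false, 'Command failed ('..code..'): '..out
    end
end
print(exec_com('zzzz'))   --> false   Command failed (127): sh: zzzz: command not found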
Rather than taking the time to read the whole file into a variable, why not just check whether there is anything to read with f:read(0)?
local f = io.popen("NotExist")
if f:read(0) then   -- "" (truthy) if there is output, nil at end of file
    for l in f:lines() do
        print(l)
    end
else
    error("Command Does Not Exist")
end
From the Lua manual:
As a special case, io.read(0) works as a test for end of file: It returns an empty string if there is more to be read or nil otherwise.

Pattern match data from file in Lua

I've been tasked with creating a new server modification for Crysis Wars. I have run into a particular issue: the mod cannot read the old ban file (which is required in order to keep the server consistent). The Lua code itself does not seem to have any errors, but it's just not getting any of the data.
Looking at the code below, can you find anything wrong with it? This is what I'm using:
function rX.CheckBanlist(player)
    local Root = System.GetCVar("sys_root");
    local File = ""..Root.."System/Bansystem/Raptor.xml";
    local FileHnd = io.open(File, "r");
    for line in FileHnd:lines() do
        if (not string.find(line, "User:Read")) then
            System.Log("[rX] File Read Error: System/Raptor/Banfile.xml, The contents are unexpected.");
            return false;
        end
        local Msg, Date, Reason, Type, Domain = string.match(line, "User:Read( '(.*)', { Date='(.*)'; Reason='(.*)'; Typ='(.*)'; Info='(.*)'; } );");
        local rldomain = g_gameRules.game:GetDomain(player.id);
        if (Domain == rldomain) then
            return true;
        else
            return false;
        end
    end
end
Also, the actual file reads as follows, but I can't get the double quotes (") to work in the Lua pattern properly. Could this be the issue?
User:Read( "Banned", { Date="31.03.2011"; Reason="WEBSTREAM"; Typ="Inetnum"; Info="COMPUTER.SED.gg"; } );
You may prefer Lua's [[ ]] long strings when you want to include quotes inside quotes etc.
Also, you have to escape the ( and ) with % while matching:
local Msg, Date, Reason, Type, Domain = line:match([[User:Read%( "(.-)", { Date="(.+)"; Reason="(.+)"; Typ="(.+)"; Info="(.+)"; } %);]])
And the results will be as expected: http://codepad.org/gN8kSL6H
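For reference, a self-contained check of that pattern against the sample line from the ban file (capture names are illustrative):
local line = [[User:Read( "Banned", { Date="31.03.2011"; Reason="WEBSTREAM"; Typ="Inetnum"; Info="COMPUTER.SED.gg"; } );]]
local Msg, Date, Reason, Typ, Info = line:match([[User:Read%( "(.-)", { Date="(.+)"; Reason="(.+)"; Typ="(.+)"; Info="(.+)"; } %);]])
print(Msg, Date, Reason, Typ, Info)
--> Banned  31.03.2011  WEBSTREAM  Inetnum  COMPUTER.SED.gg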

Compare many text files that contain duplicate "stubs" from the previous and next file and remove duplicate text automatically

I have a large number of text files (1000+) each containing an article from an academic journal. Unfortunately each article's file also contains a "stub" from the end of the previous article (at the beginning) and from the beginning of the next article (at the end).
I need to remove these stubs in preparation for running a frequency analysis on the articles because the stubs constitute duplicate data.
There is no simple field that marks the beginning and end of each article in all cases. However, the duplicate text does seem to be formatted the same and to fall on the same line in both cases.
A script that compared each file to the next file and then removed 1 copy of the duplicate text would be perfect. This seems like it would be a pretty common issue when programming so I am surprised that I haven't been able to find anything that does this.
The file names sort in order, so a script that compares each file to the next sequentially should work, e.g.:
bul_9_5_181.txt
bul_9_5_186.txt
are two articles, one starting on page 181 and the other on page 186. Both of these articles are included below.
There are two volumes of test data located at http://drop.io/fdsayre
Note: I am an academic doing content analysis of old journal articles for a project in the history of psychology. I am no programmer, but I do have 10+ years of experience with Linux and can usually figure things out as I go.
Thanks for your help
FILENAME: bul_9_5_181.txt
SYN&STHESIA
ISI
the majority of Portugese words signifying black objects or ideas relating to black. This association is, admittedly, no true synsesthesia, but the author believes that it is only a matter of degree between these logical and spontaneous associations and genuine cases of colored audition.
REFERENCES
DOWNEY, JUNE E. A Case of Colored Gustation. Amer. J. of Psycho!., 1911, 22, S28-539MEDEIROS-E-ALBUQUERQUE. Sur un phenomene de synopsie presente par des millions de sujets. / . de psychol. norm, et path., 1911, 8, 147-151. MYERS, C. S. A Case of Synassthesia. Brit. J. of Psychol., 1911, 4, 228-238.
AFFECTIVE PHENOMENA — EXPERIMENTAL
BY PROFESSOR JOHN F. .SHEPARD
University of Michigan
Three articles have appeared from the Leipzig laboratory during the year. Drozynski (2) objects to the use of gustatory and olfactory stimuli in the study of organic reactions with feelings, because of the disturbance of breathing that may be involved. He uses rhythmical auditory stimuli, and finds that when given at different rates and in various groupings, they are accompanied by characteristic feelings in each subject. He records the chest breathing, and curves from a sphygmograph and a water plethysmograph. Each experiment began with a normal record, then the stimulus was given, and this was followed by a contrast stimulus; lastly, another normal was taken. The length and depth of breathing were measured (no time line was recorded), and the relation of length of inspiration to length of expiration was determined. The length and height of the pulsebeats were also measured. Tabular summaries are given of the number of times the author finds each quantity to have been increased or decreased during a reaction period with each type of feeling. The feeling state accompanying a given rhythm is always complex, but the result is referred to that dimension which seemed to be dominant. Only a few disconnected extracts from normal and reaction periods are reproduced from the records. The author states that excitement gives increase in the rate and depth of breathing, in the inspiration-expiration ratio, and in the rate and size of pulse. There are undulations in the arm volume. In so far as the effect is quieting, it causes decrease in rate and depth of
182
JOHN F. SHEPARD
breathing, in the inspiration-expiration ratio, and in the pulse rate and size. The arm volume shows a tendency to rise with respiratory waves. Agreeableness shows
It looks like a much simpler solution would actually work.
No one seems to be using the information provided by the filenames. If you do make use of this information, you may not have to do any comparisons between files to identify the area of overlap. Whoever wrote the OCR probably put some thought into this problem.
The last number in the file name tells you what the starting page number for that file is. This page number appears on a line by itself in the file as well. It also looks like this line is preceded and followed by blank lines. Therefore for a given file you should be able to look at the name of the next file in the sequence and determine the page number at which you should start removing text. Since this page number appears in your file just look for a line that contains only this number (preceded and followed by blank lines) and delete that line and everything after. The last file in the sequence can be left alone.
Here's an outline for an algorithm (a minimal Lua sketch follows the list):
choose a file; call it: file1
look at the filename of the next file; call it: file2
extract the page number from the filename of file2; call it: pageNumber
scan the contents of file1 until you find a line that contains only pageNumber
make sure this line is preceded and followed by a blank line.
remove this line and everything after
move on to the next file in the sequence
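A minimal sketch of that outline in Lua, assuming the conventions above hold (the next file's starting page number sits on a line of its own, surrounded by blank lines); it rewrites file1 in place, so keep backups:
-- trim_at_page(file1, page): drop the line holding only `page` and everything after it
local function trim_at_page(file1, page)
    local lines = {}
    for line in io.lines(file1) do lines[#lines+1] = line end
    local cut
    local prev_blank = true                       -- treat start-of-file as blank
    for i = 1, #lines do
        local num = lines[i]:match('^%s*(%d+)%s*$')
        local next_blank = ((lines[i+1] or ''):match('^%s*$')) ~= nil
        if num == tostring(page) and prev_blank and next_blank then
            cut = i
            break
        end
        prev_blank = lines[i]:match('^%s*$') ~= nil
    end
    if cut then
        local f = assert(io.open(file1, 'w'))     -- overwrites file1
        for i = 1, cut - 1 do f:write(lines[i], '\n') end
        f:close()
    end
end
-- example: bul_9_5_186.txt follows bul_9_5_181.txt, so trim file 181 at page 186
trim_at_page('bul_9_5_181.txt', 186)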
You should probably try something like this (I've now tested it on the sample data you provided):
#!/usr/bin/ruby
class A_splitter
  Title  = /^[A-Z]+[^a-z]*$/
  Byline = /^BY /
  Number = /^\d*$/
  Blank_line = /^ *$/
  attr_accessor :recent_lines,:in_references,:source_glob,:destination_path,:seen_in_last_file

  def initialize(src_glob,dst_path=nil)
    @recent_lines = []
    @seen_in_last_file = {}
    @in_references = false
    @source_glob = src_glob
    @destination_path = dst_path
    @destination = STDOUT
    @buffer = []
    split_em
  end

  def split_here
    if destination_path
      @destination.close if @destination
      @destination = nil
    else
      print "------------SPLIT HERE------------\n"
    end
    print recent_lines.shift
    @in_references = false
  end

  def at_page_break
    ((recent_lines[0] =~ Title  and recent_lines[1] =~ Blank_line and recent_lines[2] =~ Number) or
     (recent_lines[0] =~ Number and recent_lines[1] =~ Blank_line and recent_lines[2] =~ Title))
  end

  def print(*args)
    (@destination || @buffer) << args
  end

  def split_em
    Dir.glob(source_glob).sort.each { |filename|
      if destination_path
        @destination.close if @destination
        @destination = File.open(File.join(@destination_path,filename),'w')
        print @buffer
        @buffer.clear
      end
      in_header = true
      File.foreach(filename) { |line|
        line.gsub!(/\f/,'')
        if in_header and seen_in_last_file[line]
          # skip it
        else
          seen_in_last_file.clear if in_header
          in_header = false
          recent_lines << line
          seen_in_last_file[line] = true
        end
        3.times {recent_lines.shift} if at_page_break
        if recent_lines[0] =~ Title and recent_lines[1] =~ Byline
          split_here
        elsif in_references and recent_lines[0] =~ Title and recent_lines[0] !~ /\d/
          split_here
        elsif recent_lines.length > 4
          @in_references ||= recent_lines[0] =~ /^REFERENCES *$/
          print recent_lines.shift
        end
      }
    }
    print recent_lines
    @destination.close if @destination
  end
end

A_splitter.new('bul_*_*_*.txt','test_dir')
Basically, it runs through the files in order, and within each file runs through the lines in order, omitting from each file the lines that were present in the preceding file and printing the rest to STDOUT (from which it can be piped), unless a destination directory is specified (called 'test_dir' in the example; see the last line), in which case files are created in the specified directory with the same name as the file which contained the bulk of their contents.
It also removes the page-break sections (journal title, author, and page number).
It does two split tests:
a test on the title/byline pair
a test on the first title-line after a reference section
(it should be obvious how to add tests for additional split-points).
Retained for posterity:
If you don't specify a destination directory, it simply puts a split-here line in the output stream at the split point. This should make it easier for testing (you can just less the output), and when you want them in individual files just pipe it to csplit (e.g. with
csplit -f abstracts - '/---SPLIT HERE---/' '{*}'
or something) to cut it up.
Here's the beginning of another possible solution in Perl (it works as is, but could probably be made more sophisticated if needed). It sounds as if all you are concerned about is removing duplicates across the corpus, and you don't really care if the last part of one article is in the file for the next one as long as it isn't duplicated anywhere. If so, this solution will strip out the duplicate lines, leaving only one copy of any given line in the set of files as a whole.
You can either just run the file in the directory containing the text files with no argument or alternately specify a file name containing the list of files you want to process in the order you want them processed. I recommend the latter as your file names (at least in the sample files you provided) do not naturally list out in order when using simple commands like ls on the command line or glob in the Perl script. Thus it won't necessarily compare the correct files to one another as it just runs down the list (entered or generated by the glob command). If you specify the list, you can guarantee that they will be processed in the correct order and it doesn't take that long to set it up properly.
The script simply opens two files and makes note of the first three lines of the second file. It then opens a new output file (original file name + '.new') for the first file and writes out all the lines from the first file into it until it finds the first three lines of the second file. There is an off chance that the first three lines of the second file do not appear in the last one, but in all the files I spot-checked they did, because of the journal name header and page numbers. One line definitely wasn't enough, as the journal title was often the first line and that would cut things off early.
I should also note that the last file in your list of files entered will not be processed (i.e. have a new file created based off of it) as it will not be changed by this process.
Here's the script:
#!/usr/bin/perl
use strict;

my @files;
my $count = @ARGV;
if ($count>0){
    open (IN, "$ARGV[0]");
    @files = <IN>;
    close (IN);
} else {
    @files = glob "bul_*.txt";
}
$count = @files;
print "Processing $count files.\n";

my $lastFile="";
foreach(@files){
    if ($lastFile ne ""){
        print "Processing $_\n";
        open (FILEB,"$_");
        my @fileBLines = <FILEB>;
        close (FILEB);
        my $line0 = $fileBLines[0];
        if ($line0 =~ /\(/ || $line0 =~ /\)/){
            $line0 =~ s/\(/\\\(/;
            $line0 =~ s/\)/\\\)/;
        }
        my $line1 = $fileBLines[1];
        my $line2 = $fileBLines[2];
        open (FILEA,"$lastFile");
        my @fileALines = <FILEA>;
        close (FILEA);
        my $newName = "$lastFile.new";
        open (OUT, ">$newName");
        my $i=0;
        my $done = 0;
        while ($done != 1 and $i < @fileALines){
            if ($fileALines[$i] =~ /$line0/
                && $fileALines[$i+1] eq $line1    # string comparison (eq), not numeric ==
                && $fileALines[$i+2] eq $line2) {
                $done=1;
            } else {
                print OUT $fileALines[$i];
                $i++;
            }
        }
        close (OUT);
    }
    $lastFile = $_;
}
EDIT: Added a check for parentheses in the first line, which goes into the regex used for the duplicate check later on; if found, they are escaped so that they don't break the pattern match.
You have a nontrivial problem. It is easy to write code to find the duplicate text at the end of file 1 and the beginning of file 2. But you don't want to delete the duplicate text; you want to split it where the second article begins. Getting the split right might be tricky: one marker is the all-caps title, another is the BY at the start of the next line.
It would have helped to have examples from consecutive files, but the script below works on one test case. Before trying this code, back up all your files. The code overwrites existing files.
The implementation is in Lua.
The algorithm is roughly:
Ignore blank lines at the end of file 1 and the start of file 2.
Find a long sequence of lines common to end of file 1 and start of file 2.
This works by trying a sequence of 40 lines, then 39, and so on
Remove sequence from both files and call it overlap.
Split overlap at title
Append first part of overlap to file1; prepend second part to file2.
Overwrite contents of files with lists of lines.
Here's the code:
#!/usr/bin/env lua
local ext = arg[1] == '-xxx' and '.xxx' or ''
if #ext > 0 then table.remove(arg, 1) end

local function lines(filename)
    local l = { }
    for line in io.lines(filename) do table.insert(l, (line:gsub('\f', ''))) end -- strip form feeds
    assert(#l > 0, "No lines in file " .. filename)
    return l
end

local function write_lines(filename, lines)
    local f = assert(io.open(filename .. ext, 'w'))
    for i = 1, #lines do
        f:write(lines[i], '\n')
    end
    f:close()
end

local function lines_match(line1, line2)
    io.stderr:write(string.format("%q ==? %q\n", line1, line2))
    return line1 == line2 -- could do an approximate match here
end

local function lines_overlap(l1, l2, k)
    if k > #l2 or k > #l1 then return false end
    io.stderr:write('*** k = ', k, '\n')
    for i = 1, k do
        if not lines_match(l2[i], l1[#l1 - k + i]) then
            if i > 1 then
                io.stderr:write('After ', i-1, ' matches: FAILED <====\n')
            end
            return false
        end
    end
    return true
end

function find_overlaps(fname1, fname2)
    local l1, l2 = lines(fname1), lines(fname2)
    -- strip trailing and leading blank lines
    while l1[#l1]:find '^[%s]*$' do table.remove(l1) end
    while l2[1]:find '^[%s]*$' do table.remove(l2, 1) end
    local matchsize -- number of lines at the end of file 1 that are equal
                    -- to the same number at the start of file 2
    for k = math.min(40, #l1, #l2), 1, -1 do
        if lines_overlap(l1, l2, k) then
            matchsize = k
            io.stderr:write('Found match of ', k, ' lines\n')
            break
        end
    end
    if matchsize == nil then
        return false -- failed to find an overlap
    else
        local overlap = { }
        for j = 1, matchsize do
            table.remove(l1) -- remove line from first set
            table.insert(overlap, table.remove(l2, 1))
        end
        return l1, overlap, l2
    end
end

local function split_overlap(l)
    for i = 1, #l-1 do
        if l[i]:match '%u' and not l[i]:match '%l' then -- has caps but no lowers
            -- io.stderr:write('Looking for byline following ', l[i], '\n')
            if l[i+1]:match '^%s*BY%s' then
                local first = {}
                for j = 1, i-1 do
                    table.insert(first, table.remove(l, 1))
                end
                -- io.stderr:write('Split with first line at ', l[1], '\n')
                return first, l
            end
        end
    end
end

local function strip_overlaps(filename1, filename2)
    local l1, overlap, l2 = find_overlaps(filename1, filename2)
    if not l1 then
        io.stderr:write('No overlap in ', filename1, ' an
Are the stubs identical to the end of the previous file? Or different line endings/OCR mistakes?
Is there a way to discern an article's beginning? Maybe an indented abstract? Then you could go through each file and discard everything before the first and after (including) the second title.
Are the titles & author always on a single line? And does that line always contain the word "BY" in uppercase? If so, you can probably do a fair job with awk, using those criteria as the begin/end markers.
Edit: I really don't think that using diff is going to work as it is a tool for comparing broadly similar files. Your files are (from diff's point of view) actually completely different - I think it will get out of sync immediately. But then, I'm not a diff guru :-)
A quick stab at it, assuming that the stub is strictly identical in both files:
#!/usr/bin/perl
use strict;
use List::MoreUtils qw/ indexes all pairwise /;
my #files = #ARGV;
my #previous_text;
for my $filename ( #files ) {
open my $in_fh, '<', $filename or die;
open my $out_fh, '>', $filename.'.clean' or die;
my #lines = <$in_fh>;
print $out_fh destub( \#previous_text, #lines );
#previous_text = #lines;
}
sub destub {
my #previous = #{ shift() };
my #lines = #_;
my #potential_stubs = indexes { $_ eq $lines[0] } #previous;
for my $i ( #potential_stubs ) {
# check if the two documents overlap for that index
my #p = #previous[ $i.. $#previous ];
my #l = #lines[ 0..$#previous-$i ];
return #lines[ $#previous-$i + 1 .. $#lines ]
if all { $_ } pairwise { $a eq $b } #p, #l;
}
# no stub detected
return #lines;
}
