Change address of a bincopy segment - python-3.x

What is the easiest method to change the start address of a bincopy segment?
For example, I have the code:
import bincopy
f = bincopy.BinFile("sample.hex")
print( f.segments )
which gives:
Segment(address=0, data=bytearray(b'\xaa\x00\x00\x00\x11\x00\x00\xaa'))
Segment(address=96, data=bytearray(b'\xdd\x00\x00\x00\x22\x00\x00\xdd'))
Segment(address=160, data=bytearray(b'\xee\x00\x00\x00\x33\x00\x00\xee'))
How do I change the start address of the second segment, for example from 96 to 60?

What I have found so far:
bindata = f.as_binary(minimum_address=MY_START_ADDRESS, maximum_address=MY_END_ADDRESS)
f.add_binary(bindata, address=MY_DEST_ADDRESS, overwrite=False)
However, this solution has the disadvantage that the new data range is handled by bincopy as a single segment: it is not split into multiple segments where there are empty spaces in between.
Therefore, another solution is to loop over the segments in the relevant range and move them one by one:
g = bincopy.BinFile()
for seg in f.segments:
    g.add_binary(seg.data, address=seg.address + offset, overwrite=False)
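If only one segment needs to move, the same loop can special-case it while leaving the rest untouched. A minimal sketch building on the API already used above (untested; the addresses are the ones from the example):
import bincopy

f = bincopy.BinFile("sample.hex")
g = bincopy.BinFile()

for seg in f.segments:
    if seg.address == 96:
        # The segment to move: re-add its data at the new address.
        g.add_binary(seg.data, address=60)
    else:
        # Keep every other segment where it is.
        g.add_binary(seg.data, address=seg.address)

print(g.segments)
Because each segment is re-added at its own address, the gaps between segments are preserved.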


Extract second IP from lines

Is there a way to extract the second IP address from a command-line output?
Command output
Manual NAT Policies (Section 1)
60 (sdf-app-vip) to (outside) source dynamic d-d-servers interface destination static obj-15.34.4.32 obj-159.13.9.12
translate_hits = 0, untranslate_hits = 0
61 (ds-app-vip) to (outside) source dynamic d-d-servers interface destination static obj-15.1.95.176 obj-15.13.5.176
translate_hits = 0, untranslate_hits = 0
152 (sd-app-vip) to (outside) source dynamic d-d-servers interface destination static obj-19.36.11.12 obj-19.36.15.12
translate_hits = 0, untranslate_hits = 0
Auto NAT Policies (Section 2)
115 (nk-app-vip) to (customer-vrf-sd) source static nat-10.19.2.190-customer-vrf-transit 10.223.2.2
translate_hits = 0, untranslate_hits = 4652
My code is able to extract both IPs, but I am not able to filter out just the second IP.
Code:
import re

# Truncate the output file
ft = open('puip_only.txt', 'w')
ft.truncate()
ft.close()

# Filter IPs from the object group IP output
cip = open('puip.txt', 'r')
cs = cip.readlines()
for line in cs:
    matches = re.findall(r'[0-9]+(?:\.[0-9]+){3}', line)
    newlines = ' '.join(matches)
    outF = open("puip_only.txt", "a")
    outF.write(newlines)
    outF.write("\n")
    outF.close()
Expected output is
159.13.9.12
15.13.5.176
19.36.15.12
10.223.2.2
If you only want the second IP, don't join it with the first:
if len(matches) >= 2:
    outF.write(matches[1])
instead of
outF.write(newlines)
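Putting that together, the whole loop can be trimmed down to something like this (same file names as above; opening the output with 'w' once also makes the separate truncate step unnecessary):
import re

# Keep only the second IP from each line that has at least two.
with open('puip.txt') as cip, open('puip_only.txt', 'w') as outF:
    for line in cip:
        matches = re.findall(r'[0-9]+(?:\.[0-9]+){3}', line)
        if len(matches) >= 2:
            outF.write(matches[1] + '\n')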
You can match the first IP address, then the following space, word characters and hyphens, and capture the second IP address in group 1.
The group 1 values will be returned by re.findall.
\b[0-9]+(?:\.[0-9]+){3}\s+\w+-([0-9]+(?:\.[0-9]+){3})\b

IPv4Network in Python - Calculating next minimum subnet of different size?

I'm working with the ipaddress module in Python and trying to figure out a way of calculating the next available subnet (of either the same prefix or a different prefix) that doesn't overlap the existing subnet (the new subnet MUST be greater than the old one).
Let's say I start with this network:
from ipaddress import IPv4Network
# From 10.90.1.0 to 10.90.1.31
main_net = IPv4Network("10.90.1.0/27")
I know the next available address is going to be 10.90.1.32; I can even figure this out quite easily by doing:
next_ip = main_net.broadcast_address + 1
# will output 10.90.1.32
print(next_ip)
If I wanted to find the next /27, I just create a new network like so:
# From 10.90.1.32 to 10.90.1.63
new_net = IPv4Network(f"{next_ip}/27")
This is all very straightforward so far, but now what if the next subnet I am looking for is a /26 or a /28 - how can I find the next minimum start IP address for either of these cases in Python?
I have explored using the supernet method, for example I could do something like this:
# Will print 10.90.1.0/26
print(main_net.supernet(new_prefix=26))
The problem with this method is that it prints 10.90.1.0/26, which overlaps the existing 10.90.1.0/27 network. I could make a loop and keep generating the next /26 until they stop overlapping, but that seems inefficient to me. Surely there is a better way?
Thanks to Ron Maupin's helpful comment, which led to a useful guide, I have managed to write a function that does this. I still want to test it a bit more, but I believe it is correct:
def calculate_next_ip_network(ip_bytes, current_prefix, next_prefix):
    next_prefix_mask = (~((1 << (32 - next_prefix)) - 1)) & 0xFFFFFFFF
    if next_prefix <= current_prefix:
        bit_shift = 32 - next_prefix
    else:
        bit_shift = 32 - current_prefix
    new_ip = (((next_prefix_mask & ip_bytes) >> bit_shift) + 1) << bit_shift
    return bytes([new_ip >> i & 0xFF for i in (24, 16, 8, 0)])
Usage:
from ipaddress import IPv4Address, IPv4Network

nt = IPv4Network("10.90.1.56/29")
current_prefix = nt.prefixlen
next_prefix = 25
ip_bytes = int.from_bytes(nt.network_address.packed, byteorder="big")
next_ip = calculate_next_ip_network(ip_bytes, current_prefix, next_prefix)
print(IPv4Address(next_ip))
# Should print "10.90.1.128"
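The same rounding idea can also be written directly against ipaddress objects. A minimal sketch (the function name is mine; only the shown cases have been checked by hand):
from ipaddress import IPv4Network

def next_subnet(net, new_prefix):
    # Size of the new block in addresses.
    block = 1 << (32 - new_prefix)
    # First address past the existing network.
    start = int(net.network_address) + net.num_addresses
    # Round up to the next boundary aligned to the new prefix.
    aligned = (start + block - 1) // block * block
    return IPv4Network((aligned, new_prefix))

print(next_subnet(IPv4Network("10.90.1.0/27"), 26))  # 10.90.1.64/26
print(next_subnet(IPv4Network("10.90.1.0/27"), 28))  # 10.90.1.32/28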

Add comma after volume when no issue number biblatex

For the bibliography of my thesis I want to add a comma after the volume number when a journal article has no issue number present.
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{xpatch}
\usepackage[backend=biber, citestyle=nejm, sorting=none, natbib=true, isbn=false, url=false, doi=true, eprint = false, giveninits]{biblatex}
% remove pp
\DeclareFieldFormat{pages}{#1}
% No dot before number of articles
\xpatchbibmacro{volume+number+eid}{%
  \setunit*{\adddot}%
}{%
}{}{}
% Number of articles in parentheses
\DeclareFieldFormat[article]{number}{\mkbibparens{#1}\addcomma}
% Comma before date; date not in parentheses
\renewbibmacro*{issue+date}{%
  \setunit*{\addcomma\space}%
  \iffieldundef{issue}
    {\usebibmacro{date}}
    {\printfield{issue}%
     \setunit*{\addspace}%
     \usebibmacro{date}}%
  \newunit}
% comma after journal
\renewbibmacro*{journal+issuetitle}{%
  \usebibmacro{journal}%
  \setunit*{\addcomma\space}%
  \iffieldundef{series}
    {}
    {\newunit
     \printfield{series}%
     \setunit{\addspace}}%
  \usebibmacro{volume+number+eid}%
  \setunit{\addspace}%
  \usebibmacro{issue+date}%
  \setunit{\addcolon\space}%
  \usebibmacro{issue}%
  \newunit}
\usepackage{filecontents}
\begin{filecontents}{sample.bib}
@article{einstein,
  author  = {Albert Einstein},
  title   = {On the electrodynamics of moving bodies},
  journal = {Annalen der Physik},
  volume  = {322},
  number  = {10},
  pages   = {891--921},
  year    = {1905},
  DOI     = {http://dx.doi.org/10.1002/andp.19053221004}
}
@article{test,
  author  = {Example, Author},
  journal = {Journal},
  pages   = {2},
  title   = {{Test paper}},
  volume  = {5},
  year    = {2018}
}
\end{filecontents}
\addbibresource{sample.bib}
\begin{document}
\nocite{*}
\printbibliography
\end{document}
Currently the code output looks like this: [screenshot of the bibliography output omitted]
I have attempted to adapt the code from this question about changing the separator for articles without a volume number (https://tex.stackexchange.com/questions/199301/change-separator-for-articles-in-journals-without-volume-number-biblatex) to the issue number instead:
\renewcommand*{\bibpagespunct}{%
  \ifentrytype{article}{%
    \iffieldundef{number}
      {\addcomma\space}}
  {\space}}
However, this removes the space and comma from between the year and the page range when there is an issue number present, which is not what I intended at all. Can someone please show me where I'm going wrong?
I dealt with this issue by inserting the code below in \DeclareBibliographyDriver.
\iffieldundef{number}
  {\addcomma\space}
  {\printfield{number}\addcomma\space}
Here's my code chunk.
\DeclareBibliographyDriver{article}{%
  \printnames{author}%
  \printfield{year}%
  \printfield{title}%
  \usebibmacro{journal}%
  % \usebibmacro{volume+number+eid}%
  \printfield{volume}%
  \iffieldundef{number}
    {\addcomma\space}
    {\printfield{number}\adddot\space}%
  \iffieldundef{pages}
    {}
    {\printfield{pages}\adddot}%
}
Though your reference format may differ, this should convey the main idea.

Lua string from file

I'm trying to make a system which backs up and restores points for a game server, so it can safely restart without losing anything.
I have made a script to do just this and the actual backing up part works fine, but the restore part does not.
This is the script that runs if 'Backup(read)' is used (Backup(write) works perfectly as it is designed to do):
if (source and read) then
    System.LogAlways("[System] Restoring serverdata from file 'backup.CHK'");
    for line in source:lines() do
        Backup = {};
        Backup.Date = (Date or line:match("File Last Modified: (.-)"));
        Backup.Time = (Time or line:match("time: (.-)"));
        US = tonumber((US or line:match("us: (.-)")));
        NK = tonumber((NK or line:match("nk: (.-)")));
        local params = {class = "Player";
            position = {x = 1, y = 1, z = -1000};
            Respawn = { bRespawn = 0; nTimer = 0; bUnique = 1; };
            bUsable = 0;
            orientation = {0, 90, 135};
            name = "BackupEntity"; };
        local ent = System.SpawnEntity(params);
        g_gameRules.game:SetTeam(1, ent.id);
        g_gameRules.game:SetSynchedEntityValue(playerId, 100, (NK/3));
        g_gameRules.game:SetTeam(2, ent.id);
        g_gameRules.game:SetSynchedEntityValue(playerId, 100, (US/3));
        System.RemoveEntity(params);
    end
    source:close();
    return;
end
I'm not sure what I'm doing wrong, and most of the sites I have looked at don't help much. The problem is that it's not reading any values from the file.
Any help will be appreciated :).
Edit:
The reason we have to divide the score by 3 is that the server multiplies all scores by 3. If we did not divide by 3, the score would grow 3 times larger on each restore.
Example contents of the backup.CHK file:
The server is dependent on this file, and writes to it every hour. Please do not edit.
File Last Modified: 11/07/2013
This file was generated by the servers' autobackup system.
--------------------------
time: 22:51
us: 453445
nk: 454567
A couple of ideas of what might be causing the problem:
Use of (.-) lazy matching, which matches the shortest pattern possible; since nothing follows the capture in these patterns, that shortest match is the empty string, so the captures are always empty. Usually, you want to make the pattern as specific as possible while still matching all required inputs; e.g. (%d+) looks like an appropriate fit for us and nk.
The for line in source:lines() do reads one line at a time. That necessarily means not all the variables are going to be set inside the loop. Yet everything starting at local params and down uses those variables as if they were. It seems to me that section of code shouldn't even be in the loop.
Lastly, have you considered saving the backup as just another Lua file? Doing so means you can let Lua do the heavy lifting for you, and you won't have to bother parsing it yourself. That also minimizes the risk of error.

Compare many text files that contain duplicate "stubs" from the previous and next file and remove duplicate text automatically

I have a large number of text files (1000+) each containing an article from an academic journal. Unfortunately each article's file also contains a "stub" from the end of the previous article (at the beginning) and from the beginning of the next article (at the end).
I need to remove these stubs in preparation for running a frequency analysis on the articles because the stubs constitute duplicate data.
There is no simple field that marks the beginning and end of each article in all cases. However, the duplicate text does seem to be formatted the same and to appear on the same lines in both files.
A script that compared each file to the next and then removed one copy of the duplicate text would be perfect. This seems like a pretty common problem, so I am surprised that I haven't been able to find anything that does this.
The file names sort in order, so a script that compares each file to the next sequentially should work, e.g.
bul_9_5_181.txt
bul_9_5_186.txt
are two articles, one starting on page 181 and the other on page 186. Both of these articles are included below.
There are two volumes of test data located at http://drop.io/fdsayre
Note: I am an academic doing content analysis of old journal articles for a project in the history of psychology. I am not a programmer, but I have 10+ years of experience with Linux and can usually figure things out as I go.
Thanks for your help
FILENAME: bul_9_5_181.txt
SYN&STHESIA
ISI
the majority of Portugese words signifying black objects or ideas relating to black. This association is, admittedly, no true synsesthesia, but the author believes that it is only a matter of degree between these logical and spontaneous associations and genuine cases of colored audition.
REFERENCES
DOWNEY, JUNE E. A Case of Colored Gustation. Amer. J. of Psycho!., 1911, 22, S28-539MEDEIROS-E-ALBUQUERQUE. Sur un phenomene de synopsie presente par des millions de sujets. / . de psychol. norm, et path., 1911, 8, 147-151. MYERS, C. S. A Case of Synassthesia. Brit. J. of Psychol., 1911, 4, 228-238.
AFFECTIVE PHENOMENA — EXPERIMENTAL
BY PROFESSOR JOHN F. .SHEPARD
University of Michigan
Three articles have appeared from the Leipzig laboratory during the year. Drozynski (2) objects to the use of gustatory and olfactory stimuli in the study of organic reactions with feelings, because of the disturbance of breathing that may be involved. He uses rhythmical auditory stimuli, and finds that when given at different rates and in various groupings, they are accompanied by characteristic feelings in each subject. He records the chest breathing, and curves from a sphygmograph and a water plethysmograph. Each experiment began with a normal record, then the stimulus was given, and this was followed by a contrast stimulus; lastly, another normal was taken. The length and depth of breathing were measured (no time line was recorded), and the relation of length of inspiration to length of expiration was determined. The length and height of the pulsebeats were also measured. Tabular summaries are given of the number of times the author finds each quantity to have been increased or decreased during a reaction period with each type of feeling. The feeling state accompanying a given rhythm is always complex, but the result is referred to that dimension which seemed to be dominant. Only a few disconnected extracts from normal and reaction periods are reproduced from the records. The author states that excitement gives increase in the rate and depth of breathing, in the inspiration-expiration ratio, and in the rate and size of pulse. There are undulations in the arm volume. In so far as the effect is quieting, it causes decrease in rate and depth of
182
JOHN F. SHEPARD
breathing, in the inspiration-expiration ratio, and in the pulse rate and size. The arm volume shows a tendency to rise with respiratory waves. Agreeableness shows
It looks like a much simpler solution would actually work.
No one seems to be using the information provided by the filenames. If you do make use of this information, you may not have to do any comparisons between files to identify the area of overlap. Whoever wrote the OCR probably put some thought into this problem.
The last number in the file name tells you what the starting page number for that file is. This page number appears on a line by itself in the file as well. It also looks like this line is preceded and followed by blank lines. Therefore for a given file you should be able to look at the name of the next file in the sequence and determine the page number at which you should start removing text. Since this page number appears in your file just look for a line that contains only this number (preceded and followed by blank lines) and delete that line and everything after. The last file in the sequence can be left alone.
Here's an outline for an algorithm (a sketch in Python follows the list):
1. Choose a file; call it file1.
2. Look at the filename of the next file; call it file2.
3. Extract the page number from the filename of file2; call it pageNumber.
4. Scan the contents of file1 until you find a line that contains only pageNumber.
5. Make sure this line is preceded and followed by a blank line.
6. Remove this line and everything after it.
7. Move on to the next file in the sequence.
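A minimal sketch of that outline in Python (untested against the full corpus; it assumes names like bul_9_5_181.txt whose lexicographic order matches reading order, and a bare page number on a line surrounded by blank lines):
#!/usr/bin/env python3
# Trim each file at the page number where the next article starts,
# taking that page number from the next file's name.
import re
from pathlib import Path

files = sorted(Path(".").glob("bul_*_*_*.txt"))
for current, following in zip(files, files[1:]):
    page = re.search(r"_(\d+)\.txt$", following.name).group(1)
    lines = current.read_text().splitlines()
    for i in range(1, len(lines) - 1):
        # A line holding only the page number, between blank lines.
        if (lines[i].strip() == page
                and not lines[i - 1].strip()
                and not lines[i + 1].strip()):
            current.with_suffix(".clean").write_text(
                "\n".join(lines[:i]) + "\n")
            break
The last file in the sequence is left alone, as the outline requires.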
You should probably try something like this (I've now tested it on the sample data you provided):
#!/usr/bin/ruby

class A_splitter
    Title = /^[A-Z]+[^a-z]*$/
    Byline = /^BY /
    Number = /^\d*$/
    Blank_line = /^ *$/

    attr_accessor :recent_lines, :in_references, :source_glob, :destination_path, :seen_in_last_file

    def initialize(src_glob, dst_path=nil)
        @recent_lines = []
        @seen_in_last_file = {}
        @in_references = false
        @source_glob = src_glob
        @destination_path = dst_path
        @destination = STDOUT
        @buffer = []
        split_em
    end

    def split_here
        if destination_path
            @destination.close if @destination
            @destination = nil
        else
            print "------------SPLIT HERE------------\n"
        end
        print recent_lines.shift
        @in_references = false
    end

    def at_page_break
        ((recent_lines[0] =~ Title and recent_lines[1] =~ Blank_line and recent_lines[2] =~ Number) or
         (recent_lines[0] =~ Number and recent_lines[1] =~ Blank_line and recent_lines[2] =~ Title))
    end

    def print(*args)
        (@destination || @buffer) << args
    end

    def split_em
        Dir.glob(source_glob).sort.each { |filename|
            if destination_path
                @destination.close if @destination
                @destination = File.open(File.join(@destination_path, filename), 'w')
                print @buffer
                @buffer.clear
            end
            in_header = true
            File.foreach(filename) { |line|
                line.gsub!(/\f/, '')
                if in_header and seen_in_last_file[line]
                    # skip it
                else
                    seen_in_last_file.clear if in_header
                    in_header = false
                    recent_lines << line
                    seen_in_last_file[line] = true
                end
                3.times { recent_lines.shift } if at_page_break
                if recent_lines[0] =~ Title and recent_lines[1] =~ Byline
                    split_here
                elsif in_references and recent_lines[0] =~ Title and recent_lines[0] !~ /\d/
                    split_here
                elsif recent_lines.length > 4
                    @in_references ||= recent_lines[0] =~ /^REFERENCES *$/
                    print recent_lines.shift
                end
            }
        }
        print recent_lines
        @destination.close if @destination
    end
end

A_splitter.new('bul_*_*_*.txt', 'test_dir')
Basically, it runs through the files in order and, within each file, through the lines in order, omitting from each file the lines that were present in the preceding file and printing the rest to STDOUT (from which it can be piped), unless a destination directory is specified (called 'test_dir' in the example; see the last line), in which case files are created in the specified directory with the same name as the file which contributed the bulk of their contents.
It also removes the page-break sections (journal title, author, and page number).
It does two split tests:
a test on the title/byline pair
a test on the first title-line after a reference section
(it should be obvious how to add tests for additional split-points).
Retained for posterity:
If you don't specify a destination directory it simply puts a split-here line in the output stream at the split point. This should make it easier for testing (you can just less the output) and when you want them in individual files just pipe it to csplit (e.g. with
csplit -f abstracts - '/SPLIT HERE/' '{*}'
or something) to cut it up.
Here's the beginning of another possible solution in Perl (it works as-is but could probably be made more sophisticated if needed). It sounds as if all you are concerned about is removing duplicates across the corpus and you don't really care if the last part of one article is in the file for the next one, as long as it isn't duplicated anywhere. If so, this solution will strip out the duplicate lines, leaving only one copy of any given line in the set of files as a whole.
You can either just run the file in the directory containing the text files with no argument or alternately specify a file name containing the list of files you want to process in the order you want them processed. I recommend the latter as your file names (at least in the sample files you provided) do not naturally list out in order when using simple commands like ls on the command line or glob in the Perl script. Thus it won't necessarily compare the correct files to one another as it just runs down the list (entered or generated by the glob command). If you specify the list, you can guarantee that they will be processed in the correct order and it doesn't take that long to set it up properly.
The script simply opens two files and makes note of the first three lines of the second file. It then opens a new output file (original file name + '.new') for the first file and writes out all the lines from the first file into the new output file until it finds the first three lines of the second file. There is an off chance that there are not three lines from the second file in the last one but in all the files I spot checked that seemed to be the case because of the journal name header and page numbers. One line definitely wasn't enough as the journal title was often the first line and that would cut things off early.
I should also note that the last file in your list of files entered will not be processed (i.e. have a new file created based off of it) as it will not be changed by this process.
Here's the script:
#!/usr/bin/perl
use strict;

my @files;
my $count = @ARGV;
if ($count > 0) {
    open(IN, "$ARGV[0]");
    @files = <IN>;
    close(IN);
} else {
    @files = glob "bul_*.txt";
}
$count = @files;
print "Processing $count files.\n";
my $lastFile = "";
foreach (@files) {
    if ($lastFile ne "") {
        print "Processing $_\n";
        open(FILEB, "$_");
        my @fileBLines = <FILEB>;
        close(FILEB);
        my $line0 = $fileBLines[0];
        if ($line0 =~ /\(/ || $line0 =~ /\)/) {
            $line0 =~ s/\(/\\\(/;
            $line0 =~ s/\)/\\\)/;
        }
        my $line1 = $fileBLines[1];
        my $line2 = $fileBLines[2];
        open(FILEA, "$lastFile");
        my @fileALines = <FILEA>;
        close(FILEA);
        my $newName = "$lastFile.new";
        open(OUT, ">$newName");
        my $i = 0;
        my $done = 0;
        while ($done != 1 and $i < @fileALines) {
            # 'eq' compares strings; '==' would compare them as numbers
            if ($fileALines[$i] =~ /$line0/
                && $fileALines[$i + 1] eq $line1
                && $fileALines[$i + 2] eq $line2) {
                $done = 1;
            } else {
                print OUT $fileALines[$i];
                $i++;
            }
        }
        close(OUT);
    }
    $lastFile = $_;
}
EDIT: Added a check for parentheses in the first line, which goes into the regex duplicate check later on; if found, they are escaped so that they don't break that check.
You have a nontrivial problem. It is easy to write code to find the duplicate text at the end of file 1 and the beginning of file 2. But you don't want to delete the duplicate text---you want to split it where the second article begins. Getting the split right might be tricky---one marker is the all caps, another is the BY at the start of the next line.
It would have helped to have examples from consecutive files, but the script below works on one test case. Before trying this code, back up all your files. The code overwrites existing files.
The implementation is in Lua.
The algorithm is roughly:
Ignore blank lines at the end of file 1 and the start of file 2.
Find a long sequence of lines common to end of file 1 and start of file 2.
This works by trying a sequence of 40 lines, then 39, and so on
Remove sequence from both files and call it overlap.
Split overlap at title
Append first part of overlap to file1; prepend second part to file2.
Overwrite contents of files with lists of lines.
Here's the code:
#!/usr/bin/env lua

local ext = arg[1] == '-xxx' and '.xxx' or ''
if #ext > 0 then table.remove(arg, 1) end

local function lines(filename)
    local l = { }
    -- read all lines, stripping form-feed characters
    for line in io.lines(filename) do table.insert(l, (line:gsub('\f', ''))) end
    assert(#l > 0, "No lines in file " .. filename)
    return l
end

local function write_lines(filename, lines)
    local f = assert(io.open(filename .. ext, 'w'))
    for i = 1, #lines do
        f:write(lines[i], '\n')
    end
    f:close()
end

local function lines_match(line1, line2)
    io.stderr:write(string.format("%q ==? %q\n", line1, line2))
    return line1 == line2 -- could do an approximate match here
end

local function lines_overlap(l1, l2, k)
    if k > #l2 or k > #l1 then return false end
    io.stderr:write('*** k = ', k, '\n')
    for i = 1, k do
        if not lines_match(l2[i], l1[#l1 - k + i]) then
            if i > 1 then
                io.stderr:write('After ', i - 1, ' matches: FAILED <====\n')
            end
            return false
        end
    end
    return true
end

function find_overlaps(fname1, fname2)
    local l1, l2 = lines(fname1), lines(fname2)
    -- strip trailing and leading blank lines
    while l1[#l1]:find '^[%s]*$' do table.remove(l1) end
    while l2[1] :find '^[%s]*$' do table.remove(l2, 1) end
    local matchsize -- number of lines at the end of file 1 that are equal
                    -- to the same number at the start of file 2
    for k = math.min(40, #l1, #l2), 1, -1 do
        if lines_overlap(l1, l2, k) then
            matchsize = k
            io.stderr:write('Found match of ', k, ' lines\n')
            break
        end
    end
    if matchsize == nil then
        return false -- failed to find an overlap
    else
        local overlap = { }
        for j = 1, matchsize do
            table.remove(l1) -- remove line from first set
            table.insert(overlap, table.remove(l2, 1))
        end
        return l1, overlap, l2
    end
end

local function split_overlap(l)
    for i = 1, #l - 1 do
        if l[i]:match '%u' and not l[i]:match '%l' then -- has caps but no lowers
            -- io.stderr:write('Looking for byline following ', l[i], '\n')
            if l[i+1]:match '^%s*BY%s' then
                local first = {}
                for j = 1, i - 1 do
                    table.insert(first, table.remove(l, 1))
                end
                -- io.stderr:write('Split with first line at ', l[1], '\n')
                return first, l
            end
        end
    end
end
local function strip_overlaps(filename1, filename2)
    local l1, overlap, l2 = find_overlaps(filename1, filename2)
    if not l1 then
        io.stderr:write('No overlap in ', filename1, ' and ', filename2, '\n')
Are the stubs identical to the end of the previous file? Or different line endings/OCR mistakes?
Is there a way to discern an article's beginning? Maybe an indented abstract? Then you could go through each file and discard everything before the first and after (including) the second title.
Are the titles & author always on a single line? And does that line always contain the word "BY" in uppercase? If so, you can probably do a fair job with awk, using those criteria as the begin/end marker (a rough sketch follows below).
Edit: I really don't think that using diff is going to work as it is a tool for comparing broadly similar files. Your files are (from diff's point of view) actually completely different - I think it will get out of sync immediately. But then, I'm not a diff guru :-)
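Following up on the title/byline idea, here is a rough sketch in Python rather than awk (the all-caps and "BY" heuristics are assumptions taken from the sample text, so treat this as a starting point):
#!/usr/bin/env python3
# Keep only the text between the first and second article markers,
# where a marker is an all-caps line followed by a "BY ..." line.
import re
import sys

lines = open(sys.argv[1]).read().splitlines()
starts = [i for i, line in enumerate(lines[:-1])
          if re.fullmatch(r"[A-Z][^a-z]*", line.strip())
          and lines[i + 1].lstrip().startswith("BY ")]
if len(starts) >= 2:
    keep = lines[starts[0]:starts[1]]
elif starts:
    keep = lines[starts[0]:]
else:
    keep = lines
sys.stdout.write("\n".join(keep) + "\n")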
A quick stab at it, assuming that the stub is strictly identical in both files:
#!/usr/bin/perl
use strict;
use List::MoreUtils qw/ indexes all pairwise /;

my @files = @ARGV;
my @previous_text;
for my $filename (@files) {
    open my $in_fh, '<', $filename or die;
    open my $out_fh, '>', $filename . '.clean' or die;
    my @lines = <$in_fh>;
    print $out_fh destub( \@previous_text, @lines );
    @previous_text = @lines;
}

sub destub {
    my @previous = @{ shift() };
    my @lines = @_;
    my @potential_stubs = indexes { $_ eq $lines[0] } @previous;
    for my $i (@potential_stubs) {
        # check if the two documents overlap for that index
        my @p = @previous[ $i .. $#previous ];
        my @l = @lines[ 0 .. $#previous - $i ];
        return @lines[ $#previous - $i + 1 .. $#lines ]
            if all { $_ } pairwise { $a eq $b } @p, @l;
    }
    # no stub detected
    return @lines;
}
