Perl - Concatenate non-empty strings - string

I'm in need of a little help "optimizing" my code because I'm convinced there's a better, cleaner way to do it. I have 6 variables that are created by being parsed out of a longer string:
Year
Make
Model
Color
ColorLower
Style
Depending on the record I may have details in some or all of these variables. In most cases, though, some are blank. Once the variables are populated, I add them to a database field that is the description of a car/vehicle.
Currently my if/else block goes one by one, and if a variable has a non-zero length, it is appended to the description variable:
if (length($Year) > 0) {
    $Description = $Description . " " . $Year;
}
if (length($Make) > 0) {
    $Description = $Description . " " . $Make;
} ...and so on
TMTOWTDI definitely applies here, and I always marvel at the elegant one-liners that the experts come up with. Although what I have now is working, I'd be interested in hearing if there is a shorter, more compact way to write it.
Thanks all.

Perhaps something like this:
$desc = join ' ', grep { length $_ > 0 }
    $Year, $Make, $Model, $Color, $ColorLower, $Style;

There is no need for the length test. An empty string is false, so this will work
$desc = join ' ', grep $_, $year, $make, $model, $color, $color_lower, $style;
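One caveat with relying on truthiness: the string "0" is also false in Perl, so a field that is literally "0" would be dropped. If that could ever matter for your data, test defined-ness and length instead (a minimal sketch; the field values are made up):
my @fields = ('1999', '', 'Mustang', '0');
my $loose  = join ' ', grep { $_ } @fields;                  # "1999 Mustang"
my $strict = join ' ', grep { defined && length } @fields;   # "1999 Mustang 0"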
It's also worth pointing out that capital letters are conventionally reserved in Perl for global identifiers such as package names. Mixed-case identifiers are also particularly difficult for those who don't have English as their first language, and Wikipedia has this to say:
"At least one study found that readers can recognize snake case values more quickly than CamelCase"

Related

I have a string that needs to be compared with a list of strings in Tcl

I need to compare string1 with string2 in Tcl:
set string1 {laptop Keyboard mouse MONITOR PRINTER}
set string2 {mouse}
Well, you can use:
if {$string2 in $string1} {
    puts "present in the list"
}
Or you can use lsearch if you want to know where the value is: it returns the index at which it finds the element, or -1 if it isn't there. It also has options to do binary searching (if you know the list is sorted), which is far faster than a linear search.
set idx [lsearch -exact $string1 $string2]
if {$idx >= 0} {
    puts "present in the list at index $idx"
}
But if you are doing a lot of searching, it can be best to create a hash table using an array or a dictionary. Those are extremely fast but require some setup. Whether the setup costs are worth it depends on your application.
set words {}
foreach word $string1 {dict set words $word 1}
if {[dict exists $words $string2]} {
    puts "word is present"
}
Note that if you're dealing with ordinary user input, you probably want a sanitization step or two. Tcl lists aren't exactly sentences, and the differences can really catch you out once you move to production. The two main tools for that are split and regexp -all -inline.
set words [split $sentence]
set words [regexp -all -inline {\S+} $sentence]
Understanding how to do the cleanup requires understanding your input data more completely than I do.
There is string first
if {[string first $string2 $string1] != -1} {
    puts "string1 contains string2"
}
or
if {[string match *$string2* $string1]} {
    puts "string1 contains string2"
}

Perl Morgan and a String?

I am trying to solve this problem on hackerrank:
So the problem is:
Jack and Daniel are friends. Both of them like letters, especially upper-case ones.
They are cutting upper-case letters from newspapers, and each one of them has their collection of letters stored in separate stacks.
One beautiful day, Morgan visited Jack and Daniel. He saw their collections. Morgan wondered what the lexicographically minimal string made of those two collections would be. He can take a letter from a collection only when it is on the top of the stack.
Also, Morgan wants to use all the letters in the boys' collections.
This is my attempt in Perl:
#!/usr/bin/perl
use strict;
use warnings;

chomp(my $n = <>);
while ($n > 0) {
    chomp(my $string1 = <>);
    chomp(my $string2 = <>);
    lexi($string1, $string2);
    $n--;
}

sub lexi {
    my ($str1, $str2) = @_;
    my @str1 = split(//, $str1);
    my @str2 = split(//, $str2);
    my $final_string = "";
    while (@str2 && @str1) {
        my $st2 = $str2[0];
        my $st1 = $str1[0];
        if ($st1 le $st2) {
            $final_string .= $st1;
            shift @str1;
        }
        else {
            $final_string .= $st2;
            shift @str2;
        }
    }
    if (@str1) {
        $final_string = $final_string . join('', @str1);
    }
    else {
        $final_string = $final_string . join('', @str2);
    }
    print $final_string, "\n";
}
Sample Input:
2
JACK
DANIEL
ABACABA
ABACABA
The first line contains the number of test cases, T.
Every next two lines have such format: the first line contains string A, and the second line contains string B.
Sample Output:
DAJACKNIEL
AABABACABACABA
For the sample test cases it gives the right results, but for other test cases it gives wrong results. One case for which it gives an incorrect result is
1
AABAC
AACAB
It outputs AAAABACCAB instead of AAAABACABC.
I don't know what is wrong with the algorithm or why it fails on other test cases.
Update:
As per @squeamishossifrage's comment, if I add
($str1, $str2) = sort { $a cmp $b } ($str1, $str2);
the results become the same irrespective of input order, but the test case still fails.
The problem is in your handling of the equal characters. Take the following example:
ACBA
BCAB
When faced with two identical characters (C in my example), you naïvely choose the one from the first string, but that's not always correct. You need to look ahead to break ties. You may even need to look many characters ahead. In this case, the next character after C in the second string is lower than the next character in the first string, so you should take the C from the second string first.
By leaving the strings as strings, a simple string comparison will compare as many characters as needed to determine which character to consume.
sub lexi {
    my ($str1, $str2) = @_;
    utf8::downgrade($str1);   # Makes sure length() will be fast
    utf8::downgrade($str2);   # since we only have ASCII letters.
    my $final_string = "";
    while (length($str2) && length($str1)) {
        $final_string .= substr($str1 le $str2 ? $str1 : $str2, 0, 1, '');
    }
    $final_string .= $str1;
    $final_string .= $str2;
    print $final_string, "\n";
}
Too little rep to comment, thus the answer:
What you need to do is look ahead when the two characters match. You currently do a simple le comparison, and in the case of
ZABB
ZAAA
you'll get ZABBZAAA, since the first character Z will be le Z. So what you need to do (a naive solution which most likely won't be very efficient) is to keep looking ahead as long as the strings/chars match:
Z eq Z
ZA eq ZA
ZAB gt ZAA
and only at that point do you know that the second string is the one you want to pop the first character from.
Edit
You updated with sorting the strings, but like I wrote, you still need to look ahead. The sorting will solve the two strings above but will fail with these two:
ZABAZA
ZAAAZB
which produces ZAAAZBZABAZA. The correct answer here is ZAAAZABAZAZB, and you can't find it by simply comparing character by character.
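To see the look-ahead in action, here is a minimal sketch (not the OP's code) that always consumes from whichever remaining string compares smaller as a whole, which handles these ties automatically:
sub merge_min {
    my ($x, $y) = @_;
    my $out = '';
    while (length $x && length $y) {
        ($x, $y) = ($y, $x) if $y lt $x;   # consume from the smaller remainder
        $out .= substr($x, 0, 1, '');      # take its first character
    }
    return $out . $x . $y;
}
print merge_min('ZABAZA', 'ZAAAZB'), "\n";   # prints ZAAAZABAZAZB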

Getting precision of a float in Perl?

Let's say I had a Perl variable:
my $string = "40.23";
my $convert_to_num = $string * 1;
Is there a way I can find the precision of this float value? My solution so far is to loop through the string, find the first instance of '.', and count how many decimal places follow, returning 2 in this case. I'm just wondering if there is a more elegant or built-in function for this sort of thing. Thanks!
Here is an answer for "number of things after the period" in $nstring
length(($nstring =~ /\.(.*)/)[0]);
The match first finds the literal dot (\.), then captures everything after it (.*). Since .* is in parentheses, it is returned as the first element of the list ([0]), and length() counts how many characters it holds.
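If the string may have no fractional part at all, the capture is undefined and length() warns; a small helper (a sketch, the name is mine) covers that case:
sub decimal_places {
    my ($str) = @_;
    return $str =~ /\.(\d+)/ ? length($1) : 0;
}
print decimal_places("40.23"), "\n";   # 2
print decimal_places("40"), "\n";      # 0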
Anything you do in Perl with plain variables will be dependent on the compiler and hardware you use. If you really care about the precision, use
use "Math::BigFloat";
And set the desired properties. The number of digits is more properly termed accuracy in Math::BigFloat.
use Math::BigFloat;
Math::BigFloat->accuracy(12);
my $n = Math::BigFloat->new("52.12");
print "Accuracy of $n is ", $n->accuracy(), " length ", scalar($n->length()), "\n";
will print
Accuracy of 52.1200000000 is 12 length 4

Any perl standard library to check if a string contains a given substring

Given a query, I would like to check whether it contains a given substring (which can consist of more than one word). But I don't want an exhaustive search, because the substring can only start at the beginning of a word.
Are there any Perl standard libraries for this, so that I get something efficient and don't have to reinvent the wheel?
Thanks,
Maybe you'll find builtin index() suited for the job.
It's a very fast substring search function ( implements the Boyer-Moore algorithm ).
Just check its documentation with perldoc -f index.
I would make a hash with the key being the first word of each of the 9000 substrings and the value an array of all substrings with that first word. If many strings share the same first word, you could use the first two words.
Then for each query, for each word, I would see if that word is in the hash, and then need to match only those strings in the hash's array, starting at that point in the string using the index function.
Assuming that matching is sparse, this would be pretty efficient. One hash lookup per word and minimal searching for potential matches.
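A minimal sketch of that idea (the variable names are hypothetical, and it assumes the substrings and the query are whitespace-separated words):
# Build: index each substring under its first word.
my %by_first;
for my $s (@substrings) {
    my ($first) = split ' ', $s;
    push @{ $by_first{$first} }, $s;
}

# Query: at each word position, only candidates sharing that first
# word need a real comparison, done here as a prefix test.
my @words = split ' ', $query;
for my $i (0 .. $#words) {
    my $tail = join ' ', @words[$i .. $#words];
    for my $cand (@{ $by_first{ $words[$i] } || [] }) {
        print "matched: $cand\n" if index($tail, $cand) == 0;
    }
}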
As I write this, it reminds me of an Aho-Corasick search. (See Algorithm::AhoCorasick on CPAN.) I've never used the module, but the algorithm spends a lot of time building a finite state machine out of the search keys, so finding a match is super efficient. I don't know if the CPAN implementation handles word boundary issues.
You can use this approach:
# init
my $re = join"|", map quotemeta, sort #substrings;
$re = qr/\b(?:$re)/;
# usage
while (<>) {
    found($1) if /($re)/;
}
where found() is whatever action you want to take when a substring is found.
The builtin index function is the fastest general purpose way to check if a string contains a substring.
my $find = 'abc';
my $str = '123 abc xyz';
if (index($str, $find) != -1) {
    # process matching $str here
}
If index still is not fast enough, and you know where in the string your substring might be, you can narrow down on it using substr and then use eq for the actual comparison:
my $find = 'abc';
my $str = '123 abc xyz';
if (substr($str, 4, 3) eq $find) {
    # process matching $str here
}
You are not going to get faster than that in Perl without dropping down to C.
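If you want to verify that on your own data, the core Benchmark module makes the comparison easy (a sketch with made-up test strings):
use Benchmark qw(cmpthese);

my $find = 'abc';
my $str  = ('x' x 1000) . $find . ('y' x 1000);

cmpthese(-1, {
    'index' => sub { index($str, $find) != -1 },
    'regex' => sub { $str =~ /\Qabc\E/ },
});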
This sounds like the perfect job for regular expressions:
if ($string =~ m/your substring/) {
    say "substring found";
} else {
    say "nothing found";
}

How can I split multiple joined words?

I have an array of 1000 or so entries, with examples below:
wickedweather
liquidweather
driveourtrucks
gocompact
slimprojector
I would like to be able to split these into their respective words, as:
wicked weather
liquid weather
drive our trucks
go compact
slim projector
I was hoping a regular expression might do the trick. But, since there is no boundary to stop on, nor any sort of capitalization that I could key on, I am thinking that some sort of reference to a dictionary might be necessary?
I suppose it could be done by hand, but why - when it can be done with code! =) But this has stumped me. Any ideas?
The Viterbi algorithm is much faster. It computes the same scores as the recursive search in Dmitry's answer above, but in O(n) time. (Dmitry's search takes exponential time; Viterbi does it by dynamic programming.)
import re
from collections import Counter

def viterbi_segment(text):
    probs, lasts = [1.0], [0]
    for i in range(1, len(text) + 1):
        prob_k, k = max((probs[j] * word_prob(text[j:i]), j)
                        for j in range(max(0, i - max_word_length), i))
        probs.append(prob_k)
        lasts.append(k)
    words = []
    i = len(text)
    while 0 < i:
        words.append(text[lasts[i]:i])
        i = lasts[i]
    words.reverse()
    return words, probs[-1]

def word_prob(word): return dictionary[word] / total
def words(text): return re.findall('[a-z]+', text.lower())
dictionary = Counter(words(open('big.txt').read()))
max_word_length = max(map(len, dictionary))
total = float(sum(dictionary.values()))
Testing it:
>>> viterbi_segment('wickedweather')
(['wicked', 'weather'], 5.1518198982768158e-10)
>>> ' '.join(viterbi_segment('itseasyformetosplitlongruntogetherblocks')[0])
'its easy for me to split long run together blocks'
To be practical you'll likely want a couple of refinements:
- Add logs of probabilities, don't multiply probabilities. This avoids floating-point underflow.
- Your inputs will in general use words not in your corpus. These substrings must be assigned a nonzero probability as words, or you end up with no solution or a bad solution. (That's just as true for the above exponential search algorithm.) This probability has to be siphoned off the corpus words' probabilities and distributed plausibly among all other word candidates: the general topic is known as smoothing in statistical language models. (You can get away with some pretty rough hacks, though.) This is where the O(n) Viterbi algorithm blows away the search algorithm, because considering non-corpus words blows up the branching factor.
Can a human do it?
farsidebag
far sidebag
farside bag
far side bag
Not only do you have to use a dictionary, you might have to use a statistical approach to figure out what's most likely (or, god forbid, an actual HMM for your human language of choice...)
For how to do statistics that might be helpful, I turn you to Dr. Peter Norvig, who addresses a different, but related problem of spell-checking in 21 lines of code:
http://norvig.com/spell-correct.html
(he does cheat a bit by folding every for loop into a single line... but still).
Update: This got stuck in my head, so I had to birth it today. This code does a similar split to the one described by Robert Gamble, but then it orders the results based on word frequency in the provided dictionary file (which should be some text representative of your domain, or of English in general; I used big.txt from Norvig, linked above, and catted a dictionary onto it to cover missing words).
A combination of two words will most of the time beat a combination of 3 words, unless the frequency difference is enormous.
I posted this code with some minor changes on my blog
http://squarecog.wordpress.com/2008/10/19/splitting-words-joined-into-a-single-string/
and also wrote a little about the underflow bug in this code. I was tempted to just quietly fix it, but figured this may help some folks who haven't seen the log trick before:
http://squarecog.wordpress.com/2009/01/10/dealing-with-underflow-in-joint-probability-calculations/
Output on your words, plus a few of my own -- notice what happens with "orcore":
perl splitwords.pl big.txt words
answerveal: 2 possibilities
- answer veal
- answer ve al
wickedweather: 4 possibilities
- wicked weather
- wicked we at her
- wick ed weather
- wick ed we at her
liquidweather: 6 possibilities
- liquid weather
- liquid we at her
- li quid weather
- li quid we at her
- li qu id weather
- li qu id we at her
driveourtrucks: 1 possibilities
- drive our trucks
gocompact: 1 possibilities
- go compact
slimprojector: 2 possibilities
- slim projector
- slim project or
orcore: 3 possibilities
- or core
- or co re
- orc ore
Code:
#!/usr/bin/env perl
use strict;
use warnings;

sub find_matches($);
sub find_matches_rec($\@\@);
sub find_word_seq_score(@);
sub get_word_stats($);
sub print_results($@);
sub Usage();

our(%DICT, $TOTAL);
{
    my ($dict_file, $word_file) = @ARGV;
    ($dict_file && $word_file) or die(Usage);
    {
        my $DICT;
        ($DICT, $TOTAL) = get_word_stats($dict_file);
        %DICT = %$DICT;
    }
    {
        open(my $WORDS, '<', $word_file) or die "unable to open $word_file\n";
        foreach my $word (<$WORDS>) {
            chomp $word;
            my $arr = find_matches($word);
            local $_;
            # Schwartzian Transform
            my @sorted_arr =
                map  { $_->[0] }
                sort { $b->[1] <=> $a->[1] }
                map  { [ $_, find_word_seq_score(@$_) ] }
                @$arr;
            print_results($word, @sorted_arr);
        }
        close $WORDS;
    }
}

sub find_matches($) {
    my ($string) = @_;
    my @found_parses;
    my @words;
    find_matches_rec($string, @words, @found_parses);
    return @found_parses if wantarray;
    return \@found_parses;
}

sub find_matches_rec($\@\@) {
    my ($string, $words_sofar, $found_parses) = @_;
    my $length = length $string;
    unless ($length) {
        push @$found_parses, $words_sofar;
        return @$found_parses if wantarray;
        return $found_parses;
    }
    foreach my $i (2 .. $length) {
        my $prefix = substr($string, 0, $i);
        my $suffix = substr($string, $i, $length - $i);
        if (exists $DICT{$prefix}) {
            my @words = (@$words_sofar, $prefix);
            find_matches_rec($suffix, @words, @$found_parses);
        }
    }
    return @$found_parses if wantarray;
    return $found_parses;
}

## Just a simple joint probability
## assumes independence between words, which is obviously untrue
## that's why this is broken out -- feel free to add better brains
sub find_word_seq_score(@) {
    my (@words) = @_;
    local $_;
    my $score = 1;
    foreach (@words) {
        $score = $score * $DICT{$_} / $TOTAL;
    }
    return $score;
}

sub get_word_stats($) {
    my ($filename) = @_;
    open(my $DICT, '<', $filename) or die "unable to open $filename\n";
    local $/ = undef;
    local $_;
    my %dict;
    my $total = 0;
    while (<$DICT>) {
        foreach (split(/\b/, $_)) {
            $dict{$_} += 1;
            $total++;
        }
    }
    close $DICT;
    return (\%dict, $total);
}

sub print_results($@) {
    #( 'word', [qw'test one'], [qw'test two'], ... )
    my ($word, @combos) = @_;
    local $_;
    my $possible = scalar @combos;
    print "$word: $possible possibilities\n";
    foreach (@combos) {
        print ' - ', join(' ', @$_), "\n";
    }
    print "\n";
}

sub Usage() {
    return "$0 /path/to/dictionary /path/to/your_words";
}
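On the underflow bug mentioned above: a log-space variant of find_word_seq_score avoids it. This is just a sketch; it assumes the same %DICT and $TOTAL globals as the script, and the descending numeric sort in the harness still works because log is monotonic:
use List::Util qw(sum0);

sub find_word_seq_log_score {
    my (@words) = @_;
    return sum0 map { log($DICT{$_}) - log($TOTAL) } @words;
}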
pip install wordninja
>>> import wordninja
>>> wordninja.split('bettergood')
['better', 'good']
The best tool for the job here is recursion, not regular expressions. The basic idea is to start from the beginning of the string looking for a word, then take the remainder of the string and look for another word, and so on until the end of the string is reached. A recursive solution is natural since backtracking needs to happen when a given remainder of the string cannot be broken into a set of words. The solution below uses a dictionary to determine what is a word and prints out solutions as it finds them (some strings can be broken into multiple possible sets of words; for example, wickedweather could be parsed as "wicked we at her"). If you just want one set of words you will need to determine the rules for selecting the best set, perhaps by selecting the solution with the fewest words or by setting a minimum word length.
#!/usr/bin/perl
use strict;

my $WORD_FILE = '/usr/share/dict/words'; # Change as needed
my %words; # Hash of words in dictionary

# Open dictionary, load words into hash
open(WORDS, $WORD_FILE) or die "Failed to open dictionary: $!\n";
while (<WORDS>) {
    chomp;
    $words{lc($_)} = 1;
}
close(WORDS);

# Read one line at a time from stdin, break into words
while (<>) {
    chomp;
    my @words;
    find_words(lc($_));
}

sub find_words {
    # Print every way $string can be parsed into whole words
    my $string = shift;
    my @words = @_;
    my $length = length $string;

    foreach my $i (1 .. $length) {
        my $word = substr $string, 0, $i;
        my $remainder = substr $string, $i, $length - $i;
        # Some dictionaries contain each letter as a word
        next if ($i == 1 && ($word ne "a" && $word ne "i"));

        if (defined($words{$word})) {
            push @words, $word;
            if ($remainder eq "") {
                print join(' ', @words), "\n";
                return;
            } else {
                find_words($remainder, @words);
            }
            pop @words;
        }
    }

    return;
}
I think you're right in thinking that it's not really a job for a regular expression. I would approach this using the dictionary idea - look for the longest prefix that is a word in the dictionary. When you find that, chop it off and do the same with the remainder of the string.
The above method is subject to ambiguity; for example, "drivereallyfast" would first find "driver" and then have trouble with "eallyfast". So you would also have to do some backtracking if you ran into this situation. Or, since you don't have that many strings to split, just do by hand the ones that fail the automated split.
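A sketch of that longest-prefix idea, without the backtracking (%words would be a dictionary hash like the one in the recursive answer above; on a dead end such as "drivereallyfast" it simply gives up and returns an empty list):
sub greedy_split {
    my ($string) = @_;
    my @out;
    while (length $string) {
        my $i = length $string;
        $i-- while $i > 0 && !exists $words{ substr($string, 0, $i) };
        return () unless $i;                   # dead end; backtracking would go here
        push @out, substr($string, 0, $i, ''); # chop the word off and continue
    }
    return @out;
}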
This is related to a problem known as identifier splitting or identifier name tokenization. In the OP's case, the inputs seem to be concatenations of ordinary words; in identifier splitting, the inputs are class names, function names or other identifiers from source code, and the problem is harder. I realize this is an old question and the OP has either solved their problem or moved on, but in case someone else comes across this question while looking for identifier splitters (like I was, not long ago), I would like to offer Spiral ("SPlitters for IdentifieRs: A Library"). It is written in Python but comes with a command-line utility that can read a file of identifiers (one per line) and split each one.
Splitting identifiers is deceptively difficult. Programmers commonly use abbreviations, acronyms and word fragments when naming things, and they don't always use consistent conventions. Even when identifiers do follow some convention such as camel case, ambiguities can arise.
Spiral implements numerous identifier splitting algorithms, including a novel algorithm called Ronin. It uses a variety of heuristic rules, English dictionaries, and tables of token frequencies obtained from mining source code repositories. Ronin can split identifiers that do not use camel case or other naming conventions, including cases such as splitting J2SEProjectTypeProfiler into [J2SE, Project, Type, Profiler], which requires the reader to recognize J2SE as a unit. Here are some more examples of what Ronin can split:
# spiral mStartCData nonnegativedecimaltype getUtf8Octets GPSmodule savefileas nbrOfbugs
mStartCData: ['m', 'Start', 'C', 'Data']
nonnegativedecimaltype: ['nonnegative', 'decimal', 'type']
getUtf8Octets: ['get', 'Utf8', 'Octets']
GPSmodule: ['GPS', 'module']
savefileas: ['save', 'file', 'as']
nbrOfbugs: ['nbr', 'Of', 'bugs']
Using the examples from the OP's question:
# spiral wickedweather liquidweather driveourtrucks gocompact slimprojector
wickedweather: ['wicked', 'weather']
liquidweather: ['liquid', 'weather']
driveourtrucks: ['driveourtrucks']
gocompact: ['go', 'compact']
slimprojector: ['slim', 'projector']
As you can see, it is not perfect. It's worth noting that Ronin has a number of parameters and adjusting them makes it possible to split driveourtrucks too, but at the cost of worsening performance on program identifiers.
More information can be found in the GitHub repo for Spiral.
A simple solution with Python: install the wordsegment package: pip install wordsegment.
$ echo thisisatest | python -m wordsegment
this is a test
Well, the problem itself is not solvable with just a regular expression. A solution (probably not the best) would be to get a dictionary and do a regular expression match for each word in the dictionary against each word in the list, adding the space whenever successful. Certainly this would not be terribly quick, but it would be easy to program and faster than doing it by hand.
A dictionary based solution would be required. This might be simplified somewhat if you have a limited dictionary of words that can occur, otherwise words that form the prefix of other words are going to be a problem.
There is a Python package by Santhosh Thottingal called mlmorph which can be used for morphological analysis.
https://pypi.org/project/mlmorph/
Examples:
from mlmorph import Analyser
analyser = Analyser()
analyser.analyse("കേരളത്തിന്റെ")
Gives
[('കേരളം<np><genitive>', 179)]
He also wrote a blog on the topic https://thottingal.in/blog/2017/11/26/towards-a-malayalam-morphology-analyser/
This will work if the words are camelCase. JavaScript!
function spinalCase(str) {
    let lowercase = str.trim()
    let regEx = /\W+|(?=[A-Z])|_/g
    let result = lowercase.split(regEx).join("-").toLowerCase()
    return result;
}

spinalCase("AllThe-small Things");
One of the solutions could be recursion (the same can be converted into dynamic programming):
static List<String> wordBreak(String input, Set<String> dictionary) {
    List<List<String>> result = new ArrayList<>();
    List<String> r = new ArrayList<>();
    helper(input, dictionary, result, "", 0, new Stack<>());
    for (List<String> strings : result) {
        String s = String.join(" ", strings);
        r.add(s);
    }
    return r;
}

static void helper(
        final String input,
        final Set<String> dictionary,
        final List<List<String>> result,
        String state,
        int index,
        Stack<String> stack
) {
    if (index == input.length()) {
        // add the last word
        stack.push(state);
        for (String s : stack) {
            if (!dictionary.contains(s)) {
                return;
            }
        }
        result.add((List<String>) stack.clone());
        return;
    }
    if (dictionary.contains(state)) {
        // bifurcate
        stack.push(state);
        helper(input, dictionary, result, "" + input.charAt(index),
               index + 1, stack);
        String pop = stack.pop();
        String s = stack.pop();
        helper(input, dictionary, result, s + pop.charAt(0),
               index + 1, stack);
    }
    else {
        helper(input, dictionary, result, state + input.charAt(index),
               index + 1, stack);
    }
    return;
}
Another possible solution would be to use a trie data structure.
pip install wordninja

import wordninja
n = wordninja.split('bettergood')
m = wordninja.split("coffeeshop")
print(n, m)
output:
['better', 'good'] ['coffee', 'shop']

Or, scanning a small dictionary for words that occur as substrings:
words = ['hello', 'coffee', 'shop', 'better', 'good']
mat = 'coffeeshop'
expected = []
for i in words:
    if i in mat:
        expected.append(i)
print(expected)
output:
['coffee', 'shop']
So I spent like 2 days on this answer, since I need it for my own NLP work. My answer is derived from Darius Bacon's answer, which itself was derived from the Viterbi algorithm. I also abstracted it to take each word in a message, attempt to split it, and then reassemble the message. I expanded Darius's code to make it debuggable. I also swapped out the need for "big.txt", and use the wordfreq library instead.

Some comments stress the need to use a non-zero word frequency for non-existent words. I found that using any frequency higher than zero would cause "itseasyformetosplitlongruntogetherblocks" to undersplit into "itseasyformetosplitlongruntogether blocks". The algorithm in general tends to either oversplit or undersplit various test messages depending on how you combine word frequencies and how you handle missing word frequencies. I played around with many tweaks until it behaved well. My solution uses a 0.0 frequency for missing words.

It also adds a reward for word length (otherwise it tends to split words into characters). I tried many length rewards, and the one that seems to work best for my test cases is word_frequency * (e ** word_length). There were also comments warning against multiplying word frequencies together. I tried adding them, using the harmonic mean, and using 1-freq instead of the 0.00001 form. They all tended to oversplit the test cases. Simply multiplying word frequencies together worked best. I left my debugging print statements in there, to make it easier for others to continue tweaking.

Finally, there's a special case: if your whole message is a word that doesn't exist, like "Slagle's", then the function splits the word into individual letters. In my case, I don't want that, so I have a special return statement at the end to return the original message in those cases.
import numpy as np
from wordfreq import get_frequency_dict

word_prob = get_frequency_dict(lang='en', wordlist='large')
max_word_len = max(map(len, word_prob))  # 34

def viterbi_segment(text, debug=False):
    probs, lasts = [1.0], [0]
    for i in range(1, len(text) + 1):
        new_probs = []
        for j in range(max(0, i - max_word_len), i):
            substring = text[j:i]
            length_reward = np.exp(len(substring))
            freq = word_prob.get(substring, 0) * length_reward
            compounded_prob = probs[j] * freq
            new_probs.append((compounded_prob, j))
            if debug:
                print(f'[{j}:{i}] = "{text[lasts[j]:j]} & {substring}" = ({probs[j]:.8f} & {freq:.8f}) = {compounded_prob:.8f}')
        # the max of a tuple is the max across the first elements,
        # which is the max of the compounded probabilities
        prob_k, k = max(new_probs)
        probs.append(prob_k)
        lasts.append(k)
        if debug:
            print(f'i = {i}, prob_k = {prob_k:.8f}, k = {k}, ({text[k:i]})\n')

    # when text is a word that doesn't exist, the algorithm breaks it into
    # individual letters. in that case, return the original word instead
    if len(set(lasts)) == len(text):
        return text

    words = []
    k = len(text)
    while 0 < k:
        word = text[lasts[k]:k]
        words.append(word)
        k = lasts[k]
    words.reverse()
    return ' '.join(words)

def split_message(message):
    new_message = ' '.join(viterbi_segment(wordmash, debug=False) for wordmash in message.split())
    return new_message

messages = [
    'tosplit',
    'split',
    'driveourtrucks',
    "Slagle's",
    "Slagle's wickedweather liquidweather driveourtrucks gocompact slimprojector",
    'itseasyformetosplitlongruntogetherblocks',
]

for message in messages:
    print(f'{message}')
    new_message = split_message(message)
    print(f'{new_message}\n')
tosplit
to split
split
split
driveourtrucks
drive our trucks
Slagle's
Slagle's
Slagle's wickedweather liquidweather driveourtrucks gocompact slimprojector
Slagle's wicked weather liquid weather drive our trucks go compact slim projector
itseasyformetosplitlongruntogetherblocks
its easy for me to split long run together blocks
I may get downmodded for this, but have the secretary do it.
You'll spend more time on a dictionary solution than it would take to manually process. Further, you won't possibly have 100% confidence in the solution, so you'll still have to give it manual attention anyway.
