Fast partial string matching in R

Given a vector of strings texts and a vector of patterns patterns, I want to find any matching pattern for each text.
For small datasets, this can be easily done in R with grepl:
patterns = c("some", "pattern", "a", "horse")
texts = c("this is a text with some pattern", "this is another text with a pattern")

# for each x in patterns
lapply(patterns, function(x) {
  # match all texts against pattern x
  res = grepl(x, texts, fixed = TRUE)
  print(res)
  # do something with the matches
  # ...
})
This solution is correct, but it doesn't scale up. Even with moderately bigger datasets (~500 texts and patterns), this code is embarrassingly slow, handling only about 100 cases per second on a modern machine - which is ridiculous considering that this is crude partial string matching without regex (set with fixed=TRUE). Even making the lapply parallel does not solve the issue.
Is there a way to re-write this code efficiently?
Thanks,
Mulone

Use the stringi package - it's even faster than grepl. Check the benchmarks!
I used the text from @Martin Morgan's post.
require(stringi)
require(microbenchmark)

text = readLines("~/Desktop/pg100.txt")
pattern <- strsplit("all the world's a stage and all the people players", " ")[[1]]

grepl_fun <- function() {
  lapply(pattern, grepl, text, fixed=TRUE)
}

stri_fixed_fun <- function() {
  lapply(pattern, function(x) stri_detect_fixed(text, x, NA))
}
# microbenchmark(grepl_fun(), stri_fixed_fun())
# Unit: milliseconds
#              expr      min       lq   median       uq      max neval
#       grepl_fun() 432.9336 435.9666 446.2303 453.9374 517.1509   100
#  stri_fixed_fun() 213.2911 218.1606 227.6688 232.9325 285.9913   100
# if you don't believe me that the results are equal, you can check :)
xx <- grepl_fun()
stri <- stri_fixed_fun()
for (i in seq_along(xx)) {
  print(all(xx[[i]] == stri[[i]]))
}
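For reference, here is the same stringi call applied to the small texts/patterns vectors from the original question (a minimal sketch of my own, not part of the benchmark; it assumes stringi is installed):

library(stringi)

patterns <- c("some", "pattern", "a", "horse")
texts <- c("this is a text with some pattern", "this is another text with a pattern")

# one logical vector per pattern, one element per text
lapply(patterns, function(p) stri_detect_fixed(texts, p))
# [[1]]  TRUE FALSE   ("some")
# [[2]]  TRUE  TRUE   ("pattern")
# [[3]]  TRUE  TRUE   ("a")
# [[4]] FALSE FALSE   ("horse")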

Have you accurately characterized your problem and the performance you're seeing? Here are the Complete Works of William Shakespeare and a query against them
text = readLines("~/Downloads/pg100.txt")
pattern <-
strsplit("all the world's a stage and all the people players", " ")[[1]]
which seems to be much more performant than you imply:
> length(text)
[1] 124787
> system.time(xx <- lapply(pattern, grepl, text, fixed=TRUE))
   user  system elapsed 
  0.444   0.001   0.444 
## avoid retaining memory; 500 x 500 case; no blank lines
> text = text[nzchar(text)]
> system.time({ for (p in rep(pattern, 50)) grepl(p, text[1:500], fixed=TRUE) })
   user  system elapsed 
  0.096   0.000   0.095 
We're expecting linear scaling with both the length (number of elements) of pattern and text. It seems I mis-remember my Shakespeare
> idx = Reduce("+", lapply(pattern, grepl, text, fixed=TRUE))
> range(idx)
[1] 0 7
> sum(idx == 7)
[1] 8
> text[idx == 7]
[1] " And all the men and women merely players;"
[2] " cicatrices to show the people when he shall stand for his place."
[3] " Scandal'd the suppliants for the people, call'd them"
[4] " all power from the people, and to pluck from them their tribunes"
[5] " the fashion, and so berattle the common stages (so they call"
[6] " Which God shall guard; and put the world's whole strength"
[7] " Of all his people and freeze up their zeal,"
[8] " the world's end after my name-call them all Pandars; let all"

Related

I am getting a "Time Limit Exceeded" error in the following code. How do I fix that error?

The following code is my attempt at checking whether the sum of a number and its reverse is a palindrome. If the sum is a palindrome, the sum is displayed; otherwise the process is repeated until we get a palindrome. When I run it, I get a time limit exceeded error. Where do I need to correct the code?
def pal(n1):
    temp=n1
    rev=0
    while(temp>0):
        rev=(rev*10)+(temp%10)
        temp=temp/10
    sum1=n1+rev
    temp=sum1
    rev=0
    while(temp>0):
        rev=(rev*10)+(temp%10)
        temp=temp/10
    if(rev==sum1):
        print(sum1)
    else:
        pal(sum1)

n=int(input())
pal(n)
I expect the output of a number 453 to be 6666.
i.e.
453+354=807 (not a palindrome. So repeat the process)
807+708=1515
1515+5151=6666 (it is a palindrome)
Your problem is that you are checking while temp > 0: but inside that loop you are using float division: temp=temp/10. So the condition effectively never becomes false. For example:
>>> 8/10
0.8
>>> 0.8/10
0.08
What you want is to change your divisions to int division:
>>> 8//10
0
Still you might consider working with strings which is much easier in that case:
def pal(n):
    rev_n = str(n)[::-1]
    sum_str = str(n + int(rev_n))
    while sum_str != sum_str[::-1]:
        # print(sum_str)
        sum_rev = sum_str[::-1]
        sum_str = str(int(sum_str) + int(sum_rev))
    print(sum_str)
And with the commented print this gives:
>>> pal(453)
807
1515
6666
Here is one way of doing this using string manipulation, which is a lot easier than trying to do it with numbers. It is also a more direct translation of what you describe afterwards. (I do not really see the link between your code and your description...)
def is_palindrome(text):
    # : approach1, faster for large inputs
    # mid_length = len(text) // 2
    # offset = 0 if len(text) % 2 else 1
    # return text[:mid_length] == text[:-mid_length - offset:-1]
    # : approach2, faster for small inputs
    return text == text[::-1]

def palindrome_sum(num):
    while not is_palindrome(num):
        num = str(int(num) + int(num[::-1]))
    return num

num = input()  # 453
palindrome = palindrome_sum(num)
print(palindrome)
# 6666

algorithm/code in R to find pattern from any position in a string

I want to find the pattern from any position in any given string such that the pattern repeats for a threshold number of times at least.
For example for the string "a0cc0vaaaabaaaabaaaabaa00bvw" the pattern should come out to be "aaaab". Another example: for the string "ff00f0f0f0f0f0f0f0f0000" the pattern should be "0f".
In both cases threshold has been taken as 3 i.e. the pattern should be repeated for at least 3 times.
If someone can suggest an optimized method in R for finding a solution to this problem, please do share with me. Currently I am achieving this by using 3 nested loops, and it's taking a lot of time.
Thanks!
Use regular expressions, which are made for this type of stuff. There may be more optimized ways of doing it, but in terms of easy to write code, it's hard to beat. The data:
vec <- c("a0cc0vaaaabaaaabaaaabaa00bvw","ff00f0f0f0f0f0f0f0f0000")
The function that does the matching:
find_rep_path <- function(vec, reps) {
  regexp <- paste0(c("(.+)", rep("\\1", reps - 1L)), collapse="")
  match <- regmatches(vec, regexpr(regexp, vec, perl=T))
  substr(match, 1, nchar(match) / reps)
}
And some tests:
sapply(vec, find_rep_path, reps=3L)
# a0cc0vaaaabaaaabaaaabaa00bvw      ff00f0f0f0f0f0f0f0f0000 
#                      "aaaab"                       "0f0f" 
sapply(vec, find_rep_path, reps=5L)
# $a0cc0vaaaabaaaabaaaabaa00bvw
# character(0)
#
# $ff00f0f0f0f0f0f0f0f0000
# [1] "0f"
Note that with the threshold at 3, the actual longest pattern for the second string is 0f0f, not 0f (it reverts to 0f at threshold 5). In order to do this, I use back references (\\1), and repeat these as many times as necessary to reach the threshold. I then need to substr the result because, annoyingly, base R doesn't have an easy way to get just the captured sub-expressions when using perl compatible regular expressions. There is probably a not too hard way to do this, but the substr approach works well in this example.
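(For what it's worth, one base R route to the captured group is the capture.start/capture.length attributes that regexpr attaches when perl=TRUE. The sketch below is my own illustration, not part of the original answer; it returns NA where there is no match.)

find_rep_capture <- function(vec, reps) {
  regexp <- paste0(c("(.+)", rep("\\1", reps - 1L)), collapse="")
  m <- regexpr(regexp, vec, perl=TRUE)
  start <- attr(m, "capture.start")[, 1]
  len <- attr(m, "capture.length")[, 1]
  ifelse(m > 0, substr(vec, start, start + len - 1), NA_character_)
}
find_rep_capture(vec, reps=3L)
# should agree with the substr() results above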
Also, as per the discussion in @G. Grothendieck's answer, here is the version with the cap on the length of the pattern, which just adds the limit argument and slightly modifies the regexp:
find_rep_path <- function(vec, reps, limit) {
  regexp <- paste0(c("(.{1,", limit, "})", rep("\\1", reps - 1L)), collapse="")
  match <- regmatches(vec, regexpr(regexp, vec, perl=T))
  substr(match, 1, nchar(match) / reps)
}
sapply(vec, find_rep_path, reps=3L, limit=3L)
# a0cc0vaaaabaaaabaaaabaa00bvw      ff00f0f0f0f0f0f0f0f0000 
#                          "a"                         "0f" 
find.string finds substring of maximum length subject to (1) substring must be repeated consecutively at least th times and (2) substring length must be no longer than len.
reps <- function(s, n) paste(rep(s, n), collapse = "")  # repeat s n times

find.string <- function(string, th = 3, len = floor(nchar(string)/th)) {
  for (k in len:1) {
    pat <- paste0("(.{", k, "})", reps("\\1", th-1))
    r <- regexpr(pat, string, perl = TRUE)
    if (attr(r, "capture.length") > 0) break
  }
  if (r > 0) substring(string, r, r + attr(r, "capture.length") - 1) else ""
}
and here are some tests. The last test processes the entire text of James Joyce's Ulysses in 1.4 seconds on my laptop:
> find.string("a0cc0vaaaabaaaabaaaabaa00bvw")
[1] "aaaab"
> find.string("ff00f0f0f0f0f0f0f0f0000")
[1] "0f0f"
>
> joyce <- readLines("http://www.gutenberg.org/files/4300/4300-8.txt")
> joycec <- paste(joyce, collapse = " ")
> system.time(result <- find.string(joycec, len = 25))
   user  system elapsed 
   1.36    0.00    1.39 
> result
[1] " Hoopsa boyaboy hoopsa!"
ADDED
Although I developed my answer before having seen BrodieG's, as he points out, they are very similar to each other. I have added some of his features to the above to get the solution below, and tried the tests again. Unfortunately, when I added the variation of his code, the James Joyce example no longer works, although it does work on the other two examples shown. The problem seems to be in adding the len constraint to the code, and may represent a fundamental advantage of the code above (i.e. it can handle such a constraint, and such constraints may be essential for very long strings).
find.string2 <- function(string, th = 3, len = floor(nchar(string)/th)) {
  pat <- paste0(c("(.", "{1,", len, "})", rep("\\1", th-1)), collapse = "")
  r <- regexpr(pat, string, perl = TRUE)
  ifelse(r > 0, substring(string, r, r + attr(r, "capture.length") - 1), "")
}
> find.string2("a0cc0vaaaabaaaabaaaabaa00bvw")
[1] "aaaab"
> find.string2("ff00f0f0f0f0f0f0f0f0000")
[1] "0f0f"
> system.time(result <- find.string2(joycec, len = 25))
   user  system elapsed 
      0       0       0 
> result
[1] "w"
REVISED The James Joyce test that was supposed to be testing find.string2 was actually using find.string. This is now fixed.
Not an optimized function (even if it is fast), but I think it is a more R-like way to do this:
1. Get all patterns of a certain length > threshold: vectorized using mapply and substr.
2. Get the occurrences of these patterns and extract the one with the maximum occurrence: vectorized using str_locate_all.
3. Repeat 1-2 for all lengths and take the one with the maximum occurrence.
Here is my code. I am creating 2 functions (steps 1-2) and then step 3:
library(stringr)

ss = "ff00f0f0f0f0f0f0f0f0000"
ss <- "a0cc0vaaaabaaaabaaaabaa00bvw"

find_pattern_length <- function(length = 1, ss) {
  patt = mapply(function(x, y) substr(ss, x, y),
                1:(nchar(ss) - length),
                (length + 1):nchar(ss))
  res = str_locate_all(ss, unique(patt))
  ll = unlist(lapply(res, length))
  list(patt = patt[which.max(ll)],
       rep = max(ll))
}

get_pattern_threshold <- function(ss, threshold = 3) {
  res <- sapply(seq(threshold, nchar(ss)), find_pattern_length, ss = ss)
  res[, which.max(res['rep', ])]
}
some tests:
> get_pattern_threshold('ff00f0f0f0f0f0f0f0f0000', 5)
$patt
[1] "0f0f0"

$rep
[1] 6

> get_pattern_threshold('ff00f0f0f0f0f0f0f0f0000', 2)
$patt
[1] "f0"

$rep
[1] 18
Since you want at least three repetitions, there is a nice O(n^2) approach.
For each possible pattern length d, cut the string into parts of length d. In the case of d=5 it would be:
a0cc0
vaaaa
baaaa
baaaa
baa00
bvw
Now look at each pair of subsequent strings A[k] and A[k+1]. If they are equal, then there is a pattern of at least two repetitions. Then go further (k+2, k+3) and so on. Finally, also check whether a suffix of A[k-1] and a prefix of A[k+n] fit (where k+n is the first string that doesn't match).
Repeat this for each d, starting from some upper bound (at most n/3).
You have at most n/3 possible lengths, and for each d there are n/d strings of length d to compare, each comparison costing O(d). That gives O((n/3) * (n/d) * d) = O(n^2).
Maybe not optimal but I found this cutting idea quite neat ;)
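A rough R sketch of this cutting idea (my own illustration, not the answerer's code; it looks for the longest pattern repeated at least th times consecutively, and handles patterns that don't start on a cut boundary by extending the matching region into the neighbouring chunks):

find_repeat_cut <- function(string, th = 3) {
  n <- nchar(string)
  if (n < th) return("")
  chars <- strsplit(string, "")[[1]]
  for (d in floor(n / th):1) {              # longest candidate length first
    starts <- seq(1, n - d + 1, by = d)     # cut into full-length chunks
    chunks <- substring(string, starts, starts + d - 1)
    runs <- rle(chunks)
    ends <- cumsum(runs$lengths)            # index of the last chunk in each run
    for (j in which(runs$lengths >= 2)) {
      first <- ends[j] - runs$lengths[j] + 1
      from <- starts[first]                 # character span covered by the run
      to <- starts[ends[j]] + d - 1
      # extend left/right while the period-d property still holds
      while (from > 1 && chars[from - 1] == chars[from - 1 + d]) from <- from - 1
      while (to < n && chars[to + 1] == chars[to + 1 - d]) to <- to + 1
      if ((to - from + 1) %/% d >= th)
        return(substr(string, from, from + d - 1))
    }
  }
  ""
}

find_repeat_cut("a0cc0vaaaabaaaabaaaabaa00bvw")  # expect "aaaab"
find_repeat_cut("ff00f0f0f0f0f0f0f0f0000")       # expect "0f0f"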
For a bounded pattern (i.e. not a huge one) I think it's best to just create all possible substrings first and then count them. This is if the sub-patterns can overlap; if not, change the step function in the loop.
pat = "a0cc0vaaaabaaaabaaaabaa00bvw"
len = nchar(pat)
thr = 3
reps = floor(len/2)

# all possible strings up to half the length of the pattern
library(stringr)
library(zoo)  # rollapply() comes from zoo
pat = str_split(pat, "")[[1]][-1]
str.vec = vector()
for (win in 2:reps) {
  str.vec = c(str.vec, rollapply(data=pat, width=win, FUN=paste0, collapse=""))
}
# the max-length string repeated at least 3 times
tbl = table(str.vec)
tbl = tbl[tbl >= 3]
tbl[which.max(nchar(names(tbl)))]
aaaabaa
3
NB Whilst I'm lazy and append/grow the str.vec here in a loop, for a larger problem I'm pretty sure the actual length of str.vec is predetermined by the length of the pattern if you care to work it out.
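(A small sketch of the preallocation hinted at above - my own addition rather than part of the answer, assuming the same pat/reps as in the code: each rollapply() call with width w over a vector of length m returns m - w + 1 windows, so the final length of str.vec is known before the loop.)

m <- length(pat)
sizes <- m - (2:reps) + 1          # one entry per window width
str.vec <- character(sum(sizes))
offset <- 0
for (win in 2:reps) {
  windows <- rollapply(data=pat, width=win, FUN=paste0, collapse="")
  str.vec[offset + seq_along(windows)] <- windows
  offset <- offset + length(windows)
}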
Here is my solution. It's not optimized (for example it grows the vector with patterns <- c(patterns, x)) and can be improved, but I think it is simpler than yours.
I can't tell exactly which pattern should be returned (I just return the most frequent one), but you can adjust the code to what you want exactly.
str <- "a0cc0vaaaabaaaabaaaabaa00bvw"
findPatternMax <- function(str) {
  nb <- nchar(str):1
  length.patt <- rev(nb)
  patterns <- c()
  for (i in 1:length(nb)) {
    for (j in 1:nb[i]) {
      patterns <- c(patterns, substr(str, j, j + (length.patt[i] - 1)))
    }
  }
  patt.max <- names(which(table(patterns) == max(table(patterns))))
  return(patt.max)
}
> findPatternMax(str)
[1] "a"
EDIT :
Maybe you want the returned pattern to have a minimum length?
Then you can add an nchar.patt parameter, for example:
nchar.patt <- 2 #For a pattern of 2 char min
nb <- nb[length.patt >= nchar.patt]
length.patt <- length.patt[length.patt >= nchar.patt]

Automatic acronyms of strings in R

Long strings in plots aren't always attractive. What's the shortest way of making an acronym in R? E.g., "Hello world" to "HW", and preferably to have unique acronyms.
There's the function abbreviate, but it just removes some letters from the phrase instead of taking the first letter of each word.
An easy way would be to use a combination of strsplit, substr, and make.unique.
Here's an example function that can be written:
makeInitials <- function(charVec) {
  make.unique(vapply(strsplit(toupper(charVec), " "),
                     function(x) paste(substr(x, 1, 1), collapse = ""),
                     vector("character", 1L)))
}
Test it out:
X <- c("Hello World", "Home Work", "holidays with children", "Hello Europe")
makeInitials(X)
# [1] "HW" "HW.1" "HWC" "HE"
That said, I do think that abbreviate should suffice, if you use some of its arguments:
abbreviate(X, minlength=1)
#            Hello World              Home Work holidays with children           Hello Europe 
#                  "HlW"                  "HmW"                  "hwc"                   "HE" 
Using regex you can do the following. The regex pattern ((?<=\\s).|^.) matches any character preceded by a space, or the first character of the string. Then we just paste the resulting vectors using the collapse argument to get a first-letter acronym. And as Ananda suggested, if you want to make the results unique, pass them through make.unique.
X <- c("Hello World", "Home Work", "holidays with children")
sapply(regmatches(X, gregexpr(pattern = "((?<=\\s).|^.)", text = X, perl = T)), paste, collapse = ".")
## [1] "H.W" "H.W" "h.w.c"
# If you want to make unique
make.unique(sapply(regmatches(X, gregexpr(pattern = "((?<=\\s).|^.)", text = X, perl = T)), paste, collapse = "."))
## [1] "H.W" "H.W.1" "h.w.c"

Reduce string length by removing contiguous duplicates

I have an R dataframe with 2 fields:
ID  WORD
1   AAAAABBBBB
2   ABCAAABBBDDD
3   ...
I'd like to simplify the words with repeating letters by keeping only the letter and not the duplicates in a repetition:
e.g.: AAAAABBBBB should give me AB
and ABCAAABBBDDD should give me ABCABD
Anyone has an idea on how to do this?
Here's a solution with regex:
x <- c('AAAAABBBBB', 'ABCAAABBBDDD')
gsub("([A-Za-z])\\1+","\\1",x)
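Applied to the example vector, this collapses each run of a repeated letter to a single occurrence (output added here for illustration; it was not part of the original answer):

gsub("([A-Za-z])\\1+", "\\1", x)
# [1] "AB"     "ABCABD"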
EDIT: By request, some benchmarking. I added Matthew Lundberg's pattern in the comment, matching any character. It appears that gsub is faster by an order of magnitude, and matching any character is faster than matching letters.
library(microbenchmark)
set.seed(1)
##create sample dataset
x <- apply(
  replicate(100, sample(c(LETTERS[1:3], ""), 10, replace=TRUE)),
  2, paste0, collapse="")

## benchmark
xm <- microbenchmark(
  SAPPLY      = sapply(strsplit(x, ''), function(x) paste0(rle(x)$values, collapse='')),
  GSUB.LETTER = gsub("([A-Za-z])\\1+", "\\1", x),
  GSUB.ANY    = gsub("(.)\\1+", "\\1", x)
)
##print results
print(xm)
# Unit: milliseconds
#          expr       min        lq    median        uq       max
# 1    GSUB.ANY  1.433873  1.509215  1.562193  1.664664  3.324195
# 2 GSUB.LETTER  1.940916  2.059521  2.108831  2.227435  3.118152
# 3      SAPPLY 64.786782 67.519976 68.929285 71.164052 77.261952
##boxplot of times
boxplot(xm)
##plot with ggplot2
library(ggplot2)
qplot(y=time, data=xm, colour=expr) + scale_y_log10()
x <- c('AAAAABBBBB', 'ABCAAABBBDDD')
sapply(strsplit(x, ''), function(x) paste0(rle(x)$values, collapse=''))
## [1] "AB" "ABCABD"

How do I get Haskell to output numbers NOT in scientific notation?

I have some items that I want to partition into a number of buckets, such that each bucket is some fraction larger than the last.
items = 500
chunks = 5
increment = 0.20
{- find the proportions -}
sizes = take chunks (iterate (+increment) 1)
base = sum sizes / items
buckets = map (base *) sizes
main = print buckets
I'm sure there is a mathematically more elegant way to do this, but that's not my question.
The last step always prints the result in scientific notation.
How do I get plain decimal output? I've looked at the Numeric package but I'm getting nowhere fast.
> putStrLn $ Numeric.showFFloat Nothing 1e40 ""
10000000000000000000000000000000000000000.0
Try printf. e.g.:
> import Text.Printf
> printf "%d\n" (23::Int)
23
> printf "%s %s\n" "Hello" "World"
Hello World
> printf "%.2f\n" pi
3.14
