dimitris_ps

Reputation: 5951

Vectorized pattern matching returning the pattern in R

My problem is mostly one of efficiency.

I have a vector of patterns that I would like to match against a vector x.

The end result should return the pattern that matches each element of the vector. A second criterion: if several patterns match a given element of x, return the first pattern that matches.

For example, let's say the vector of patterns is:

patterns <- c("[0-9]{2}[a-zA-Z]", "[0-9][a-zA-Z] ", " [a-zA-Z]{3} ")

and the vector x is:

x <- c("abc 123ab abc", "abc 123 abc ", "a", "12a ", "1a ")

The end result would be:

customeRExp(patterns, x)
[1] "[0-9]{2}[a-zA-Z]" " [a-zA-Z]{3} "
[3]  NA                "[0-9]{2}[a-zA-Z]"
[5] "[0-9][a-zA-Z] "

This is what I have so far:

customeRExp <- function(pattern, x){
                        # logical matrix: row i flags which elements of x match pattern[i]
                        m <- matrix(NA, ncol=length(x), nrow=length(pattern))
                        for(i in seq_along(pattern)){
                            m[i, ] <- grepl(pattern[i], x)}
                        # index of the first matching pattern per column;
                        # min(integer(0)) is Inf (warning suppressed), which indexes to NA
                        indx <- suppressWarnings(apply(m, 2, function(y) min(which(y))))
                        pattern[indx]
}

customeRExp(patterns, x)

Which correctly returns:

[1] "[0-9]{2}[a-zA-Z]" " [a-zA-Z]{3} "    NA                
[4] "[0-9]{2}[a-zA-Z]" "[0-9][a-zA-Z] "

The problem is that my dataset is huge, and the list of patterns is also quite long.

Is there a more efficient way of doing the same?

Upvotes: 2

Views: 802

Answers (3)

alexis_laz

Reputation: 13122

Conceptually similar to nrussell's approach, we can discard elements of "x" that have already been matched from subsequent greps:

ff = function(x, p)
{
    # ans[i] holds the index of the first pattern matching x[i] (NA while unmatched)
    ans = rep_len(NA_integer_, length(x))
    for(i in seq_along(p)) {
        nas = which(is.na(ans))               # elements with no match so far
        ans[nas[grepl(p[i], x[nas])]] = i     # grep only the still-unmatched ones
    }
    p[ans]
}
ff(x, patterns)
#[1] "[0-9]{2}[a-zA-Z]" " [a-zA-Z]{3} "    NA                 "[0-9]{2}[a-zA-Z]" "[0-9][a-zA-Z] "

Subsetting "x" in each iteration might be more costly than it looks especially if subsetting ends up ignoring only a small amount of "x" 's elements where -in that case- we end up copying a large "x" (few elements shorter) and, yet still, greping a large "x". It can be more efficient, though, if (1) a large fraction of "x", indeed, has a match and, (2) if a significant fraction of "x" is matched in each (and, probably, early) iteration. Using nrussell's example, we have such a case where, indeed, many elements of "x" are discarded in each iteration along "patterns":

microbenchmark::microbenchmark(ff(x2, p2), first_match(x2, p2), customeRExp(p2, x2), times = 25)
#Unit: milliseconds
#                expr       min        lq      mean    median        uq       max neval cld
#          ff(x2, p2)  299.7235  306.0875  312.9303  308.0544  320.6126  333.9144    25 a  
# first_match(x2, p2) 1581.4085 1606.3984 1642.4471 1643.0671 1661.9499 1734.9066    25  b 
# customeRExp(p2, x2) 3464.4267 3515.7499 3623.0920 3611.0809 3694.3931 3849.0399    25   c

all.equal(ff(x2, p2), customeRExp(p2, x2))
#[1] TRUE
all.equal(ff(x2, p2), first_match(x2, p2))
#[1] TRUE

nrussell's approach still does the minimal work needed even in edge cases (where the other two will add more computational time than necessary).
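For example, with a hypothetical input like x_bad below, where nothing matches, ff never gets to discard anything and still runs one full grepl pass per pattern:

# hypothetical worst case: no element matches any pattern, so the
# early-discard never shrinks the vector being grepped
x_bad <- rep("zzzz", length(x2))
ff(x_bad, p2)    # one full pass over x_bad per pattern; every result is NA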

Upvotes: 3

nrussell

Reputation: 18602

My default approach to speeding up loops like the above is to rewrite them in C++. Here's a quick attempt using Boost Xpressive:

// [[Rcpp::depends(BH)]]
#include <Rcpp.h>
#include <boost/xpressive/xpressive.hpp>

namespace xp = boost::xpressive;

// [[Rcpp::export]]
Rcpp::CharacterVector
first_match(Rcpp::CharacterVector x, Rcpp::CharacterVector re) {
    R_xlen_t nx = x.size(), nre = re.size(), i = 0, j = 0;
    Rcpp::CharacterVector result(nx, NA_STRING);
    std::vector<xp::sregex> vre(nre);

    // compile each pattern once, up front
    for ( ; j < nre; j++) {
        vre[j] = xp::sregex::compile(std::string(re[j]));
    }

    // for each string, stop at the first pattern that matches
    for ( ; i < nx; i++) {
        for (j = 0; j < nre; j++) {
            if (xp::regex_search(std::string(x[i]), vre[j])) {
                result[i] = re[j];
                break;
            }
        }
    }

    return result;
}

The point of this approach is to save unnecessary calculations by breaking as soon as we find a matching regular expression.
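Assuming the code above is saved to a file (the name first_match.cpp below is just for illustration), it can be compiled and called from R with Rcpp::sourceCpp:

library(Rcpp)
sourceCpp("first_match.cpp")   # compiles the C++ above and exposes first_match() to R
first_match(x, patterns)
#[1] "[0-9]{2}[a-zA-Z]" " [a-zA-Z]{3} "    NA                 "[0-9]{2}[a-zA-Z]" "[0-9][a-zA-Z] "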


The performance increase isn't earth-shattering (~40%), but it is an improvement over your current function. Here is a test using larger versions of your sample data:

x2 <- rep(x, 5000)
p2 <- rep(patterns, 100)

all.equal(first_match(x2, p2), customeRExp(p2, x2))
#[1] TRUE

microbenchmark::microbenchmark(
    first_match(x2, p2),
    customeRExp(p2, x2),
    times = 50
)
# Unit: seconds
#                 expr      min       lq     mean   median       uq      max neval
#  first_match(x2, p2) 1.743407 1.780649 1.900954 1.836840 1.931783 2.544041    50
#  customeRExp(p2, x2) 2.368621 2.459748 2.681101 2.566717 2.824887 3.553025    50

Another option would be to look into the stringi package, which generally outperforms base R by a good margin.
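A minimal sketch of that idea, combining the early-discard loop from alexis_laz's answer above with stringi::stri_detect_regex (a sketch only, not benchmarked here):

library(stringi)

stringi_first_match <- function(pattern, x) {
    # same early-discard strategy as ff(), but using stringi's regex engine
    ans <- rep_len(NA_integer_, length(x))
    for (i in seq_along(pattern)) {
        nas <- which(is.na(ans))
        ans[nas[stri_detect_regex(x[nas], pattern[i])]] <- i
    }
    pattern[ans]
}

stringi_first_match(patterns, x)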

Upvotes: 3

shayaa

Reputation: 2797

library(purrr)
library(stringr)
# for each element of x, test it against every pattern at once
bool_results <- x %>% map(str_detect, patterns)

This returns, for each element of x, a logical vector indicating which patterns matched:

[[1]]
[1]  TRUE FALSE FALSE

[[2]]
[1] FALSE FALSE FALSE

[[3]]
[1] FALSE FALSE FALSE

[[4]]
[1]  TRUE  TRUE FALSE

[[5]]
[1] FALSE  TRUE FALSE

To extract the patterns associated with each element, you can

lapply(bool_results, function(hits) patterns[hits])

which gives

[[1]]
[1] "[0-9]{2}[a-zA-Z]"

[[2]]
character(0)

[[3]]
character(0)

[[4]]
[1] "[0-9]{2}[a-zA-Z]" "[0-9][a-zA-Z] "  

[[5]]
[1] "[0-9][a-zA-Z] "

Upvotes: 3
