Will

Reputation: 75615

finding long repeated substrings in a massive string

I naively imagined that I could build a suffix trie where I keep a visit-count for each node, and then the deepest nodes with counts greater than one are the result set I'm looking for.

I have a really really long string (hundreds of megabytes). I have about 1 GB of RAM.

This is why building a suffix trie with counting data is too inefficient space-wise to work for me. To quote Wikipedia's Suffix tree:

storing a string's suffix tree typically requires significantly more space than storing the string itself.

The large amount of information in each edge and node makes the suffix tree very expensive, consuming about ten to twenty times the memory size of the source text in good implementations. The suffix array reduces this requirement to a factor of four, and researchers have continued to find smaller indexing structures.

And those are Wikipedia's comments on the tree, not the trie.

How can I find long repeated sequences in such a large amount of data, and in a reasonable amount of time (e.g. less than an hour on a modern desktop machine)?

(Some Wikipedia links to avoid people posting them as the 'answer': Algorithms on strings and especially Longest repeated substring problem ;-) )

Upvotes: 11

Views: 8862

Answers (9)

Amit Tyagi

Reputation: 320

What about a simple program like this:

S = "ABAABBCCAAABBCCM"

def findRepeat(S):
    n = len(S)
    # The longest possible (non-overlapping) repeat is half the string.
    msn = n // 2
    # Start with the maximum length and shrink until a repeat is found.
    for i in range(msn, 1, -1):
        substr = findFixedRepeat(S, i)
        if substr:
            return substr
    print('No repeated string')
    return None

def findFixedRepeat(s, n):
    # Slide a length-n window over s and look for a later,
    # non-overlapping occurrence of the same substring.
    for i in range(len(s) - n + 1):
        ss = s[i:i + n]
        if s.find(ss, i + n) >= 0:
            return ss
    return None

print(findRepeat(S))

Upvotes: 2

Will

Reputation: 75615

The effective way to do this is to create an index of the substrings and sort them. This is an O(n lg n) operation.

BWT compression does this step, so it's a well-understood problem, and there are radix-sort and suffix-sort implementations (some claiming O(n)) to make it as efficient as possible. It still takes a long time, perhaps several seconds for large texts.

If you want to use utility code, C++ std::stable_sort() performs much better than std::sort() for natural language (and much faster than C's qsort(), but for different reasons).

Then visiting each item to see the length of the common prefix it shares with its neighbours is O(n).
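
For illustration, here is a minimal sketch of the sort-and-scan idea in Python. The naive key=lambda i: s[i:] suffix sort below copies whole suffixes and would be hopeless at hundreds of megabytes; a real implementation would swap in a proper suffix-array construction:

def longest_repeated(s):
    # Sort suffix start positions by suffix content; any repeated
    # substring shows up as a shared prefix of neighbouring suffixes.
    suffixes = sorted(range(len(s)), key=lambda i: s[i:])
    best = ""
    for a, b in zip(suffixes, suffixes[1:]):
        # Measure the common prefix of two neighbouring suffixes.
        k = 0
        while a + k < len(s) and b + k < len(s) and s[a + k] == s[b + k]:
            k += 1
        if k > len(best):
            best = s[a:a + k]
    return best

print(longest_repeated("ABAABBCCAAABBCCM"))  # -> "AABBCC"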

Upvotes: 6

Mr.Ree

Reputation: 8418

Just a belated thought that occurred to me...

Depending on your OS/environment (e.g. 64-bit pointers and mmap() available), you might be able to create a very large suffix tree on disk through mmap(), and then keep a cached, most-frequently-accessed subset of that tree in memory.
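
A toy illustration of reading nodes out of such a mapping (the file name suffix_tree.bin and the 32-byte node record are invented here; the OS page cache does the 'keep the hot subset in memory' part):

import mmap

NODE_SIZE = 32  # assumed fixed-size node record

with open("suffix_tree.bin", "rb") as f:
    tree = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

def read_node(index):
    # Only the pages actually touched get paged in; frequently
    # visited nodes stay resident in the OS page cache.
    offset = index * NODE_SIZE
    return tree[offset:offset + NODE_SIZE]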

Upvotes: 0

Will

Reputation: 75615

Answering my own question:

Given that every long match contains shorter matches, you can trade multiple passes for RAM: first find the shorter matches, then see if you can 'grow' them.

The literal approach is to build a trie (with counts in each node) of all sequences of some fixed length in the data. You then cull all the nodes that don't match your criteria (e.g. the longest match). Then do a subsequent pass through the data, building the trie out deeper, but not broader. Repeat until you've found the longest repeated sequence(s).
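
A rough sketch of the multi-pass idea, using a plain Python set of surviving substrings in place of the trie (a real trie would share prefixes and use far less memory; this just shows the control flow):

from collections import Counter

def extend_pass(data, survivors, k):
    # Count every length-k window whose (k-1)-prefix survived the
    # previous pass; keep only those that occur more than once.
    counts = Counter()
    for i in range(len(data) - k + 1):
        chunk = data[i:i + k]
        if survivors is None or chunk[:-1] in survivors:
            counts[chunk] += 1
    return {s for s, c in counts.items() if c > 1}

def longest_repeats(data):
    survivors, k = None, 2
    while True:
        extended = extend_pass(data, survivors, k)
        if not extended:
            return survivors  # the repeats of the previous length
        survivors, k = extended, k + 1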

A good friend suggested using hashing. By hashing the fixed-length character sequence starting at each character, you reduce the problem to finding duplicate hash values (and verifying the duplicates, since hashing is lossy). If you allocate an array the length of the data to hold the hash values, you can do interesting things; e.g., to see whether a match is longer than your fixed-length pass over the data, you can just compare the sequences of hashes rather than regenerating them. Etc.
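
A sketch of that hashing pass for one fixed window length, using a dictionary rather than the flat hash array described above, for brevity (the window length k is a tuning parameter, and a rolling hash such as Rabin-Karp would avoid re-hashing every window from scratch; the regrouping step is needed because hashes can collide):

from collections import defaultdict

def candidate_repeats(data, k):
    # Map each window's hash to the positions where it starts.
    positions_by_hash = defaultdict(list)
    for i in range(len(data) - k + 1):
        positions_by_hash[hash(data[i:i + k])].append(i)

    for positions in positions_by_hash.values():
        if len(positions) < 2:
            continue
        # Hashing is lossy: regroup by the actual substring to verify.
        by_text = defaultdict(list)
        for p in positions:
            by_text[data[p:p + k]].append(p)
        for substr, starts in by_text.items():
            if len(starts) > 1:
                yield substr, starts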

Upvotes: 2

FryGuy

Reputation: 8744

You could solve this using divide and conquer. I think this should be the same algorithmic complexity as using a trie, but perhaps less efficient in practice:

void LongSubstrings(string data, string prefix, IEnumerable<int> positions)
{
    // Bucket each position by the character that follows the current prefix.
    Dictionary<char, DiskBackedBuffer> buffers = new Dictionary<char, DiskBackedBuffer>();
    foreach (int position in positions)
    {
        if (position >= data.Length)    // this suffix ends at the prefix
            continue;
        char nextChar = data[position];
        if (!buffers.ContainsKey(nextChar))
            buffers[nextChar] = new DiskBackedBuffer();
        buffers[nextChar].Add(position + 1);
    }

    foreach (char c in buffers.Keys)
    {
        // Two or more suffixes share this extension: recurse one level deeper.
        if (buffers[c].Count > 1)
            LongSubstrings(data, prefix + c, buffers[c]);
        else if (buffers[c].Count == 1)
            Console.WriteLine("Unique sequence: {0}", prefix + c);
    }
}

void LongSubstrings(string data)
{
    LongSubstrings(data, "", Enumerable.Range(0, data.Length));
}

After this, you would need to implement DiskBackedBuffer as a list of numbers that, once it grows past a certain size, writes itself out to a temporary file on disk and reads back from disk when enumerated.
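
A rough Python sketch of such a buffer (the 65536-entry in-memory threshold and the 64-bit packing are arbitrary choices for illustration, and interleaving iteration with further adds is not handled):

import struct
import tempfile

class DiskBackedBuffer:
    """Append-only list of ints that spills to a temp file when large."""

    def __init__(self, max_in_memory=65536):
        self.items = []
        self.max_in_memory = max_in_memory
        self.spill = None  # temporary file holding overflow
        self.count = 0

    def add(self, value):
        if len(self.items) >= self.max_in_memory:
            if self.spill is None:
                self.spill = tempfile.TemporaryFile()
            # Flush the in-memory batch to disk as packed 64-bit ints.
            self.spill.write(struct.pack("<%dq" % len(self.items), *self.items))
            self.items = []
        self.items.append(value)
        self.count += 1

    def __len__(self):
        return self.count

    def __iter__(self):
        # Replay spilled values from disk, then the in-memory tail.
        if self.spill is not None:
            self.spill.seek(0)
            while True:
                chunk = self.spill.read(8 * 4096)
                if not chunk:
                    break
                yield from struct.unpack("<%dq" % (len(chunk) // 8), chunk)
        yield from self.items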

Upvotes: 2

Steve Steiner

Reputation: 5359

Can you solve your problem by building a suffix array instead? Otherwise you'll likely need to use one of the disk-based suffix trees mentioned in the other answers.

Upvotes: 0

orip

Reputation: 75437

You could look at disk-based suffix trees. I found this suffix tree implementation library through Google, plus a bunch of articles that could help you implement one yourself.

Upvotes: 3

Eclipse

Reputation: 45493

The easiest way might just be to plunk down the $100 for a bunch more RAM. Otherwise, you'll likely have to look at disk-backed structures for holding your suffix tree.

Upvotes: -1

Charlie Martin

Reputation: 112346

Is this text with word breaks? Then I'd suspect you want a variation of keyword-in-context: make n copies of each n-word line, each copy starting at a successive word; sort the whole lot alphabetically; look for repeats.
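
A small sketch of that rotation trick (the sample lines are made up):

import os

def kwic_rotations(lines):
    # One entry per word: the line's tail starting at that word.
    rotations = []
    for line in lines:
        words = line.split()
        for i in range(len(words)):
            rotations.append(" ".join(words[i:]))
    rotations.sort()  # sorting brings repeated phrases together
    return rotations

# Adjacent entries that share a leading run of words reveal repeats.
rots = kwic_rotations(["the cat sat on the mat", "a cat sat on a log"])
for prev, cur in zip(rots, rots[1:]):
    shared = os.path.commonprefix([prev, cur])
    if " " in shared:  # at least one whole word in common
        print(shared)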

If it's a single long honking string, like say bioinformatic DNA sequences, then you want to build something like your trie on disk: build a record for each character with a disk offset for the next nodes. I'd have a look at Volume 3 of Knuth, section 5.4, "external sorting".
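
For the on-disk records, fixed-size child-offset tables are one possibility; this sketch invents the layout and assumes a 4-symbol DNA alphabet:

import struct

ALPHABET = "ACGT"
# One 64-bit child offset per symbol; 0 means "no child".
RECORD = struct.Struct("<4q")

def read_children(f, offset):
    # Load one node record and map each symbol to its child's offset.
    f.seek(offset)
    return dict(zip(ALPHABET, RECORD.unpack(f.read(RECORD.size))))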

Upvotes: 1
