aheuertz

Reputation: 163

Python regex negation within regex

Given:

ABC
content 1
123
content 2
ABC
content 3
XYZ

Is it possible to create a regex that matches the shortest version of "ABC[\W\w]+?XYZ"?

Essentially, I'm looking for "ABC followed by any characters, terminating with XYZ, but don't match if ABC occurs in between" (but think of ABC as potentially a regex itself, since it would not always be a fixed length...so ABC or ABcC could also match).

So, more generally: REGEX1 followed by any character and terminated by REGEX2, not matching if REGEX1 occurs in between.

In this example, I would not want the first 4 lines.
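
To make that concrete, here is roughly what the naive pattern does on the example (assuming the block above is stored in a string s):

import re

s = """ABC
content 1
123
content 2
ABC
content 3
XYZ"""

# The lazy quantifier still anchors at the FIRST ABC it finds, so it grabs all 7 lines:
print(re.search(r'ABC[\W\w]+?XYZ', s).group())
# -> 'ABC\ncontent 1\n123\ncontent 2\nABC\ncontent 3\nXYZ'
# What I actually want is just 'ABC\ncontent 3\nXYZ'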

(I'm sure this explanation could potentially need...further explanation haha)

EDIT:

Alright, I see the need for further explanation now! Thanks for the suggestions thus far. I'll at least give you all more to think about while I start looking into how each of your proposed solutions can be applied to my problem.

Proposal 1: Reverse the string contents and the regex.

This is certainly a very fun hack that solves the problem as I explained it. In simplifying the issue, though, I failed to mention that the same thing can happen in reverse, because the ending signature can also occur later on (and does in my specific situation). That introduces the problem illustrated below:

ABC
content 1
123
content 2
ABC
content 3
XYZ
content 4
MNO
content 5
XYZ

In this instance, I would search for something like "ABC through XYZ", meaning to catch [ABC, content 3, XYZ]...but accidentally catching [ABC, content 1, 123, content 2, ABC, content 3, XYZ]. Reversing that would catch [ABC, content 3, XYZ, content 4, MNO, content 5, XYZ] instead of the [ABC, content 3, XYZ] that we want. The point is to make this as generalized as possible, because I will also be searching for things that could have the same starting signature (the regex "ABC" in this case) but different ending signatures.

If there is a way to build the regexes so that they encapsulate this sort of limitation, it could prove much easier to just reference that any time I build a regex to search for in this type of string, rather than creating a custom search algorithm that deals with it.
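
For illustration, here is roughly what the reversal hack does on that second example (reversing both the string and the pattern; the hand-reversed pattern here is just a sketch):

import re

s = """ABC
content 1
123
content 2
ABC
content 3
XYZ
content 4
MNO
content 5
XYZ"""

# Reverse the string and the (literal) pattern, search, then un-reverse the match.
m = re.search(r'ZYX[\W\w]+?CBA', s[::-1])
print(m.group()[::-1])
# -> 'ABC\ncontent 3\nXYZ\ncontent 4\nMNO\ncontent 5\nXYZ'
# It latches onto the LAST XYZ instead of the innermost ABC...XYZ pair.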

Proposal 2: A+B+C+[^A]+[^B]+[^C]+XYZ with IGNORECASE flag

This seems nice in the case where ABC is a fixed string. Think of it as a regex in itself, though. For example:

Hello!GoodBye!Hello.Later.

VERY simplified version of what I'm trying to do. I would want "Hello.Later." given the start regex Hello[!.] and the end regex Later[!.]. Running something simple like Hello[!.].*Later[!.] would grab the entire string, but I'm looking to say that if the start regex Hello[!.] appears again between the first starting instance found and the first ending instance found, the earlier start should be ignored in favor of the later one.

The conversation below this proposal indicates that I might be running into the limitations of regular languages, similar to the parenthesis-matching problem (Google it, it's fun to think about). The purpose of this post is to find out whether I do in fact have to resort to writing an underlying algorithm that handles the issue I'm encountering. I would very much like to avoid that if possible (the simple example I gave above is pretty easy to design a finite state machine for...I hope that holds as it grows slightly more complex).

Proposal 3: ABC(?:(?!ABC).)*?XYZ with DOTALL flag

I like the idea of this if it actually allows ABC to be a regex. I'll have to explore it when I get into the office tomorrow. Nothing looks too out of the ordinary at first glance, but I'm entirely new to Python regex (and new to actually applying regexes in code instead of just in theory homework).
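
For the record, a quick sketch of Proposal 3 with an actual regex as the starting signature, reusing the Hello/Later example from Proposal 2 (the start regex simply gets repeated inside the lookahead):

import re

s = 'Hello!GoodBye!Hello.Later.'

# Only consume a character if the start regex does NOT match at that position.
pattern = re.compile(r'Hello[!.](?:(?!Hello[!.]).)*?Later[!.]', re.DOTALL)
print(pattern.search(s).group())
# -> 'Hello.Later.'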

Upvotes: 4

Views: 2484

Answers (3)

huon

Reputation: 102096

There is another method of solving this problem: not trying to do it in one regex. You could split the string by the first regex, and then use the second one on the last part.

Code is the best explanation:

s = """ABC
content 1
123
content 2
ABC
content 3
XYZ
content 4
XYZ"""

# capturing groups to preserve the matched section
prefix = re.compile('(ABC)')
suffix = re.compile('(XYZ)')

# prefix.split(s) == ['', 'ABC', [..], 'ABC', '\ncontent 3\nXYZ\ncontent 4\nXYZ']
#                          prefixmatch ^^^^^  ^^^^^^^^^^^^ rest ^^^^^^^^^^^^^^^^
prefixmatch, rest = prefix.split(s)[-2:]

# suffix.split(rest,1) == ['\ncontent 3\n', 'XYZ', '\ncontent 4\nXYZ']
#                          ^^ interior ^^   ^^^^^ suffixmatch
interior, suffixmatch = suffix.split(rest,1)[:2]

# join the parts up.
result = '%s%s%s' % (prefixmatch, interior, suffixmatch)

# result == 'ABC\ncontent 3\nXYZ'

Some points:

  • there should be appropriate error handling (even just try: ... except ValueError: ... around the whole thing) for the case when either regex doesn't match at all and the unpacking fails.
  • this assumes that the desired segment occurs immediately after the last occurrence of the prefix; if not, you can iterate through the results of prefix.split(s) two at a time (starting at index 1) and do the same splitting trick with the suffix to find all the matches, as sketched below.
  • this is likely to be reasonably inefficient, since it creates quite a few intermediate data structures.
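
A rough sketch of that iteration, reusing s, prefix and suffix from the snippet above (the helper name is just for illustration):

def all_matches(s, prefix, suffix):
    # prefix.split(s) alternates [before, prefixmatch, following, prefixmatch, following, ...]
    parts = prefix.split(s)
    results = []
    for i in range(1, len(parts) - 1, 2):
        prefixmatch, rest = parts[i], parts[i + 1]
        pieces = suffix.split(rest, 1)
        if len(pieces) == 3:  # suffix found before the next prefix occurrence
            results.append('%s%s%s' % (prefixmatch, pieces[0], pieces[1]))
    return results

# all_matches(s, prefix, suffix) == ['ABC\ncontent 3\nXYZ']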

Upvotes: 0

cblab

Reputation: 1935

Edit

So after reading your further explanations, I would say that my previous proposal, as well as MRAB's, are somewhat similar and won't be of any help here. Your problem is actually the problem of nested structures.

Think of your 'prefix' and 'suffix' as symbols. You could easily replace them with an opening and a closing parenthesis or whatever, and what you want is to be able to match only the smallest (and thus deepest) pair ...

For example, if your prefix is 'ABC' and your suffix is 'XYZ':

ABChello worldABCfooABCbarXYZ

You want to get only ABCbarXYZ.

It's the same as if the prefix were ( and the suffix were ); given the string:

(hello world(foo(bar)

It would ideally match only (bar) ...

You definitely have to use a context-free grammar (like programming languages do: the C grammar, the Python grammar) and a parser, or roll your own using regexes together with the iterating and storing mechanisms of your programming language.

But that's not possible with regular expressions alone. They will probably help in your algorithm, but they just aren't designed to handle this by themselves; they're not the right tool for the job ... you cannot inflate tires with a screwdriver. So you will have to use some external mechanism, not a complicated one, to store the context: your position in the nesting stack. Using your regular expressions within each single context may still be possible.

Finite state machines are finite, whereas nested structures can have arbitrary depth, which would require the automaton to grow arbitrarily; such languages are therefore not regular.

Since recursion in a grammar allows the definition of nested syntactic structures, any language (including any programming language) which allows nested structures is a context-free language, not a regular language. For example, the set of strings consisting of balanced parentheses [like a LISP program with the alphanumerics removed] is a context-free language (see here).
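
To sketch the kind of external mechanism this implies (the helper and its name are made up, not part of the original answer): scan for both symbols at once and pair each suffix with the closest preceding prefix, which is what the top of a nesting stack would hold:

import re

def innermost_spans(text, prefix, suffix):
    # Tokenize the input into prefix/suffix occurrences; remember only the most
    # recent open prefix, so a suffix always closes the deepest (smallest) pair.
    token = re.compile('(%s)|(%s)' % (prefix, suffix))
    last_prefix = None
    for m in token.finditer(text):
        if m.group(1) is not None:
            last_prefix = m.start()
        elif last_prefix is not None:
            yield text[last_prefix:m.end()]
            last_prefix = None

# list(innermost_spans('(hello world(foo(bar)', r'\(', r'\)')) == ['(bar)']
# list(innermost_spans('ABChello worldABCfooABCbarXYZ', 'ABC', 'XYZ')) == ['ABCbarXYZ']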

Former proposal (not relevant anymore)

If I do:

>>> s = """ABC
content 1
123
content 2
ABC
content 3
XYZ"""
>>> r = re.compile(r'A+B+C+[^A]+[^B]+[^C]+XYZ', re.I)
>>> re.findall(r,s)

I get

['ABC\ncontent 3\nXYZ']

Is that what you want?

Upvotes: 1

MRAB

Reputation: 20654

A regex solution would be ABC(?:(?!ABC).)*?XYZ with the DOTALL flag.
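
A quick usage sketch on the sample from the question (variable names are just for illustration):

import re

s = """ABC
content 1
123
content 2
ABC
content 3
XYZ"""

# (?:(?!ABC).)*? only consumes characters at positions where ABC does not start,
# so the match can never stretch back over an earlier ABC.
print(re.search(r'ABC(?:(?!ABC).)*?XYZ', s, re.DOTALL).group())
# -> 'ABC\ncontent 3\nXYZ'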

Upvotes: 7
