Reputation: 9392
I'm looking to speed along my discovery process here quite a bit, as this is my first venture into the world of lexical analysis. Maybe this is even the wrong path. First, I'll describe my problem:
I've got very large properties files (on the order of 1,000 properties) which, when distilled, really come down to about 15 important properties; the rest can be generated or rarely ever change.
So, for example:
general {
    name = myname
    ip = 127.0.0.1
}
component1 {
    key = value
    foo = bar
}
This is the type of format I want to create, which would then be used to tokenize something like:
property.${general.name}blah.home.directory = /blah
property.${general.name}.ip = ${general.ip}
property.${component1}.ip = ${general.ip}
property.${component1}.foo = ${component1.foo}
into
property.mynameblah.home.directory = /blah
property.myname.ip = 127.0.0.1
property.component1.ip = 127.0.0.1
property.component1.foo = bar
Lexical analysis and tokenization sound like my best route, but this is a very simple form of it. It's a simple grammar and a simple substitution, and I'd like to make sure that I'm not bringing a sledgehammer to knock in a nail.
I could create my own lexer and tokenizer, or ANTLR is a possibility, but I don't like re-inventing the wheel and ANTLR sounds like overkill.
I'm not familiar with compiler techniques, so pointers in the right direction & code would be most appreciated.
Note: I can change the input format.
Upvotes: 12
Views: 15414
Reputation: 19769
There's an excellent article on Using Regular Expressions for Lexical Analysis at effbot.org.
Adapting the tokenizer to your problem:
import re
token_pattern = r"""
(?P<identifier>[a-zA-Z_][a-zA-Z0-9_]*)
|(?P<integer>[0-9]+)
|(?P<dot>\.)
|(?P<open_variable>[$][{])
|(?P<open_curly>[{])
|(?P<close_curly>[}])
|(?P<newline>\n)
|(?P<whitespace>\s+)
|(?P<equals>[=])
|(?P<slash>[/])
"""
token_re = re.compile(token_pattern, re.VERBOSE)
class TokenizerException(Exception): pass
def tokenize(text):
    pos = 0
    while True:
        m = token_re.match(text, pos)
        if not m: break
        pos = m.end()
        tokname = m.lastgroup
        tokvalue = m.group(tokname)
        yield tokname, tokvalue
    if pos != len(text):
        raise TokenizerException('tokenizer stopped at pos %r of %r' % (
            pos, len(text)))
To test it, we do:
stuff = r'property.${general.name}.ip = ${general.ip}'
stuff2 = r'''
general {
    name = myname
    ip = 127.0.0.1
}
'''
print ' stuff '.center(60, '=')
for tok in tokenize(stuff):
    print tok
print ' stuff2 '.center(60, '=')
for tok in tokenize(stuff2):
    print tok
which produces:
========================== stuff ===========================
('identifier', 'property')
('dot', '.')
('open_variable', '${')
('identifier', 'general')
('dot', '.')
('identifier', 'name')
('close_curly', '}')
('dot', '.')
('identifier', 'ip')
('whitespace', ' ')
('equals', '=')
('whitespace', ' ')
('open_variable', '${')
('identifier', 'general')
('dot', '.')
('identifier', 'ip')
('close_curly', '}')
========================== stuff2 ==========================
('newline', '\n')
('identifier', 'general')
('whitespace', ' ')
('open_curly', '{')
('newline', '\n')
('whitespace', '    ')
('identifier', 'name')
('whitespace', ' ')
('equals', '=')
('whitespace', ' ')
('identifier', 'myname')
('newline', '\n')
('whitespace', '    ')
('identifier', 'ip')
('whitespace', ' ')
('equals', '=')
('whitespace', ' ')
('integer', '127')
('dot', '.')
('integer', '0')
('dot', '.')
('integer', '0')
('dot', '.')
('integer', '1')
('newline', '\n')
('close_curly', '}')
('newline', '\n')
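One way the substitution step might then be layered on top of that token stream is sketched below. This is only a rough outline (the function names and the 'section.key' dictionary layout are mine, nothing standard): it assumes definitions shaped like stuff2 above, one key = value per line, and it expands an unresolved ${name} to the bare name, which matches the component1 example in the question.

def parse_definitions(text):
    # Collect 'section.key' -> value from blocks like:  general { name = myname }
    values = {}
    toks = [t for t in tokenize(text) if t[0] != 'whitespace']
    i = 0
    while i < len(toks):
        if toks[i][0] == 'newline':
            i += 1
            continue
        section = toks[i][1]        # e.g. 'general'
        i += 2                      # skip the section name and the '{'
        while toks[i][0] != 'close_curly':
            if toks[i][0] == 'newline':
                i += 1
                continue
            key = toks[i][1]
            i += 2                  # skip the key and the '='
            parts = []
            while toks[i][0] != 'newline':
                parts.append(toks[i][1])
                i += 1
            values['%s.%s' % (section, key)] = ''.join(parts)
        i += 1                      # skip the '}'
    return values

def substitute(template, values):
    # Rebuild the template text, replacing ${name} references as we go.
    out = []
    toks = list(tokenize(template))
    i = 0
    while i < len(toks):
        if toks[i][0] == 'open_variable':
            i += 1
            parts = []
            while toks[i][0] != 'close_curly':
                parts.append(toks[i][1])
                i += 1
            name = ''.join(parts)
            out.append(values.get(name, name))   # unknown names expand to the bare name
            i += 1                               # skip the '}'
        else:
            out.append(toks[i][1])
            i += 1
    return ''.join(out)

values = parse_definitions(stuff2)
print(substitute(stuff, values))    # property.myname.ip = 127.0.0.1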
Upvotes: 14
Reputation: 993
The syntax you provide seems similar to the Mako template engine. I think you could give it a try; it has a rather simple API.
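If you go that route, a minimal sketch of the idea might look like this; the Section class is just a placeholder of mine so that ${general.name} resolves by attribute lookup, and any object carrying the right attributes would work as the render context:

from mako.template import Template

# Small holder so ${general.name} and ${general.ip} resolve via attribute access.
class Section(object):
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

template = Template("property.${general.name}.ip = ${general.ip}")
print(template.render(general=Section(name='myname', ip='127.0.0.1')))
# -> property.myname.ip = 127.0.0.1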
Upvotes: 1
Reputation: 46479
A simple DFA works well for this. You only need a few states:
1. Looking for ${
2. Seen ${, looking for at least one valid character forming the name
3. Seen a name, looking for }

If the properties file is order agnostic, you might want a two-pass processor to verify that each name resolves correctly.
Of course, you then need to write the substitution code, but once you have a list of all the names used, the simplest possible implementation is a find/replace on ${name} with its corresponding value.
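A minimal sketch of both pieces in Python (the state and function names are just mine for illustration, not part of any library):

def find_names(text):
    # One pass over the text, collecting every name that appears inside ${...},
    # using the three states described above.
    names = []
    state = 'TEXT'
    start = 0
    for i, ch in enumerate(text):
        if state == 'TEXT':
            if ch == '$':
                state = 'DOLLAR'
        elif state == 'DOLLAR':
            if ch == '{':
                state, start = 'NAME', i + 1
            else:
                state = 'TEXT'
        elif state == 'NAME':
            if ch == '}':
                names.append(text[start:i])
                state = 'TEXT'
    return names

def substitute(text, values):
    # Simplest possible implementation: find/replace each ${name} with its value.
    # The membership test is also where a "does this name resolve?" check belongs.
    for name in find_names(text):
        if name in values:
            text = text.replace('${' + name + '}', values[name])
    return text

print(substitute('property.${general.name}.ip = ${general.ip}',
                 {'general.name': 'myname', 'general.ip': '127.0.0.1'}))
# -> property.myname.ip = 127.0.0.1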
Upvotes: 4
Reputation: 2748
For something as simple as your format seems to be, I think a full-on parser/lexer would be way overkill. A combination of regexes and string manipulation should do the trick.
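A rough sketch of that idea (the expand name and the values dict are placeholders of mine; unknown names are simply left as-is here):

import re

def expand(line, values):
    # values maps dotted names such as 'general.ip' to their replacement text.
    return re.sub(r'\$\{([^}]*)\}',
                  lambda m: values.get(m.group(1), m.group(0)),
                  line)

print(expand('property.${general.name}.ip = ${general.ip}',
             {'general.name': 'myname', 'general.ip': '127.0.0.1'}))
# -> property.myname.ip = 127.0.0.1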
Another idea is to change the file to something like json or xml and use an existing package.
Upvotes: 2
Reputation: 83250
If you can change the format of the input files, then you could use a parser for an existing format, such as JSON.
However, from your problem statement it sounds like that isn't the case. So if you want to create a custom lexer and parser, use PLY (Python Lex/Yacc). It is easy to use and works the same as lex/yacc.
Here is a link to an example of a calculator built using PLY. Note that everything starting with t_ is a lexer rule (defining a valid token) and everything starting with p_ is a parser rule that defines a production of the grammar.
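To give a feel for the shape, here is a minimal lexer-only sketch for the ${...} references (the token names and rule set are mine and far from a complete grammar; the p_ parser rules are left out entirely):

import ply.lex as lex

# Token names PLY should produce; p_ grammar rules would then consume them.
tokens = ('NAME', 'OPEN_VAR', 'CLOSE_CURLY', 'EQUALS', 'DOT')

t_OPEN_VAR    = r'\$\{'
t_CLOSE_CURLY = r'\}'
t_EQUALS      = r'='
t_DOT         = r'\.'
t_NAME        = r'[a-zA-Z_][a-zA-Z0-9_]*'
t_ignore      = ' \t'

def t_error(t):
    print("Illegal character %r" % t.value[0])
    t.lexer.skip(1)

lexer = lex.lex()
lexer.input('property.${general.name}.ip = ${general.ip}')
for tok in lexer:
    print(tok)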
Upvotes: 1