Paweł Rumian

Reputation: 3826

python - creating a dictionary from comma-separated lines containing nested values

I have a line in the following format:

line = 'A=15, B=8, C=false, D=[somevar, a=0.1, b=77, c=true]'

I would like to extract these values into a dictionary, getting this result:

{
'A': '15',
'B': '8',
'C': 'false',
'D': '[somevar, a=0.1, b=77, c=true]'
}

If it were not for the D value, I could have used this simple method:

result = dict(e.split('=') for e in line.split(', '))

But since D itself contains ', ' as a separator, I get a total mess instead:

{
'A': '15',
'B': '8',
'C': 'false',
'D': '[somevar',
'a': '0.1',
'b': '77',
'c': 'true]'
}

I would appreciate any advice. I have not tried regexps yet, but this has to be fast, as there are dozens of gigabytes of such lines, and I am afraid that a regexp would slow it down a lot...

EDIT: Benchmarks

I have wrapped most of the answers below into functions and used IPython's %timeit magic to measure the execution times.

The test file was created on tmpfs in RAM simply by doing:

 for i in {1..1000000}; do echo 'A=15, B=8, C=false, D=[somevar, a=0.1, b=77, c=true]' >> test_file; done

This is what the complete test program looked like:

import shlex
import re

def kalgasnik(line):
    lexer = shlex.shlex(line)
    lexer.wordchars += '.'
    values = [['']]
    stack = [values]
    for token in lexer:
        if token == ',':
            stack[-1] += [['']]
        elif token == '=':
            stack[-1][-1] += ['']
        elif token == '[':
            v = [['']]
            stack[-1][-1][-1] = v
            stack += [v]
        elif token == ']':
            sub = stack.pop()
            stack[-1][-1][-1] = {v[0]: v[1] if len(v) > 1 else None for v in sub}
        else:
            stack[-1][-1][-1] += token
    values = {v[0]: v[1] if len(v) > 1 else None for v in values}

    return values

def roberto(myline):
    mydict = {}
    parsecheck = {'(':1, '[':1, '{':1, ')':-1, ']':-1, '}':-1}
    parsecount = 0
    chargroup = ''
    myline = myline + ','
    for thischar in myline:
        parsecount += parsecheck.get(thischar, 0)
        if parsecount == 0:
            if thischar == '=':
                thiskey = chargroup.strip()
                chargroup = ''
            elif thischar == ',':
                mydict[thiskey] = chargroup
                chargroup = ''
            else:
                chargroup += thischar
        else:
            chargroup += thischar

    return mydict       

def xavier(line):
    regexp = r'(\w*)=(\[[^\]]*\]|[^,]*),?\s*'
    outdict = dict((match.group(1),match.group(2)) for match in re.finditer(regexp,line))

    return outdict

def wim(line):
    outdict = dict(x.split('=', 1) for x in shlex.split(line.replace("[", "'[").replace("]", "]'")))

    return outdict

def gorkypl(line):
    outdict = dict(e.split('=') for e in line.split(', '))

    return outdict

def run_test(method):
    with open('test_file', 'r') as infile:
        for line in infile:
            method(line)

And here are the results:

%timeit run_test(kalgasnik)
1 loops, best of 3: 3min 52s per loop

%timeit run_test(roberto)
1 loops, best of 3: 30.2 s per loop

%timeit run_test(xavier)
1 loops, best of 3: 12.1 s per loop

%timeit run_test(wim)
1 loops, best of 3: 2min 41s per loop

And, for the sake of comparison, the original (not working correctly) idea based purely on split():

%timeit run_test(gorkypl)
1 loops, best of 3: 8.27 s per loop

So, basically, Xavier's regexp-based solution is not only the most flexible but also the fastest, and it is not that much slower than the naive split()-based method.

Thank you all a lot!

Upvotes: 3

Views: 2487

Answers (5)

kalgasnik

Reputation: 3205

As a sample of unnecessary complexity:

import shlex
line = 'A=15, B=8, C=false, D=[somevar, a=0.1, b=77, c=[A=15, B=8, C=false, D=[somevar, a=0.1, b=77, c=true]]]'
lexer = shlex.shlex(line)
lexer.wordchars += '.'
values = [['']]
stack = [values]
for token in lexer:
    if token == ',':
        stack[-1] += [['']]
    elif token == '=':
        stack[-1][-1] += ['']
    elif token == '[':
        v = [['']]
        stack[-1][-1][-1] = v
        stack += [v]
    elif token == ']':
        sub = stack.pop()
        stack[-1][-1][-1] = {v[0]: v[1] if len(v) > 1 else None for v in sub}
    else:
        stack[-1][-1][-1] += token
values = {v[0]: v[1] if len(v) > 1 else None for v in values}

Result:

>>> line
'A=15, B=8, C=false, D=[somevar, a=0.1, b=77, c=[A=15, B=8, C=false, D=[somevar, a=0.1, b=77, c=true]]]'

>>> values
{'A': '15',
 'B': '8',
 'C': 'false',
 'D': {'a': '0.1',
       'b': '77',
       'c': {'A': '15',
             'B': '8',
             'C': 'false',
             'D': {'a': '0.1', 'b': '77', 'c': 'true', 'somevar': None}},
       'somevar': None}}

Upvotes: 1

Roberto

Reputation: 2786

This might not be pretty, but it works - maybe use it as a starting point for something more Pythonic?

myline = 'A=15, B=8, C=false, D=[somevar, a=0.1, b=77, c=true]'

def separate(myline):
    mydict = {}
    parsecheck = {'(':1, '[':1, '{':1, ')':-1, ']':-1, '}':-1}
    parsecount = 0
    chargroup = ''
    myline = myline + ',' # So all the entries end with a ','
    for thischar in myline:
        parsecount += parsecheck.get(thischar, 0)
        if parsecount == 0 and thischar in '=,':
            if thischar == '=':
                thiskey = chargroup.strip()
            elif thischar == ',':
                mydict[thiskey] = chargroup
            chargroup = ''
        else:
            chargroup += thischar
    return mydict

print(separate(myline))

[edited to clean the code a bit]

Upvotes: 0

H4kor

Reputation: 1562

Pass over the input string once and check for list segments.

  1. Check whether the current char equals '['.
  2. If '[' is found, replace every '=' and ',' with different unique characters until the matching ']' is found.
  3. Run result = dict(e.split('=') for e in line.split(', ')) on the modified input string.

If the lists can be nested, keep track of the depth with a counter.

This would turn

line = 'A=15, B=8, C=false, D=[somevar, a=0.1, b=77, c=true]'

into

line = 'A=15, B=8, C=false, D=[somevar! a?0.1! b?77! c?true]'

After generating the result, just replace '?' and '!' with '=' and ',' again.

EDIT: use control characters instead of normal printable characters to avoid collisions.
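A rough sketch of that idea (assuming the brackets are not nested; '\x00' and '\x01' stand in for the unique placeholder characters):

def parse_masked(line):
    # Mask '=' and ',' inside brackets with control characters, so that the
    # simple split-based approach works, then restore them in the values.
    masked = []
    depth = 0
    for ch in line:
        if ch == '[':
            depth += 1
        elif ch == ']':
            depth -= 1
        if depth > 0 and ch == '=':
            masked.append('\x00')
        elif depth > 0 and ch == ',':
            masked.append('\x01')
        else:
            masked.append(ch)
    result = dict(e.split('=') for e in ''.join(masked).split(', '))
    return {k: v.replace('\x00', '=').replace('\x01', ',')
            for k, v in result.items()}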

Upvotes: 1

Cam

Reputation: 478

How about using '=' as the delimiter and reading it as a CSV:

>>> line = 'A=15, B=8, C=false, D=[somevar, a=0.1, b=77, c=true]'
>>> mod_line = line.replace('[','"') #replace [ and ] with " so it can be used as a csv quote char
>>> mod_line = mod_line.replace(']','"')
>>> lines_list = []
>>> lines_list.append(mod_line) #put line into an iterable object for csv reader
>>> import csv
>>> reader = csv.reader(lines_list, delimiter='=', quotechar='"')
>>> for row in reader:
...     print(row) # or you could call a function that will turn the returned list into the dictionary you are after
...
['A', '15, B', '8, C', 'false, D', 'somevar, a=0.1, b=77, c=true']
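One possible way to turn that row back into the dictionary you are after (the row_to_dict helper below is a hypothetical addition, not part of the answer); note that the brackets around D's value are gone, since they were consumed as the CSV quote characters:

def row_to_dict(row):
    # The first field is a key, the last field is a value, and each field
    # in between is "previous value, next key".
    keys = [row[0]]
    values = []
    for field in row[1:-1]:
        value, key = field.rsplit(', ', 1)
        values.append(value)
        keys.append(key)
    values.append(row[-1])
    return dict(zip(keys, values))

>>> row_to_dict(['A', '15, B', '8, C', 'false, D', 'somevar, a=0.1, b=77, c=true'])
{'A': '15', 'B': '8', 'C': 'false', 'D': 'somevar, a=0.1, b=77, c=true'}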

Upvotes: 0

Xavier Combelle

Reputation: 11235

If and only if there are no nested brackets, this is a perfect fit for a regexp.

import re

line = 'A=15, B=8, C=false, D=[somevar, a=0.1, b=77, c=true]'

regexp = r'(\w*)=(\[[^\]]*\]|[^,]*),?\s*'
print(dict((match.group(1),match.group(2)) for match in re.finditer(regexp,line)))

Output:

{'A': '15', 'C': 'false', 'B': '8', 'D': '[somevar, a=0.1, b=77, c=true]'}

Concerning your fear of it not being fast enough: don't assume, measure. As the regexp engine is optimized C (except for a few pathological cases), there is little chance you can do better.
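Since the same pattern is applied to every line, it may also be worth compiling the regexp once with re.compile (Python caches compiled patterns internally, so any gain will likely be small). A sketch:

import re

# Same regexp as above, compiled once and reused for every line.
PATTERN = re.compile(r'(\w*)=(\[[^\]]*\]|[^,]*),?\s*')

def parse_line(line):
    return {m.group(1): m.group(2) for m in PATTERN.finditer(line)}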

Upvotes: 4
