Reputation: 143
I recently had the following code in mind and wondered what was wrong with it. Previously I used the .get method of dictionaries with success, but now I wanted to pass arguments too, and this is where I noticed a somewhat weird behavior:
def string_encoder(nmstr):
    return nmstr.encode('UTF-8')

def int_adder(nr_int):
    return int(nr_int) + int(nr_int)

def selector(fun, val):
    return {'str_en': string_encoder(val),
            'nr_add': int_adder(val)}.get(fun, string_encoder(val))
selector('str_en', 'Test') -> ValueError
selector('str_en', 1) -> AttributeError
The above code will never run successfully. To inspect the issue, I wrote a small piece of test code:
def p1(pstr):
    print('p1: ', pstr)
    return pstr

def p2(pstr):
    print('p2: ', pstr)
    return pstr

def selector_2(fun, val):
    return {'p1': p1(val),
            'p2': p2(val)}.get(fun, p2(val))
selector_2('p1', 'Test')
Out[]: p1: Test
p2: Test
p2: Test
'Test'
I would expect .get('p1', 'Test') to print p1: Test and return 'Test'. But as it appears, every argument is evaluated, even when it is not selected. So my question is: why is every argument evaluated with the .get method, and how can this behavior be explained?
Upvotes: 0
Views: 44
Reputation: 155507
dict creation is eager, as is argument evaluation. So before get even runs, you've called string_encoder twice and int_adder once (and since the behaviors are largely orthogonal, you'll get an error for anything but a numeric str like "123").
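You can see this eager evaluation directly with a tiny sketch (the trace helper is just for illustration):

def trace(name):
    # The side effect makes each evaluation visible
    print('evaluating', name)
    return name

# All value expressions run while the dict literal is built,
# and the default argument runs before get() is even called:
{'a': trace('a'), 'b': trace('b')}.get('a', trace('default'))
# prints: evaluating a
#         evaluating b
#         evaluating default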
You need to avoid calling the function until you know which one to call (and ideally, only call that function once).
The simplest solution is to have the dict and the get call contain the functions themselves, rather than the result of calling them; you'll end up with whichever function wins the lookup, and you can then call that function. For example:
def selector(fun, val):
    # Removed (val) from all mentions of the functions...
    return {'str_en': string_encoder,
            'nr_add': int_adder}.get(fun, string_encoder)(val)  # <- ...but used it to call the resulting function
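With the functions stored uncalled, the failing calls from the question now behave (results assume the question's definitions of string_encoder and int_adder):

selector('str_en', 'Test')  # b'Test' -- int_adder never runs, so no ValueError
selector('nr_add', '2')     # 4
selector('str_en', 1)       # still an AttributeError: string_encoder really is called with an int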
Given string_encoder is your default, you could remove the 'str_en' handling entirely to simplify to:
return {'nr_add': int_adder}.get(fun, string_encoder)(val)
which leads to the realization that you're not really getting anything out of the dict. dicts have cheap lookup, but you're rebuilding the dict on every call, so you didn't save a thing. Given that you really only have two behaviors:
int_adder if fun is 'nr_add'
string_encoder otherwise
the correct solution is just an if check, which is both more efficient and easier to read:
def selector(fun, val):
    if fun == 'nr_add':
        return int_adder(val)
    return string_encoder(val)

    # Or if you love one-liners:
    return int_adder(val) if fun == 'nr_add' else string_encoder(val)
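If you want to check the efficiency claim yourself, a rough timeit sketch (numbers vary by machine; the point is that the dict version pays for building a fresh dict on every call, assuming the question's string_encoder and int_adder are defined):

import timeit

def selector_dict(fun, val):
    # Rebuilds the dict on every call
    return {'str_en': string_encoder,
            'nr_add': int_adder}.get(fun, string_encoder)(val)

def selector_if(fun, val):
    return int_adder(val) if fun == 'nr_add' else string_encoder(val)

print(timeit.timeit(lambda: selector_dict('nr_add', '2')))
print(timeit.timeit(lambda: selector_if('nr_add', '2')))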
If your real code has a lot of entries in the dict, not just two (one of which is unnecessary), then you can use a dict for performance, but build it once at global scope and reference it in the function, so you're not rebuilding it on every call (which loses all the performance benefits of the dict), e.g.:
# Built only once at global scope
_selector_lookup_table = {
    'str_en': string_encoder,
    'nr_add': int_adder,
    'foo': some_other_func,
    ...
    'baz': yet_another_func,
}
def selector(fun, val):
    # Reused in the function for each call
    return _selector_lookup_table.get(fun, default_func)(val)
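A minimal self-contained version of that pattern, using just the two functions from the question (with string_encoder standing in for the placeholder default_func):

def string_encoder(nmstr):
    return nmstr.encode('UTF-8')

def int_adder(nr_int):
    return int(nr_int) + int(nr_int)

# Built once at import time, not per call
_selector_lookup_table = {
    'str_en': string_encoder,
    'nr_add': int_adder,
}

def selector(fun, val):
    # Unknown keys fall back to string_encoder
    return _selector_lookup_table.get(fun, string_encoder)(val)

print(selector('nr_add', '21'))    # 42
print(selector('str_en', 'Test'))  # b'Test'
print(selector('missing', 'x'))    # b'x' (default used)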
Upvotes: 1
Reputation: 5958
If you want to avoid evaluating the functions and only choose one of them, do this instead for your second block (the same syntax will also work for your first block):
def selector_2(fun, val):
    return {'p1': p1,
            'p2': p2}.get(fun)(val)
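One caveat worth knowing (not a problem for the two keys shown): .get is called without a default here, so an unknown key returns None and the trailing (val) call fails:

selector_2('p1', 'Test')    # prints 'p1:  Test', returns 'Test' -- only p1 runs
selector_2('nope', 'Test')  # TypeError: 'NoneType' object is not callable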
Upvotes: 1