Trickee

Reputation: 81

How to translate this generator function to a lambda expression

def f(nums):
    sum = 0
    for i in nums:
        sum += i
        yield sum

I was trying to build a new list in which each element is the running total of nums up to that index (nums is a list), using a list comprehension.

The final result would look like [i for i in f(nums)].
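For example, with nums = [1, 2, 3, 4] the result should be the running totals:

[i for i in f([1, 2, 3, 4])]   # -> [1, 3, 6, 10]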

Is there a way to translate the function into a lambda expression, or any other way to make it a one-liner?

Upvotes: 4

Views: 1104

Answers (3)

dawg

Reputation: 104032

I would propose the following as a replacement for that:

nums = [1, 2, 3, 4]
gen = (sum(nums[0:i]) for i, _ in enumerate(nums, 1))

That is a generator, so the O(n^2) work is not performed for elements that have not been requested yet.

Then to get the elements, use next:

>>> next(gen)
1
>>> next(gen)
3
>>> next(gen)
6
>>> next(gen)
10

And if you do want them all at once, just use list on the generator:

>>> from functools import reduce
>>> from operator import add
>>> gen = (reduce(add, nums[0:i]) for i, _ in enumerate(nums, 1))
>>> list(gen)
[1, 3, 6, 10]

The performance of this approach on non-trivial lists is HORRIBLE because it has O(n^2) complexity. Only use it as a curiosity. See the timings below.


And (thanks to AChampion) another reduce:

>>> reduce(lambda x, y: x+[y+next(iter(x[-1:]), 0)], nums, [])
[1, 3, 6, 10]
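The next(iter(x[-1:]), 0) part is just a safe way to read the last running total, falling back to 0 while the accumulator list is still empty:

>>> next(iter([][-1:]), 0)        # empty accumulator -> default 0
0
>>> next(iter([1, 3][-1:]), 0)    # otherwise the last partial sum
3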

But the right answer is itertools.accumulate or your original function. Any one-line solution will have far greater computational complexity.
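For completeness, the accumulate version is itself a one-liner:

>>> from itertools import accumulate
>>> list(accumulate(nums))
[1, 3, 6, 10]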


Here is a set of timings to show that, other than itertools.accumulate, the single-line replacements have O(n^2) complexity (i.e., 10x more items takes roughly 100x more time). That is because lambdas, reduce, and comprehensions have no accumulator, so for each element the entire list up to that point must be looped over again. Your original function and itertools.accumulate are both O(n) (i.e., 10x more items takes a linear 10x more time).
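As a back-of-the-envelope check: summing prefixes of length 1, 2, ..., n touches 1 + 2 + ... + n = n(n + 1) / 2 elements, so for the 100,000-item case the one-liners read on the order of 5 billion list elements, while the generator function and accumulate read each element exactly once.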

Here is a graph and cheatsheet of O Complexity.

Here is the timing code, followed by the results:

from itertools import accumulate
from functools import reduce 

def f1(nums):
    # the OP's original generator: explicit running total, O(n)
    sum_ = 0
    for i in nums:
        sum_ += i
        yield sum_

def f2(nums):
    # one-line generator expression: re-sums each prefix, O(n^2)
    return (sum(nums[0:i]) for i, _ in enumerate(nums, 1))

def f3(nums):
    # itertools.accumulate: running total computed in C, O(n)
    return accumulate(nums)

def f4(nums):
    # reduce one-liner: copies the growing result list on every step, O(n^2)
    return reduce(lambda x, y: x + [y + next(iter(x[-1:]), 0)], nums, [])

if __name__=='__main__':
    import timeit    
    for case, x in (('small',100),('med',1000),('large',10000),('huge',100000)):  
        data=list(range(x))
        print("Case {}, {:,} x, All equal: {}".format(case,x,(list(f1(data))==list(f2(data))==list(f3(data))==list(f4(data)))))
        for f in (f1,f2,f3,f4):
            print("   {:^10s}{:.4f} secs".format(f.__name__, timeit.timeit("list(f(data))", setup="from __main__ import f, data", number=10)))

Results:

Case small, 100 x, All equal: True
       f1    0.0001 secs
       f2    0.0007 secs
       f3    0.0000 secs
       f4    0.0006 secs
Case med, 1,000 x, All equal: True
       f1    0.0007 secs
       f2    0.0424 secs
       f3    0.0003 secs
       f4    0.0139 secs
Case large, 10,000 x, All equal: True
       f1    0.0083 secs
       f2    3.9526 secs
       f3    0.0036 secs
       f4    1.2756 secs
Case huge, 100,000 x, All equal: True
       f1    0.0977 secs
       f2    427.4129 secs
       f3    0.0532 secs
       f4    159.2506 secs

Upvotes: 2

Cyrbuzz

Reputation: 119

This only works if the list is a consecutive run of integers starting at 1 (1, 2, 3, ...).

A simple but not efficient way:

[sum(range(1, i+1)) for i in range(1, 5)]

Output:

[1, 3, 6, 10]
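Since the prefix sums of 1, 2, ..., i are the triangular numbers, the same result can also be written with the closed form:

[i * (i + 1) // 2 for i in range(1, 5)]   # -> [1, 3, 6, 10]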

Upvotes: 0

Ashish Acharya

Reputation: 3399

This is one way to do it:

nums=[1,2,3,4]

[sum(nums[:idx+1]) for idx, i in enumerate(nums)]

Output:

[1, 3, 6, 10]

Another way is to use itertools.accumulate as suggested by @Blckknght.

from itertools import accumulate

list(accumulate(nums))

Output:

[1, 3, 6, 10]
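As an aside, accumulate also accepts an optional binary function as its second argument, so the same pattern covers running products and similar folds:

from operator import mul

list(accumulate(nums, mul))

Output:

[1, 2, 6, 24]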

Upvotes: 4
