Manuel Fedele

Reputation: 1709

ThreadPoolExecutor is leaking memory

I'm using a ThreadPoolExecutor and I don't understand why it is leaking memory:

import concurrent.futures
import time

def callback(message):
    # Memory intensive operation
    x = [n for n in range(int(1e6))]
    return message
    
@profile  # decorator provided by the memory profiler that produced the graph below
def main():
    with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
        executor.map(callback, [x for x in range(10)])
    time.sleep(2)
    with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
        executor.map(callback, [x for x in range(10)])
    time.sleep(2)
    with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
        executor.map(callback, [x for x in range(10)])
    time.sleep(2)
    with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
        executor.map(callback, [x for x in range(10)])
    time.sleep(2)
    with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
        executor.map(callback, [x for x in range(10)])
    time.sleep(2)


if __name__ == '__main__':
    main()

Produces this graph:

[memory profiler plot showing memory usage over time]

What is going on, exactly? This does not happen if I run this code synchronously.
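For reference, the synchronous comparison I mean is essentially the following (a minimal sketch of the same workload run in a plain loop, without an executor):

import time

def callback(message):
    # Same memory intensive operation as above
    x = [n for n in range(int(1e6))]
    return message

@profile
def main_sync():
    # Five batches of ten calls, executed entirely in the main thread
    for _ in range(5):
        for message in range(10):
            callback(message)
        time.sleep(2)

if __name__ == '__main__':
    main_sync()

Profiled this way, memory stays flat across the batches.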

Upvotes: 3

Views: 3322

Answers (1)

bruno

Reputation: 32596

To answer your remark: "Can I ask you if you really tried what you're saying? And how, in that case?"

I tested with this program:

import concurrent.futures
import time,sys,os,gc

def callback(message):
    # Memory intensive operation
    x = [n for n in range(int(1e6))]
    return message
    
def main():
    delay = int(sys.argv[2])
    for _ in range(0, int(sys.argv[1])):
        with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
            executor.map(callback, [x for x in range(10)])
        time.sleep(delay)
    gc.collect()
    os.system("ps l -C python3")

if __name__ == '__main__':
    main()

Some executions, sleeping 0 seconds each time:

pi@raspberrypi:/tmp $ python3 p.py 1 0
F   UID   PID  PPID PRI  NI    VSZ   RSS WCHAN  STAT TTY        TIME COMMAND
0  1000 16784  2657  20   0  57616 11840 do_wai S+   pts/0      0:01 python3 p.py 1 0
pi@raspberrypi:/tmp $ python3 p.py 2 0
F   UID   PID  PPID PRI  NI    VSZ   RSS WCHAN  STAT TTY        TIME COMMAND
0  1000 16811  2657  20   0  57872 12100 do_wai S+   pts/0      0:03 python3 p.py 2 0
pi@raspberrypi:/tmp $ python3 p.py 10 0
F   UID   PID  PPID PRI  NI    VSZ   RSS WCHAN  STAT TTY        TIME COMMAND
0  1000 16835  2657  20   0  58412 12760 do_wai S+   pts/0      0:18 python3 p.py 10 0
pi@raspberrypi:/tmp $ python3 p.py 20 0
F   UID   PID  PPID PRI  NI    VSZ   RSS WCHAN  STAT TTY        TIME COMMAND
0  1000 16950  2657  20   0  58924 13240 do_wai S+   pts/0      0:35 python3 p.py 20 0
pi@raspberrypi:/tmp $ python3 p.py 100 0
F   UID   PID  PPID PRI  NI    VSZ   RSS WCHAN  STAT TTY        TIME COMMAND
0  1000 17179  2657  20   0  57900 12320 do_wai S+   pts/0      2:58 python3 p.py 100 0
pi@raspberrypi:/tmp $ 

As you can see, the size after 100 turns is smaller than after 10 and 20 turns.

Now, if I wait 1 second each turn, the results are different, and smaller than or equal to the previous ones:

pi@raspberrypi:/tmp $ python3 p.py 1 1
F   UID   PID  PPID PRI  NI    VSZ   RSS WCHAN  STAT TTY        TIME COMMAND
0  1000 18251  2657  20   0  57360 11708 do_wai S+   pts/0      0:01 python3 p.py 1 1
pi@raspberrypi:/tmp $ python3 p.py 2 1
F   UID   PID  PPID PRI  NI    VSZ   RSS WCHAN  STAT TTY        TIME COMMAND
0  1000 18276  2657  20   0  57616 11840 do_wai S+   pts/0      0:03 python3 p.py 2 1
pi@raspberrypi:/tmp $ python3 p.py 10 1
F   UID   PID  PPID PRI  NI    VSZ   RSS WCHAN  STAT TTY        TIME COMMAND
0  1000 18299  2657  20   0  58412 12748 do_wai S+   pts/0      0:17 python3 p.py 10 1
pi@raspberrypi:/tmp $ python3 p.py 20 1
F   UID   PID  PPID PRI  NI    VSZ   RSS WCHAN  STAT TTY        TIME COMMAND
0  1000 18417  2657  20   0  58412 12824 do_wai S+   pts/0      0:34 python3 p.py 20 1
pi@raspberrypi:/tmp $ python3 p.py 100 1
F   UID   PID  PPID PRI  NI    VSZ   RSS WCHAN  STAT TTY        TIME COMMAND
0  1000 20469  2657  20   0  57900 12328 do_wai S+   pts/0      3:13 python3 p.py 100 1
pi@raspberrypi:/tmp $ 

From the Python point of view this makes no sense; maybe the variability does not come (only) from Python but (also) from the time Linux needs to manage the disappearance of the threads. Of course, if you do everything synchronously you do not have that problem.
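If you want to separate what Python itself allocates from the RSS that the OS reports, a quick variant of the test (just a sketch of the idea, not the program I timed above) is to snapshot tracemalloc at each turn:

import concurrent.futures
import sys, time, tracemalloc

def callback(message):
    # Memory intensive operation
    x = [n for n in range(int(1e6))]
    return message

def main():
    delay = int(sys.argv[2])
    tracemalloc.start()
    for turn in range(int(sys.argv[1])):
        with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
            executor.map(callback, range(10))
        current, peak = tracemalloc.get_traced_memory()
        # with a Python-level leak, "current" would grow turn after turn;
        # the expectation here is that it stays roughly flat even when
        # the RSS reported by ps varies
        print("turn %d: current=%d B, peak=%d B" % (turn, current, peak))
        time.sleep(delay)

if __name__ == '__main__':
    main()

tracemalloc only counts blocks allocated through Python's own allocator, which is where a leak of Python objects by the executor would show up.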

I did this on a Pi 4:

pi@raspberrypi:/tmp $ uname -a
Linux raspberrypi 5.10.17-v7l+ #1403 SMP Mon Feb 22 11:33:35 GMT 2021 armv7l GNU/Linux
pi@raspberrypi:/tmp $ python3 --version
Python 3.7.3
pi@raspberrypi:/tmp $ 

You can also see in your diagram that the last 3 lower levels are equal (all around 100), and this is not compatible with a memory leak, or at least not a visible one.
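If you want to check that yourself, you can log the resident size at every turn rather than only once at the end; with a real leak the value would keep climbing, whereas a plateau like the one in your diagram shows up directly. A Linux-only sketch, reading /proc/self/status:

import concurrent.futures

def rss_kib():
    # Linux-specific: current resident set size, in KiB, taken from /proc
    with open('/proc/self/status') as f:
        for line in f:
            if line.startswith('VmRSS:'):
                return int(line.split()[1])

def callback(message):
    # Memory intensive operation
    x = [n for n in range(int(1e6))]
    return message

def main():
    for turn in range(100):
        with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
            executor.map(callback, range(10))
        print("turn %d: VmRSS = %d KiB" % (turn, rss_kib()))

if __name__ == '__main__':
    main()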

Upvotes: 2
