GreenSaber

Reputation: 1148

Python: How can I stop Threading/Multiprocessing from using 100% of my CPU?

I have code that reads data from 7 devices every second, indefinitely. On each loop a thread is created, which starts 7 processes. After each process is done the program waits 1 second and starts again. Here is a snippet of the code:

def all_thread(): #function that handles the threading
    thread = threading.Thread(target=all_process) #prepares a thread for the devices
    thread.start() #starts a thread for the devices

def all_process(): #function that prepares and runs processes
    processes = [] #empty list for the processes to be stored
    while len(gas_list) > 0: #gas_list holds the connection information for my devices
        for sen in gas_list: #for each sen(sensor) in the gas list
            proc = multiprocessing.Process(target=main_reader, args=(sen, q)) #create a process that passes the sensor object and the queue to the reading function
            processes.append(proc) #adding the process to the processes list
            proc.start() #start the process
        for sen in processes: #for each sensor in the processes list
            sen.join() #wait for all the processes to complete before starting again
        time.sleep(1) #wait one second

However, this uses 100% of my CPU. Is this by design of threading and multiprocessing or just bad coding? Is there a way I can limit the CPU usage? Thanks!

Update:

The comments mentioned the main_reader() function, so I will put it into the question. All it does is read each device, take all the data, and append it to a list. The list is then put into a queue to be displayed in the tkinter GUI.

def main_reader(data, q): #this function reads the device which takes less than a second
    output_list = get_registry(data) #this function takes the device information, reads the registry and returns a list of data
    q.put(output_list) #put the output list into the queue

Upvotes: 4

Views: 2090

Answers (1)

Hannu

Reputation: 12205

As you state in the comments, your main_reader takes only a fraction of a second to run, which means process creation overhead might cause your problem.
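You can get a feel for that overhead yourself with a rough sketch like the one below (noop is a stand-in I made up for a near-instant task; absolute numbers vary a lot by platform):

import multiprocessing
import time

def noop():  # stands in for a sub-second device read
    pass

if __name__ == "__main__":
    start = time.perf_counter()
    procs = [multiprocessing.Process(target=noop) for _ in range(7)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print("starting and joining 7 processes took %.3f s"
          % (time.perf_counter() - start))

Depending on the platform, spawning and tearing down 7 processes can take a noticeable fraction of a second, and you are paying that cost every single loop.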

Here is an example with multiprocessing.Pool. This creates a pool of workers and submits your tasks to them. The processes are started only once and are never shut down or joined, since this is meant to be an infinite loop. If you want to shut the pool down, you can do so by closing and joining it (see the documentation, and the short sketch after the code below).

from multiprocessing import Pool, Manager
from time import sleep
import threading
from random import random

gas_list = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

def main_reader(sen, rqu):
    # simulate a sub-second device read and queue the result
    output = "%d/%f" % (sen, random())
    rqu.put(output)


def all_processes(rq):
    # one long-lived pool of workers instead of fresh processes every second
    p = Pool(len(gas_list) + 1)
    while True:
        for sen in gas_list:
            p.apply_async(main_reader, args=(sen, rq))

        sleep(1)

if __name__ == "__main__":  # guard so pool workers do not re-run this on spawn
    m = Manager()
    q = m.Queue()
    t = threading.Thread(target=all_processes, args=(q,))
    t.daemon = True
    t.start()

    while True:
        r = q.get()
        print(r)
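If you do want to shut the pool down at some point, a minimal sketch would be the following (assuming you replace the while True with a condition you control, so that all_processes can fall through to these lines):

p.close()  # stop accepting new tasks
p.join()   # wait for the workers to finish and exit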

If this does not help, you need to start digging deeper. I would first increase the sleep in your infinite loop to 10 seconds or even longer. That would let you monitor the behaviour of your program: if the CPU peaks for a moment and then settles down for 10 seconds or so, you know the problem is in your main_reader; if it stays at 100%, the problem must be elsewhere.
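If you want to measure main_reader directly rather than eyeball it, a minimal timing sketch (get_registry is the function from your question; the print is only for inspection):

from time import perf_counter

def main_reader(data, q):
    t0 = perf_counter()
    output_list = get_registry(data)  # the device read from the question
    q.put(output_list)
    print("device read took %.3f s" % (perf_counter() - t0))

If each read is fast but the CPU is still pegged, the cost is in process handling, not in the reads themselves.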

Is it possible your problem is not in this part of your program at all? You launch all of this in a thread, which suggests your main program is doing something else. Could that something else be what is pegging the CPU?
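One way to check is to watch CPU usage per process; here is a sketch using the third-party psutil package (an assumption on my part, your code does not use it):

import psutil

# measure the current process and everything it has spawned
me = psutil.Process()
for p in [me] + me.children(recursive=True):
    # blocks for 1 second while sampling this process's CPU share
    print("pid %d: %.1f%% CPU" % (p.pid, p.cpu_percent(interval=1)))

If the main process shows the high number rather than the workers, the culprit is outside the reading code.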

Upvotes: 2
