kaloon

Reputation: 177

control python code to run on different core

I have Ubuntu on a machine with 4 cores. I also have a Python script called python.py that contains several functions, such as def1, def2, and def3. I would like to run def1 on core 1 and the rest on cores 2 to 4. I know that I can use:

       taskset -c 1 python python.py

The problem is that this runs the whole script, including every function inside it, on a single core. Instead, I want to run specific functions on specific cores, like this:

       def add(a,b):
           return a+b

       def sub(s, t):
           return s-t

       def mult(y,x):
           return y*x

      add(3,4)  # run this function on core 0
      sub(3,4)  # run this function on core 1
      mult(2,3)  # I don't care, run this on core 2 or 3

My question is: Is This Possible?

Upvotes: 2

Views: 8970

Answers (1)

Paul

Reputation: 5935

Yes, you can run each function in a different process in order to take advantage of multiple cores. Here is an example:

from multiprocessing import Process

def add(a,b):
    return a+b

def sub(s, t):
    return s-t

def mult(y,x):
    return y*x

if __name__ == "__main__":
    # construct a different process for each function
    processes = [Process(target=add, args=(3,4)),
                 Process(target=sub, args=(3,4)),
                 Process(target=mult, args=(2,3))]

    # kick them off 
    for process in processes:
        process.start()

    # now wait for them to finish
    for process in processes:
        process.join()
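Note that `Process` throws away the functions' return values. If you need the results back in the parent, `multiprocessing.Pool` is one option; a sketch (the `apply_async` calls return handles whose `.get()` blocks until each result is ready):

```python
from multiprocessing import Pool

def add(a, b):
    return a + b

def sub(s, t):
    return s - t

def mult(y, x):
    return y * x

if __name__ == "__main__":
    # a pool of 3 worker processes, one per function call
    with Pool(processes=3) as pool:
        handles = [pool.apply_async(add, (3, 4)),
                   pool.apply_async(sub, (3, 4)),
                   pool.apply_async(mult, (2, 3))]
        print([h.get() for h in handles])  # [7, -1, 6]
```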

There is no need to force the OS to run a specific process on a specific core. If you have multiple cores on your CPU the OS is going to be scheduling the processes across those cores. It is unlikely that you need to do any sort of CPU pinning here.

The above example is too simple for you to see multiple cores engage. Instead, try this variation of it, the same idea made CPU-bound, i.e. a version that requires enough computation to keep each core busy.

from multiprocessing import Process


def add(a, b):
    total = 0
    for a1, b1 in zip(a, b):
        total += a1 + b1
    return total


def sub(s, t):
    total = 0
    for a1, b1 in zip(s, t):
        total += a1 - b1
    return total


def mult(y, x):
    total = 0
    for a1, b1 in zip(y, x):
        total += a1 * b1
    return total


if __name__ == "__main__":
    # construct a different process for each function
    max_size = 1000000000
    processes = [Process(target=add, args=(range(1, max_size), range(1, max_size))),
                 Process(target=sub, args=(range(1, max_size), range(1, max_size))),
                 Process(target=mult, args=(range(1, max_size), range(1, max_size)))]

    # kick them off 
    for process in processes:
        process.start()

    # now wait for them to finish
    for process in processes:
        process.join()

If you watch your top output while this runs (press 1 to see per-core usage), you should see something like the screenshot below, with three cores at or near 100%. That's without doing any CPU pinning; it is usually easier to trust the OS to spread the work across cores.

(screenshot of top output showing three cores near 100% utilization)
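That said, if you really do want to pin each function's process to a particular core, Linux exposes this via `os.sched_setaffinity` (Python 3.3+), called at the top of each worker. A minimal sketch, assuming a Linux box with at least two cores (the `pin_to` helper is my own name, and it silently skips pinning where the call isn't supported). It also uses shared `Value` objects so the parent can read the results back, which plain `Process` otherwise discards:

```python
import os
from multiprocessing import Process, Value

def pin_to(core):
    # Pin the calling process to the given core. sched_setaffinity is
    # Linux-only, so skip the call where it isn't available or the
    # machine has fewer cores than requested.
    if hasattr(os, "sched_setaffinity") and core < (os.cpu_count() or 1):
        os.sched_setaffinity(0, {core})  # pid 0 means "this process"

def add(a, b, result):
    pin_to(0)               # run this function's process on core 0
    result.value = a + b

def sub(s, t, result):
    pin_to(1)               # run this function's process on core 1
    result.value = s - t

if __name__ == "__main__":
    # shared integers so the parent can read the results back
    r1, r2 = Value('i', 0), Value('i', 0)
    processes = [Process(target=add, args=(3, 4, r1)),
                 Process(target=sub, args=(3, 4, r2))]
    for p in processes:
        p.start()
    for p in processes:
        p.join()
    print(r1.value, r2.value)  # 7 -1
```

The `taskset` command from the question does the same thing from the shell, but only for a whole process at a time, which is why it has to be applied per child process rather than to the script as a whole.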

Upvotes: 4
