Reputation: 161
I have a Python method as shown below:
import gym

def run():
    env = gym.make('Pendulum-v1')
    env.reset()
    action = [1.0]
    for _ in range(1000):
        env.step(action)
I use viztracer to profile this method in the main process and in a subprocess (via multiprocessing.Process), respectively. It shows that the main process executes faster than the subprocess, even for the single step method.
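Roughly, the two measurements are set up along the lines of the sketch below; the run_traced helper, the output file names, and the use of the VizTracer context manager are just illustrative, not the exact setup, and it reuses the run() function defined above.

from multiprocessing import Process
from viztracer import VizTracer

def run_traced(output_file):
    # Trace the run() function defined above and save a report
    # that can be opened with vizviewer.
    with VizTracer(output_file=output_file):
        run()

if __name__ == "__main__":
    # Same workload, traced once in the main process ...
    run_traced("main_process.json")
    # ... and once in a subprocess.
    p = Process(target=run_traced, args=("subprocess.json",))
    p.start()
    p.join()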
Here are the profilings (viztracer screenshots, not reproduced here):
- step method execution time in the subprocess and in the main process
- numpy.ndarray.clip method execution time in the subprocess and in the main process
My environment is:
Any idea on what the cause of the issue is, and how to fix it?
Thanks.
I have tried to profile this method on different hardware, including an Intel i7-9750H and an i9-12900H. It shows the same issue.
The multiprocessing benchmark code is shown below:
import gym
from multiprocessing import Process

def run():
    env = gym.make('Pendulum-v1')
    env.reset()
    action = [1.0]
    for _ in range(1000):
        env.step(action)

if __name__ == "__main__":
    p = Process(target=run)
    p.start()
    p.join()
    p.close()
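A plain wall-clock comparison along the following lines can serve as a profiler-independent cross-check; the timed_run helper and the printed labels are only a sketch, reusing the run() function above.

import time
from multiprocessing import Process

def timed_run(label):
    # Time the run() function defined above with wall-clock time only,
    # so any viztracer overhead is excluded.
    start = time.perf_counter()
    run()
    print(f"{label}: {time.perf_counter() - start:.3f} s")

if __name__ == "__main__":
    # Same workload, once in the main process and once in a subprocess.
    timed_run("main process")
    p = Process(target=timed_run, args=("subprocess",))
    p.start()
    p.join()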
Upvotes: 0
Views: 204
Reputation: 17616
What is the cause of the issue?
Your CPU is triggering thermal throttling in the multiprocessing version; there is also memory and cache contention on top of that.
How to fix it?
You can't, not even if you submerged the computer in liquid nitrogen.
Vendors are over-speccing and pushing the numbers for single-core performance, but a core's true performance under load shows up in multiprocessing work. Most CPUs nowadays can run just fine at 5+ GHz on a single core, but will drop to around 3 GHz under load.
If you turn off turbo boost you will get the same time in both, but that is just by slowing the single-core performance down to be on par with the multi-core performance.
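To sanity-check this on your own machine, you can watch the reported core frequency while the benchmark runs; the sketch below assumes psutil is installed, and the sampling loop and 0.5 s interval are arbitrary choices.

import time
import psutil

def sample_cpu_freq(duration_s=10.0, interval_s=0.5):
    # Print the current CPU frequency a few times; run this in a separate
    # terminal while the gym benchmark is executing, once for the
    # main-process version and once for the subprocess version.
    end = time.time() + duration_s
    while time.time() < end:
        freq = psutil.cpu_freq()  # may be None on some platforms
        if freq is not None:
            print(f"current CPU frequency: {freq.current:.0f} MHz")
        time.sleep(interval_s)

if __name__ == "__main__":
    sample_cpu_freq()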
Upvotes: 1