Reputation: 232
I am working on an application with a few nested loops of non-vectorizable code. There's a group of around 50 functions that get called hundreds or thousands of times. These functions receive dictionaries and pass dictionaries back. Each function is very short, with simple numerical code, so compiling the functions alone will not do much good; I think I need to compile the loops together with the functions. Cython could work, but I am worried about the amount of work involved in maintaining type declarations on so many functions, plus not getting big improvements because of all those dictionaries being passed around. I was wondering whether this is a good use case for PyPy. There's no NumPy or C extensions involved, just simple functions reading inputs from a dictionary and updating those dictionaries.
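To make the shape of the code concrete, here is a minimal sketch of the pattern (function and key names are invented for illustration): short numerical functions that read from and write to a dictionary, called from nested loops:

```python
def update_position(state):
    # short, non-vectorizable numerical step reading and updating a dict
    state["x"] += state["v"] * state["dt"]
    return state

def update_velocity(state):
    state["v"] -= state["g"] * state["dt"]
    return state

# nested loops calling the small functions thousands of times
state = {"x": 0.0, "v": 10.0, "g": 9.81, "dt": 0.001}
for outer in range(100):
    for inner in range(100):
        state = update_position(state)
        state = update_velocity(state)
```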
I've read in the PyPy documentation that short-running processes will not be improved by PyPy. I am wondering whether these short-running functions will prevent the JIT compiler from improving the runtime.
What do you think? Any experience with PyPy on something similar is welcome.
Thanks!
Upvotes: 0
Views: 340
Reputation: 2573
The term short-running refers to the amount of code executed, not to wall-clock time. The JIT traces code execution and only kicks in when it sees a section of code repeat roughly 1000 times, so short functions that are called thousands of times from a loop are exactly the kind of code it can compile.
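In other words, what matters is how often the same code path repeats, not how long one call takes. A sketch (names invented for illustration) of a trivially short function whose enclosing loop still crosses the ~1000-repeat threshold:

```python
def accumulate(d):
    # a one-line function: "short" by wall-clock time, but hot by repeat count
    d["total"] += d["step"]
    return d

d = {"total": 0, "step": 2}
for i in range(5000):  # 5000 repeats is well past the ~1000-iteration threshold,
    d = accumulate(d)  # so PyPy's JIT would trace and compile this loop

print(d["total"])  # 10000
```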
Upvotes: 1