ovgolovin

Reputation: 13410

Replacing parts of the function code on-the-fly

Here is a solution I came up with for another question of mine: how to remove all the costly calls to a debug-output function scattered through a function's code (even with an empty function, lambda *p: None, the slowdown was 25 times).

The solution is to edit the function's source code dynamically and prepend every debug call with the comment sign #.

from __future__ import print_function

DEBUG = False

def dprint(*args,**kwargs):
    '''Debug print'''
    print(*args,**kwargs)


def debug(on=False,string='dprint'):
    '''Decorator to comment out every call to `string` in the function's source'''
    def helper(f):      
        if not on:
            import inspect
            source = inspect.getsource(f)
            source = source.replace(string, '#'+string) #Beware! Switches off the whole line after the dprint statement
            with open('temp_f.py','w') as file:
                file.write(source)
            from temp_f import f as f_new
            return f_new            
        else:
            return f #return f intact
    return helper


def f():
    dprint('f() started')
    print('Important output')
    dprint('f() ended')

f = debug(DEBUG,'dprint')(f) #If decorator @debug(True) is used above f(), inspect.getsource somehow includes @debug(True) inside the code.

f()
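To see the transformation in isolation, here is a minimal sketch (my own example, using a plain source string instead of inspect.getsource) of what the decorator does to the code:

```python
source = """def f():
    dprint('f() started')
    print('Important output')
    dprint('f() ended')
"""

# The core transformation: prefix every dprint occurrence with '#',
# which comments out the rest of that line.
patched = source.replace('dprint', '#dprint')

env = {}
exec(patched, env)  # compile and run the edited source
env['f']()          # prints only: Important output
```

Note that no dprint stub is needed in env at all, because every call to it has been commented out before compilation.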

I can already see some problems with this approach.

What other problems do you see here?

How can all these problems be solved?

What are upsides and downsides of this approach?

What can be improved here?

Is there any better way to do what I try to achieve with this code?


I think preprocessing function code before compilation to bytecode is a very interesting, if contentious, technique. Strangely, though, nobody seems to have taken an interest in it. I suspect the code I gave has a lot of weak points.

Upvotes: 1

Views: 2612

Answers (2)

ovgolovin

Reputation: 13410

Here is the solution I came up with after combining answers to other questions I asked here on StackOverflow.

This solution doesn't comment anything out; it simply deletes standalone dprint statements. It uses the ast module and works on the abstract syntax tree, which lets us avoid parsing the source code by hand. This idea was suggested in a comment here.

Writing to temp_f.py is replaced by executing f in the necessary environment. That solution was offered here.

Also, this last solution addresses the problem of recursive decorator application, which is solved by using the _blocked global variable.

This code solves the problem posed in the question. But still, it is suggested not to use it in real projects:

You are correct, you should never resort to this; there are so many ways it can go wrong. First, Python is not a language designed for source-level transformations, and it's hard to write a transformer such as comment_1 without gratuitously breaking valid code. Second, this hack would break in all kinds of circumstances - for example, when defining methods, when defining nested functions, when used in Cython, when inspect.getsource fails for whatever reason. Python is dynamic enough that you really don't need this kind of hack to customize its behavior.

from __future__ import print_function

DEBUG = False

def dprint(*args,**kwargs):
    '''Debug print'''
    print(*args,**kwargs)

_blocked = False
def nodebug(name='dprint'):
    '''Decorator to remove all calls to 'name' that stand alone as expression statements'''
    def helper(f):      
        global _blocked
        if _blocked:
            return f

        import inspect, ast, sys

        source = inspect.getsource(f)        
        a = ast.parse(source) #get ast tree of f

        class Transformer(ast.NodeTransformer):
            '''Will delete all expressions containing 'name' functions at the top level'''
            def visit_Expr(self, node): #visit all expression statements
                try:
                    if node.value.func.id == name: #the expression is a bare call to 'name'
                        return None #delete the node
                except AttributeError: #node.value is not a simple call by name
                    pass
                return node #return the node unchanged
        transformer = Transformer()
        a_new = transformer.visit(a)
        f_new_compiled = compile(a_new,'<string>','exec')

        env = sys.modules[f.__module__].__dict__
        _blocked = True
        try:
            exec(f_new_compiled,env)
        finally:
            _blocked = False
        return env[f.__name__]         
    return helper


@nodebug('dprint')        
def f():
    dprint('f() started')
    print('Important output')
    dprint('f() ended')
    print('Important output2')


f()
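The AST pass can also be sketched standalone (a minimal example of my own; it uses explicit isinstance checks instead of try/except, which avoids swallowing unrelated errors):

```python
import ast

source = """def f():
    dprint('start')
    print('Important output')
    dprint('end')
"""

class DropCalls(ast.NodeTransformer):
    '''Delete standalone expression statements that call the given name.'''
    def __init__(self, name):
        self.name = name
    def visit_Expr(self, node):
        if (isinstance(node.value, ast.Call)
                and isinstance(node.value.func, ast.Name)
                and node.value.func.id == self.name):
            return None  # removing the node deletes the statement
        return node

tree = DropCalls('dprint').visit(ast.parse(source))
ast.fix_missing_locations(tree)

env = {}
exec(compile(tree, '<ast>', 'exec'), env)
env['f']()  # prints only: Important output
```

No dprint needs to exist in env: the calls are gone before the code is compiled.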


Upvotes: 1

Martijn Pieters

Reputation: 1123270

A decorator can return either a wrapper, or the decorated function unaltered. Use it to create a better debugger:

from functools import wraps

def debug(enabled=False):
    if not enabled:
        return lambda x: x  # Noop, returns decorated function unaltered

    def debug_decorator(f):
        @wraps(f)
        def print_start(*args, **kw):
            print('{0}() started'.format(f.__name__))
            try:
                return f(*args, **kw)
            finally:
                print('{0}() completed'.format(f.__name__))
        return print_start
    return debug_decorator

The debug function is a decorator factory: when called, it produces a decorator function. If debugging is disabled, it simply returns a lambda that returns its argument unchanged, a no-op decorator. When debugging is enabled, it returns a debugging decorator that prints when the decorated function starts and prints again when it returns.

The returned decorator is then applied to the decorated function.

Usage:

DEBUG = True

@debug(DEBUG)
def my_function_to_be_tested():
    print('Hello world!')

To reiterate: when DEBUG is set to False, my_function_to_be_tested remains unaltered, so runtime performance is not affected at all.
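A quick way to verify both paths (the plain helper function below is my own addition): with debugging disabled the factory hands back the very same function object, while with it enabled you get a wrapper that still preserves the return value and, thanks to functools.wraps, the function's name:

```python
from functools import wraps

def debug(enabled=False):
    if not enabled:
        return lambda x: x  # no-op decorator: function passes through untouched
    def debug_decorator(f):
        @wraps(f)
        def print_start(*args, **kw):
            print('{0}() started'.format(f.__name__))
            try:
                return f(*args, **kw)
            finally:
                print('{0}() completed'.format(f.__name__))
        return print_start
    return debug_decorator

def plain():
    return 42

assert debug(False)(plain) is plain      # identical object: zero overhead
wrapped = debug(True)(plain)
assert wrapped is not plain
assert wrapped() == 42                   # prints started/completed around the call
assert wrapped.__name__ == 'plain'       # metadata preserved by @wraps
```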

Upvotes: 2
