Ben Caine

Reputation: 1256

make python wait for stored procedure to finish executing

I have a python script that uses pyodbc to call an MSSQL stored procedure, like so:

cursor.execute("exec MyProcedure @param1 = '" + myparam + "'")

I call this stored procedure inside a loop, and I notice that sometimes it gets called again before it has finished executing from the previous iteration. I know this because if I add the line

time.sleep(1)

after the execute line, everything works fine.

Is there a more elegant and less time-costly way to say, "sleep until the exec is finished"?

Update (Divij's solution): This code is currently not working for me:

from tornado import gen
import pyodbc

@gen.engine
def func(*args, **kwargs):
    # connect to db
    cnxn_str = """
    Driver={SQL Server Native Client 11.0};
    Server=172.16.111.235\SQLEXPRESS;
    Database=CellTestData2;
    UID=sa;
    PWD=Welcome!;
    """
    cnxn = pyodbc.connect(cnxn_str)
    cnxn.autocommit = True
    cursor = cnxn.cursor()
    for _ in range(5):
        yield gen.Task(cursor.execute, 'exec longtest')

    return

func()

Upvotes: 21

Views: 21670

Answers (5)

Captain Jack Sparrow

Reputation: 1111

I have found a solution which does not require "muting" your stored procedures or altering them in any way. According to the pyodbc wiki:

nextset()

This method will make the cursor skip to the next available result set, discarding any remaining rows from the current result set. If there are no more result sets, the method returns False. Otherwise, it returns True and subsequent calls to the fetch methods will return rows from the next result set.

This method is primarily used if you have stored procedures that return multiple results.

To wait for a stored procedure to finish executing before moving on with the rest of the program, add the following loop after the execute call that runs the stored procedure on the cursor.

slept = 0
while cursor.nextset():
    if slept >= TIMEOUT:
        break
    time.sleep(1)
    slept += 1

You could also lower the time.sleep() value from 1 second to a bit under a second to reduce the extra wait time, but I don't recommend polling many times per second.

Here is a full program showing how this code would be implemented:

import time
import pyodbc

connection = pyodbc.connect('DRIVER={SQL Server};SERVER=<hostname>;PORT=1433;DATABASE=<database name>;UID=<database user>;PWD=password;CHARSET=UTF-8;')
cursor = connection.cursor()

TIMEOUT = 20  # Max number of seconds to wait for procedure to finish execution
params = ['value1', 2, 'value3']
cursor.execute("BEGIN EXEC dbo.sp_StoredProcedureName ?, ?, ? END", *params)

# here's where the magic happens with the nextset() function
slept = 0
while cursor.nextset():
    if slept >= TIMEOUT:
        break
    time.sleep(1)
    slept += 1

cursor.close()
connection.close()

Upvotes: 4

vitalious

Reputation: 311

I know this is old, but I just spent several hours trying to figure out how to make my Python code wait for a stored proc on MSSQL to finish.

The issue is not with asynchronous calls.

The key to resolving this issue is to make sure that your procedure does not return any messages until it's finished running. Otherwise, pyodbc interprets the first message from the proc as the end of it.

Run your procedure with SET NOCOUNT ON. Also, make sure any PRINT or RAISERROR statements you might use for debugging are muted.

Add a BIT parameter like @muted to your proc and only raise your debugging messages if it's 0.

In my particular case, I'm executing a proc to process a loaded table and my application was exiting and closing the cursor before the procedure finished running because I was getting row counts and debugging messages.

So to summarize, do something along the lines of

cursor.execute('SET NOCOUNT ON; EXEC schema.proc @muted = 1')

and pyodbc will wait for the proc to finish.
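
To illustrate, here's a minimal sketch of what the procedure-side guard might look like, created and called through pyodbc. The procedure name dbo.usp_LongTask, its body, and the connection string are placeholders assumed for the example, not part of the original answer:

import pyodbc

cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=<hostname>;DATABASE=<database>;UID=<user>;PWD=<password>')
cnxn.autocommit = True
cursor = cnxn.cursor()

# Hypothetical procedure: debug output is only produced when @muted = 0
cursor.execute("""
CREATE PROCEDURE dbo.usp_LongTask @muted BIT = 0
AS
BEGIN
    SET NOCOUNT ON;                      -- suppress 'N rows affected' messages
    -- ... long-running work goes here ...
    IF @muted = 0
        PRINT 'usp_LongTask finished';   -- debug message, silenced when @muted = 1
END
""")

# With NOCOUNT on and @muted = 1 the proc sends nothing back early,
# so this execute() blocks until the procedure has actually finished.
cursor.execute('SET NOCOUNT ON; EXEC dbo.usp_LongTask @muted = 1')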

Upvotes: 18

Tzuravn

Reputation: 1

I think my way is a little more crude, but at the same time much easier to understand:

cursor = connection.cursor()
SQLCommand = ("IF EXISTS(SELECT 1 FROM msdb.dbo.sysjobs J "
              "JOIN msdb.dbo.sysjobactivity A ON A.job_id = J.job_id "
              "WHERE J.name = 'dbo.SPNAME' AND A.run_requested_date IS NOT NULL "
              "AND A.stop_execution_date IS NULL) "
              "SELECT 'The job is running!' "
              "ELSE SELECT 'The job is not running.'")
cursor.execute(SQLCommand)
results = cursor.fetchone()
sresult = str(results)
while "The job is not running" in sresult:
    time.sleep(1)
    cursor.execute(SQLCommand)
    results = cursor.fetchone()
    sresult = str(results)

while "SPNAME" return "the job is not running" from the jobactivity table sleep 1 second and check the result again. this work for sql job, for SP should like in another table

Upvotes: 0

Ben Caine

Reputation: 1256

Here's my workaround:

In the database, I make a table called RunningStatus with just one field, status, which is a bit, and just one row, initially set to 0.

At the beginning of my stored procedure, I execute the line

update RunningStatus set status = 1;

And at the end of the stored procedure,

update RunningStatus set status = 0;

In my Python script, I open a new connection and cursor to the same database. After my execute line, I simply add

while True:
    q = status_check_cursor.execute('select status from RunningStatus').fetchone()
    if q[0] == 0:
        break

You need to make a new connection and cursor, because any calls from the old connection will interrupt the stored procedure and potentially cause status to never go back to 0.

It's a little janky but it's working great for me!
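
For reference, here's a minimal sketch of the polling side of this workaround, assuming the RunningStatus setup above. The connection string is a placeholder, MyProcedure is the procedure from the question, and the short pause between polls is just an optional way to avoid hammering the server:

import time
import pyodbc

CNXN_STR = 'DRIVER={SQL Server};SERVER=<hostname>;DATABASE=<database>;UID=<user>;PWD=<password>'

# Connection/cursor used only to run the long stored procedure
proc_cnxn = pyodbc.connect(CNXN_STR, autocommit=True)
proc_cursor = proc_cnxn.cursor()

# Separate connection/cursor used only to poll RunningStatus,
# so the status checks never touch the connection running the proc
status_cnxn = pyodbc.connect(CNXN_STR, autocommit=True)
status_check_cursor = status_cnxn.cursor()

proc_cursor.execute('exec MyProcedure')  # the proc sets status to 1 at the start and 0 at the end

while True:
    q = status_check_cursor.execute('select status from RunningStatus').fetchone()
    if q[0] == 0:
        break
    time.sleep(0.1)  # optional pause between polls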

Upvotes: 3

Divij Rajkumar

Reputation: 31

There's no Python built-in that allows you to wait for an asynchronous call to finish. However, you can achieve this behaviour using Tornado's IOLoop. Tornado's gen interface allows you to register a function call as a Task and return to the next line in your function once the call has finished executing. Here's an example using gen and gen.Task:

from tornado import gen

@gen.engine
def func(*args, **kwargs):
    for _ in range(5):
        yield gen.Task(async_function_call, arg1, arg2)

    return

In the example, execution of func resumes after async_function_call is finished. This way subsequent calls to async_function_call won't overlap, and you won't have to pause execution of the main process with the time.sleep call.

Upvotes: 1
