Ravinath

Reputation: 65

Lock Table overflow issue in Progress 4GL

I am facing a lock table overflow issue. Below is the error it displays, and as soon as it appears the program crashes.

Lock table overflow, increase -L on server (915)

I have checked the error number and it says the -L value needs to be changed before the server starts; it is set to 500 by default. But I don't imagine I have the privilege to change that value, since I am not a database administrator at the company. What I was trying to do was wipe out roughly 11k member records along with all their linked table records (more than 25 tables are linked to each member record), while backing each table up into a separate file. So, roughly, it acquires an EXCLUSIVE-LOCK when entering the member FOR EACH loop, as below:

for each member exclusive-lock:

    /*
       Then find each linked record, in order.
       Extract it.
       Delete it.
    */

    /* Finally it extracts the member. */
    delete member.

end.

When it hits a certain number of member records the program crashes out, so I had to run it in batches, like:

define variable iCount as integer no-undo.

for each member exclusive-lock:

    /* Increment a member count; when count = 1k, RETURN. */
    iCount = iCount + 1.
    if iCount = 1000 then return.

    /*
       Then find each linked record, in order.
       Extract it.
       Delete it.
    */

    /* Finally it extracts the member. */
    delete member.

end.

So I literally ended up running the same code more than 11 times to get the work done. I hope someone has come across this issue before; it would be a great help if you could share a long-term solution rather than my temporary one.

Upvotes: 0

Views: 2380

Answers (2)

Henry Todd

Reputation: 14

You could also increase your -L database startup parameter to take into account your one-off query / delete.
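
For example (the database name here is hypothetical, and changing this needs DBA privileges plus a server restart), -L is passed on the server startup command line, something like:

    proserve sportsdb -L 16384

The new value only takes effect when the database server is restarted with it.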

Upvotes: 0

Tom Bascom

Reputation: 14020

You need a lock for each record that is part of a transaction. Otherwise other users could make conflicting changes before your transaction commits.

In your code you have a transaction that is scoped to the outer FOR EACH. Thus you need 1 lock for the "member" record and another lock for each linked record associated with that member.

(Since you are not showing real code it is also possible that your actual code has a transaction scope that is even broader...)

The lock table must be large enough to hold all of these locks. The lock table is also shared by all users -- so not only must it hold your locks but there has to be room for whatever other people are doing as well.

FWIW -- 500 is very, very low. The default is 8192. There are two startup parameters using the letter "l", one is upper case, -L, and that is the lock table and it is a server startup parameter. Lower case, -l, is the "local buffer size" and that is a client parameter. (It controls how much memory is available for local variables.)
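
To illustrate the difference (database name hypothetical, values just for the sketch), the two parameters go on different command lines:

    proserve sportsdb -L 8192     (server startup: size of the shared lock table)
    mpro sportsdb -l 400          (client startup: local buffer size)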

"Batching", as you have sort of done, is the typical way to ensure that no one process uses too many locks. But if your -L is really only 500 a batch size of 1,000 makes no sense. 100 is more typical.

A better way to batch:

define variable b as integer no-undo.

define buffer delete_member for member.
define buffer delete_memberLink for memberLink.  /* for clarity I'll just do a single linked table... */

for each member no-lock:  /* do NOT get a lock */

  batch_loop: do for delete_member, delete_memberLink while true transaction:

    b = 0.

    /* delete linked records in batches of 100; NEXT ends this transaction,
       committing the batch and releasing its locks, then starts a new one */
    for each delete_memberLink exclusive-lock where delete_memberLink.id = member.id:
      b = b + 1.
      delete delete_memberLink.
      if b >= 100 then next batch_loop.
    end.

    /* no linked records remain -- now delete the member itself */
    find delete_member exclusive-lock where recid( delete_member ) = recid( member ).
    delete delete_member.

    leave batch_loop.  /* this will only happen if we did NOT execute the NEXT */

  end.

end.
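
The reason this works: each pass through batch_loop is a separate transaction, so the NEXT after 100 deletes commits that batch and releases its locks before the next batch starts. No matter how many linked records a member has, the process holds on the order of 100 locks at any moment, which fits comfortably inside even a small -L.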

Upvotes: 2
