Richard

Reputation: 65510

django.db.utils.DatabaseError: out of shared memory

I've written a script that loads some objects into a Django database using the Django ORM. The underlying database is Postgres.

After running happily for a while, the script fails with this error:

django.db.utils.DatabaseError: out of shared memory
HINT: You might need to increase max_locks_per_transaction.

I'm guessing this is a problem with my script's efficiency, rather than the database settings.

The script iterates over a CSV file and creates a database object for every row. Typically there are a couple of thousand objects to create. I've read some background material on database efficiency in Django, and I can rule out some common mistakes - I'm not iterating over a queryset, or using __in queries or OFFSET.

But I do have quite a lot of indexes on fields in my database, and I guess that each time I create and save an object, Django has to update all the indexes. I have six indexes on the StoreItem fields, for example.

for item in csv_rows:
    s, created = StoreItem.objects.get_or_create(
        display_url=item['display_url'],
        retailer_img_url=item['retailer_img_url'],
        store=store_obj,
    )
    s.name = item['name']
    s.description = item['description']
    s.affiliate = item['affiliate']
    # ... more stuff
    s.save()

Two questions:

  1. Is it possible that updating the database indexes could cause this error?
  2. How can I debug if this is the case?

Upvotes: 3

Views: 2779

Answers (1)

Matt Seymour

Reputation: 9395

I had a quick Google and there are a couple of resources to look at:

  1. postgresql: out of shared memory?
  2. http://www.databasesoup.com/2012/06/postgresqlconf-maxlockspertransaction.html

This quote is taken from http://www.postgresql.org/docs/9.1/static/runtime-config-locks.html

max_locks_per_transaction (integer)

The shared lock table tracks locks on max_locks_per_transaction * (max_connections + max_prepared_transactions) objects (e.g., tables); hence, no more than this many distinct objects can be locked at any one time. This parameter controls the average number of object locks allocated for each transaction; individual transactions can lock more objects as long as the locks of all transactions fit in the lock table. This is not the number of rows that can be locked; that value is unlimited. The default, 64, has historically proven sufficient, but you might need to raise this value if you have clients that touch many different tables in a single transaction. This parameter can only be set at server start.

Increasing this parameter might cause PostgreSQL to request more System V shared memory than your operating system's default configuration allows. See Section 17.4.1 for information on how to adjust those parameters, if necessary.

When running a standby server, you must set this parameter to the same or higher value than on the master server. Otherwise, queries will not be allowed in the standby server.
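To make that formula concrete, here is the arithmetic with stock settings (64 locks per transaction and 100 connections are the usual defaults; the exact values on your server may differ, so treat these numbers as an illustration):

```python
# Worked example of the shared lock table size, assuming stock defaults.
max_locks_per_transaction = 64   # default, per the docs quoted above
max_connections = 100            # PostgreSQL's usual default
max_prepared_transactions = 0    # default; prepared transactions disabled

# Total distinct objects (tables, indexes, ...) that can be locked
# at any one time, across all transactions:
lock_table_slots = max_locks_per_transaction * (max_connections + max_prepared_transactions)
print(lock_table_slots)  # 6400
```

Note that each index counts as a separate lockable object, so a table with six indexes can consume seven lock slots in a transaction that touches it - which is why index-heavy schemas hit this limit sooner.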

So, it looks like you should check the value of max_locks_per_transaction to make sure it's set to something sensible.

Likewise, if you are using a default install of Postgres, you should probably check the other defaults too, as in some instances they are set too low.
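Beyond raising the setting, another angle is to bound how many objects any single transaction touches by committing in fixed-size batches. This is only a sketch based on the loop in your question - the `chunked` helper and the batch size of 500 are illustrative, and the Django-specific lines are commented out because they need a configured project:

```python
from itertools import islice

def chunked(iterable, size):
    """Yield successive lists of at most `size` items (illustrative helper)."""
    it = iter(iterable)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch

# Usage sketch -- the Django parts are assumed, not runnable standalone:
# from django.db import transaction
# for batch in chunked(csv_rows, 500):
#     with transaction.atomic():
#         for item in batch:
#             s, created = StoreItem.objects.get_or_create(...)
#             ...
#             s.save()
```

Each `transaction.atomic()` block then holds locks for at most one batch's worth of objects before committing and releasing them.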

Upvotes: 3
