TinBane

Reputation: 936

Does SQLAlchemy's expire_all actually expire cached data?

So, I've read a lot of conflicting reports on how SQLAlchemy works. I've read that it doesn't cache queries, and so on, but none of it seems to match my experience.

I'll give an example:

>>> x = Session.query(statusStorage).all()
>>> for i in x:
...     print i.id
... 
1
... - records omitted
100000
>>> Session.expire_all()
>>> x = Session.query(statusStorage).all()
>>> for i in x:
...     print i.id
... 
1
... - records omitted
100000
>>> Session.commit()
>>> x = Session.query(statusStorage).all()
>>> for i in x:
...     print i.id
... 
1
... - records omitted
100001

So this session queries a MySQL database to get all the items of a class, then prints a field from each. After running the first query, I connect from another Python process and change the ID of the last item by adding 1, roughly as sketched below.
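
The second process does something like this (Session2 is just an illustrative name for that process's own session; statusStorage is the same mapped class as above):

    # In a separate Python process, with its own session (called Session2 here):
    # fetch the last statusStorage row and bump its id by 1, then commit.
    last = Session2.query(statusStorage).order_by(statusStorage.id.desc()).first()
    last.id = last.id + 1
    Session2.commit()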

When I run the next query, after expire_all(), it returns the same results, and the ID is still unchanged. I've had the same thing happen in code I've developed, where the query is filtered.

Both rollback() and commit() fix the problem, but both expire(object) and expire_all() fail to expire SQLAlchemy's cached data.

Am I failing to understand something implicit in SQLAlchemy? Shouldn't I be able to clear the cached contents without either a rollback or a commit? I'm running SQLAlchemy 1.0.9 on Python 2.7.

Upvotes: 1

Views: 971

Answers (1)

inklesspen

Reputation: 1186

I answered this on IRC, but the issue is simply that InnoDB's default transaction isolation level is "REPEATABLE READ", which means that queries issued inside the same open transaction are not permitted (by the database) to see your change.
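
expire_all() does mark the loaded objects as stale, so they are re-fetched on the next access, but that re-fetch still runs inside the same open transaction, and under REPEATABLE READ the transaction keeps seeing the snapshot it started with. commit() and rollback() end the transaction, which is why those two calls appear to "fix" it. If you want each query to see the latest committed data, either end the transaction before re-querying, or lower the isolation level when you create the engine. A rough sketch (the connection URL and credentials are placeholders):

    from sqlalchemy import create_engine
    from sqlalchemy.orm import sessionmaker

    # Option 1: end the current transaction so the next query starts a fresh
    # snapshot and can see rows committed by other processes.
    # Session.rollback()   # or Session.commit()

    # Option 2: ask MySQL for READ COMMITTED, so each statement sees the
    # latest committed data rather than the transaction's original snapshot.
    engine = create_engine(
        "mysql://user:password@localhost/mydb",
        isolation_level="READ COMMITTED",
    )
    Session = sessionmaker(bind=engine)()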

Upvotes: 3
