Student

Reputation: 684

Flask/FastAPI SQLite pytest fixture returns None unless row ID is specified

I'm testing a FastAPI app with pytest. I've created a client fixture that builds a SQLite DB from CSV files:

import pytest
from os import path, listdir, remove
from pandas import read_csv
from fastapi.testclient import TestClient
from api.main import app
from api.db import engine, db_url

@pytest.fixture(scope="session")
def client():
    db_path = db_url.split("///")[-1]
    if path.exists(db_path):
        remove(db_path)
    file_path = path.dirname(path.realpath(__file__))
    table_path = path.join(file_path, "mockdb")
    for table in listdir(table_path):
        df = read_csv(path.join(table_path, table))
        df.to_sql(table.split('.')[0], engine, if_exists="append", index=False)
    client = TestClient(app)
    yield client

My DB setup in the FastAPI app:

import os
from sys import modules
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

dirname = os.path.dirname(__file__)
if "pytest" in modules:
    mock_db_path = os.path.join(dirname, '../test/mockdb/test.db')
    db_url = f"sqlite:///{mock_db_path}"
else:
    db_url = os.environ.get("DATABASE_URL", None)
if "sqlite" in db_url:
    engine = create_engine(db_url, connect_args={"check_same_thread": False})
else:
    engine = create_engine(db_url)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base = declarative_base()

This works: I can set up tests for app endpoints that query the DB, and the data I put in the CSVs is returned. For example, after adding one row to mockdb/person.csv:

from api.db import SessionLocal
from tables import Person

db = SessionLocal()
all = db.query(Person).all()
print(all)
[<tables.Person object at 0x7fc829f81430>]

I am now trying to test code which adds new rows to tables in the database.

This only works if I specify the ID (assume this occurs during the pytest run):

db.add(Person(id=2, name="Alice"))
db.commit()
all = db.query(Person).all()
print(all)
[<tables.Person object at 0x7fc829f81430>, <tables.Person object at 0x7fc829f3bdc0>]

The above result is how I'd expect the program to behave. However, if I don't specify the ID, the resulting row is None:

db.add(Person(name="Alice"))
db.commit()
all = db.query(Person).all()
print(all)
[<tables.Person object at 0x7fc829f81430>, None]

This result is not how I expect the program to behave.

The code that I want to test does not specify IDs; it relies on autoincrement, as is good practice. As a result, I am unable to test this code: it simply creates these Nones.

At first, I thought the culprit was not creating the tables with Base.metadata.create_all(). However, I have tried placing this call both in my client fixture and after my DB setup (i.e. in the first two code blocks above), but the result is the same: Nones.
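For reference, this is roughly how the fixture looked with that call in place (a sketch; the CSV loading is unchanged from above):

from api.db import Base, engine, db_url

@pytest.fixture(scope="session")
def client():
    db_path = db_url.split("///")[-1]
    if path.exists(db_path):
        remove(db_path)
    Base.metadata.create_all(bind=engine)  # create tables before loading CSVs
    # ... load CSVs into the DB as above ...
    client = TestClient(app)
    yield client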

Stepping through with the debugger, when the Person row is added, the following error appears:

sqlalchemy.orm.exc.ObjectDeletedError: Instance '<Person at 0x7fc829f3bdc0>' has been deleted, or its row is otherwise not present.

Why is the resulting row None and how do I solve this error?

Upvotes: 1

Views: 597

Answers (1)

Student

Reputation: 684

The cause of the error was a column type in my DB that is not compatible with SQLite, namely PostgreSQL's ARRAY type. Unfortunately, there was no error message hinting at this. The simplest solution is to remove this column or change its type.
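For illustration, the offending model looked something like this (the names here are placeholders; the key part is the ARRAY column):

from sqlalchemy import Column, Integer, String
from sqlalchemy.dialects.postgresql import ARRAY

from api.db import Base

class MyOffendingTable(Base):
    __tablename__ = "my_offending_table"
    id = Column(Integer, primary_key=True)
    # PostgreSQL-only type with no SQLite support, which is what
    # silently broke the inserts in the test fixture
    my_offending_column = Column(ARRAY(String))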

It is also possible to retain the column and the SQLite fixture by changing client() as follows:

from api.db import Base
from mytableschema import MyOffendingTable

@pytest.fixture(scope="session")
def client():
    db_path = db_url.split("///")[-1]
    if path.exists(db_path):
        remove(db_path)
    # Drop the PostgreSQL-only column from the table metadata (note that
    # _columns is a private attribute), then create the tables
    table_meta = Base.metadata.tables[MyOffendingTable.__tablename__]
    table_meta._columns.remove(table_meta._columns["my_offending_column"])
    Base.metadata.create_all(bind=engine)
    file_path = path.dirname(path.realpath(__file__))
    table_path = path.join(file_path, "mockdb")
    for table in listdir(table_path):
        df = read_csv(path.join(table_path, table))
        df.to_sql(table.split('.')[0], engine, if_exists="append", index=False)
    client = TestClient(app)
    yield client

It is now possible to proceed as normal if you remove my_offending_column from the MyOffendingTable CSV. No more Nones!

Sadly, querying the offending table during the test run will still run into issues, since the SELECT statement will look for the nonexistent my_offending_column. For those needing to query said table, I recommend using dialect-specific compilation rules.
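A minimal sketch of that approach, assuming the ARRAY column from above: SQLAlchemy's compiler extension can override how a type is rendered for a specific dialect, so the column can stay in the model and be created as plain TEXT under SQLite.

from sqlalchemy.dialects.postgresql import ARRAY
from sqlalchemy.ext.compiler import compiles

@compiles(ARRAY, "sqlite")
def compile_array_as_text(type_, compiler, **kwargs):
    # When compiling DDL/SQL for the SQLite dialect, render the
    # PostgreSQL ARRAY type as TEXT
    return "TEXT"

Note that this only changes how the type is rendered; if your tests actually read or write list values in that column, you would still need to handle serialization yourself.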

Upvotes: 0
