Timothy Lombard

Reputation: 967

My approach for avoiding multiple flask-sqlalchemy db entries

My Flask app reads a time-based CSV log file of observations to which new entries are appended. To avoid duplicates, my approach is to collect the variables and test for an existing record before adding it, as in:

    sl = (db.session.query(DesignObservation)
          .filter_by(worker_id=worker_id)
          .filter_by(design_id=design_id)
          .filter_by(ob_date=ob_date))

    if sl.all():
        pass
    else:
        db.session.add(observation_record)
        db.session.commit()

I'm not sure why this pattern does not add the record. If I forgo the test, the record is added, but duplicates would then be added on subsequent runs.

Upvotes: 0

Views: 58

Answers (2)

Timothy Lombard

Reputation: 967

What I finally ended up doing was pairing the design_id with an ob_date to determine uniqueness, then checking with the function below, which returns True when the record to add already exists.

from dateutil.parser import parse as date_parse

def check_4_obs_dups(design_id, ob_date):
    """Return True if an observation with this design_id and ob_date already exists."""
    r = DesignObservation.query.filter_by(design_id=design_id).all()
    # ob_date is stored as a string in SQLite; convert to datetime for comparison
    s = [date_parse(x.ob_date) for x in r]

    return date_parse(ob_date) in s

Then, when iterating through each line of the CSV file, I check for uniqueness with this loop:

if not check_4_obs_dups(design_id, ob_date):
    db.session.add(observation_record)
    db.session.commit()

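The check can be sketched without a database at all. Below is a minimal, self-contained version that uses the stdlib `datetime.fromisoformat` in place of `dateutil` and a plain list of dicts in place of the ORM query results; the function signature and sample data are hypothetical, for illustration only:

```python
from datetime import datetime

def check_4_obs_dups(existing_rows, design_id, ob_date):
    """Return True if (design_id, ob_date) already exists in existing_rows.

    existing_rows is a list of dicts standing in for ORM query results;
    dates are ISO-format strings, as they would be when stored in SQLite.
    """
    seen = [
        datetime.fromisoformat(row["ob_date"])
        for row in existing_rows
        if row["design_id"] == design_id
    ]
    return datetime.fromisoformat(ob_date) in seen

# Hypothetical sample data standing in for DesignObservation rows
rows = [
    {"design_id": 1, "ob_date": "2023-05-01 08:00:00"},
    {"design_id": 1, "ob_date": "2023-05-01 09:00:00"},
]

print(check_4_obs_dups(rows, 1, "2023-05-01 08:00:00"))  # True: duplicate
print(check_4_obs_dups(rows, 1, "2023-05-02 08:00:00"))  # False: new record
```

Parsing both sides to `datetime` before comparing sidesteps mismatches between the string format in the CSV and the string format stored in SQLite.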
Upvotes: 0

T Burgis

Reputation: 1435

You should set UNIQUE constraints on the table you are writing to. The database engine will then manage the uniqueness for you.

You will have to deal with the 'UNIQUE constraint failed' exception that is coming your way ...

Upvotes: 1
