deltap

Reputation: 4206

SQL iterate over distinct values of a column and build a data frame for each value

I have a table that looks like the following:

|A|B|C|D|
|---|---|---|---|
|1|b1|c1|d1|
|1|b2|c2|d2|
|2|b3|c3|d3|
|2|b4|c4|d4|

I would like to iterate over the distinct values of A, build a pandas data frame from the remaining columns for each value, and then use that frame to do calculations. I tried the following:

import sqlite3
import pandas as pd
conn = sqlite3.connect('my_db.db')
c = conn.cursor()
for entry in c.execute("SELECT DISTINCT A FROM table"):
    df = pd.DataFrame(c.execute("SELECT * FROM table WHERE A = ?", (entry[0],)).fetchall())

This doesn't work because the second execute call that builds the dataframe overwrites the result set of the cursor I was iterating over. I also discovered that you cannot have two cursor objects. How should I work around this?

Upvotes: 0

Views: 2034

Answers (4)

deltap

Reputation: 4206

In the end the solution was to use pandas.read_sql with chunksize.

I found this post useful as well.

import sqlite3
import pandas as pd
conn = sqlite3.connect('my_db.db')

last_df = None   # last group of the previous chunk, carried over
last_a  = None   # the value of A for that carried-over group

for df in pd.read_sql("SELECT * FROM table ORDER BY A ASC", conn, chunksize=100000):
    group = df.groupby('A')
    # value of A in the final group of this chunk; it may continue in the next chunk
    last  = group.first().tail(1).index.values[0]
    for a, g_df in group:
        if a == last_a:
            # stitch this group onto the piece carried over from the previous chunk
            g_df = last_df.append(g_df)

        ....calculations....

        if a == last:
            last_df = g_df
            last_a  = a

It is really important to have logic that stitches together the groups of a groupby that get split across two different chunks.
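
For illustration, here is a minimal, self-contained sketch of that stitching idea under the same assumptions (table named table, grouping column A); the per-group row count is a hypothetical stand-in for whatever calculation you actually run:

import sqlite3
import pandas as pd

conn = sqlite3.connect('my_db.db')

carry_a, carry = None, None   # last (possibly incomplete) group of the previous chunk
row_counts = {}               # hypothetical calculation: number of rows per value of A

for chunk in pd.read_sql("SELECT * FROM table ORDER BY A ASC", conn, chunksize=100000):
    if carry is not None:
        # prepend the held-back rows so they are grouped with their continuation
        chunk = pd.concat([carry, chunk], ignore_index=True)

    groups = list(chunk.groupby('A'))

    # the last group of this chunk may spill into the next one, so hold it back
    carry_a, carry = groups[-1]

    for a, g_df in groups[:-1]:
        row_counts[a] = len(g_df)   # real calculations would go here

# once the chunks are exhausted, the held-back group is complete
if carry is not None:
    row_counts[carry_a] = len(carry)

conn.close()
print(row_counts)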

Upvotes: 0

Parfait

Reputation: 107587

Consider using pandas' read_sql (parameterizing the query with the cursor value) and iteratively saving each dataframe to a dictionary keyed by the corresponding distinct value (the dict route avoids littering your global environment with multiple dfs):

import sqlite3
import pandas as pd

conn = sqlite3.connect('my_db.db')
c = conn.cursor()

dfDict = {}
for entry in c.execute("SELECT DISTINCT A FROM table"):
    strSQL = "SELECT * FROM table WHERE A = :nameofparam"   
    dfDict[entry[0]] = pd.read_sql(strSQL, conn, params={'nameofparam': entry[0]})

c.close()
conn.close()

for k, v in dfDict.items():
    print(k, '\n', v.head())

Upvotes: 1

it's-yer-boy-chet

Reputation: 2017

Put all the data you're interested in into a DataFrame (if it's not a huge dataset), then filter it in pandas.

# reuse the cursor from the question; pass column names so df['A'] works
df = pd.DataFrame(c.execute("SELECT * FROM table").fetchall(),
                  columns=[col[0] for col in c.description])
distinct_a = df['A'].unique()
for a in distinct_a:
    df_for_this_a = df[df['A'] == a]   # or df.query('A == @a')

Upvotes: 1

Kartik

Reputation: 8683

Is there a particular reason you don't want to do this whole operation in pandas itself? You could simply do it like so:

parent_df = pd.read_sql("SELECT * FROM table", conn)
for name, group in parent_df.groupby('A'):
    print(name, group.head())

Or

parent_df.set_index('A', inplace=True)
parent_df.head(20)
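
If the point of setting the index is to pull out one group at a time, here is a small self-contained sketch (the value 1 simply comes from the sample data in the question):

import sqlite3
import pandas as pd

conn = sqlite3.connect('my_db.db')
parent_df = pd.read_sql("SELECT * FROM table", conn)

parent_df.set_index('A', inplace=True)
# .loc with a list keeps the result a DataFrame even if only one row matches
rows_for_a1 = parent_df.loc[[1]]   # all rows whose A value is 1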

Upvotes: 1
