geo_so

Reputation: 802

How to efficiently columnize (=pivoting) pandas DataFrame (with groupby)?

To give you the context of the question:

I have a decent-sized SQL table (72M rows, 6 GB) with data that can be understood as "column-based", e.g.:

------------------------------
| fk_id | date       | field |
------------------------------
|     1 | 2001-01-02 |    24 |
|     1 | 2001-01-03 |    25 |
|     1 | 2001-01-04 |    21 |
|     1 | 2001-01-05 |    20 |
|     1 | 2001-01-06 |    30 |
|     1 | 2001-01-07 |    33 |
|            ....            |
|     2 | 2001-01-02 |    10 |
|     2 | 2001-01-03 |    15 |
|     2 | 2001-01-04 |    12 |
|     2 | 2001-01-05 |    11 |
|     2 | 2001-01-06 |    10 |
|     2 | 2001-01-07 |    12 |
|            ....            |
|            ....            |
| 12455 | 2015-01-01 |    99 |
| 12456 | 2005-10-10 |    10 |
| 12456 | 2005-10-11 |    10 |
|            ....            |
------------------------------

The desired end result in Python, as a pandas.DataFrame, should look like this: date becomes the index column, the foreign keys become the column names, and the values of the column field fill the matrix:

------------------------------------------------------
| date       |     1 |     2 |  .... | 12455 | 12456 | 
------------------------------------------------------
| 2001-01-02 |    24 |    10 |  .... |   NaN |   NaN |
| 2001-01-03 |    25 |    15 |  .... |   NaN |   NaN |
| 2001-01-04 |    21 |    12 |  .... |   NaN |   NaN |
| 2001-01-05 |    20 |    11 |  .... |   NaN |   NaN |
| 2001-01-06 |    30 |    10 |  .... |   NaN |   NaN |
| 2001-01-07 |    33 |    12 |  .... |   NaN |   NaN |
|       .... |    .. |    .. |  .... |  .... |  .... |
| 2005-10-10 |    50 |     4 |  .... |   NaN |    10 |
| 2005-10-11 |    51 |     3 |  .... |   NaN |    10 |
|       .... |    .. |    .. |  .... |  .... |  .... |
| 2015-01-01 |    40 |   NaN |  .... |    50 |    99 |
------------------------------------------------------

So far, I accomplish this with the following code:

# Excerpt from a class; assumes "import pandas as pd" and
# "from sqlalchemy import select, and_" at module level.
def _split_by_fk(self, df):
    """
    :param df: pandas.DataFrame
    :return: pandas.DataFrame
    """
    data = dict()
    res = df.groupby('fk_id')
    for r in res:
        fk_id = r[0]
        data[fk_id] = r[1]['field']
    return pd.DataFrame(data)

def get_data(self, start, end):
    s = select([daily_data.c.date, daily_data.c.fk_id, daily_data.c.field])\
        .where(and_(end >= daily_data.c.date, daily_data.c.date >= start))\
        .order_by(daily_data.c.fk_id, daily_data.c.date)
    data = pd.read_sql(s, con=db_engine, index_col='date')
    return self._split_by_fk(data)


>>> get_data('1960-01-01', '1989-12-31')

which does basically:

  1. Query the SQL DB via SQLAlchemy, directly through the pandas.read_sql function.
  2. groupby the received DataFrame by fk_id.
  3. Iterate over the groupby result and put each group's field column in a dictionary.
  4. Convert the dict into a DataFrame (see the standalone sketch below).
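
For illustration, here is a minimal standalone sketch of steps 2-4 (the toy frame below is made up; in the real code it comes from read_sql with index_col='date'):

import pandas as pd

# Toy long-format frame, indexed by date.
df = pd.DataFrame(
    {'fk_id': [1, 1, 2, 2],
     'field': [24, 25, 10, 15]},
    index=pd.to_datetime(['2001-01-02', '2001-01-03',
                          '2001-01-02', '2001-01-03']).rename('date'))

# Group by fk_id, collect each group's 'field' Series in a dict,
# and let the DataFrame constructor align them on the shared date index
# (dates missing for a given fk_id become NaN).
data = {fk_id: group['field'] for fk_id, group in df.groupby('fk_id')}
wide = pd.DataFrame(data)   # one column per fk_id, one row per date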

Querying 29 years of daily data with 13,813 columns takes 4 min 38 s with the above approach (the whole DataFrame takes up 796.5 MB in memory). %lprun shows that most of the time is spent in the read_sql function and the rest in _split_by_fk (excerpt of the output):

% Time   Line Contents
===============================================================
83.8     data = pd.read_sql(s, con=db_engine, index_col='date')
16.2     return self._split_by_fk(data)
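
(The excerpt above comes from IPython's line_profiler; a possible invocation, with "obj" standing in for the instance that holds the methods above, would be along these lines:)

%load_ext line_profiler
%lprun -f obj.get_data obj.get_data('1960-01-01', '1989-12-31')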

My code does not feel very elegant, as I am collecting all groups in a dictionary only to transform them into a DataFrame again.

Now to my actual question: Is there a (more) efficient/pythonic way to "columnize" a pandas.DataFrame in the manner shown above?


PS: I would not be unhappy about pointers and hints in more general directions regarding the handling of such data structures and amounts of data, though I think that it should be possible to solve everything "small data"-style.

Upvotes: 1

Views: 390

Answers (2)

santon

Reputation: 4625

If the combination of fk_id and date is always unique, you can do something like:

df = pd.DataFrame({'fk_id': [1, 2, 3],
                   'date': pd.date_range('1/1/2015', periods=3),
                   'field': [25, 24, 1]})


#         date  field  fk_id
# 0 2015-01-01     25      1
# 1 2015-01-02     24      2
# 2 2015-01-03      1      3

df.groupby(['date', 'fk_id']).agg(lambda x: x.unique()).unstack()


#            field        
# fk_id          1   2   3
# date                    
# 2015-01-01    25 NaN NaN
# 2015-01-02   NaN  24 NaN
# 2015-01-03   NaN NaN   1

If they're not always unique, you may need some more complicated strategy for aggregating values.
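
For example, a minimal sketch (with made-up data) using pivot_table, which aggregates duplicate (date, fk_id) pairs via aggfunc:

import pandas as pd

# Made-up frame with a duplicated (date, fk_id) pair.
df = pd.DataFrame({'fk_id': [1, 1, 2],
                   'date': pd.to_datetime(['2015-01-01', '2015-01-01',
                                           '2015-01-02']),
                   'field': [25, 27, 24]})

# pivot_table aggregates the duplicates; 'mean' is the default aggfunc,
# but 'first', 'sum', etc. work as well.
df.pivot_table(index='date', columns='fk_id', values='field', aggfunc='mean')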

Upvotes: 0

BrenBarn

Reputation: 251438

If I understand you right, you can do df.pivot(index='date', columns='fk_id', values='field').
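
For instance, with a toy frame shaped like the question's table (data made up for illustration):

import pandas as pd

# Toy long-format frame with the question's column names.
df = pd.DataFrame({'fk_id': [1, 1, 2, 2],
                   'date': pd.to_datetime(['2001-01-02', '2001-01-03',
                                           '2001-01-02', '2001-01-03']),
                   'field': [24, 25, 10, 15]})

wide = df.pivot(index='date', columns='fk_id', values='field')
# wide has one column per fk_id and one row per date, with NaN wherever
# a (date, fk_id) pair is missing. Note that pivot raises an error if
# any (date, fk_id) pair appears more than once.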

"I think that it should be possible to solve everything 'small data'-style."

Good luck with that. A DataFrame with 12000 columns is unlikely to perform well.

Upvotes: 1
