Reputation: 4241
I have a pandas time series DataFrame, df.
date is the index; there are three columns: cusip, ticker, factor.
I want to decile the data per date. There are about 100 factor values per date... Each date will be deciled 1 to 10.
As a first attempt, I tried to decile the whole DataFrame regardless of date. I used:
factor = pd.cut(df.factor, 10) #This gave an error:
adj = (mx - mn) * 0.001 # 0.1% of the range
Sybase.Error: ('Layer: 2, Origin: 4\ncs_calc: cslib user api layer: common library error: The conversion/operation resulted in overflow.')
The DataFrame has 1mm rows. Three questions: is it a size issue? Is it a NaN issue? And how do I decile per date rather than over the whole frame?
Thank you for the help. I'm new to pandas and Python.
SAMPLE DATA:
df:
            cusip ticker  factor
date
2012-01-05  XXXXX    ABC    4.26
2012-01-05  YYYYY    BCD   -1.25
...(100 more stocks on this date)
2012-01-06  XXXXX    ABC    3.25
2012-01-06  YYYYY    BCD   -1.55
...(100 more stocks on this date)
OUTPUT I WOULD LIKE:
# a column with the deciles, lined up with the rows of df
decile
10
2
...
10
3
...
I can then append this to my DataFrame as a new column, so that each date is deciled and each data point has its corresponding decile for that date. Thanks.
Stack Trace:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/misc/apps/linux/python-2.6.1/lib/python2.6/site-packages/pandas-0.10.0-py2.6-linux-x86_64.egg/pandas/core/groupby.py", line 1817, in transform
    res = wrapper(group)
  File "/misc/apps/linux/python-2.6.1/lib/python2.6/site-packages/pandas-0.10.0-py2.6-linux-x86_64.egg/pandas/core/groupby.py", line 1807, in <lambda>
    wrapper = lambda x: func(x, *args, **kwargs)
  File "<stdin>", line 1, in <lambda>
  File "/misc/apps/linux/python-2.6.1/lib/python2.6/site-packages/pandas-0.10.0-py2.6-linux-x86_64.egg/pandas/tools/tile.py", line 138, in qcut
    bins = algos.quantile(x, quantiles)
  File "/misc/apps/linux/python-2.6.1/lib/python2.6/site-packages/pandas-0.10.0-py2.6-linux-x86_64.egg/pandas/core/algorithms.py", line 272, in quantile
    return algos.arrmap_float64(q, _get_score)
  File "generated.pyx", line 1841, in pandas.algos.arrmap_float64 (pandas/algos.c:71156)
  File "/misc/apps/linux/python-2.6.1/lib/python2.6/site-packages/pandas-0.10.0-py2.6-linux-x86_64.egg/pandas/core/algorithms.py", line 257, in _get_score
    idx % 1)
  File "/misc/apps/linux/python-2.6.1/lib/python2.6/site-packages/pandas-0.10.0-py2.6-linux-x86_64.egg/pandas/core/algorithms.py", line 279, in _interpolate
    return a + (b - a) * fraction
  File "build/bdist.linux-x86_64/egg/Sybase.py", line 246, in _cslib_cb
Sybase.Error: ('Layer: 2, Origin: 4\ncs_calc: cslib user api layer: common library error: The conversion/operation resulted in overflow.', <ClientMsgType object at 0x1c4da730>)
Upvotes: 2
Views: 3650
Reputation: 40628
Toy example. First make a datetime index; here I make an index using two days repeated 10 times each, and then make some dummy data using randn.
In [1]: from datetime import datetime
In [2]: from numpy.random import randn
In [3]: from pandas import DataFrame, qcut
In [4]: date_index = [datetime(2012, 1, 1)] * 10 + [datetime(2013, 1, 1)] * 10
In [5]: df = DataFrame({'A': randn(20), 'B': randn(20)}, index=date_index)
In [6]: df
Out[6]:
A B
2012-01-01 -1.155124 1.018059
2012-01-01 -0.312090 -1.083568
2012-01-01 0.688247 -1.296995
2012-01-01 -0.205218 0.837194
2012-01-01 0.700611 -0.001015
2012-01-01 1.996796 -0.914564
2012-01-01 -2.268237 0.517232
2012-01-01 -0.170778 -0.143245
2012-01-01 -0.826039 0.581035
2012-01-01 -0.351097 -0.013259
2013-01-01 -0.767911 -0.009232
2013-01-01 -0.322831 -1.384785
2013-01-01 0.300160 0.334018
2013-01-01 -1.406878 -2.275123
2013-01-01 1.722454 0.873262
2013-01-01 0.635711 -1.763352
2013-01-01 -0.816891 -0.451424
2013-01-01 -0.808629 -0.092290
2013-01-01 0.386046 -1.297096
2013-01-01 0.261837 0.562373
If I understand your question correctly, you want to decile within each date. To do that, you can first move the index into the DataFrame as a column with reset_index. Then group by the new column (here it's called index) and use transform with a lambda function. The lambda below applies pandas.qcut to each grouped series and returns its labels attribute (the integer decile codes, 0 through 9).
In [7]: df.reset_index().groupby('index').transform(lambda x: qcut(x, 10).labels)
Out[7]:
A B
0 1 9
1 4 1
2 7 0
3 5 8
4 8 5
5 9 2
6 0 6
7 6 3
8 2 7
9 3 4
10 3 6
11 4 2
12 6 7
13 0 0
14 9 9
15 8 1
16 1 4
17 2 5
18 7 3
19 5 8
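Applied to the frame in the question, a minimal sketch along the same lines (assuming the index is named date as in the sample data, so reset_index produces a date column, and that factor is the numeric column to cut) would be:
deciles = (df.reset_index()
             .groupby('date')['factor']
             .transform(lambda x: qcut(x, 10).labels + 1))  # labels run 0-9, so +1 gives deciles 1-10
df['decile'] = deciles.values  # same length and row order as df, so assign positionally
Selecting the factor column before transform keeps qcut away from the string columns (cusip, ticker). In later pandas versions the labels attribute was renamed codes; qcut(x, 10, labels=False) + 1 returns the same integers directly.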
Upvotes: 2