Surendra Singh

Reputation: 19

stratified sample on ranges

I have table_1, which has data such as:

Range Start   Range End   Frequency
10            20          90
20            30          68
30            40          314
40            40          191   (here the range is a single point: the value 40 repeated 191 times)

table_2:

group     value
10        56.1
10        88.3
20        53
20        20
30        55

I need a stratified sample from table_2 based on the ranges in table_1. table_2 can have millions of rows, but the result should be restricted to just 10k points.

I tried the query below:

SELECT
    d.*
FROM
    (
        SELECT
            ROW_NUMBER() OVER(
                                PARTITION BY group
                                ORDER BY group       -- ordering by the partition key makes the numbering arbitrary
                            ) AS seqnum,             -- running number within each group
            COUNT(*) OVER() AS ct,                   -- total row count of table_2
            COUNT(*) OVER(PARTITION BY group) AS cpt, -- row count per group
            group, value
        FROM
            table_2 d
    ) d
WHERE
    seqnum < 10000 * ( cpt * 1.0 / ct )              -- keep a share of 10k proportional to the group size

but I am a bit confused about how the analytic functions should be used here.

Expecting 10k records as a stratified sample from table_2:

Result table:

group     value
10        56.1
20        53
20        20
30        55

Upvotes: 0

Views: 291

Answers (2)

Alex Poole

Reputation: 191265

If I understand what you want - which is by no means certain - then I think you want to get a maximum of 10000 rows, with the number of group values proportional to the frequencies. So you can get the number of rows you want from each range with:

select range_start, range_end, frequency,
  frequency/sum(frequency) over () as proportion,
  floor(10000 * frequency/sum(frequency) over ()) as limit
from table_1;

RANGE_START  RANGE_END  FREQUENCY PROPORTION      LIMIT
----------- ---------- ---------- ---------- ----------
         10         20         90 .135746606       1357
         20         30         68 .102564103       1025
         30         40        314 .473604827       4736
         40         40        191 .288084465       2880

Those limits don't quite add up to 10000; you could go slightly above with ceil instead of floor.
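For example, the ceil version is the same allocation query with just that one function swapped (a sketch; the per-range limits then add up to a little over 10000):

select range_start, range_end, frequency,
  frequency/sum(frequency) over () as proportion,
  ceil(10000 * frequency/sum(frequency) over ()) as limit
from table_1;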

You can then assign a nominal row number to each entry in table_2 based on which range it is in, and then restrict the number of rows from that range via that limit:

with cte1 (range_start, range_end, limit) as (
  select range_start, range_end, floor(10000 * frequency/sum(frequency) over ())
  from table_1
),
cte2 (grp, value, limit, rn) as (
  select t2.grp, t2.value, cte1.limit,
    row_number() over (partition by cte1.range_start order by t2.value) as rn
  from cte1
  join table_2 t2
  on (cte1.range_end > cte1.range_start and t2.grp >= cte1.range_start and t2.grp < cte1.range_end)
  or (cte1.range_end = cte1.range_start and t2.grp = cte1.range_start)
)
select grp, value
from cte2
where rn <= limit;

...

9998 rows selected.

I've used order by t2.value in the row_number() call because it isn't clear how you want to pick which rows in the range you actually want; you might want to order by dbms_random.value or something else.
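As a sketch of that variant (same structure as above, with only the ordering inside the analytic call changed), a random pick within each range would look like:

with cte1 (range_start, range_end, limit) as (
  select range_start, range_end, floor(10000 * frequency/sum(frequency) over ())
  from table_1
),
cte2 (grp, value, limit, rn) as (
  select t2.grp, t2.value, cte1.limit,
    -- only this order by differs from the query above
    row_number() over (partition by cte1.range_start order by dbms_random.value) as rn
  from cte1
  join table_2 t2
  on (cte1.range_end > cte1.range_start and t2.grp >= cte1.range_start and t2.grp < cte1.range_end)
  or (cte1.range_end = cte1.range_start and t2.grp = cte1.range_start)
)
select grp, value
from cte2
where rn <= limit;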

db<>fiddle with some artificial data.

Upvotes: 0

Popeye

Reputation: 35900

If it means you need at least one record from each group and more records on a random basis, then try this:

SELECT GROUP, VALUE
FROM (
    SELECT T2.GROUP, T2.VALUE,
           ROW_NUMBER() OVER (PARTITION BY T2.GROUP ORDER BY NULL) AS RN
    FROM TABLE_1 T1
    JOIN TABLE_2 T2 ON (T1.RANGE = T2.GROUP)
)
WHERE RN = 1
   OR CASE WHEN RN > 1
            AND RN = CEIL(DBMS_RANDOM.VALUE(1, RN))
           THEN 1 END = 1
FETCH FIRST 10000 ROWS ONLY;

Here, a row number is assigned to each row within its group; the result then keeps row number 1 from every group, plus any other rows that satisfy the random condition.
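For illustration only (a standalone sketch, separate from the query above): DBMS_RANDOM.VALUE(1, n) returns a value in [1, n), so for a fixed row number n > 1 the condition n = CEIL(DBMS_RANDOM.VALUE(1, n)) holds roughly 1/(n - 1) of the time, i.e. higher row numbers are kept less often.

-- Sketch: outcome of the keep/discard condition for a hypothetical RN of 5;
-- CEIL(DBMS_RANDOM.VALUE(1, 5)) is effectively 2..5 and equals 5 about a quarter of the time.
SELECT LEVEL AS trial,
       CASE WHEN 5 = CEIL(DBMS_RANDOM.VALUE(1, 5))
            THEN 'kept' ELSE 'discarded' END AS outcome
FROM   dual
CONNECT BY LEVEL <= 10;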

Cheers!!

Upvotes: 1
