David דודו Markovitz

Reputation: 44941

How to generate a large data set using hive / spark-sql?

E.g. generate 1G records with sequential numbers between 1 and 1G.

Upvotes: 0

Views: 1975

Answers (1)

David דודו Markovitz

Reputation: 44941

Create a partitioned seed table

create table seed (i int)
partitioned by (p int)

Populate the seed table with 1K records holding the sequential numbers 0 through 999.
Each record is inserted into a different partition and is therefore located in a different HDFS directory and, more importantly, in a different file.

P.s.

The following settings are needed:

set hive.exec.dynamic.partition.mode=nonstrict;
set hive.exec.max.dynamic.partitions.pernode=1000;
set hive.hadoop.supports.splittable.combineinputformat=false;
set hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat;

insert into table seed partition (p)
select  i,i 
from    (select 1) x lateral view posexplode (split (space (999),' ')) e as i,x
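The space/split/posexplode trick works because split(space(999), ' ') yields 1,000 empty strings, and posexplode pairs each element with its 0-based position 0..999. A scaled-down sketch of the same idea in plain Python (just to illustrate the arithmetic, not Hive itself):

```python
# Emulate Hive's split(space(999), ' ') + posexplode:
# a string of 999 spaces split on ' ' yields 1000 empty strings,
# and posexplode pairs each element with its 0-based position.
tokens = (" " * 999).split(" ")
seed = list(enumerate(tokens))  # (pos, element) pairs, like posexplode

print(len(seed))                 # 1000
print(seed[0][0], seed[-1][0])   # 0 999
```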

Generate a table with 1G records.
Each of the 1K records in the seed table sits in a different file and is read by a different container.
Each container generates 1M records.

create table t1g
as
select  s.i*1000000 + e.i + 1  as n
from    seed s lateral view posexplode (split (space (1000000-1),' ')) e as i,x
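As a sanity check on the numbering scheme n = s.i*1000000 + e.i + 1: with P partitions and M rows generated per partition, the expression s*M + e + 1 covers 1..P*M exactly once. A scaled-down Python sketch (10 x 10 standing in for 1,000 x 1,000,000):

```python
# Scaled-down check of n = s.i * 1_000_000 + e.i + 1:
# with P partitions and M rows per partition, the mapping
# n = s * M + e + 1 covers 1 .. P*M exactly once, with no gaps
# or duplicates across containers.
P, M = 10, 10  # stand-ins for 1000 partitions x 1,000,000 rows each
ns = sorted(s * M + e + 1 for s in range(P) for e in range(M))

print(ns == list(range(1, P * M + 1)))  # True
```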

Upvotes: 5
