Reputation: 609
I am trying to extract some "rows" from a big .h5 file to create a smaller sample file.
In order to make sure my sample looks like the original file, I am extracting rows at random.
import h5py
import numpy as np

# Get length of files and prepare samples
source_file = h5py.File(args.data_path, "r")
dataset = source_file['X']
indices = np.sort(np.random.choice(dataset.shape[0], args.nb_rows))

# Checking we're extracting a subsample
if args.nb_rows > dataset.shape[0]:
    raise ValueError("Can't extract more rows than dataset contains. Dataset has %s rows" % dataset.shape[0])

target_file = h5py.File(target, "w")
for k in source_file.keys():
    dataset = source_file[k]
    dataset = dataset[indices, :, :, :]
    dest_dataset = target_file.create_dataset(k, shape=dataset.shape, dtype=np.float32)
    dest_dataset.write_direct(dataset)
target_file.close()
source_file.close()
However, when nb_rows is big (like 10,000) I'm getting TypeError("Indexing elements must be in increasing order"). The indices are sorted, so I think I should not get this error. Am I misunderstanding something?
Upvotes: 1
Views: 854
Reputation: 231665
I think you are getting duplicates.
Obviously you'll get duplicates in the args.nb_rows > dataset.shape[0] case:
In [499]: np.random.choice(10, 20)
Out[499]: array([2, 4, 1, 5, 2, 8, 4, 3, 7, 0, 2, 6, 6, 8, 9, 3, 8, 4, 2, 5])
In [500]: np.sort(np.random.choice(10, 20))
Out[500]: array([1, 1, 1, 2, 2, 2, 4, 4, 4, 5, 5, 5, 5, 6, 6, 7, 8, 8, 8, 9])
But you can still get duplicates when the number is smaller:
In [502]: np.sort(np.random.choice(10, 9))
Out[502]: array([0, 0, 1, 1, 1, 5, 5, 9, 9])
h5py's fancy indexing requires the index list to be strictly increasing, with no repeats, so the duplicates are what trigger that TypeError. Turn off replacement with replace=False:
In [504]: np.sort(np.random.choice(10, 9, replace=False))
Out[504]: array([0, 1, 2, 3, 4, 5, 6, 7, 8])
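Applied to your extraction code, that would be a one-line change (a sketch, relying on your existing ValueError check to guarantee args.nb_rows never exceeds the dataset length, since replace=False can't draw more samples than there are rows):

# Sample unique row indices so h5py's fancy indexing sees a strictly
# increasing sequence; duplicates are what raise the TypeError.
indices = np.sort(np.random.choice(dataset.shape[0], args.nb_rows, replace=False))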
Upvotes: 3