julio514

Reputation: 187

What is the best way to query a pytable column with many values?

I have an 11-column x 13,470,621-row PyTables table. The first column contains a unique identifier for each row (each identifier appears only once in the table).

This is how I select rows from the table at the moment:

# h5r is the already-open HDF5 file handle (tables.File)
my_annotations_table = h5r.root.annotations

# Loop through the table and get rows that match gene identifiers (column labeled gene_id).
for record in my_annotations_table.where("(gene_id == b'gene_id_36624') | (gene_id == b'gene_id_14701') | (gene_id == b'gene_id_14702')"):
    pass  # Do something with the data.

Now this works fine with small datasets, but I will need to routinely perform queries in which I have many thousands of unique identifiers to match in the table's gene_id column. For these larger queries the query string quickly becomes very large and I get an exception:

  File "/path/to/my/software/python/python-3.9.0/lib/python3.9/site-packages/tables/table.py", line 1189, in _required_expr_vars
    cexpr = compile(expression, '<string>', 'eval')
RecursionError: maximum recursion depth exceeded during compilation

I've looked at this question (What is the PyTables counterpart of a SQL query "SELECT col2 FROM table WHERE col1 IN (val1, val2, val3...)"?), which is somewhat similar to mine, but was not satisfactory.

I come from an R background where we often do this kind of query (i.e. my_data_frame[my_data_frame$gene_id %in% c("gene_id_1234", "gene_id_1235"),]) and was wondering if there is a comparable solution that I could use with PyTables.

Thanks very much,

Upvotes: 0

Views: 338

Answers (2)

kcw78

Reputation: 8046

Another approach to consider is combining two functions: Table.get_where_list() and Table.read_coordinates():

  • Table.get_where_list(): Gets the row coordinates fulfilling the given condition.
  • Table.read_coordinates(): Gets a set of rows given their coordinates (in a list), and returns them as a (record) array.

The code would look something like this:

my_annotations_table = h5r.root.annotations
# String columns compare against bytes in Python 3, so use bytes literals.
gene_name_list = [b'gene_id_36624', b'gene_id_14701', b'gene_id_14702']
# Loop through gene names and get the row coordinates that match each gene identifier
# (column labeled gene_id).
gene_row_list = []
for gene_name in gene_name_list:
    gene_rows = my_annotations_table.get_where_list("gene_id == gene_name")
    gene_row_list.extend(gene_rows)

# Retrieve all of the data in one call
gene_data_arr = my_annotations_table.read_coordinates(gene_row_list)

Upvotes: 1

julio514

Reputation: 187

Okay, I managed to make some satisfactory improvements on this.

1st: optimize the table (with the help of the documentation - https://www.pytables.org/usersguide/optimization.html)

Create the table. Make sure to specify the expectedrows=<int> argument, as it can improve query speed.

table = h5w.create_table("/", 'annotations',
    DataDescr, "Annotation table unindexed",
    expectedrows=self._number_of_genes,
    filters=tb.Filters(complevel=9, complib='blosc'))
# tb comes from "import tables as tb"

I also modified the input data so that the gene_id_12345 fields are simple integers (gene_id_12345 becomes 12345). Once the table is populated with its 13,470,621 entries (i.e. rows), I created a completely sorted index (CSI) on the gene_id column (Column.create_csindex()) and sorted the table by it.
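
The id conversion itself is just a matter of stripping the prefix, something like this (a minimal sketch, assuming every identifier follows the gene_id_<number> pattern; to_int_id is a hypothetical helper, not part of my pipeline):

# Hypothetical helper: strip the "gene_id_" prefix and keep the integer part.
def to_int_id(gene_id_str):
    return int(gene_id_str.removeprefix("gene_id_"))  # str.removeprefix needs Python 3.9+

print(to_int_id("gene_id_12345"))  # -> 12345

The index is then built and checked like this: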

table.cols.gene_id.create_csindex()
table.copy(overwrite=True, sortby='gene_id', newname="Annotation table", checkCSI=True)
# Just make sure that the index is usable. Will print an empty set (frozenset()) if not.
print(table.will_query_use_indexing('(gene_id == 57403)'))

2nd: The table is optimized, but I still can't query thousands of gene_ids at a time. So I simply split them into chunks of 31 gene_ids (yes, 31 was the absolute maximum; 32 was apparently too much).
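
For reference, the chunked querying looks roughly like this (query_gene_ids is just a hypothetical wrapper name of mine; read_where is the standard PyTables call, and the ids are now plain integers):

import numpy as np

def query_gene_ids(table, gene_ids, chunk_size=31):
    # Build an OR-chained condition for each chunk of at most 31 ids
    # (31 being the empirical limit before hitting the RecursionError above),
    # run it, and stitch the per-chunk record arrays together.
    pieces = []
    for start in range(0, len(gene_ids), chunk_size):
        chunk = gene_ids[start:start + chunk_size]
        condition = " | ".join(f"(gene_id == {gid})" for gid in chunk)
        pieces.append(table.read_where(condition))
    return np.concatenate(pieces)

# e.g. gene_data = query_gene_ids(table, [36624, 14701, 14702])

Since the gene_id column now has a CSI, each of these small queries should still be able to use the index (will_query_use_indexing can confirm this).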

I did not perform benchmarks, but querying ~8000 gene_ids now takes approximately 10 seconds, which is acceptable for my needs.

Upvotes: 0
