dma

Reputation: 1809

Filtering a query causes "Response too large" error

Running the query

SELECT project, test_id, log_time,
       connection_spec.client_geolocation.latitude, 
       connection_spec.client_geolocation.longitude
FROM m_lab.2012_11
GROUP BY project, test_id, log_time,
         connection_spec.client_geolocation.latitude, 
         connection_spec.client_geolocation.longitude
ORDER BY log_time LIMIT 6

succeeds in ~20 seconds

However, adding a WHERE clause that should reduce the number of returned rows

SELECT project, test_id, log_time,
       connection_spec.client_geolocation.latitude, 
       connection_spec.client_geolocation.longitude
FROM m_lab.2012_11
WHERE log_time > 0
GROUP BY project, test_id, log_time,
         connection_spec.client_geolocation.latitude, 
         connection_spec.client_geolocation.longitude
ORDER BY log_time LIMIT 6

results in the error 'Response too large to return.'

My expectation is that filtering the rows will increase the execution time, since every row still has to be scanned and checked against the WHERE clause, but the response should be the same size or smaller. What am I missing?

Upvotes: 2

Views: 817

Answers (1)

Michael Manoochehri

Reputation: 7877

First, the number of rows scanned is constant: BigQuery does not index rows (by design), so every query performs a full scan of the table you specify.

With billions of rows in this m-lab table, I think the general issue here is that the number of unique groups produced by grouping on so many columns is very large in both queries, which produces a 'Response too large' error for individual nodes in the BigQuery execution tree.
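As a sanity check, here is a rough sketch (not from the original answer) that measures the cardinality of the grouping columns directly. It assumes legacy BigQuery SQL, where COUNT(DISTINCT x) is approximate and takes an optional second argument as the threshold below which the count is exact:

SELECT
  COUNT(*) AS total_rows,
  -- COUNT(DISTINCT x, n) is exact up to n distinct values in legacy SQL
  COUNT(DISTINCT test_id, 1000000) AS distinct_test_ids,
  COUNT(DISTINCT log_time, 1000000) AS distinct_log_times
FROM [measurement-lab:m_lab.2012_11];

If distinct_test_ids comes back close to total_rows, the GROUP BY is effectively emitting one group per row, so the grouped result is nearly as large as the table itself.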

One approach:

One way to approach this query is to use a new feature we call GROUP EACH BY. This applies a shuffle operation to balance the groups across the serving tree, and it works best when there are many distinct values per GROUP 'bucket.' In the m-lab dataset, almost every entry is attached to project "0," so I would filter on that value in the WHERE clause instead of grouping on it, and GROUP EACH BY the other, more numerous values:

SELECT
  test_id, log_time,
  connection_spec.client_geolocation.latitude,
  connection_spec.client_geolocation.longitude
FROM
  [measurement-lab:m_lab.2012_11]
WHERE
  log_time > 0 AND project = 0
GROUP EACH BY
  test_id, log_time,
  connection_spec.client_geolocation.latitude,
  connection_spec.client_geolocation.longitude
ORDER BY log_time LIMIT 6;
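Note that the shuffle only pays off because the EACH keys are high-cardinality: had the skewed project column stayed in the GROUP EACH BY, nearly every row would hash into the same project-0 bucket and land on a single node, which is why it moves into the WHERE clause instead.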

Another strategy:

The result you are querying for is ordered by log_time, meaning you are actually only returning the earliest log_time data points. Why not run a subselect to pick out those time points first, and then run your GROUP BY using the result set in your WHERE clause? This query should run much faster than the other example:

SELECT
  test_id, log_time,
  connection_spec.client_geolocation.latitude,
  connection_spec.client_geolocation.longitude,
  COUNT(*) AS entry_count
FROM
  [measurement-lab:m_lab.2012_11]
WHERE
  project = 0 AND log_time IN
  (SELECT log_time FROM [measurement-lab:m_lab.2012_11]
   WHERE log_time > 0
   GROUP BY log_time
   ORDER BY log_time LIMIT 6)
GROUP BY
  test_id, log_time,
  connection_spec.client_geolocation.latitude,
  connection_spec.client_geolocation.longitude
ORDER BY log_time, entry_count;

Upvotes: 3
