Reputation: 1173
I have a table in my dataset with 3,500,000 rows, and a count(*) on it takes around 1.5 seconds, while on another table with 718,158 rows (a subset of the first table) the count(*) takes around 3-4 seconds.
I want to know why. Is it because of BigQuery's architecture?
Upvotes: 1
Views: 361
Reputation: 207912
1) BigQuery is a highly scalable database before it is a "super fast" database. It is designed to process huge amounts of data by distributing the work across many machines, using Google's Dremel technology. Because it is built around many machines and parallel processing, you should expect excellent scalability with good, but not instant, performance.
2) BigQuery is an asset when you want to analyze billions of rows.
For example: analyzing all the Wikipedia revisions in 5-10 seconds isn't bad, is it? But even a much smaller table takes about the same time, even if it has only 10k rows (see the sketch after this list).
3) Below that scale, you'll be better off with more traditional data storage solutions such as Cloud SQL or the App Engine Datastore. If you want to keep SQL capability, Cloud SQL is the better fit.
That said, those solutions are going to be faster than BigQuery in many scenarios... as designed.
4) Certainly the performance differs from that of a dedicated environment. You can get your own dedicated environment for $20K a month.
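To see the near-constant latency described in points 1 and 2, here is a minimal sketch that times a COUNT(*) from client code. It assumes the google-cloud-bigquery Python library, default project credentials, and uses the public Wikipedia revisions sample table; swap in your own table names to compare.

```python
# A minimal sketch, assuming the google-cloud-bigquery client library is
# installed and Application Default Credentials are configured.
import time
from google.cloud import bigquery

client = bigquery.Client()  # picks up your default GCP project

def timed_count(table):
    """Run COUNT(*) against a table and report wall-clock latency."""
    start = time.time()
    result = client.query(f"SELECT COUNT(*) FROM `{table}`").result()
    row_count = list(result)[0][0]  # single row, single column
    print(f"{table}: {row_count:,} rows counted in {time.time() - start:.2f}s")

# Latency is dominated by fixed job-dispatch overhead rather than table
# size, so a ~300M-row public table and a much smaller one should take
# comparable time.
timed_count("bigquery-public-data.samples.wikipedia")
```

Running the same helper against a 3.5M-row table and a 718K-row subset should show timings within the same few-second band, which is the expected behavior for a system that parallelizes every query rather than optimizing for small scans.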
Upvotes: 1