planpoint

Reputation: 29

What are the limitations of a published Google CSV?

I have a link to a Google Sheet published as CSV. I haven't shared the link with anyone by email; I just published the sheet as CSV and use a URL of the form below:

https://docs.google.com/spreadsheets/d/e/<Sheet ID>/pub?gid=0&single=true&range=B2&output=csv
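
For example, a minimal Apps Script sketch that reads this URL (readPublishedCell is a hypothetical name; <Sheet ID> stands for the real published-sheet ID):

function readPublishedCell() {
  // The published-CSV URL from above; <Sheet ID> is a placeholder.
  const url = 'https://docs.google.com/spreadsheets/d/e/<Sheet ID>/pub?gid=0&single=true&range=B2&output=csv';
  const res = UrlFetchApp.fetch(url, { muteHttpExceptions: true });
  if (res.getResponseCode() !== 200) {
    throw new Error('Fetch failed with HTTP ' + res.getResponseCode());
  }
  console.log(res.getContentText()); // the value of cell B2, as plain CSV text
}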

My question:

How many concurrent users can request the published CSV link at the same time?

Or, are there any other limitations on published CSV sheets?

I have searched Google many times but haven't found an exact answer. If you already know about the limitations of Google published CSV sheets, please answer.

Thank you so much.

Upvotes: 2

Views: 427

Answers (2)

David Lukas

Reputation: 1229

You can read about the issue here: Sheets for Developers > Sheets API > Usage limits

You probably need to be signed in.

Quotas

  • Read requests
    • Per day per project - Unlimited
    • Per minute per project - 300
    • Per minute per user per project - 60
  • Write requests
    • Per day per project - Unlimited
    • Per minute per project - 300
    • Per minute per user per project - 60

According to older posts on the subject, these numbers change over time, so it's probably a good idea to look up the current per-100-seconds quotas.
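
If you do run into these quotas, the usual remedy is to retry with exponential backoff when the API returns HTTP 429. A minimal Apps Script sketch (fetchWithBackoff is a hypothetical helper; the delays are illustrative):

function fetchWithBackoff(url, maxRetries) {
  maxRetries = maxRetries || 5;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = UrlFetchApp.fetch(url, { muteHttpExceptions: true });
    if (res.getResponseCode() !== 429) return res; // success, or an error that isn't quota-related
    Utilities.sleep(Math.pow(2, attempt) * 1000);  // wait 1 s, 2 s, 4 s, ... before retrying
  }
  throw new Error('Quota still exceeded after ' + maxRetries + ' retries');
}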

How to increase Google Sheets v4 API quota limitations

Quota exceeded for quota group 'WriteGroup' Google Sheet Api Error

Edit 1

The quotas above probably won't be hit, because the published .csv is cached; changes to the CSV show up with a delay.
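
One way to observe the caching yourself is to fetch the published CSV twice and log the response headers. Which cache-related headers Google's frontend returns is not documented, so this sketch (inspectCsvCaching is a hypothetical name) simply prints whatever comes back:

function inspectCsvCaching(url) {
  for (let i = 0; i < 2; i++) {
    const res = UrlFetchApp.fetch(url, { muteHttpExceptions: true });
    console.log(res.getAllHeaders()); // look for cache hints such as Cache-Control or Age
    Utilities.sleep(1000); // short pause between the two fetches
  }
}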

The results of the experiments are inconsistent.

For example:

Errors did not appear until 600 simultaneous (parallel) connections; 500 was OK.

Or

  • 100 parallel connections
  • requests without closing the connection
  • a delay of 0.01 s between requests
  • errors did not appear until the 20-second test; 15 seconds was OK

After repeated attempts, it seems to me that the errors start once throughput reaches 10 MB/s.

This may point to https://cloud.google.com/pubsub/quotas:
"StreamingPull streams: 10 MB/s per open stream"

Edit 2:

  • Ten-minute test, 50 threads, small file: passed
  • Ten-minute test, 50 threads, small file: passed
  • Ten-minute test, 50 threads, big file: passed => 10 MB/s isn't the case
  • Ten-minute test, 50 threads, big file: passed

Maybe we're facing some adaptive DDoS protection here, for example:
https://cloud.google.com/armor/docs/adaptive-protection-overview

Upvotes: 2

TheMaster
TheMaster

Reputation: 50452

The concurrent request limit for a published CSV of a simple one-sheet (one tab) file is 500.

Test:

You can test this using UrlFetchApp.fetchAll(), since it makes concurrent requests.

function getConcurrentLimit_(numOfSimultaneousReq, id = '[[[PUBLISHED ID]]]') {
  Utilities.sleep(5 * 1000); // sleep 5 s before starting, to separate test runs
  const url = `https://docs.google.com/spreadsheets/d/e/${id}/pub?gid=0&single=true&range=A1:B1&output=csv`,
    urls = [];
  ((i) => {
    while (i--) urls.push({ url, muteHttpExceptions: true }); // queue exactly numOfSimultaneousReq requests
  })(numOfSimultaneousReq);
  const res = UrlFetchApp.fetchAll(urls); // fire all queued requests concurrently
  const statusCodes = res.map((e) => e.getResponseCode());
  const totalFailures = statusCodes.filter((e) => e !== 200).length;
  const firstFailureAt = statusCodes.findIndex((c) => c !== 200); // -1 if every request succeeded
  return { numOfSimultaneousReq, firstFailureAt, totalFailures };
}

function test166() {
  console.log(
    [100, 500, 600, 800, 1000]
      .flatMap((e) => Array(3).fill(e))//repeat each test thrice
      .map((e) => getConcurrentLimit_(e))
  );
}

Results:

[ { numOfSimultaneousReq: 100, firstFailureAt: -1, totalFailures: 0 },
  { numOfSimultaneousReq: 100, firstFailureAt: -1, totalFailures: 0 },
  { numOfSimultaneousReq: 100, firstFailureAt: -1, totalFailures: 0 },
  { numOfSimultaneousReq: 500, firstFailureAt: -1, totalFailures: 0 },
  { numOfSimultaneousReq: 500, firstFailureAt: -1, totalFailures: 0 },
  { numOfSimultaneousReq: 500, firstFailureAt: -1, totalFailures: 0 },
  { numOfSimultaneousReq: 600, firstFailureAt: 19, totalFailures: 68 },
  { numOfSimultaneousReq: 600, firstFailureAt: -1, totalFailures: 0 },
  { numOfSimultaneousReq: 600, firstFailureAt: 71, totalFailures: 78 },
  { numOfSimultaneousReq: 800, firstFailureAt: 9, totalFailures: 256 },
  { numOfSimultaneousReq: 800, firstFailureAt: 28, totalFailures: 99 },
  { numOfSimultaneousReq: 800, firstFailureAt: 43, totalFailures: 125 },
  { numOfSimultaneousReq: 1000, firstFailureAt: 0, totalFailures: 402 },
  { numOfSimultaneousReq: 1000, firstFailureAt: 17, totalFailures: 398 },
  { numOfSimultaneousReq: 1000, firstFailureAt: 4, totalFailures: 392 } ]

As you can see, failures start at 600 concurrent requests.
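
If you need to stay under that ceiling, one practical approach is to split a large batch into chunks of at most 500 requests and issue each chunk with fetchAll(). A sketch (fetchInChunks_ is a hypothetical helper; the 500 comes from the measurements above):

function fetchInChunks_(requests, chunkSize) {
  chunkSize = chunkSize || 500; // measured concurrency ceiling from the tests above
  const responses = [];
  for (let i = 0; i < requests.length; i += chunkSize) {
    // fetchAll() fires each chunk concurrently; chunks run sequentially.
    responses.push(...UrlFetchApp.fetchAll(requests.slice(i, i + chunkSize)));
  }
  return responses;
}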

Upvotes: 6
