Reputation: 61
We are using GCP Workflows to poll an API for a status check every n seconds via an http.post call.
Everything was fine until recently, when all of our workflows started failing with an internal error:
{"message":"ResourceLimitError: Memory usage limit exceeded","tags":["ResourceLimitError"]}
I found out that when we use GET with query params, the failure happens a bit later than with POST and a body (a sketch of the POST variant is shown after the workflow below).
Here is the testing workflow:
main:
  steps:
    - init:
        assign:
          - i: 0
          - body:
              foo: 'thisismyhorsemyhorseisamazing'
    - doRequest:
        call: http.request
        args:
          url: https://{my-location-and-project-id}.cloudfunctions.net/workflow-test
          method: GET
          query: ${body}
        result: res
    - sleepForOneSecond:
        call: sys.sleep
        args:
          seconds: 1
    - logCounter:
        call: sys.log
        args:
          text: ${"Iteration - " + string(i)}
          severity: INFO
    - increaseCounter:
        assign:
          - i: ${i + 1}
    - checkIfFinished:
        switch:
          - condition: ${i < 500}
            next: doRequest
        next: returnOutput
    - returnOutput:
        return: ${res.body}
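For the POST test I kept the same loop and only changed the doRequest step. I'm not pasting the exact step here, but as a minimal sketch (assuming the same body variable is sent as the request body) it looks roughly like this:

    # Sketch of the POST variant: only this step differs from the GET workflow above.
    # http.post sends `body` as the JSON request body instead of query params.
    - doRequest:
        call: http.post
        args:
          url: https://{my-location-and-project-id}.cloudfunctions.net/workflow-test
          body: ${body}
        result: res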
It can do up to 37 requests with GET and 32 with POST before execution stops with the error, and those numbers don't change between runs.
For reference, the Firebase function returns 200 on both POST and GET with the following JSON:
{
  "bar": "thisismyhorsemyhorseisamazing",
  "fyz": []
}
Any ideas what goes wrong there? I don't think the 64 KB quota for variables is exceeded: res is overwritten on every iteration and the response body is well under 100 bytes. It shouldn't be calculated as the sum of all assignments, should it?
Upvotes: 2
Views: 1010
Reputation: 4640
This looks like an issue with the product. I found this Google issue tracker where the problem was reported.
It is better to continue over the public issue tracker.
Upvotes: 2