Reputation: 1
I have a simple Go server listening for HTTPS traffic on port 8443, running in a container inside a K8s cluster. An Istio ingress gateway runs at the edge of the cluster, listening on port 443. I exposed the service (incoming 443 targeted to port 8443), declared a VirtualService (matching the URL '/testgo' and forwarding it to port 443 of the service) and a DestinationRule (using SIMPLE TLS), and was then able to access the service from outside the cluster using "https://GATEWAY_HOST/testgo".
Once I injected the Istio proxy into the service (so that I could do local rate limiting), to continue accessing the backend service over HTTPS (and not plain HTTP), I had to set the PeerAuthentication mTLS mode to DISABLE (using the advice at https://github.com/istio/istio/issues/40680).
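For context, the mTLS disabling was done with a PeerAuthentication resource along these lines (the resource name is made up here; the selector and port follow the rest of my setup, and the approach follows the linked issue):

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: go-server-peerauth   # name assumed
  namespace: default
spec:
  selector:
    matchLabels:
      app: web
  # Disable Istio mTLS on the port the app terminates TLS itself,
  # so the sidecar passes the HTTPS traffic through untouched.
  portLevelMtls:
    8443:
      mode: DISABLE
```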
But now the local HTTP rate limit filter (using sample provided at https://istio.io/latest/docs/tasks/policy-enforcement/rate-limit/) does not work. I have posted a question about this at Local HTTP rate limit for TLS backend service.
Since that does not work, I tried a NETWORK_FILTER instead; it works for a while and then simply stops limiting. The Istio version being used is 1.15.
The network filter looks like below:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: go-server-ratelimit
  namespace: default
spec:
  workloadSelector:
    labels:
      app: web
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      context: SIDECAR_INBOUND
      listener:
        portNumber: 8443
        filterChain:
          filter:
            name: "envoy.filters.network.tcp_proxy"
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.filters.local_ratelimit
        typed_config:
          "@type": type.googleapis.com/udpa.type.v1.TypedStruct
          type_url: type.googleapis.com/envoy.extensions.filters.network.local_ratelimit.v3.LocalRateLimit
          value:
            stat_prefix: local_rate_limiter
            token_bucket:
              max_tokens: 1
              tokens_per_fill: 1
              fill_interval: 60s
            runtime_enabled:
              default_value: true
              runtime_key: go-server-ratelimit
            share_key: go-server-ratelimit
I am using this simple shell script to test it:
cnt=1
delay=60
while true
do
  http_code=$(curl -k -o /dev/null -s -w "%{http_code}" "https://GATEWAY_HOST/testgo/")
  if [ "$http_code" -ne 200 ]
  then
    echo "HTTP return code is $http_code after $((cnt-1)) tries, sleeping for $delay seconds"
    sleep $delay
    cnt=1
  else
    echo "HTTP return code is $http_code on try $cnt"
    cnt=$((cnt+1))
  fi
done
There is only one service pod running, so based on the token_bucket config in the filter I would not expect more than one HTTP request to succeed per fill interval. It works fine for some time, but then it starts letting multiple requests through. The script output looks like below:
HTTP return code is 200 on try 1
HTTP return code is 503 after 1 tries, sleeping for 60 seconds
HTTP return code is 200 on try 1
HTTP return code is 503 after 1 tries, sleeping for 60 seconds
HTTP return code is 200 on try 1
HTTP return code is 503 after 1 tries, sleeping for 60 seconds
HTTP return code is 200 on try 1
HTTP return code is 503 after 1 tries, sleeping for 60 seconds
HTTP return code is 200 on try 1
HTTP return code is 503 after 1 tries, sleeping for 60 seconds
HTTP return code is 200 on try 1
HTTP return code is 200 on try 2
HTTP return code is 200 on try 3
HTTP return code is 200 on try 4
HTTP return code is 200 on try 5
HTTP return code is 200 on try 6
HTTP return code is 200 on try 7
HTTP return code is 200 on try 8
HTTP return code is 503 after 8 tries, sleeping for 60 seconds
HTTP return code is 200 on try 1
HTTP return code is 200 on try 2
HTTP return code is 503 after 2 tries, sleeping for 60 seconds
HTTP return code is 200 on try 1
HTTP return code is 200 on try 2
HTTP return code is 200 on try 3
HTTP return code is 200 on try 4
HTTP return code is 200 on try 5
HTTP return code is 200 on try 6
HTTP return code is 200 on try 7
HTTP return code is 200 on try 8
HTTP return code is 200 on try 9
HTTP return code is 200 on try 10
I could not find any reported issues on the Envoy token bucket. Is there anything wrong with this network filter configuration that is causing the filter to stop working after some time?
Upvotes: 0
Views: 217
Reputation: 41
Did you try applying the patch to HTTP_FILTER instead, as shown in the docs? https://istio.io/latest/docs/tasks/policy-enforcement/rate-limit/#local-rate-limit
- applyTo: HTTP_FILTER
  match:
    context: SIDECAR_INBOUND
    listener:
      filterChain:
        filter:
          name: "envoy.filters.network.http_connection_manager"
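For reference, a full EnvoyFilter along those lines might look like this. The metadata and workload selector are copied from your question; the typed_config is adapted from the HTTP local rate limit example in the Istio docs linked above, with your token bucket values substituted in:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: go-server-ratelimit
  namespace: default
spec:
  workloadSelector:
    labels:
      app: web
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_INBOUND
      listener:
        filterChain:
          filter:
            name: "envoy.filters.network.http_connection_manager"
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.filters.http.local_ratelimit
        typed_config:
          "@type": type.googleapis.com/udpa.type.v1.TypedStruct
          type_url: type.googleapis.com/envoy.extensions.filters.http.local_ratelimit.v3.LocalRateLimit
          value:
            stat_prefix: http_local_rate_limiter
            # One request per 60s, matching the token bucket from the question.
            token_bucket:
              max_tokens: 1
              tokens_per_fill: 1
              fill_interval: 60s
            filter_enabled:
              runtime_key: local_rate_limit_enabled
              default_value:
                numerator: 100
                denominator: HUNDRED
            filter_enforced:
              runtime_key: local_rate_limit_enforced
              default_value:
                numerator: 100
                denominator: HUNDRED
```

Note that this only takes effect if the sidecar actually applies an http_connection_manager to the inbound traffic; if the port is treated as opaque TCP (as may be the case with a TLS backend), the HTTP filter chain will not match.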
Upvotes: 0