carmiac

Reputation: 359

Simple gRPC envoy configuration

I'm trying to set up an Envoy proxy as a gRPC front end and can't get it to work, so I'm trying to reduce it to the simplest possible test setup and build from there, but I can't get that to work either. Here's what my test setup looks like:

Python server (slightly modified gRPC example code)

# greeter_server.py
from concurrent import futures
import time

import grpc

import helloworld_pb2
import helloworld_pb2_grpc

_ONE_DAY_IN_SECONDS = 60 * 60 * 24


class Greeter(helloworld_pb2_grpc.GreeterServicer):

    def SayHello(self, request, context):
        return helloworld_pb2.HelloReply(message='Hello, %s!' % request.name)


def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    helloworld_pb2_grpc.add_GreeterServicer_to_server(Greeter(), server)
    server.add_insecure_port('[::]:8081')
    server.start()
    try:
        while True:
            time.sleep(_ONE_DAY_IN_SECONDS)
    except KeyboardInterrupt:
        server.stop(0)


if __name__ == '__main__':
    serve()

Python client (slightly modified gRPC example code)

from __future__ import print_function

import grpc

import helloworld_pb2
import helloworld_pb2_grpc


def run():
    # NOTE(gRPC Python Team): .close() is possible on a channel and should be
    # used in circumstances in which the with statement does not fit the needs
    # of the code.
    with grpc.insecure_channel('localhost:9911') as channel:
        stub = helloworld_pb2_grpc.GreeterStub(channel)
        response = stub.SayHello(helloworld_pb2.HelloRequest(name='you'))
    print("Greeter client received: " + response.message)


if __name__ == '__main__':
    run()

And then my two Envoy YAML files:

# envoy-hello-server.yaml
static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 8811
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
          codec_type: auto
          stat_prefix: ingress_http
          access_log:
          - name: envoy.file_access_log
            typed_config:
              "@type": type.googleapis.com/envoy.config.accesslog.v2.FileAccessLog
              path: "/dev/stdout"
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains:
              - "*"
              routes:
              - match:
                  prefix: "/"
                  grpc: {}
                route:
                  cluster: hello_grpc_service
          http_filters:
          - name: envoy.router
            typed_config: {}
  clusters:
  - name: hello_grpc_service
    connect_timeout: 0.250s
    type: strict_dns
    lb_policy: round_robin
    http2_protocol_options: {}
    load_assignment:
      cluster_name: hello_grpc_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: hello_grpc_service
                port_value: 8081

admin:
  access_log_path: "/tmp/envoy_hello_server.log"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 8881

and

# envoy-hello-client.yaml
static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 9911
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
          codec_type: auto
          add_user_agent: true
          access_log:
          - name: envoy.file_access_log
            typed_config:
              "@type": type.googleapis.com/envoy.config.accesslog.v2.FileAccessLog
              path: "/dev/stdout"
          stat_prefix: egress_http
          common_http_protocol_options:
            idle_timeout: 0.840s
          use_remote_address: true
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains:
              - grpc
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: backend-proxy
          http_filters:
          - name: envoy.router
            typed_config: {}
  clusters:
  - name: backend-proxy
    type: logical_dns
    dns_lookup_family: V4_ONLY
    lb_policy: round_robin
    connect_timeout: 0.250s
    http_protocol_options: {}
    load_assignment:
      cluster_name: backend-proxy
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: hello_grpc_service
                port_value: 8811

admin:
  access_log_path: "/tmp/envoy_hello_client.log"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 9991

Now, what I expect this to allow is a flow like: greeter_client.py (port 9911) -> envoy (envoy-hello-client.yaml) -> envoy (envoy-hello-server.yaml) -> greeter_server.py (port 8081)

Instead, what I get is an error from the python client:

$ python3 greeter_client.py 
Traceback (most recent call last):
  File "greeter_client.py", line 35, in <module>
    run()
  File "greeter_client.py", line 30, in run
    response = stub.SayHello(helloworld_pb2.HelloRequest(name='you'))
  File "/usr/lib/python3/dist-packages/grpc/_channel.py", line 533, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/usr/lib/python3/dist-packages/grpc/_channel.py", line 467, in _end_unary_response_blocking
    raise _Rendezvous(state, None, None, deadline)
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
    status = StatusCode.UNIMPLEMENTED
    details = ""
    debug_error_string = "{"created":"@1594770575.642032812","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1017,"grpc_message":"","grpc_status":12}"
>

And in the envoy client log:

[2020-07-14 16:22:10.407][16935][info][main] [external/envoy/source/server/server.cc:652] starting main dispatch loop
[2020-07-14 16:23:25.441][16935][info][runtime] [external/envoy/source/common/runtime/runtime_impl.cc:524] RTDS has finished initialization
[2020-07-14 16:23:25.441][16935][info][upstream] [external/envoy/source/common/upstream/cluster_manager_impl.cc:182] cm init: all clusters initialized
[2020-07-14 16:23:25.441][16935][info][main] [external/envoy/source/server/server.cc:631] all clusters initialized. initializing init manager
[2020-07-14 16:23:25.441][16935][info][config] [external/envoy/source/server/listener_manager_impl.cc:844] all dependencies initialized. starting workers
[2020-07-14 16:23:25.441][16935][warning][main] [external/envoy/source/server/server.cc:537] there is no configured limit to the number of allowed active connections. Set a limit via the runtime key overload.global_downstream_max_connections
[2020-07-14T23:49:35.641Z] "POST /helloworld.Greeter/SayHello HTTP/2" 200 NR 0 0 0 - "10.0.0.56" "grpc-python/1.16.1 grpc-c/6.0.0 (linux; chttp2; gao)" "aa72310a-3188-46b2-8cbf-9448b074f7ae" "localhost:9911" "-"

And nothing in the server log.

Also, weirdly, there is an almost one-second delay between when I run the Python client and when the log message shows up in the client Envoy.

What am I missing to make these two scripts talk via Envoy?

Upvotes: 2

Views: 4236

Answers (1)

v8-overclocked

Reputation: 31

I know I'm a bit late; hope this helps someone. Since your gRPC server is running on the same host, you could specify the hostname as host.docker.internal (the previous docker.for.mac.localhost has been deprecated since Docker v18.03.0).

In your case, if you are running in a dockerized environment, you could do the following:

Envoy version: 1.13+

clusters:
  - name: backend-proxy
    type: logical_dns
    dns_lookup_family: V4_ONLY
    lb_policy: round_robin
    connect_timeout: 0.250s
    http_protocol_options: {}
    load_assignment:
      cluster_name: backend-proxy
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: host.docker.internal
                port_value: 8811

hello_grpc_service won't be resolved to an IP in a dockerized environment.
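
The same substitution presumably applies to the hello_grpc_service cluster in envoy-hello-server.yaml, which points at the same unresolvable name. A sketch of that change (assuming the Python server is reachable from the container's host on port 8081):

clusters:
  - name: hello_grpc_service
    connect_timeout: 0.250s
    type: strict_dns
    lb_policy: round_robin
    http2_protocol_options: {}
    load_assignment:
      cluster_name: hello_grpc_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: host.docker.internal  # was: hello_grpc_service
                port_value: 8081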

Note: you could enable Envoy's trace log level for more detailed logs.
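
For example (assuming Envoy is started from the command line with the config files above), the -l/--log-level flag sets the level at startup, and the admin interface can change it on a running instance:

# start the client-side Envoy with trace logging
envoy -c envoy-hello-client.yaml -l trace

# or change the level of a running instance via its admin port (9991 in the question's client config)
curl -X POST 'http://localhost:9991/logging?level=trace'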

Upvotes: 1
