IsolatedSushi

Reputation: 152

Unable to connect with gRPC when deployed with Kubernetes

I'm trying to deploy a gRPC server with Kubernetes and connect to it from outside the cluster. The relevant part of the server:

function main() {
  var hello_proto = grpc.loadPackageDefinition(packageDefinition).helloworld;
  var server = new grpc.Server();
  server.addService(hello_proto.Greeter.service, {sayHello: sayHello});
  const url = '0.0.0.0:50051';
  server.bindAsync(url, grpc.ServerCredentials.createInsecure(), () => {
    server.start();
    console.log("Started server! on " + url);
  });
}

function sayHello(call, callback) {
  console.log('Hello request');
  callback(null, {message: 'Hello ' + call.request.name + ' from ' + require('os').hostname()});
}

And here is the relevant part of the client:

function main() {
  var target = '0.0.0.0:50051';
  let pkg = grpc.loadPackageDefinition(packageDefinition);
  let Greeter = pkg.helloworld["Greeter"];
  var client = new Greeter(target,grpc.credentials.createInsecure());
  var user = "client";
  
  client.sayHello({name: user}, function(err, response) {
    if (err) {
      console.error('Greeting failed:', err.message);
      return;
    }
    console.log('Greeting:', response.message);
  });
}

When I run both manually with Node.js, as well as when I run the server in a Docker container (the client still runs with Node.js outside a container), it works just fine.

The Dockerfile, run with the command docker run -it -p 50051:50051 helloapp:

FROM node:carbon
 
# Create app directory
WORKDIR /usr/src/app
 
COPY package.json .
COPY package-lock.json .
 
RUN npm install
 
COPY . .
 
CMD npm start

However, when I deploy the server with Kubernetes (again, the client isn't run inside a container), I'm not able to connect.

The yaml file is as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloapp
  strategy: {}
  template:
    metadata:
      labels:
        app: helloapp
    spec:
      containers:
      - image: isolatedsushi/helloapp
        name: helloapp
        ports:
        - containerPort: 50051
          name: helloapp
        resources: {}
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: helloservice
spec:
  selector:
    app: helloapp
  ports:
  - name: grpc
    port: 50051
    targetPort: 50051

The deployment and the service start up just fine:

kubectl get svc
NAME           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
helloservice   ClusterIP   10.105.11.22   <none>        50051/TCP   17s


kubectl get pods      
NAME                     READY   STATUS    RESTARTS   AGE
helloapp-dbdfffb-brvdn   1/1     Running   0          45s

But when I run the client, it can't reach the server.

Any ideas what I'm doing wrong?

Upvotes: 3

Views: 5622

Answers (1)

Jakub

Reputation: 8840

As mentioned in the comments:


ServiceTypes

If you have exposed your Service as ClusterIP, it's visible only internally in the cluster. If you want to expose your Service externally, you have to use either NodePort or LoadBalancer.

Publishing Services (ServiceTypes)

For some parts of your application (for example, frontends) you may want to expose a Service onto an external IP address, that's outside of your cluster. Kubernetes ServiceTypes allow you to specify what kind of Service you want. The default is ClusterIP.

Type values and their behaviors are:

ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.

NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.

LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.

ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.

Related documentation about that.
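
For example, the helloservice from the question could be switched to NodePort; a minimal sketch (the nodePort value 30051 is an arbitrary choice from the default 30000-32767 range, not something from the question):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: helloservice
spec:
  type: NodePort        # was ClusterIP (the default) before
  selector:
    app: helloapp
  ports:
  - name: grpc
    port: 50051
    targetPort: 50051
    nodePort: 30051     # optional; assigned automatically from 30000-32767 if omitted
```

The client would then use <NodeIP>:30051 as its target instead of 0.0.0.0:50051.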


Minikube

With minikube you can achieve that with the minikube service command.

There is documentation about minikube service and there is an example.
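
A possible sequence, assuming the Service has been changed to type NodePort and the cluster runs on minikube (the IP and port in the example output are hypothetical):

```shell
# Print the externally reachable URL for the Service
# (requires the Service to be of type NodePort or LoadBalancer):
minikube service helloservice --url
# Example output (values will differ): http://192.168.49.2:30051
# The client would then use 192.168.49.2:30051 (without the http:// scheme) as its target.
```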


grpc http/https

As mentioned here by @murgatroid99

The gRPC library does not recognize the https:// scheme for addresses, so that target name will cause it to try to resolve the wrong name. You should instead use grpc-server-xxx.com:9090 or dns:grpc-server-xxx.com:9090 or dns:///grpc-server-xxx.com:9090. More detailed information about how gRPC interprets channel target names can be found in this documentation page.

As it does not recognize the https:// scheme, I assume the same applies to http://, so that is not possible either.
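
A small illustrative check of which target forms are usable (the helper below is a hypothetical sketch, not part of the gRPC API):

```javascript
// Illustrative only: grpc-js resolves targets such as "host:port",
// "dns:host:port", or "dns:///host:port", but not "https://host:port".
// This helper is a sketch for this answer, not a library function.
function isSupportedTargetScheme(target) {
  // Reject targets that start with an http:// or https:// scheme.
  return !/^https?:\/\//.test(target);
}

console.log(isSupportedTargetScheme('dns:///grpc-server-xxx.com:9090')); // true
console.log(isSupportedTargetScheme('https://grpc-server-xxx.com:9090')); // false
```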


kubectl port-forward

Additionally as @IsolatedSushi mentioned

It also works when I portforward with the command kubectl -n hellospace port-forward svc/helloservice 8080:50051

As mentioned here

Kubectl port-forward allows you to access and interact with internal Kubernetes cluster processes from your localhost. You can use this method to investigate issues and adjust your services locally without the need to expose them beforehand.

There is an example in documentation.
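
Put together, a sketch of that workflow (the namespace hellospace is taken from the comment above; the file name client.js is an assumption):

```shell
# Forward local port 8080 to port 50051 of the Service (runs until interrupted):
kubectl -n hellospace port-forward svc/helloservice 8080:50051

# In a second terminal, change the client's target from '0.0.0.0:50051'
# to 'localhost:8080', then run it:
node client.js
```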

Upvotes: 3
