talhasagdan

Reputation: 21

Can’t connect to the created cluster that is exposed using a NodePort Service

So, I have been trying to create a cluster on AWS EKS for a couple of days. I managed to push the Docker image to ECR and created an appropriate VPC, but I could not manage to connect to it from http://&lt;ip&gt;:&lt;port&gt;. I am using a NodePort service to expose the project, which is a basic .NET Core REST API that returns JSON. I am using the AWS CLI and kubectl for operations. I have already added the generated nodePort to the inbound rules of the worker nodes’ (EC2 instances’) security groups. Here are my yaml files:

Cluster.yaml -> yaml file that uses the pre-created VPC setup and defines the node groups

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: EKS-Demo-Cluster
  region: eu-central-1

vpc:
  id: vpc-056dbccebc402e9a8
  cidr: "192.168.0.0/16"
  subnets:
    public:
      eu-central-1a:
        id: subnet-04192a691f3c156a6
      eu-central-1b:
        id: subnet-0f89762f3d78ccb47
    private:
      eu-central-1a:
        id: subnet-07fe8b089287a16c4
      eu-central-1b:
        id: subnet-0ae524ea2c78b49a7

nodeGroups:
  - name: EKS-public-workers
    instanceType: t3.medium
    desiredCapacity: 2
  - name: EKS-private-workers
    instanceType: t3.medium
    desiredCapacity: 1
    privateNetworking: true

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demojsonapp
  template:
    metadata:
      labels:
        app: demojsonapp
    spec:
      containers:
        - name: back-end
          image: 921915718885.dkr.ecr.eu-central-1.amazonaws.com/sample_repo:latest
          ports:
            - name: http
              containerPort: 8080

service.yaml

apiVersion: v1
kind: Service
metadata:
  name: backend-service

spec:
  type: NodePort
  selector:
    app: demojsonapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

I don’t understand where the problem is. If you could help me, it would be very much appreciated.

Finally, here is the Dockerfile that builds the image I uploaded to ECR for the EKS cluster:

FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
WORKDIR /app
EXPOSE 3000
EXPOSE 443

FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["WebApplication2/WebApplication2.csproj", "WebApplication2/"]
RUN dotnet restore "WebApplication2/WebApplication2.csproj"
COPY . .
WORKDIR "/src/WebApplication2"
RUN dotnet build "WebApplication2.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "WebApplication2.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "WebApplication2.dll"]

Upvotes: 1

Views: 434

Answers (1)

uniglot

Reputation: 164

First of all, you need to add a nodePort value under spec.ports in your service.yaml (you can refer to the documentation to find some examples).

And note that, by default, the range of nodePort values you can assign is limited by kube-apiserver to the interval [30000-32767] (you can search for the keyword 'nodeport' in this document). Changing that range (via the kube-apiserver flag --service-node-port-range) would be very hard, and you probably don’t want to, because the kube-apiserver of a cluster resides in the control plane of the cluster, not in the worker nodes.

In my case, FYI, requests are first accepted on port 443 of the load balancer, then forwarded to port 30000 of one of the worker nodes. The Service with nodePort: 30000 then receives the requests and passes them to the appropriate Pods.

Summary

  • Add nodePort under spec.ports in service.yaml, with a value between 30000 and 32767.
  • If you want to defy your fate, have a try at changing the NodePort range of kube-apiserver.
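As a minimal sketch, here is what the corrected service.yaml could look like. Two assumptions: the container actually listens on port 8080 (the containerPort declared in the Deployment, though note the Dockerfile EXPOSEs 3000 and 443, so verify which port the app really binds to), and nodePort 30080 is free in your cluster — adjust both to match your setup:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  type: NodePort
  selector:
    app: demojsonapp
  ports:
    - protocol: TCP
      port: 80          # cluster-internal port of the Service
      targetPort: 8080  # should match the port the container listens on
      nodePort: 30080   # must lie in 30000-32767; assumed free here
```

After `kubectl apply -f service.yaml`, the app should be reachable at `http://<worker-node-public-ip>:30080`, provided the node’s security group allows inbound TCP on 30080.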

Upvotes: 1
