Reputation: 312
I'm trying to create an EKS cluster with Terraform and configure it through kubectl and a basic Istio setup, following these guides:
However, when I try to deploy the ALB, no ALB is created on AWS.
Running kubectl get ingress -n istio-system, I get:
NAME   CLASS    HOSTS   ADDRESS   PORTS   AGE
alb    <none>   *                 80      4s
I'm unable to debug it, as I can't find any log telling me why the ALB is not deployed. Has anyone come across the same issue? Or does anyone have any clues on how to pinpoint it?
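For reference, the two places where such an error would normally show up are the Ingress events and the AWS Load Balancer Controller logs (assuming the controller runs under its default deployment name in kube-system):

# Events on the Ingress object usually contain reconcile errors
kubectl describe ingress alb -n istio-system

# Logs of the controller responsible for creating the ALB
kubectl logs -n kube-system deployment/aws-load-balancer-controller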
Here are the config files I used:
ingress-alb.yaml
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: alb
  namespace: istio-system
  annotations:
    # create AWS Application LoadBalancer
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/certificate-arn: "***"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    external-dns.alpha.kubernetes.io/hostname: "****"
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: ssl-redirect
              servicePort: use-annotation
          - path: /*
            backend:
              serviceName: istio-ingressgateway
              servicePort: 80
kubectl -n istio-system get svc istio-ingressgateway
NAME                   TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                                      AGE
istio-ingressgateway   NodePort   **************   <none>        15021:30012/TCP,80:31684/TCP,443:30689/TCP   132m
eks_cluster.tf
data "aws_eks_cluster" "eks" {
name = module.eks_cluster.cluster_id
}
data "aws_eks_cluster_auth" "eks" {
name = module.eks_cluster.cluster_id
}
provider "kubernetes" {
host = data.aws_eks_cluster.eks.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.eks.certificate_authority[0].data)
token = data.aws_eks_cluster_auth.eks.token
}
module "eks_cluster" {
source = "terraform-aws-modules/eks/aws"
cluster_version = "1.18"
cluster_name = var.eks.cluster_name
vpc_id = *****
subnets = *****
cluster_endpoint_private_access = true
enable_irsa = true
worker_groups = [
{
name = "worker-group-1"
instance_type = "t3a.medium"
asg_min_size = 1
asg_max_size = 3
asg_desired_capacity = 2
root_volume_type = "gp3"
root_volume_size = 20
}
]
map_users = [{
userarn = "***"
username = "****"
groups = ["****"]
}]
}
Upvotes: 2
Views: 852
Reputation: 427
I managed to get the ALB working with the following:
EKS - 1.22
aws-load-balancer-controller - v2.4.1
istioctl - 1.14.2
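(For reference, a common way to install the controller is the official eks-charts Helm chart; the cluster name and the IRSA-backed service account below are placeholders for your own setup:)

helm repo add eks https://aws.github.io/eks-charts
helm repo update
# clusterName and the pre-created IRSA service account name are placeholders
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=my-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller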
First, I changed the istio-ingressgateway service type from LoadBalancer to NodePort:
istioctl install --set profile=default --set values.gateways.istio-ingressgateway.type=NodePort -y
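(If you install from an IstioOperator file instead, an equivalent overlay would be something like the sketch below, using the standard IstioOperator fields:)

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: default
  components:
    ingressGateways:
      - name: istio-ingressgateway
        enabled: true
        k8s:
          service:
            type: NodePort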
Then create the Ingress. Here is mine; I only configured it to support port 80:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: istio-alb
  namespace: istio-system
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
    alb.ingress.kubernetes.io/subnets: subnet-aaaaaaa, subnet-bbbbbbbb, subnet-cccccc
    alb.ingress.kubernetes.io/security-groups: sg-xxxxxxxxxx
    alb.ingress.kubernetes.io/healthcheck-path: /healthz/ready
    alb.ingress.kubernetes.io/healthcheck-port: "32560"
spec:
  rules:
    - http:
        paths:
          - backend:
              service:
                name: istio-ingressgateway
                port:
                  number: 80
            pathType: ImplementationSpecific
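(The health-check port above should be the NodePort that your cluster assigned to the gateway's status port 15021, which is what serves /healthz/ready; assuming the default port name status-port, you can look it up with:)

kubectl -n istio-system get svc istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.name=="status-port")].nodePort}'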
I believe adding the security-groups annotation will solve your issue.
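Once the controller has reconciled the Ingress, the ALB DNS name should appear in the ADDRESS column of:

kubectl get ingress istio-alb -n istio-system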
Upvotes: 1