Reputation: 4611
I am very new to Istio, so these might be simple questions, but I have some confusion about it. I am using Istio 1.8.0 and Kubernetes 1.19. Sorry for the multiple questions, but I would appreciate it if you could help me clarify the best approaches.
After injecting Istio, I assumed I would not be able to access one service directly from inside another pod, but as you can see below, I can. Maybe I have misunderstood, but is this expected behaviour? Also, how can I debug whether services talk to each other via the Envoy proxy with mTLS? I am using STRICT
mode; should I deploy a PeerAuthentication in the namespace where the microservices are running to avoid this?
kubectl get peerauthentication --all-namespaces
NAMESPACE NAME AGE
istio-system default 26h
How can I restrict the traffic? Let's say the api-dev service should not be able to access auth-dev, but should be able to access backend-dev.
Some of the microservices need to communicate with a database, which runs in the database
namespace. We also have some services, using the same database, that we do not want to inject with Istio. So, should the database also be deployed in the namespace where we have Istio injection? If yes, does that mean I need to deploy another database instance for the rest of the services?
$ kubectl get ns --show-labels
NAME STATUS AGE LABELS
database Active 317d name=database
hub-dev Active 15h istio-injection=enabled
dev Active 318d name=dev
$ kubectl get pods -n hub-dev
NAME READY STATUS RESTARTS AGE
api-dev-5b9cdfc55c-ltgqz 3/3 Running 0 117m
auth-dev-54bd586cc9-l8jdn 3/3 Running 0 13h
backend-dev-6b86994697-2cxst 2/2 Running 0 120m
cronjob-dev-7c599bf944-cw8ql 3/3 Running 0 137m
mp-dev-59cb8d5589-w5mxc 3/3 Running 0 117m
ui-dev-5884478c7b-q8lnm 2/2 Running 0 114m
redis-hub-master-0 2/2 Running 0 2m57s
$ kubectl get svc -n hub-dev
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
api-dev ClusterIP xxxxxxxxxxxxx <none> 80/TCP 13h
auth-dev ClusterIP xxxxxxxxxxxxx <none> 80/TCP 13h
backend-dev ClusterIP xxxxxxxxxxxxx <none> 80/TCP 14h
cronjob-dev ClusterIP xxxxxxxxxxxxx <none> 80/TCP 14h
mp-dev ClusterIP xxxxxxxxxxxxx <none> 80/TCP 13h
ui-dev ClusterIP xxxxxxxxxxxxx <none> 80/TCP 13h
redis-hub-master ClusterIP xxxxxxxxxxxxx <none> 6379/TCP 3m47s
----------
$ kubectl exec -ti ui-dev-5884478c7b-q8lnm -n hub-dev sh
Defaulting container name to oneapihub-ui.
Use 'kubectl describe pod/ui-dev-5884478c7b-q8lnm -n hub-dev' to see all of the containers in this pod.
/usr/src/app $ curl -vv http://hub-backend-dev
* Trying 10.254.78.120:80...
* TCP_NODELAY set
* Connected to backend-dev (10.254.78.120) port 80 (#0)
> GET / HTTP/1.1
> Host: backend-dev
> User-Agent: curl/7.67.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 404 Not Found
< content-security-policy: default-src 'self'
<
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Error</title>
</head>
<body>
<pre>Cannot GET /</pre>
</body>
</html>
* Connection #0 to host oneapihub-backend-dev left intact
/usr/src/app $
Upvotes: 3
Views: 2833
Reputation: 8830
If you use STRICT
mTLS mode, then workloads should only accept encrypted traffic. Peer authentication policies specify the mutual TLS mode Istio enforces on target workloads. The following modes are supported:
- PERMISSIVE: Workloads accept both mutual TLS and plain text traffic. This mode is most useful during migrations when workloads without sidecar cannot use mutual TLS. Once workloads are migrated with sidecar injection, you should switch the mode to STRICT.
- STRICT: Workloads only accept mutual TLS traffic.
- DISABLE: Mutual TLS is disabled. From a security perspective, you shouldn’t use this mode unless you provide your own security solution.
When the mode is unset, the mode of the parent scope is inherited. Mesh-wide peer authentication policies with an unset mode use the PERMISSIVE mode by default.
It is also worth taking a look at this article by Banzai Cloud, as it is very well described there.
You can enable strict mTLS mode globally, but also for a specific namespace or workload.
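As a sketch, a namespace-scoped PeerAuthentication that enforces strict mTLS only in your hub-dev namespace (the namespace name is taken from your output; adjust as needed) could look like this:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: hub-dev   # scope the policy to this namespace only
spec:
  mtls:
    mode: STRICT       # workloads in hub-dev accept only mutual TLS traffic
```

With this in place, plain-text requests from pods without a sidecar should be rejected by the sidecars in hub-dev, which also gives you a simple way to verify mTLS is actually enforced.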
Istio Authorization Policy enables access control on workloads in the mesh.
There is an example.
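For your concrete case (api-dev must not reach auth-dev, but backend-dev may), a sketch of an ALLOW policy could look like the following. I am assuming here that your pods carry an app label matching the service name and that backend-dev runs under a backend-dev service account; check your actual labels and service accounts and adjust accordingly. Once an ALLOW policy matches a workload, any request not matched by its rules is denied, so api-dev would be blocked implicitly:

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: auth-dev-allow
  namespace: hub-dev
spec:
  selector:
    matchLabels:
      app: auth-dev   # assumed pod label; verify with kubectl get pods --show-labels
  action: ALLOW
  rules:
  - from:
    - source:
        # assumed service account for backend-dev; requires mTLS to identify the peer
        principals: ["cluster.local/ns/hub-dev/sa/backend-dev"]
```

Note that principal-based rules rely on mutual TLS, which is another reason to keep STRICT mode enabled.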
The following is another example that sets action to “DENY” to create a deny policy. It denies requests from the “dev” namespace to the “POST” method on all workloads in the “foo” namespace.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: httpbin
  namespace: foo
spec:
  action: DENY
  rules:
  - from:
    - source:
        namespaces: ["dev"]
    to:
    - operation:
        methods: ["POST"]
ServiceEntry enables adding additional entries into Istio’s internal service registry, so that auto-discovered services in the mesh can access/route to these manually specified services. A service entry describes the properties of a service (DNS name, VIPs, ports, protocols, endpoints). These services could be external to the mesh (e.g., web APIs) or mesh-internal services that are not part of the platform’s service registry (e.g., a set of VMs talking to services in Kubernetes). In addition, the endpoints of a service entry can also be dynamically selected by using the workloadSelector field. These endpoints can be VM workloads declared using the WorkloadEntry object or Kubernetes pods. The ability to select both pods and VMs under a single service allows for migration of services from VMs to Kubernetes without having to change the existing DNS names associated with the services.
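For instance, a minimal ServiceEntry sketch that adds an external HTTPS API to the mesh's registry might look like this (the host is purely illustrative):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-api
spec:
  hosts:
  - api.example.com      # illustrative external hostname
  location: MESH_EXTERNAL  # the service lives outside the mesh
  ports:
  - number: 443
    name: https
    protocol: TLS
  resolution: DNS        # resolve endpoints via DNS lookups
```

This would let sidecar-injected workloads route to api.example.com through Envoy rather than bypassing the mesh.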
There are examples in the Istio documentation.
To answer your main question about how to debug mTLS communication:
The most basic test would be to try to call from a non-injected pod to an injected pod, with curl for example. There is Istio documentation about that.
You can also use istioctl x describe; there is more about it here.
I'm not sure what's wrong with the curl -vv http://hub-backend-dev
call, but as it returns a 404, I suspect it might be an issue with your Istio configuration, such as a wrong VirtualService.
Upvotes: 2