prime

Reputation: 2074

Connect to local database from inside minikube cluster

I'm trying to access a MySQL database hosted inside a Docker container on localhost from inside a minikube pod, with little success. I tried the solution described in Minikube expose MySQL running on localhost as service, but to no effect. I have modelled my solution on the service we use on AWS, but it does not appear to work with minikube. My service reads as follows:

apiVersion: v1
kind: Service
metadata:
  name: mysql-db-svc
  namespace: external
spec:
  type: ExternalName
  ExternalName: 172.17.0.2

...where I try to connect to my database from inside a pod using "mysql-db-svc" on port 3306, but to no avail. If I try to curl the address "mysql-db-svc" from inside a pod, it cannot resolve the host name.

Can anybody please advise a frustrated novice?

Upvotes: 22

Views: 31898

Answers (6)

Somjit

Reputation: 2772

As an add-on to @Crou's answer from 2018: in 2022, the Kubernetes docs say ExternalName takes a DNS name string, not an IP address. So, in case ExternalName doesn't work, you can also use the simpler option of services without selectors.

You can also refer to this Google Cloud Tech video for how the services-without-selectors concept works.
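A minimal sketch of the services-without-selectors approach, reusing the service name and namespace from the question (the IP address is an assumption; substitute the address of your own database container):

```yaml
# Selector-less Service: Kubernetes will not create Endpoints
# automatically, so we supply them ourselves below.
apiVersion: v1
kind: Service
metadata:
  name: mysql-db-svc
  namespace: external
spec:
  ports:
    - port: 3306
      targetPort: 3306
---
# Manually managed Endpoints with the same name as the Service.
apiVersion: v1
kind: Endpoints
metadata:
  name: mysql-db-svc
  namespace: external
subsets:
  - addresses:
      - ip: "172.17.0.2"   # assumed: your database container's IP
    ports:
      - port: 3306
```

With both objects applied, pods can reach the database at mysql-db-svc.external.svc.cluster.local:3306.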

Upvotes: 1

Aak

Reputation: 263

I was also facing a similar problem, where I needed to connect a pod inside minikube to a SQL Server container on the host machine.

I noticed that minikube is itself a container in the local Docker environment, and during its setup it creates a local Docker network named minikube. I connected my local SQL Server container to this minikube Docker network using docker network connect minikube <SQL Server container name> --ip=<any valid IP on the minikube network subnet>.

I was then able to access the local SQL Server container from pods using its IP address on the minikube network.

Upvotes: 1

smiletrl

Reputation: 396

The solutions above somehow didn't work for me. What finally worked is the following Terraform configuration:

resource "kubernetes_service" "host" {
    metadata {
        name = "minikube-host"

        labels = {
            app = "minikube-host"
        }

        namespace = "default"
    }

    spec {
        port {
            name = "app"
            port = 8082
        }
        cluster_ip = "None"
    }
}

resource "kubernetes_endpoints" "host" {
    metadata {
        name = "minikube-host"
        namespace = "default"
    }
    subset {
        address {
            // This ip comes from command: minikube ssh 'grep host.minikube.internal /etc/hosts | cut -f1'
            ip = "192.168.65.2"
        }

        port {
            name     = "app"
            port     = 8082
        }
    }
}

Then I can access the local service (e.g., Postgres or MySQL) on my Mac from k8s pods via the host minikube-host.default.svc.cluster.local.

The plain YAML version and more details can be found in this issue.

Details on the minikube host alias host.minikube.internal can be found here.

Alternatively, the raw IP address from the command minikube ssh 'grep host.minikube.internal /etc/hosts | cut -f1' (e.g., "192.168.65.2") can be used directly as the service host in code, instead of 127.0.0.1/localhost. In that case, none of the configuration above is required.

Upvotes: 0

prime

Reputation: 2074

I'm using Ubuntu with minikube, and my database runs outside of minikube inside a Docker container, reachable from localhost at 172.17.0.2. My Kubernetes service for the external MySQL container reads as follows:

kind: Service
apiVersion: v1
metadata:
  name: mysql-db-svc
  namespace: external
spec: 
  type: ExternalName
  externalName: 10.0.2.2

Then, inside my project's .env, DB_HOST is defined as

mysql-db-svc.external.svc

... that is, the name of the service ("mysql-db-svc") followed by its namespace ("external") and "svc".

Hope that makes sense.

Upvotes: 10

Crou

Reputation: 11446

If I'm not mistaken, you should also create an Endpoints object for this service, as it's external.

In your case, the Endpoints definition should be as follows:

kind: "Endpoints"
apiVersion: "v1"
metadata:
  name: mysql-db-svc
  namespace: external
subsets:
  - addresses:
      - ip: "10.10.1.1"
    ports:
      - port: 3306

You can read about external sources in the Kubernetes Defining a Service docs.
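Note that Endpoints are only consulted for an ordinary selector-less Service, not for type ExternalName, so the Service from the question would also need to be changed to a plain Service along these lines (a sketch; the port is assumed from the question):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql-db-svc
  namespace: external
spec:
  # No selector and no type ExternalName: a plain ClusterIP Service
  # whose traffic is routed to the manually created Endpoints
  # of the same name.
  ports:
    - port: 3306
      targetPort: 3306
```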

Upvotes: 3

Insightcoder

Reputation: 526

This is because your service type is ExternalName, which only fits cloud environments such as AWS and GKE. To run your service locally, change the service type to NodePort, which will assign a static node port between 30000-32767. If you need to assign a static port yourself, so that minikube won't pick a random one for you, define it in your service definition under the ports section, like this: nodePort: 32002.

Also, I don't see any selector pointing to your MySQL deployment in your service definition. Include the corresponding selector key-value pair (e.g. app: mysql-server) under the spec section. That selector should match the selector you have defined in your MySQL deployment definition.

So your service definition should be like this:

kind: Service
apiVersion: v1
metadata:
  name: mysql-db-svc
  namespace: external
spec:
  selector:
    app: mysql-server
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306
    nodePort: 32002
  type: NodePort

After you deploy the service, you can reach the MySQL service at {minikube ip}:32002. Replace {minikube ip} with the actual minikube IP.

Alternatively, you can get the access URL for the service with the following command:

minikube service <SERVICE_NAME> --url

Replace <SERVICE_NAME> with the actual name of the service; in your case it is mysql-db-svc.

Upvotes: 1
