Technetium

Reputation: 6158

How to connect to Cloud SQL (2nd Generation) via MySQL Proxy Docker Container over TCP

Running on Mac OS X, I have been trying to connect to a Cloud SQL instance via the proxy using these directions. Once you have installed the MySQL client and the gce-proxy container, and have created a service account in Google Cloud Platform, you get down to running the two commands specified in the documentation:

docker run -d -v /cloudsql:/cloudsql \
  -v [LOCAL_CERTIFICATE_FILE_PATH]:[LOCAL_CERTIFICATE_FILE_PATH] \
  b.gcr.io/cloudsql-docker/gce-proxy /cloud_sql_proxy \
  -instances=[INSTANCE_CONNECTION_NAME]=tcp:3306 -credential_file=[CLOUD_KEY_FILE_PATH]

mysql -h127.0.0.1 -uroot -p

First, I don't understand how this should ever work, since the container is not exposing a port. So, unsurprisingly, when I attempt to connect I get the following error from the MySQL client:

ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (61)

But if I do expose the port by adding -p 3306:3306 to the docker run command, I still can't connect. Instead, I get the following error from the MySQL client:

ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 0

I have successfully connected to the proxy by running cloud_sql_proxy directly on my Docker host machine, following that documentation, so I am confident my credential file and my MySQL client are configured correctly. The logs of the container do not state that any connection was attempted. I have no problem connecting to a normal MySQL container via Docker. What am I missing here?
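
For reference, this is how to check whether the port is actually published and whether the proxy sees any traffic (a sketch; [PROXY_CONTAINER_NAME] stands in for whatever name or ID docker ps reports for the gce-proxy container):

# list running containers and their port mappings
docker ps

# show the ports published for the proxy container
docker port [PROXY_CONTAINER_NAME]

# follow the proxy's output to see whether a connection attempt ever reaches it
docker logs -f [PROXY_CONTAINER_NAME]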

Upvotes: 1

Views: 2309

Answers (3)

Dan

Reputation: 2536

I was able to figure out how to use cloudsql-proxy in my local Docker environment by using docker-compose. You will need to pull down your Cloud SQL instance credentials and have them ready. I keep them in my project root as credentials.json and add it to the project's .gitignore.

The key part I found was using =tcp:0.0.0.0:5432 after the GCP instance ID so that the port can be forwarded. Then, in your application, use cloudsql-proxy instead of localhost as the hostname (a rough sketch of that application-side config follows the compose file below). Make sure the rest of your db creds are valid in your application secrets so that it can connect through the local proxy supplied by the cloudsql-proxy container.

Note: Keep in mind I'm writing a Tomcat Java application, and my docker-compose.yml reflects that.

docker-compose.yml:

    version: '3'
    services:
      cloudsql-proxy:
        container_name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        command: /cloud_sql_proxy --dir=/cloudsql -instances=<YOUR INSTANCE ID HERE>=tcp:0.0.0.0:5432 -credential_file=/secrets/cloudsql/credentials.json
        ports:
          - 5432:5432
        volumes:
          - ./credentials.json:/secrets/cloudsql/credentials.json
        restart: always

      tomcatapp-api:
        container_name: tomcatapp-api
        build: .
        volumes:
          - ./build/libs:/usr/local/tomcat/webapps
        ports:
          - 8080:8080
          - 8000:8000
        env_file:
          - ./secrets.env
        restart: always
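
As a rough sketch of the application side, the app just needs to point at the cloudsql-proxy service name instead of localhost. The variable names, database name, and credentials below are made up for illustration; the real values live in your own secrets.env:

    # secrets.env (illustrative values only)
    DB_HOST=cloudsql-proxy
    DB_PORT=5432
    DB_NAME=mydb
    DB_USER=dbuser
    DB_PASSWORD=changeme
    # or, if your app takes a JDBC-style URL instead:
    # JDBC_URL=jdbc:postgresql://cloudsql-proxy:5432/mydb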

Upvotes: 2

Technetium

Reputation: 6158

I tried @Vadim's suggestion, which is basically this:

docker run -d -v /cloudsql:/cloudsql \
  -p 127.0.0.1:3306:3306 \
  -v [LOCAL_CERTIFICATE_FILE_PATH]:[LOCAL_CERTIFICATE_FILE_PATH] \
  b.gcr.io/cloudsql-docker/gce-proxy /cloud_sql_proxy \
  -instances=[INSTANCE_CONNECTION_NAME]=tcp:0.0.0.0:3306 -credential_file=[CLOUD_KEY_FILE_PATH]

I was still unable to get a connection, as I still got this error:

ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 0

However, the logs of the docker container showed a connection, like so:

2016/10/16 07:52:32 New connection for "[INSTANCE_CONNECTION_NAME]"
2016/10/16 07:52:32 couldn't connect to "[INSTANCE_CONNECTION_NAME]": Post https://www.googleapis.com/sql/v1beta4/projects/[PROJECT_NAME]/instances/[CLOUD_SQL_INSTANCE_NAME]/createEphemeral?alt=json: oauth2: cannot fetch token: Post https://accounts.google.com/o/oauth2/token: x509: failed to load system roots and no roots provided

So now it appeared that the proxy was getting the traffic, but it could not find the SSL certificates. I had used OpenSSL's cert.pem export of my certificates and mounted it to the same location in the Docker container. It makes sense that an arbitrary mapping of [LOCAL_CERTIFICATE_FILE_PATH]:[LOCAL_CERTIFICATE_FILE_PATH] wasn't helping the proxy figure out where the certificates were. So I took a clue from this Kubernetes setup guide and changed the mounted volume to -v [LOCAL_CERTIFICATE_FILE_PATH]:/etc/ssl/certs. Mercifully, that worked.

TL;DR - Here is the final syntax for getting the Docker Container to run over TCP:

docker run -d \
    -p 127.0.0.1:3306:3306 \
    -v [SERVICE_ACCOUNT_PRIVATE_KEY_DIRECTORY]:[SERVICE_ACCOUNT_PRIVATE_KEY_DIRECTORY] \
    -v [LOCAL_CERTIFICATE_DIRECTORY]:/etc/ssl/certs \
    b.gcr.io/cloudsql-docker/gce-proxy /cloud_sql_proxy \
    -instances=[INSTANCE_CONNECTION_NAME]=tcp:0.0.0.0:3306 \
    -credential_file=[SERVICE_ACCOUNT_PRIVATE_KEY_JSON_FILE]
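
With the container up, the same client command from the documentation connects through the published port (this assumes the root user from the original instructions):

mysql -h127.0.0.1 -uroot -p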

Upvotes: 0

Vadim

Reputation: 5126

It does look like there are some omissions in the documentation.

1) As you point out, you need to expose the port from the container. You'll want to make sure you only expose it to the local machine by specifying -p 127.0.0.1:3306:3306.

2) Then, when running the container, you'll want the proxy to listen on all interfaces inside the container (so that the published port can actually reach it) by specifying -instances=[INSTANCE_CONNECTION_NAME]=tcp:0.0.0.0:3306.

Upvotes: 1
