Reputation: 147
In my Terraform configuration files I create a Kubernetes cluster on GKE and, once it is created, set up a Kubernetes provider to access that cluster and perform various actions such as setting up namespaces.
The problem is that some new namespaces were created in the cluster outside of Terraform, and now my attempts to import these namespaces into my state fail because Terraform cannot connect to the cluster. I believe this is due to the following (taken from Terraform's official documentation of the import command):
The only limitation Terraform has when reading the configuration files is that the import provider configurations must not depend on non-variable inputs. For example, a provider configuration cannot depend on a data source.
The command I used to import the namespaces is pretty straightforward:
terraform import kubernetes_namespace.my_new_namespace my_new_namespace
I also tried using the -provider="" and -config="" flags, but to no avail.
My Kubernetes provider configuration is this:
provider "kubernetes" {
version = "~> 1.8"
host = module.gke.endpoint
token = data.google_client_config.current.access_token
cluster_ca_certificate = base64decode(module.gke.cluster_ca_certificate)
}
An example for a namespace resource I am trying to import is this:
resource "kubernetes_namespace" "my_new_namespace" {
metadata {
name = "my_new_namespace"
}
}
The import command results in the following:
Error: Get http://localhost/api/v1/namespaces/my_new_namespace: dial tcp [::1]:80: connect: connection refused
It's obviously doomed to fail, since it's trying to reach localhost instead of the actual cluster endpoint and credentials.
Is there any workaround for this use case?
Thanks in advance.
Upvotes: 11
Views: 4906
Reputation: 830
The issue lies with the provider configuration depending on dynamic data: the import command doesn't have access to data sources or module outputs when it reads the configuration.
For the import to work, you have to temporarily hardcode the provider values.
Change this:
provider "kubernetes" {
version = "~> 1.8"
host = module.gke.endpoint
token = data.google_client_config.current.access_token
cluster_ca_certificate = base64decode(module.gke.cluster_ca_certificate)
}
to:
provider "kubernetes" {
version = "~> 1.8"
host = "https://<ip-of-cluster>"
token = "<token>"
cluster_ca_certificate = base64decode(<cert>)
load_config_file = false
}
You can get a token with gcloud auth print-access-token, and the cluster endpoint and CA certificate with terraform state show module.gke.google_container_cluster.your_cluster_resource_name_here.
For provider version 2+ you have to drop load_config_file.
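A minimal sketch of the same hardcoded block for a 2.x provider (same placeholders as above; on 2.x the version constraint normally lives in a required_providers block rather than in the provider block):
provider "kubernetes" {
  host                   = "https://<ip-of-cluster>"
  token                  = "<token>"
  cluster_ca_certificate = base64decode(<cert>)
}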
Once the hardcoded values are in place, run the import and then revert the changes to the provider.
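Putting it together, the run might look roughly like this (the state address is whatever your GKE module uses; the final plan is just a sanity check that reverting the provider changed nothing):
# Grab a token and the cluster endpoint/CA cert to paste into the provider block:
gcloud auth print-access-token
terraform state show module.gke.google_container_cluster.your_cluster_resource_name_here

# With the provider temporarily hardcoded, import the namespace:
terraform import kubernetes_namespace.my_new_namespace my_new_namespace

# Revert the provider block, then verify the state is consistent:
terraform plan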
Upvotes: 1
Reputation: 66
(1) Create an entry in your kubeconfig file for your GKE cluster.
gcloud container clusters get-credentials cluster-name
(2) Point the Terraform Kubernetes provider at your kubeconfig (the end-to-end commands are sketched after the block):
provider "kubernetes" {
config_path = "~/.kube/config"
}
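With that in place, the original import command should reach the cluster. A minimal sketch, assuming the default kubeconfig path and using placeholder cluster name, zone, and project:
# Write GKE credentials into ~/.kube/config (adjust name/zone/project to your cluster):
gcloud container clusters get-credentials cluster-name --zone us-central1-a --project my-project

# The provider now reads the kubeconfig, so the import can connect:
terraform import kubernetes_namespace.my_new_namespace my_new_namespace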
Upvotes: 0