DmitrySemenov

Reputation: 10375

Switching from terraform 0.12.6 to 0.13.0 gives me provider["registry.terraform.io/-/null"] is required, but it has been removed

I manage state in remote Terraform Cloud.

I downloaded and installed the latest Terraform 0.13 CLI.

Then I removed the .terraform directory.

Then I ran terraform init and got no errors.

Then I ran:

➜ terraform apply -var-file env.auto.tfvars

Error: Provider configuration not present

To work with
module.kubernetes.module.eks-cluster.data.null_data_source.node_groups[0] its
original provider configuration at provider["registry.terraform.io/-/null"] is
required, but it has been removed. This occurs when a provider configuration
is removed while objects created by that provider still exist in the state.
Re-add the provider configuration to destroy
module.kubernetes.module.eks-cluster.data.null_data_source.node_groups[0],
after which you can remove the provider configuration again.

Releasing state lock. This may take a few moments...

This is the content of module/kubernetes/main.tf:

###################################################################################
# EKS CLUSTER                                                                     #
#                                                                                 #
# This module contains configuration for EKS cluster running various applications #
###################################################################################

module "eks_label" {
  source      = "git::https://github.com/cloudposse/terraform-null-label.git?ref=master"
  namespace   = var.project
  environment = var.environment
  attributes  = [var.component]
  name        = "eks"
}


#
# Local computed variables
#
locals {
  names = {
    secretmanage_policy = "secretmanager-${var.environment}-policy"
  }
}

data "aws_eks_cluster" "cluster" {
  name = module.eks-cluster.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks-cluster.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
  version                = "~> 1.9"
}

module "eks-cluster" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = module.eks_label.id
  cluster_version = var.cluster_version
  subnets         = var.subnets
  vpc_id          = var.vpc_id

  worker_groups = [
    {
      instance_type = var.cluster_node_type
      asg_max_size  = var.cluster_node_count
    }
  ]

  tags = var.tags
}

# Grant secretmanager access to all pods inside kubernetes cluster
# TODO:
# Adjust implementation so that the policy is template based and we only allow
# kubernetes access to a single key based on the environment.
# we should export key from modules/secrets and then grant only specific ARN access
# so that only production cluster is able to read production secrets but not dev or staging
# https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_identity-based-policies.html#permissions_grant-get-secret-value-to-one-secret
resource "aws_iam_policy" "secretmanager-policy" {
  name        = local.names.secretmanage_policy
  description = "allow to read secretmanager secrets ${var.environment}"
  policy      = file("modules/kubernetes/policies/secretmanager.json")
}

#
# Attach the policy to the k8s worker role
#
resource "aws_iam_role_policy_attachment" "attach" {
  role       = module.eks-cluster.worker_iam_role_name
  policy_arn = aws_iam_policy.secretmanager-policy.arn
}

#
# Attach the S3 Policy to Workers
# So we can use aws commands inside pods easily if/when needed
#
resource "aws_iam_role_policy_attachment" "attach-s3" {
  role       = module.eks-cluster.worker_iam_role_name
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3FullAccess"
}

Upvotes: 14

Views: 11506

Answers (3)

samtoddler

Reputation: 9675

For us, the fix was updating all the provider URLs we were using in the code, like below:

terraform state replace-provider 'registry.terraform.io/-/null' \
'registry.terraform.io/hashicorp/null'
terraform state replace-provider 'registry.terraform.io/-/archive' \
'registry.terraform.io/hashicorp/archive'
terraform state replace-provider 'registry.terraform.io/-/aws' \
'registry.terraform.io/hashicorp/aws'

I wanted to be very specific with the replacement, so I used the broken URL as the address being replaced and the new URL as its replacement.

To be more specific, this only applies to Terraform 0.13:

https://www.terraform.io/docs/providers/index.html#providers-in-the-terraform-registry
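Before and after running the replacements, you can check which provider addresses the configuration and the state still reference. A minimal sketch, assuming the commands are run from the already-initialized working directory:

# Legacy entries referenced by the state show up under the "-/" namespace,
# e.g. provider[registry.terraform.io/-/null]
terraform providers

# Replace one legacy address (Terraform asks for confirmation before rewriting
# the state), then re-run `terraform providers` to verify that only
# registry.terraform.io/hashicorp/... addresses remain
terraform state replace-provider 'registry.terraform.io/-/null' \
'registry.terraform.io/hashicorp/null'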

Upvotes: 18

anldisr

Reputation: 216

All credit for this fix goes to the person who mentioned it on the cloudposse Slack channel:

terraform state replace-provider -auto-approve -- -/null registry.terraform.io/hashicorp/null

This fixed this error for me; on to the next one. All of that just to upgrade a Terraform version.
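The -auto-approve flag skips the confirmation prompt, and the -- ends option parsing so that the legacy address -/null, which starts with a dash, is not mistaken for a flag. If the state references several legacy providers, the same command can be repeated for each one; a minimal sketch, assuming hypothetical legacy null, aws and archive entries (adjust the list to whatever terraform providers reports for your state):

# Rewrite each legacy "-/NAME" address to its registry.terraform.io/hashicorp/NAME form
for p in null aws archive; do
  terraform state replace-provider -auto-approve -- "-/${p}" "registry.terraform.io/hashicorp/${p}"
done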

Upvotes: 18

Naveen

Reputation: 99

This error arises when there’s an object in the latest Terraform state that is no longer in the configuration but Terraform can’t destroy it (as would normally be expected) because the provider configuration for doing so also isn’t present.

Solution:

This should arise only if you've recently removed the "data.null_data_source" object along with the provider "null" block. To proceed, you'll need to temporarily restore that provider "null" block, run terraform apply so Terraform can destroy the "data.null_data_source" object, and then remove the provider "null" block again because it'll no longer be needed. A sketch of the temporary block is below.
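A minimal sketch of the temporary configuration, assuming hashicorp/null is the provider the state still references; the required_providers entry gives Terraform 0.13 a proper source address, and the empty provider block is deleted again once the orphaned object has been destroyed:

terraform {
  required_providers {
    # Terraform 0.13 source address for the legacy "null" provider
    null = {
      source = "hashicorp/null"
    }
  }
}

# Empty configuration, only needed so Terraform can destroy the orphaned
# data.null_data_source object still tracked in the state
provider "null" {}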

Upvotes: 0
