allan.simon

Reputation: 4326

Terraform AWS ASG: Error: timeout - last error: ssh: handshake failed: ssh: unable to authenticate

I'm using Terraform 0.12 to create an autoscaling group on AWS, and when I run terraform apply I get:

aws_autoscaling_group.satellite_websites_asg: Still creating... [4m50s elapsed]
aws_autoscaling_group.satellite_websites_asg: Still creating... [5m0s elapsed]
aws_autoscaling_group.satellite_websites_asg: Still creating... [5m10s elapsed]
aws_autoscaling_group.satellite_websites_asg: Still creating... [5m20s elapsed]
aws_autoscaling_group.satellite_websites_asg: Still creating... [5m30s elapsed]


Error: timeout - last error: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none], no supported methods remain

If I check in AWS, the ASG has been created, and I can SSH to the instance in the ASG.

My .tf file:

data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*"]
  }
}

resource "aws_launch_configuration" "satellite_websites_conf" {
  name_prefix          = "satellite_websites_conf-"
  image_id             = "${data.aws_ami.ubuntu.id}"
  instance_type        = "t3.micro"
  enable_monitoring    = "true"
  key_name             = data.terraform_remote_state.shared_infra.outputs.vpc_access_keyname
  iam_instance_profile = data.terraform_remote_state.shared_infra.outputs.ecs_iam_instance_profile
  security_groups      = [aws_security_group.ghost_ec2_http_https_ssh.id]
  user_data            = "${file("./boot-script.sh")}"

  lifecycle {
    create_before_destroy = true
  }
}


# ASG in which we'll host EC2 instance running ghost servers
resource "aws_autoscaling_group" "satellite_websites_asg" {
  name_prefix          = "satellite_websites_asg-"
  max_size             = 1
  min_size             = 1
  launch_configuration = "${aws_launch_configuration.satellite_websites_conf.name}"
  vpc_zone_identifier  = data.terraform_remote_state.shared_infra.outputs.vpc_private_subnets
  load_balancers       = ["${aws_elb.satellite_websites_elb.name}"]
  health_check_type    = "ELB"

  provisioner "file" {
    content = templatefile("${path.module}/ghost-config.json.template", {
         // somestuff
    })
    destination = "~/config.production.template"
  }
  provisioner "file" {
    source      = "${path.module}/boot-script.sh"
    destination = "~/boot-script.sh"
  }

  lifecycle {
    create_before_destroy = true
  }
}

Upvotes: 0

Views: 745

Answers (1)

ydaetskcoR

Reputation: 56997

You would need to provide connection details for the file provisioner to be able to connect to the ASG instance.
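For reference, a file provisioner gets those details from a connection block. A minimal sketch of what that looks like on a resource that does expose an address, such as aws_instance (the ubuntu user and the key path are assumptions for a stock Canonical Ubuntu AMI):

resource "aws_instance" "example" {
  # ...

  provisioner "file" {
    source      = "${path.module}/boot-script.sh"
    destination = "/home/ubuntu/boot-script.sh"

    connection {
      type        = "ssh"
      host        = self.public_ip          # the ASG resource has no such attribute
      user        = "ubuntu"                # default user on Canonical's Ubuntu AMIs
      private_key = file("~/.ssh/id_rsa")   # assumed local key path
    }
  }
}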

Unfortunately, the ASG resource only indirectly manages the instances it creates, so it doesn't return this information.

You could have an aws_instances data source that depends on the ASG and use it to look up the instances the ASG creates. However, modifying an instance by connecting to it after the ASG has created it is an anti-pattern, and it doesn't help you when the ASG replaces the instance, because you and your automation software (e.g. Terraform) are not in the loop at that point.
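Shown only for completeness (it is still the anti-pattern route), a minimal sketch of that lookup using the aws_instances (plural) data source and the aws:autoscaling:groupName tag that the ASG puts on its instances; the data source name here is illustrative:

data "aws_instances" "asg_members" {
  instance_tags = {
    "aws:autoscaling:groupName" = aws_autoscaling_group.satellite_websites_asg.name
  }

  # Force the read to happen after the ASG exists
  depends_on = [aws_autoscaling_group.satellite_websites_asg]
}

output "asg_instance_private_ips" {
  value = data.aws_instances.asg_members.private_ips
}

Note that even this races against the ASG actually launching its instances, which is part of why the approach is fragile.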

Instead, you should bake any generic configuration (e.g. installing Ghost and its dependencies, in your case I think?) into an AMI using something like Packer. For anything that needs to differ between environments, use user data to make those changes at instance creation, or use something more dynamic and runtime-based such as Consul.
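As a sketch of that split, the launch configuration would point at the pre-baked AMI and pass only the per-environment bits through user data; var.ghost_ami_id, var.environment, and the .tpl file name are all hypothetical:

resource "aws_launch_configuration" "satellite_websites_conf" {
  name_prefix   = "satellite_websites_conf-"
  image_id      = var.ghost_ami_id   # AMI pre-baked with Ghost via Packer (hypothetical variable)
  instance_type = "t3.micro"

  # Only environment-specific configuration happens at boot
  user_data = templatefile("${path.module}/boot-script.sh.tpl", {
    environment = var.environment    # hypothetical variable
  })

  lifecycle {
    create_before_destroy = true
  }
}

This way the ASG can replace instances at any time and they come up fully configured without Terraform needing to connect to them.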

Upvotes: 1
