Alex Cohen

Reputation: 6236

How to add an SSH key to a GCP instance using Terraform?

So I have a Terraform script that creates instances in Google Cloud Platform. I want the script to also add my SSH key to the instances it creates, so that I can provision them over SSH. Here is my current Terraform script:

#PROVIDER INFO
provider "google" {
  credentials = "${file("account.json")}"
  project     = "myProject"
  region      = "us-central1"
}


#MAKING CONSUL SERVERS
resource "google_compute_instance" "default" {
  count    =  3
  name     =  "a-consul${count.index}"
  machine_type = "n1-standard-1"
  zone         = "us-central1-a"

  disk {
    image = "ubuntu-1404-trusty-v20160627"
  }

  # Local SSD disk
  disk {
    type    = "local-ssd"
    scratch = true
  }

  network_interface {
    network = "myNetwork"
    access_config {}
  }
}

What do I have to add to this so that my Terraform script also installs my SSH key /Users/myUsername/.ssh/id_rsa.pub on the instances?

Upvotes: 34

Views: 49003

Answers (12)

Samith Perera

Reputation: 266

I prefer to automate this further. Instead of defining the keys in a variable, I keep all my public keys in a dedicated directory. (Please refer to the directory structure below.)

.
├── backends
│   └── nonprod.tfvars
├── data.tf
├── main.tf
├── outputs.tf
├── provider.tf
├── ssh-keys
│   ├── devops
│   ├── qa1
│   └── test
├── terraform.tf
├── variables
│   └── nonprod.tfvars
└── variables.tf

Then I can use a locals block with fileset() and file() to read every file inside the ssh-keys directory and add the keys to the GCP instance:

locals {
  public_keys = { for k in fileset("${path.module}/ssh-keys", "*") : k => file("${path.module}/ssh-keys/${k}") }
}

resource "google_compute_instance" "app1" {
  count = var.num_instances
  name         = "app1"
  machine_type = var.instance_type
  project      = var.project_id
  zone         = var.instance_zone
  metadata = {
    ssh-keys = join("\n", [for user, key in local.public_keys : "${user}:${key}"])
  }

  tags = [
    "http-server"
  ]

  boot_disk {
    auto_delete = true
    device_name = "${var.instance_name_prefix}-${count.index + 1}"
    mode        = "READ_WRITE"

    initialize_params {
      image = data.google_compute_image.debian_10.self_link
      size  = var.disk_size
      type  = "pd-balanced"
    }
  }

  network_interface {
    subnetwork = var.subnet

    access_config {
      nat_ip = google_compute_address.external[count.index].address
    }
  }
}

Upvotes: 1

Bertware

Reputation: 1

If there are multiple users, each potentially with multiple SSH keys, this pattern can simplify management of the ssh-keys:

# sample pub-keys variable
pub_keys = {
  user1 = [
    "ssh-ed25519 AAAAC3Lnasdfehdfhre345adgN4z1 user1@mypc1",
    "ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYjpLYqb5Vnu86PRpU= user1@laptop1"
  ]
  user2 = [
    "ssh-ed25519 AAAAC3NzaC1lZDsdfo7iSL user2@mypc2"
  ]
}


# add this to the google_compute_instance
metadata = {
  ssh-keys = join("\n", [
    for sshkeys in flatten([for user, keys in var.pub_keys : [
      for key in keys : { user = user, key = key }
    ]]) : "${sshkeys.user}:${sshkeys.key}"
  ])
}

Upvotes: 0

MTR

Reputation: 1

One thing to know: if OS Login is enabled at a higher (project) level, you may need to disable it in the instance metadata, because metadata-based SSH keys are ignored while OS Login is enabled.

metadata = {
    "enable-oslogin" = false
    "ssh-keys"       = ...
}

Only then will the authorized_keys file be created on the instance.

You may also need can_ip_forward = true if the instance has to forward traffic (send or receive packets whose source or destination IP does not match its own).
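
A minimal sketch of how both settings might sit together in the instance resource (the user name here is a placeholder, and the key path is the one from the question):

resource "google_compute_instance" "default" {
  # ...

  # only needed if the VM must forward packets it did not originate
  can_ip_forward = true

  metadata = {
    "enable-oslogin" = false # fall back to metadata-based SSH keys
    "ssh-keys"       = "myuser:${file("/Users/myUsername/.ssh/id_rsa.pub")}"
  }
}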

Upvotes: 0

Saurabh

Reputation: 6960

First, you'll need a compute instance:

resource "google_compute_instance" "website_server" {
  name                      = "webserver"
  description               = "Web Server"
  machine_type              = "f1-micro"
  allow_stopping_for_update = true
  deletion_protection       = false

  tags = ["webserver-instance"]

  shielded_instance_config {
    enable_secure_boot          = true
    enable_vtpm                 = true
    enable_integrity_monitoring = true
  }

  scheduling {
    provisioning_model  = "STANDARD"
    on_host_maintenance = "TERMINATE"
    automatic_restart   = true
  }

  boot_disk {
    mode        = "READ_WRITE"
    auto_delete = true
    initialize_params {
      image = "ubuntu-minimal-2204-jammy-v20220816"
      type  = "pd-balanced"
    }
  }

  network_interface {
    network = "default"

    access_config {
      network_tier = "PREMIUM"
    }
  }

  metadata = {
    ssh-keys               = "${var.ssh_user}:${local_file.public_key.content}"
    block-project-ssh-keys = true
  }

  labels = {
    terraform = "true"
    purpose   = "host-static-files"
  }

  service_account {
    # Custom service account with restricted permissions
    email  = data.google_service_account.myaccount.email
    scopes = ["compute-rw"]
  }

}

Note that the ssh-keys field in the metadata needs the public key data in "authorized keys" format, i.e. the OpenSSH public key prefixed with the username. This is the same content you'd get from pbcopy < ~/.ssh/id_ed25519.pub.
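
For reference, the rendered metadata value ends up in this shape (the username and key material below are made up):

ssh-keys = "myusername:ssh-ed25519 AAAAC3NzaC1lZD... myusername@laptop"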

You'll need a firewall rule to allow SSH on (default) port 22:

resource "google_compute_firewall" "webserver_ssh" {
  name    = "webserver-firewall"
  network = "default"

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }

  target_tags   = ["webserver-instance"]
  source_ranges = ["0.0.0.0/0"]
}

Your public and private keys can be ephemeral to make things more seamless:

resource "tls_private_key" "webserver_access" {
  algorithm = "ED25519"
}

resource "local_file" "public_key" {
  filename        = "server_public_openssh"
  content         = trimspace(tls_private_key.webserver_access.public_key_openssh)
  file_permission = "0400"
}

resource "local_sensitive_file" "private_key" {
  filename = "server_private_openssh"
  # IMPORTANT: Newline is required at end of open SSH private key file
  content         = tls_private_key.webserver_access.private_key_openssh
  file_permission = "0400"
}

And finally, to log in you would need a connection string based on:

output "instance_connection_string" {
  description = "Command to connect to the compute instance"
  value       = "ssh -i ${local_sensitive_file.private_key.filename} ${var.ssh_user}@${google_compute_instance.website_server.network_interface.0.access_config.0.nat_ip} ${var.host_check} ${var.ignore_known_hosts}"
  sensitive   = false
}

where the variable file could look like:

variable "ssh_user" {
  type        = string
  description = "SSH user for compute instance"
  default     = "myusername"
  sensitive   = false
}

variable "host_check" {
  type        = string
  description = "Dont add private key to known_hosts"
  default     = "-o StrictHostKeyChecking=no"
  sensitive   = false
}

variable "ignore_known_hosts" {
  type        = string
  description = "Ignore (many) keys stored in the ssh-agent; use explicitly declared keys"
  default     = "-o IdentitiesOnly=yes"
  sensitive   = false
}

Upvotes: 1

surya

Reputation: 1

I tested the following ways of injecting an SSH public key into a Google compute instance; each one works for me. Use any one of the three options:

  metadata = {
    # Option 1: literal path
    ssh-keys = "${var.ssh_user}:${file("./gcp_instance_ssh_key.pub")}"

    # Option 2: path from a variable
    # ssh-keys = "${var.ssh_user}:${file(var.public_key_path)}"

    # Option 3: same as option 2, with the variable interpolated inside a string
    # ssh-keys = "${var.ssh_user}:${file("${var.public_key_path}")}"
  }

variable "public_key_path" {
    default = "./gcp_instance_ssh_key.pub"   ##public key with path
}

Note that the metadata key is ssh-keys, not ssh_keys (with an underscore).

Upvotes: 0

Abdul Fahad

Reputation: 236

The following works for me: a single SSH key for all VMs, set as project-wide metadata.

resource "google_compute_project_metadata" "my_ssh_key" {
  metadata = {
    ssh-keys = <<EOF
      terakey:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICqaF7TqtimTUtqLdZIspKjuTXXXXnkbW7N9TQBPXazu terakey
      
    EOF
  }
}
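
If one particular instance should not accept the project-wide keys, it can opt out via its own metadata (a sketch; the resource name is a placeholder):

resource "google_compute_instance" "special" {
  # ...
  metadata = {
    block-project-ssh-keys = true # ignore project-wide ssh-keys on this VM
  }
}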

Upvotes: 3

Daniel Habenicht

Reputation: 2243

Just updating for multiple keys in Terraform v0.15.4:

metadata = {
  ssh-keys = join("\n", [for key in var.ssh_keys : "${key.user}:${key.publickey}"])
}

And the corresponding variable:

variable "ssh_keys" {
  type = list(object({
    publickey = string
    user = string
  }))
  description = "list of public ssh keys that have access to the VM"
  default = [
      {
        user = "username"
        publickey = "ssh-rsa yourkeyabc username@PC"
      }
  ]
}

Upvotes: 4

sambit

Reputation: 339

You can use the following:

metadata = {
  ssh-keys = "username:${file("username.pub")}"
}

I was struggling to create an instance with an SSH key using Terraform; this snippet is tested and working.

Upvotes: 7

hashier

Reputation: 4750

If you want multiple keys you can use a heredoc like this:

  metadata = {
    "ssh-keys" = <<EOT
<user>:<key>
<user>:<key>
EOT
  }

The odd-looking indentation above is what terraform fmt produced, so I kept it.
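
Filled in with made-up users and truncated keys, that heredoc would look like:

  metadata = {
    "ssh-keys" = <<EOT
alice:ssh-ed25519 AAAAC3Nza... alice@laptop
bob:ssh-rsa AAAAB3Nza... bob@desktop
EOT
  }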

Upvotes: 5

mblakele

Reputation: 7842

I think something like this should work:

  metadata = {
    ssh-keys = "${var.gce_ssh_user}:${file(var.gce_ssh_pub_key_file)}"
  }

https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys describes the metadata mechanism, and I found this example at https://github.com/hashicorp/terraform/issues/6678
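
The snippet assumes two variables; a minimal sketch of their declarations (the names come from the snippet above, and the default path is the one from the question):

variable "gce_ssh_user" {
  description = "Username the SSH key is registered for"
  default     = "myUsername"
}

variable "gce_ssh_pub_key_file" {
  description = "Path to the SSH public key file"
  default     = "/Users/myUsername/.ssh/id_rsa.pub"
}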

Upvotes: 52

0x416e746f6e

Reputation: 10136

Just for the record: as of Terraform 0.12 the block should look like this:

resource "google_compute_instance" "default" {
  # ...

  metadata = {
    ssh-keys = join("\n", [for user, key in var.ssh_keys : "${user}:${key}"])
  }

  # ...
}

(Note the = sign after the metadata token, and ssh-keys instead of the old sshKeys.)
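
The for expression assumes var.ssh_keys is a map from user to public key; a matching declaration might look like this (the entry is a placeholder):

variable "ssh_keys" {
  type = map(string)
  default = {
    alice = "ssh-ed25519 AAAAC3Nza... alice@laptop"
  }
}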

Upvotes: 20

tanaji yadav

Reputation: 71

Here is a tested one (note: this is the pre-0.12 syntax, a metadata block with sshKeys):

  metadata {
    sshKeys = "${var.ssh_user}:${var.ssh_key} \n${var.ssh_user1}:${var.ssh_key1}"
  }

Upvotes: 7
