Wendy Rojas

Reputation: 11

How to run node app inside remote-exec using Terraform

I want to automatically run a node app by configuring its setup steps inside the remote-exec provisioner of my Terraform script. However, when I run "terraform apply -auto-approve" I just get "Still creating... [10m21s elapsed]" messages until the apply eventually fails.

This is the script I am using:

resource "aws_instance" "server_1" {
  ami                     = "ami-<id>"
  instance_type           = "t3.micro"
  associate_public_ip_address = true
  key_name = "server_1_kp"
  iam_instance_profile = aws_iam_instance_profile.ec2_access_profile.name
  
  root_block_device {
    volume_size = 25
  }

  connection {
      type        = "ssh"
      user        = "centos"
      private_key = file("server_1_kp.pem")
      host        = self.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "echo 'Installing Git'",
      "sudo yum -y install git",
      "sudo yum install awscli -y",
      "git config --global credential.helper '!aws codecommit credential-helper $@'",
      "git config --global credential.UseHttpPath true",
      "git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/server-1",
      "curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.3/install.sh | bash",
      "source ~/.bashrc",
      "nvm install 16.18.1",
      "cd server-1-app",
      "npm install",
      "npm run dev / node src/index.js ",
      "echo 'node server.js > /dev/null 2>&1' > app.sh",
      "nohup ./app.sh"
    ]
  }
}

The output I get:

aws_instance.server-1: Still creating... [12m51s elapsed]
aws_instance.server-1: Still creating... [13m1s elapsed]
aws_instance.server-1: Still creating... [13m11s elapsed]
...
aws_instance.server-1: Still creating... [15m41s elapsed]
aws_instance.server-1: Still creating... [15m51s elapsed]

I also tried the following alternatives in place of the original commands, with the same result:

New option                                   Original value
npm run dev / node src/index.js              "echo 'node server.js > /dev/null 2>&1' > app.sh", "nohup ./app.sh"
nohup node server.js > /dev/null 2>&1 &      (same as above)

Upvotes: 0

Views: 887

Answers (1)

Martin Atkins

Reputation: 74654

In Terraform, provisioners are a last resort because of all the additional complexity they imply: you need to ensure that Terraform CLI can connect directly to the EC2 instance over SSH, that every command executes successfully, and that the session exits cleanly.

The situation you've described seems like it could be solved with one of the other two options recommended in the Terraform documentation:

- passing data into the virtual machine at boot time, so that software such as cloud-init can run your configuration steps, or
- building a custom AMI that already has your software installed, for example with Packer.

I'll focus on the first option here because it's the closest to what you have already tried, but if you want to learn more about the second option you can refer to the HashiCorp tutorial Provision Infrastructure with Packer.
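
For a sense of what that second option looks like, here is a minimal sketch of a Packer HCL2 template that bakes the tooling into a custom AMI at image-build time. The region, source AMI, and package list here are assumptions based on your configuration, not a drop-in template:

packer {
  required_plugins {
    amazon = {
      source  = "github.com/hashicorp/amazon"
      version = ">= 1.0"
    }
  }
}

locals {
  # Use the build time to give each AMI a unique name
  timestamp = regex_replace(timestamp(), "[- TZ:]", "")
}

source "amazon-ebs" "server_1" {
  ami_name      = "server-1-${local.timestamp}"
  instance_type = "t3.micro"
  region        = "us-east-1"  # assumed from your CodeCommit URL
  source_ami    = "ami-<id>"   # the same base image as your aws_instance
  ssh_username  = "centos"
}

build {
  sources = ["source.amazon-ebs.server_1"]

  # One-time installation steps run while the image is being built,
  # so instances launched from the AMI boot with everything in place.
  provisioner "shell" {
    inline = [
      "sudo yum -y install git awscli",
    ]
  }
}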

The "Passing data into virtual machines..." section of the documentation lists some different approaches for different cloud platforms. You are using Amazon EC2, so the following bullet point is relevant to you:

Here's an example of using user_data with your EC2 instance, instead of the provisioner block:

resource "aws_instance" "server_1" {
  ami                     = "ami-<id>"
  instance_type           = "t3.micro"
  associate_public_ip_address = true
  key_name = "server_1_kp"
  iam_instance_profile = aws_iam_instance_profile.ec2_access_profile.name
  
  root_block_device {
    volume_size = 25
  }

  connection {
      type        = "ssh"
      user        = "centos"
      private_key = file("server_1_kp.pem")
      host        = self.public_ip
  }

  user_data = <<-EOT
    #!/bin/sh
    echo 'Installing Git'
    sudo yum -y install git
    sudo yum install awscli -y
    git config --global credential.helper '!aws codecommit credential-helper $@'
    git config --global credential.UseHttpPath true
    git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/server-1
    curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.3/install.sh | bash
    source ~/.bashrc
    nvm install 16.18.1
    cd server-1-app
    npm install
    npm run dev / node src/index.js
    echo 'node server.js > /dev/null 2>&1' > app.sh
    nohup ./app.sh
  EOT
}

Notice that user_data is literally just the script to run. This assumes that the AMI you've selected in the ami argument is configured to run cloud-init during its boot process, which is typical for official Linux distribution images such as those published by Ubuntu, Red Hat, and Amazon.

A plain script is one of the user_data formats supported by cloud-init, so cloud-init will retrieve and run this script during your EC2 instance's boot process, without Terraform being involved at all.
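
If the script doesn't behave as you expect, you can SSH into the instance yourself and inspect cloud-init's output. On most distributions the following commands show what happened (exact paths can vary by image):

    # Combined stdout/stderr of the user_data script
    sudo cat /var/log/cloud-init-output.log

    # Overall cloud-init status: running, done, or error
    sudo cloud-init status --long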


Separately, note that nohup is not a typical way to run a server in production, because nothing is supervising your program to restart it if it crashes.

Although nohup is a reasonable way to prove the concept, if you intend to use this in production I would suggest integrating with your operating system's service manager -- often systemd on modern Linux distributions -- so that it is responsible for launching your process, monitoring it, and restarting it automatically if it crashes.
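
As a rough sketch of that approach, a minimal systemd unit could look like the following; the unit name, working directory, entry point, and user are assumptions that would need to match wherever your boot script actually installs the app:

    # /etc/systemd/system/server-1.service (hypothetical paths)
    [Unit]
    Description=server-1 node app
    After=network.target

    [Service]
    WorkingDirectory=/opt/server-1-app
    ExecStart=/usr/bin/node server.js
    Restart=on-failure
    User=centos

    [Install]
    WantedBy=multi-user.target

Your boot script would then install that file and, in place of the nohup line, run:

    sudo systemctl daemon-reload
    sudo systemctl enable --now server-1

From then on systemd launches the process at boot and takes care of restarts.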

Upvotes: 1
