Bert Alfred

Reputation: 511

Packer failing after terminating source with non-zero exit status: 2

I am attempting to create a new AMI using Packer and Ansible. I'm admittedly very new to both. (I have used Fabric and Puppet in the past.) I noticed that I was using an Amazon Linux base AMI and decided to change to a CentOS image instead. This brought along a few hurdles, which I have overcome. However, I am now getting the following error. It seems odd that it occurs after "Terminating the source AWS instance...":

==> amazon-ebs: Terminating the source AWS instance...
==> amazon-ebs: No AMIs to cleanup
==> amazon-ebs: Deleting temporary security group...
==> amazon-ebs: Deleting temporary keypair...
Build 'amazon-ebs' errored: Script exited with non-zero exit status: 2

==> Some builds didn't complete successfully and had errors:
--> amazon-ebs: Script exited with non-zero exit status: 2

==> Builds finished but no artifacts were created.

The packer json is:

{
    "variables": {
        "ansible_staging_directory": "/tmp/packer-provisioner-ansible-local",
        "role": "",
        "aws_base_ami": "",
        "aws_base_instance_type": "",
        "aws_region": "",
        "aws_vpc_id": "",
        "aws_subnet": "",
        "aws_ssh_user": "",
        "company_version": "",
        "additional_aws_users": "{{env `AWS_PROD_ID`}}",        
        "aws_access_key": "<redacted>",
        "aws_secret_key": "<redacted>"
    },
    "builders": [{
        "type": "amazon-ebs",
        "region": "{{user `aws_region`}}",
        "vpc_id": "{{user `aws_vpc_id`}}",
        "subnet_id": "{{user `aws_subnet`}}",
        "source_ami": "{{user `aws_base_ami`}}",
        "instance_type": "{{user `aws_base_instance_type`}}",
        "ssh_username": "{{user `aws_ssh_user`}}",
        "ssh_pty": true,
        "associate_public_ip_address": true,
        "ami_users": "{{user `additional_aws_users`}}",
        "ami_name": "company-{{user `role`}}-{{user `company_version`}} {{timestamp}}",
        "tags": {
            "Role": "company-{{user `role`}}",
            "Version": "{{user `company_version`}}",
            "Timestamp": "{{timestamp}}"
        }
    }],
    "provisioners": [
        {
            "type": "shell",
            "inline": "mkdir -p {{user `ansible_staging_directory`}}"
        },
        {
            "type": "file",
            "source": "../roles",
            "destination": "{{user `ansible_staging_directory`}}"
        },
        {
            "type": "file",
            "source": "../files",
            "destination": "{{user `ansible_staging_directory`}}"
        },
        {
            "type": "shell",
            "inline": [
                "sudo yum -y update",
                "sudo update-alternatives --set python /usr/bin/python2.7",
                "sudo yum -y install emacs",
                "sudo yum -y install telnet",
                "sudo yum -y install epel-release",
                "sudo yum -y install python-pip",
                "sudo yum -y install gcc libffi-devel python-devel openssl-devel",              
                "sudo pip install ansible"
            ]
        },
        {
            "type": "ansible-local",
            "playbook_file": "packer-playbook.yml",
            "group_vars": "../group_vars",
            "extra_arguments": [
                "-t {{user `role`}}",
                "--extra-vars 'aws_access_key={{user `aws_access_key`}} aws_secret_key={{user `aws_secret_key`}}'"
            ]
        }
    ]
}

The base image I am using is ami-d440a6e7.

Any guidance would be greatly appreciated. I have not been able to find any documentation on exit code 2 or anything similar.
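For context, Packer's "Script exited with non-zero exit status: 2" is not a Packer-specific code: it simply relays the exit status of the provisioner script itself, so the 2 comes from whichever command failed inside the shell provisioner. A minimal reproduction:

```shell
#!/bin/sh
# Packer reports the provisioner script's own exit code. Any command
# returning 2 reproduces the "non-zero exit status: 2" message; here a
# subshell stands in for the failing provisioner command.
sh -c 'exit 2'
echo "provisioner exited with status: $?"   # prints: provisioner exited with status: 2
```

Running the build with `PACKER_LOG=1 packer build ...` shows the per-command output that identifies which inline command returned that status.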

UPDATE

I have determined that by removing the line:

"sudo update-alternatives --set python /usr/bin/python2.7",

from the last shell provisioner, Packer seems to complete that step and move on to Ansible. However, Ansible then fails, as the playbook depends on Python 2.7.
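A likely reason that line exits with status 2: on CentOS, `update-alternatives --set` fails when the named link group (`python` here) has never been registered in the alternatives database. One possible fix, if the line turns out to be needed at all (this is an untested assumption on my part, not something I have verified on this AMI), is to register the alternative before setting it, i.e. replace the single `update-alternatives` line in the shell provisioner with:

```json
{
    "type": "shell",
    "inline": [
        "sudo alternatives --install /usr/bin/python python /usr/bin/python2.7 1",
        "sudo alternatives --set python /usr/bin/python2.7"
    ]
}
```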

Upvotes: 4

Views: 14535

Answers (2)

mikecali

Reputation: 1

Thanks for this post. It guided me to solving an almost identical issue when trying to run a script (VMware Tools installation).

The issue was that my cleanup script ran before the vmtools script, which meant some of the files it required were already missing:

vmware-iso: tar: /tmp/VMwareTools-*.tar: Cannot open: No such file or directory
vmware-iso: tar: Error is not recoverable: exiting now

==> vmware-iso: Stopping virtual machine...
==> vmware-iso: Deleting output directory...
Build 'vmware-iso' errored: Script exited with non-zero exit status: 2

Here are my provisioner scripts:

"provisioners": [
    {
        "execute_command": "echo 'rhel' | {{.Vars}} sudo -S -E sh -eux '{{.Path}}'",
        "scripts": [
            "scripts/common/metadata.sh",
            "scripts/centos/networking.sh",
            "scripts/common/sshd.sh",
            "scripts/rhel-ec/rhel-user.sh",
            "scripts/rhel-ec/vmtools.sh",
            "scripts/rhel-ec/cleanup.sh",
            "scripts/centos/cleanup.sh",
            "scripts/common/minimize.sh"
        ],
        "type": "shell"
    }
],

The solution was to reorder my scripts so that vmtools.sh runs before cleanup.sh.

Upvotes: 0

Bert Alfred

Reputation: 511

OK, so it appears that by removing the line

"sudo update-alternatives --set python /usr/bin/python2.7",

I was able to get past the above error, and found that Ansible was failing because a couple of the Python dependencies my playbook checks for are named differently in the CentOS yum repo than in the Amazon Linux repo. For example, python27-setuptools is just python-setuptools, and python27-Cython is just Cython.
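If the same template ever needs to target both base AMIs, the renaming can be sketched as a small shell lookup (the function name and the mapping are illustrative and cover only the two packages named above; anything else passes through unchanged):

```shell
#!/bin/sh
# Hypothetical helper: translate Amazon Linux package names to their
# CentOS/EPEL equivalents before handing them to yum. Only the two
# renames mentioned above are covered.
pkg_for_centos() {
    case "$1" in
        python27-setuptools) echo "python-setuptools" ;;
        python27-Cython)     echo "Cython" ;;
        *)                   echo "$1" ;;
    esac
}

pkg_for_centos python27-setuptools   # prints: python-setuptools
pkg_for_centos python27-Cython      # prints: Cython
pkg_for_centos emacs                # prints: emacs
```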

Thanks all for your help and guidance!

Upvotes: 1
