adele dazim

Reputation: 537

Error trying to change the root volume_size for an EC2 instance using Ansible

I am trying to create an EC2 instance with an Ansible playbook and set the root volume size in the same playbook. It works fine without the volumes variable included, of course, but I want to set a different default size for the root volume.

My playbook looks like this:

# Use the ec2 module to create a new host and then add
# it to a special "ec2hosts" group.

- hosts: localhost
  connection: local
  gather_facts: False
  vars:
    instance_type: "t2.micro"
    image: "ami-1420b57c"
    region: "us-east-1"
    volumes:
    - device_name: /dev/xvda
      volume_size: 10

  tasks:
    - name: make one instance
      ec2: image="{{ image }}"
           instance_type="{{ instance_type }}"
           keypair="{{ keypair }}"
           region="{{ region }}"
           group="{{ group }}"
           volumes="{{ volumes }}"
           instance_tags='{"Name":"{{instance_name}}"}'
           wait=true
      register: ec2_host

    - debug: var=ec2_host
    - debug: var=item
      with_items: ec2_host.instance_ids

    - add_host: hostname={{ item.public_ip }} groupname=ec2hosts
      with_items: ec2_host.instances

And when I run the playbook with the following command, I get the error below.

ansible-playbook ec2-simple.yml -e "instance_name=testnode keypair=mykeypair group=testgroup"

PLAY [localhost] ************************************************************** 

TASK: [make one instance] ***************************************************** 
failed: [localhost] => {"failed": true}
msg: Device name must be set for volume

FATAL: all hosts have already failed -- aborting

I've tried alternatives, with and without quotes, and so on. Nothing works, and sometimes I get a different error.

I'm using Ansible 1.7.2 on a Mac.

Upvotes: 4

Views: 4755

Answers (3)

Alex Lynham

Reputation: 1318

If you're provisioning an Ubuntu box, you might expect an xvda1 volume name, but if you go through the AWS launch wizard you'll see that AWS names volumes sda, sdb, etc. by default. You can use those names instead, knowing that on your running instance they will map to their Ubuntu equivalents (e.g. sda1 => xvda1, sdb => xvdb, etc.).

The volumes declaration I use to mount four volumes (the root volume, plus xvdb, xvdc, xvdd) looks like this:

- name: Bootstrap EC2 instance and volumes
  ec2:
    key_name: "{{ ec2_keypair }}"
    group_id: foo
    instance_type: "{{ aws_instance_size }}"
    image: bar
    wait: yes
    wait_timeout: 500
    region: eu-west-1
    volumes:
      - device_name: /dev/sda1
        volume_size: 24
        delete_on_termination: true
      - device_name: /dev/sdb
        volume_size: 24
      - device_name: /dev/sdc
        volume_size: 300
      - device_name: /dev/sdd
        volume_size: 10
    monitoring: no
  register: new_ec2
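
To sanity-check that the volumes actually attached as requested, a quick debug task can dump the registered result. This is just a minimal sketch using the new_ec2 register name from the task above:

- name: Show what the ec2 module returned
  debug:
    var: new_ec2.instances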

Upvotes: 0

Arbab Nazar

Reputation: 23811

Here is my complete EC2 launch-instance playbook (a working example); I hope this helps you or anyone else who needs it:

This playbook will launch EC2 instance(s) with the variables that you have defined in the playbook, and it automatically adds the launched instances' public IPs to the hosts file under the group [ec2host]. It assumes that the hosts file is in the directory from which you run the playbook.

---
  - name: Provision an EC2 Instance
    hosts: local
    connection: local
    gather_facts: False
    tags: provisioning
    # Necessary Variables for creating/provisioning the EC2 Instance
    vars:
      instance_type: t1.micro
      security_group: ec2host
      image: ami-98aa1cf0
      region: us-east-1
      keypair: ansible
      volumes:
        - device_name: /dev/xvda
          volume_size: 10
      count: 1

    # Task that will be used to Launch/Create an EC2 Instance
    tasks:

      - name: Create a security group
        local_action: 
          module: ec2_group
          name: "{{ security_group }}"
          description: Security Group for ec2host Servers
          region: "{{ region }}"
          rules:
            - proto: tcp
              type: ssh
              from_port: 22
              to_port: 22
              cidr_ip: 0.0.0.0/0
            - proto: tcp
              from_port: 6800
              to_port: 6800
              cidr_ip: 0.0.0.0/0
          rules_egress:
            - proto: all
              type: all
              cidr_ip: 0.0.0.0/0


      - name: Launch the new EC2 Instance
        local_action: ec2
                      group={{ security_group }}
                      instance_type={{ instance_type }}
                      image={{ image }}
                      wait=true
                      region={{ region }}
                      keypair={{ keypair }}
                      volumes={{ volumes }}
                      count={{ count }}
        register: ec2

      - name: Add the newly created EC2 instance(s) to the local host group (located inside the directory)
        local_action: lineinfile 
                      dest="./hosts" 
                      regexp={{ item.public_ip }} 
                      insertafter="[ec2host]" line={{ item.public_ip }}
        with_items: ec2.instances


      - name: Wait for SSH to come up
        local_action: wait_for 
                      host={{ item.public_ip }} 
                      port=22 
                      state=started
        with_items: ec2.instances

      - name: Add tag to Instance(s)
        local_action: ec2_tag resource={{ item.id }} region={{ region }} state=present
        with_items: ec2.instances
        args:
          tags:
            Name: ec2host

The hosts file will look like this:

[local]
localhost

[ec2host]

Please use the following command to run this playbook:

ansible-playbook -i hosts ec2_launch.yml

Here "ec2_launch.yml" is the name of the playbook that we have defined above.

Upvotes: 0

300D7309EF17

Reputation: 24653

I got interested in answering this because you put in a (nearly) fully working example. I copied it locally, made small changes so it would work in my AWS account, and iterated to figure out the solution.

I suspected a YAML+Ansible problem. I tried a bunch of things and looked around, and it turns out Michael DeHaan (the creator of Ansible) has said the complex argument/module style is required here, as seen in the ec2 module examples. Here's how the task looks now; there are no changes elsewhere.

  tasks:
    - name: make one instance
      local_action:
        module: ec2
        image: "{{ image }}"
        instance_type: "{{ instance_type }}"
        keypair: "{{ keypair }}"
        region: "{{ region }}"
        group: "{{ group }}"
        volumes: "{{ volumes }}"
        instance_tags: '{"Name":"{{ instance_name }}"}'
        wait: true
      register: ec2_host

After converting, it worked, or at least got to the next error, which happens because a t2.micro instance must live in a VPC (error below). I expect you can solve that; if not, leave a comment and I'll get it fully working. A sketch of one possible fix follows the error output.

TASK: [make one instance] ***************************************************** 
<127.0.0.1> REMOTE_MODULE ec2 region=us-east-1 keypair=mykey instance_type=t2.micro image=ami-1420b57c group=default
failed: [127.0.0.1 -> 127.0.0.1] => {"failed": true}
msg: Instance creation failed => VPCResourceNotSpecified: The specified instance type can only be used in a VPC. A subnet ID or network interface ID is required to carry out the request.
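
For completeness, one way past that error is to point the module at a subnet in your VPC. This is a minimal sketch, not tested against the original playbook: the subnet ID is a placeholder you'd replace with one from your own account, and it relies on the ec2 module's vpc_subnet_id and assign_public_ip parameters.

  tasks:
    - name: make one instance inside a VPC
      local_action:
        module: ec2
        image: "{{ image }}"
        instance_type: "{{ instance_type }}"
        keypair: "{{ keypair }}"
        region: "{{ region }}"
        group: "{{ group }}"
        volumes: "{{ volumes }}"
        # placeholder subnet ID - substitute a real subnet from your VPC
        vpc_subnet_id: subnet-abc12345
        # request a public IP so the add_host task can still reach the box
        assign_public_ip: yes
        instance_tags: '{"Name":"{{ instance_name }}"}'
        wait: true
      register: ec2_host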

Upvotes: 2
