OpenStack Heat & Ansible: VM spin-up and app deployment


I am spinning up new VMs using an OpenStack Heat template and getting the list of IPs of the newly created VMs. I am using Ansible for this.

I am able to get the list of new IPs from Heat, and I can deploy an app to them using with_items, but only sequentially.
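For illustration, a sequential with_items deployment of this kind might look something like the following sketch (deploy_app.sh and the new_vm_ips variable are placeholder names used only for this example):

- hosts: localhost
  gather_facts: false
  tasks:
    - name: deploy the app to each new VM in turn
      command: ./deploy_app.sh {{ item }}
      with_items: "{{ new_vm_ips }}"

Each iteration of with_items runs to completion before the next one starts, which is why the total time grows with the number of servers.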

How can I run the deployments in parallel with Ansible, so that the total deployment time for "n" servers is the same as for one server?

Accepted answer (larsks):

One option is to create a dynamic inventory script that fetches the instance IPs from Heat and makes them available to Ansible. Consider a Heat template that looks like this:

heat_template_version: 2014-10-16

resources:

  nodes:
    type: OS::Heat::ResourceGroup
    properties:
      count: 3
      resource_def:
        type: node.yaml

outputs:

  nodes:
    value: {get_attr: [nodes, public_ip]}

This will define three Nova instances, where each instance (node.yaml) is defined as:

heat_template_version: 2014-10-16

resources:

  node:
    type: OS::Nova::Server
    properties:
      image: rhel-atomic-20150615
      flavor: m1.small
      key_name: lars
      networks:
        - port: {get_resource: node_eth0}

  node_eth0:
    type: OS::Neutron::Port
    properties:
      network: net0
      security_groups:
        - default
      fixed_ips:
        - subnet: 82d04267-635f-4eec-8211-10e40fcecef0

  node_floating:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: public
      port_id: {get_resource: node_eth0}

outputs:

  public_ip:
    value: {get_attr: [node_floating, floating_ip_address]}

After deploying this stack, we can get a list of the public IPs like this:

$ heat output-show mystack nodes
[
  "172.24.4.234", 
  "172.24.4.233", 
  "172.24.4.238"
]

We can write a simple Python script to implement the dynamic inventory interface:

#!/usr/bin/python

import argparse
import json
import subprocess


def parse_args():
    p = argparse.ArgumentParser()
    p.add_argument('--list', action='store_true')
    p.add_argument('--host')
    return p.parse_args()


def get_hosts():
    # Ask Heat for the "nodes" output of the stack named "mystack",
    # which is a JSON list of floating IP addresses.
    hosts = subprocess.check_output([
        'heat', 'output-show', 'mystack', 'nodes'])

    return json.loads(hosts)


def main():
    args = parse_args()
    hosts = get_hosts()

    if args.list:
        # Ansible calls the script with --list to get the inventory;
        # put all of the addresses into a single "all" group.
        print(json.dumps(dict(all=hosts)))
    elif args.host:
        # We provide no per-host variables.
        print(json.dumps({}))
    else:
        print('Use --list or --host')


if __name__ == '__main__':
    main()
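Note that Ansible only treats an inventory file as a dynamic inventory script if it is executable, so remember to set the execute bit:

$ chmod +x inventory.py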

We can test that out to see that it works:

$ ansible all -i inventory.py -m ping
172.24.4.238 | success >> {
    "changed": false,
    "ping": "pong"
}

172.24.4.234 | success >> {
    "changed": false,
    "ping": "pong"
}

172.24.4.233 | success >> {
    "changed": false,
    "ping": "pong"
}

Assume that we have the following Ansible playbook:

- hosts: all
  gather_facts: false
  tasks:
  - command: sleep 60

This runs the sleep 60 command on each host. If the hosts are handled in parallel, the play should take around one minute; if they are serialized, it should take around three minutes.

Testing things out:

$ time ansible-playbook -i inventory.py playbook.yaml
PLAY [all] ******************************************************************** 

TASK: [command sleep 60] ****************************************************** 
changed: [172.24.4.233]
changed: [172.24.4.234]
changed: [172.24.4.238]

PLAY RECAP ******************************************************************** 
172.24.4.233               : ok=1    changed=1    unreachable=0    failed=0   
172.24.4.234               : ok=1    changed=1    unreachable=0    failed=0   
172.24.4.238               : ok=1    changed=1    unreachable=0    failed=0   


real    1m5.141s
user    0m1.771s
sys 0m0.302s

As you can see, the command is executing in parallel on all three hosts, which is the behavior you were looking for (but be aware of this thread, which describes situations in which Ansible will serialize things without telling you).
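Also keep in mind that Ansible only contacts as many hosts in parallel as its forks setting allows (the default is 5), so for larger groups of servers you will want to raise it, either on the command line:

$ ansible-playbook -i inventory.py --forks 20 playbook.yaml

or by setting forks = 20 in the [defaults] section of ansible.cfg.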