Commit Graph

557 Commits (e79e0240165bf2aa77612be2f1227ca7bb3c5fc7)

Author SHA1 Message Date
David Shrewsbury 2dddfbe67c Add shade version check to os_flavor_facts
The range_search() API was added to the shade library in version
1.5.0 so let's check for that and let the user know they need to
upgrade if they try to use it.
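A minimal sketch of the kind of version guard described above, assuming shade exposes a __version__ string and an AnsibleModule-style fail_json(); the helper name and message are illustrative, not the module's actual code:
```
from distutils.version import StrictVersion

import shade

def check_shade_version(module, minimum='1.5.0'):
    # range_search() only exists in shade >= 1.5.0; fail early with a clear
    # message instead of hitting an AttributeError later.
    # Assumes shade.__version__ is a plain version string.
    if StrictVersion(shade.__version__) < StrictVersion(minimum):
        module.fail_json(msg='shade >= %s is required for this search; '
                             'please upgrade shade' % minimum)
```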
2016-12-08 11:34:00 -05:00
Brian Coca 41af347d8d renamed sl to sl_vm and updated docs
The namespace for SoftLayer modules should now be sl_.
2016-12-08 11:33:59 -05:00
Matt Colton 8f444b8c4b Added Softlayer Module 2016-12-08 11:33:59 -05:00
Julia Kreger 011267c04e Add os_ironic_inspect module
Adds an os_ironic_inspect module to leverage the OpenStack
Baremetal Inspector add-on to ironic, or an ironic driver's
out-of-band hardware introspection, if supported and configured.
2016-12-08 11:33:59 -05:00
Ricardo Carrillo Cruz ba3515bc30 Allow passing domain name on os_project 2016-12-08 11:33:59 -05:00
Rene Moser 7a28ad63f7 dynamodb_table: doc fix 2016-12-08 11:33:59 -05:00
Matt Ferrante 99c8e82b60 DynamoDB indexes 2016-12-08 11:33:59 -05:00
Casey Lucas 4cd9933388 fix edge case where boto returns empty list after subnet creation 2016-12-08 11:33:59 -05:00
Dennis Conrad 0254cbad9a Fix for existing ENIs w/ multiple security groups
Do a sorted comparison of the list of security groups supplied via `module.params.get('security_groups')` and the list of security groups fetched via `get_sec_group_list(eni.groups)`.  This fixes an incorrect "The specified address is already in use" error if the order of security groups in those lists differ.
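A minimal sketch of the order-insensitive comparison described above; the function and variable names are illustrative, not the module's actual code:
```
# Hedged illustration of an order-insensitive security-group comparison.
def groups_differ(requested_group_ids, attached_group_ids):
    # Compare as sorted lists so ['sg-a', 'sg-b'] and ['sg-b', 'sg-a'] are
    # treated as the same set of security groups.
    return sorted(requested_group_ids) != sorted(attached_group_ids)

# Example: no difference despite different ordering.
assert not groups_differ(['sg-1111', 'sg-2222'], ['sg-2222', 'sg-1111'])
```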
2016-12-08 11:33:58 -05:00
Rob White 7b0b4262e5 Allow SNS topics to be created without subscriptions. Also added better error handling around boto calls. 2016-12-08 11:33:58 -05:00
Fernando J Pando be083a8fbe author added 2016-12-08 11:33:58 -05:00
Fernando J Pando 6d69956f83 Fix SNS topic attribute typo
Enables adding SNS topic policy. 'Policy' attribute is capitalized.
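A hedged sketch of setting the topic policy under the capitalized 'Policy' attribute with boto; the region, ARN, and policy document are illustrative, not the module's actual values:
```
import json

import boto.sns

conn = boto.sns.connect_to_region('us-east-1')
topic_arn = 'arn:aws:sns:us-east-1:123456789012:example-topic'  # illustrative ARN
policy_doc = {'Version': '2012-10-17', 'Statement': []}          # illustrative policy

# The attribute name must be 'Policy' (capitalized), as noted above.
conn.set_topic_attributes(topic_arn, 'Policy', json.dumps(policy_doc))
```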
2016-12-08 11:33:58 -05:00
Joel Thompson 61672e5c61 Ensure ec2_win_password doesn't leak file handle
Currently the module doesn't explicitly close the file handle. This
wraps the reading of the private key in a try/finally block to ensure
the file is properly closed.
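A minimal sketch of the try/finally pattern described above; the path and variable names are illustrative, not the module's actual code (a `with` block achieves the same effect):
```
key_file_path = '/path/to/private_key.pem'  # illustrative path

f = open(key_file_path, 'rb')
try:
    private_key_data = f.read()
finally:
    # Always close the handle, even if read() raises.
    f.close()
```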
2016-12-08 11:33:58 -05:00
Rene Moser b92b30e3b3 ec2_vpc_dhcp_options: doc fix, add version_added to new args
See #1640
2016-12-08 11:33:58 -05:00
Andy Nelson 5718a5caac Updated ec2_vpc_dhcp_options 2016-12-08 11:33:57 -05:00
Darek Kaczyński 9e918b5955 Removed debug return values 2016-12-08 11:33:57 -05:00
Darek Kaczyński 7127a45d96 ecs_service now compares the whole model and updates it if any difference is found. Documentation #1483. Workaround for datetime fields #1348. 2016-12-08 11:33:57 -05:00
Darek Kaczyński 9b27ed6c5d ecs_service_facts documentation fixes #1483. Workaround for datetime fields #1348. 2016-12-08 11:33:57 -05:00
Alex Kalinin e97ca89953 Fix vmware_portgroup throwing an error if port group already exists 2016-12-08 11:33:57 -05:00
Toshio Kuratomi 5b84102a15 Doc fixes 2016-12-08 11:33:56 -05:00
Gabriel Burkholder 6a202054f8 Fixes route53_facts to use max_items parameter with record_sets query. 2016-12-08 11:33:56 -05:00
nonshankus a1fdff4c97 Adding missing attributes regarding the hosted zone. 2016-12-08 11:33:56 -05:00
David Shrewsbury cd2c7deec4 Add os_group.py OpenStack module
Allows an admin (or privileged user) to manage Keystone v3
groups.
2016-12-08 11:33:56 -05:00
David Shrewsbury e25c04aeb0 Add new os_flavor_facts.py module
New module to retrieve facts about existing instance flavors.
By default, facts on all available flavors will be returned.
This can be narrowed by naming a flavor or specifying criteria
about flavor RAM or VCPUs.
2016-12-08 11:33:56 -05:00
David Shrewsbury b697d986c1 Add new os_keystone_role module.
This new module allows for creating and deleting Keystone
roles.
2016-12-08 11:33:56 -05:00
Rene Moser fd68e66827 cloudstack: new module cs_zone_facts 2016-12-08 11:33:55 -05:00
Ritesh Khadgaray 06d2682b08 Fix test failure for lxc_container
TRACE:
    while parsing a block mapping
      in "<string>", line 33, column 13:
                    description: resulting state of  ...
                    ^
    expected <block end>, but found ','
      in "lxc_container.RETURN", line 419, column 53:
         ... "/tmp/test-container-config.tar",

ERROR: RETURN is not valid YAML. Line 419 column 53
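A hedged sketch of a quick local check that a module's RETURN docstring is valid YAML; the file path and the regex extraction are illustrative and assume RETURN is defined as a triple-quoted string:
```
import re

import yaml

source = open('cloud/lxc/lxc_container.py').read()  # illustrative path
match = re.search(r"RETURN\s*=\s*(?:r)?('''|\"\"\")(.*?)\1", source, re.S)

try:
    yaml.safe_load(match.group(2))
    print('RETURN parses as YAML')
except yaml.YAMLError as exc:
    # This is the class of error reported in the trace above.
    print('RETURN is not valid YAML: %s' % exc)
```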
2016-12-08 11:33:54 -05:00
Brian Coca bb355e6ccd add container name to return and document return
fixes #1848
2016-12-08 11:33:54 -05:00
liquidat 87bc5fcb24 remove legacy action style from examples
- "action" style invoking is a legacy way to call modules
- the examples were updated to the typical style of calling complex
  modules:

ovirt:
  parameter1: value1
  parameter2: value2
  ...
2016-12-08 11:33:53 -05:00
Ricardo Carrillo Cruz 9fea94b5bf Fix instantiation of openstack_cloud object in os_project
The os_project module instantiates the openstack cloud object
by passing the module params as kwargs.
As the params contain a key named 'domain_id', it is used as the
domain for the OpenStack connection, instead of the domain value
the user specifies in the OSCC clouds.yaml or the OpenStack
environment variables.
This fix corrects that by popping the 'domain_id' key, so we keep
its value but it is no longer passed on via module.params.
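A minimal sketch of that fix; the helper and variable names are illustrative, not the module's exact code:
```
def extract_domain_id(module_params):
    # Remove 'domain_id' so it is not forwarded to the cloud connection as a
    # generic kwarg (where it would override the configured domain), while
    # keeping its value for the project create/update call.
    params = dict(module_params)
    domain_id = params.pop('domain_id', None)
    return domain_id, params

domain_id, cloud_kwargs = extract_domain_id({'name': 'demo', 'domain_id': 'default'})
assert 'domain_id' not in cloud_kwargs and domain_id == 'default'
```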
2016-12-08 11:33:53 -05:00
Darek Kaczyński 17a6cea512 ecs_task module documentation fixes 2016-12-08 11:33:53 -05:00
Joseph Callen 773db55233 Resolves issue with vmware_migrate_vmk module for v2.0
When this module was written back in May 2015 we were using 1.9.x. Being lazy, I added the objects that the other functions would need to the module params. What I have noticed is that in 2.0, exit_json tries to jsonify those complex objects and fails. This PR resolves that issue for the vmware_migrate_vmk module.
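A hedged sketch of the general workaround: keep pyVmomi managed objects internal and hand exit_json() only JSON-serializable values. The helper and variable names are illustrative, not the module's actual code:
```
def exit_with_result(module, changed, host_system):
    # str(host_system) yields something like "'vim.HostSystem:host-15'",
    # matching the "result" values in the testing output below, and is safe
    # to serialize, unlike the managed object itself.
    module.exit_json(changed=changed, result=str(host_system))
```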

@kamsz reported this issue in https://github.com/ansible/ansible-modules-extras/pull/1568

Playbook
```
    - name: Migrate Management vmk
      local_action:
        module: vmware_migrate_vmk
        hostname: "{{ mgmt_ip_address }}"
        username: "{{ vcsa_user }}"
        password: "{{ vcsa_pass }}"
        esxi_hostname: "{{ hostvars[item].hostname }}"
        device: vmk1
        current_switch_name: temp_vswitch
        current_portgroup_name: esx-mgmt
        migrate_switch_name: dvSwitch
        migrate_portgroup_name: Management
      with_items: groups['foundation_esxi']
```

Module Testing
```
TASK [Migrate Management vmk] **************************************************
task path: /opt/autodeploy/projects/emmet/tasks/deploy/migrate_vmk.yml:3
ESTABLISH LOCAL CONNECTION FOR USER: root
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454695485.85-245405603184252 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454695485.85-245405603184252 )" )
localhost PUT /tmp/tmpdlhr6t TO /root/.ansible/tmp/ansible-tmp-1454695485.85-245405603184252/vmware_migrate_vmk
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454695485.85-245405603184252/vmware_migrate_vmk; rm -rf "/root/.ansible/tmp/ansible-tmp-1454695485.85-245405603184252/" > /dev/null 2>&1
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454695490.35-143738865490168 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454695490.35-143738865490168 )" )
localhost PUT /tmp/tmpqfZqh1 TO /root/.ansible/tmp/ansible-tmp-1454695490.35-143738865490168/vmware_migrate_vmk
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454695490.35-143738865490168/vmware_migrate_vmk; rm -rf "/root/.ansible/tmp/ansible-tmp-1454695490.35-143738865490168/" > /dev/null 2>&1
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454695491.96-124154332968882 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454695491.96-124154332968882 )" )
localhost PUT /tmp/tmpf3rKZq TO /root/.ansible/tmp/ansible-tmp-1454695491.96-124154332968882/vmware_migrate_vmk
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454695491.96-124154332968882/vmware_migrate_vmk; rm -rf "/root/.ansible/tmp/ansible-tmp-1454695491.96-124154332968882/" > /dev/null 2>&1
ok: [foundation-vcsa -> localhost] => (item=foundation-esxi-01) => {"changed": false, "invocation": {"module_args": {"current_portgroup_name": "esx-mgmt", "current_switch_name": "temp_vswitch", "device": "vmk1", "esxi_hostname": "cscesxtmp001", "hostname": "172.27.0.100", "migrate_portgroup_name": "Management", "migrate_switch_name": "dvSwitch", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "username": "root"}, "module_name": "vmware_migrate_vmk"}, "item": "foundation-esxi-01"}
ok: [foundation-vcsa -> localhost] => (item=foundation-esxi-02) => {"changed": false, "invocation": {"module_args": {"current_portgroup_name": "esx-mgmt", "current_switch_name": "temp_vswitch", "device": "vmk1", "esxi_hostname": "cscesxtmp002", "hostname": "172.27.0.100", "migrate_portgroup_name": "Management", "migrate_switch_name": "dvSwitch", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "username": "root"}, "module_name": "vmware_migrate_vmk"}, "item": "foundation-esxi-02"}
ok: [foundation-vcsa -> localhost] => (item=foundation-esxi-03) => {"changed": false, "invocation": {"module_args": {"current_portgroup_name": "esx-mgmt", "current_switch_name": "temp_vswitch", "device": "vmk1", "esxi_hostname": "cscesxtmp003", "hostname": "172.27.0.100", "migrate_portgroup_name": "Management", "migrate_switch_name": "dvSwitch", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "username": "root"}, "module_name": "vmware_migrate_vmk"}, "item": "foundation-esxi-03"}
```
2016-12-08 11:33:53 -05:00
Joseph Callen eece1346ab missing doc fragment 2016-12-08 11:33:53 -05:00
Joseph Callen 3721a4647c Resolves issue with vmware_vm_vss_dvs_migrate module for v2.0
When this module was written back in May 2015 we were using 1.9.x. Being lazy, I added the objects that the other functions would need to the module params. What I have noticed is that in 2.0, exit_json tries to jsonify those complex objects and fails. This PR resolves that issue for the vmware_vm_vss_dvs_migrate module.

@kamsz reported this issue in https://github.com/ansible/ansible-modules-extras/pull/1568

Playbook
```
    - name: Migrate VCSA to vDS
      local_action:
        module: vmware_vm_vss_dvs_migrate
        hostname: "{{ mgmt_ip_address }}"
        username: "{{ vcsa_user }}"
        password: "{{ vcsa_pass }}"
        vm_name: "{{ hostname }}"
        dvportgroup_name: Management
```

Module Testing
```
TASK [Migrate VCSA to vDS] *****************************************************
task path: /opt/autodeploy/projects/emmet/site_deploy.yml:260
ESTABLISH LOCAL CONNECTION FOR USER: root
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454695546.3-207189190861859 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454695546.3-207189190861859 )" )
localhost PUT /tmp/tmpkzD4pF TO /root/.ansible/tmp/ansible-tmp-1454695546.3-207189190861859/vmware_vm_vss_dvs_migrate
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454695546.3-207189190861859/vmware_vm_vss_dvs_migrate; rm -rf "/root/.ansible/tmp/ansible-tmp-1454695546.3-207189190861859/" > /dev/null 2>&1
changed: [foundation-vcsa -> localhost] => {"changed": true, "invocation": {"module_args": {"dvportgroup_name": "Management", "hostname": "172.27.0.100", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "username": "root", "vm_name": "cscvcatmp001"}, "module_name": "vmware_vm_vss_dvs_migrate"}, "result": null}

```
2016-12-08 11:33:53 -05:00
Joseph Callen cef9e42896 Resolves issue with vmware_host module for v2.0
When this module was written back in May 2015 we were using 1.9.x. Being lazy, I added the objects that the other functions would need to the module params. What I have noticed is that in 2.0, exit_json tries to jsonify those complex objects and fails. This PR resolves that issue for the vmware_host module.

@kamsz reported this issue in https://github.com/ansible/ansible-modules-extras/pull/1568

Playbook
```
    - name: Add Host
      local_action:
        module: vmware_host
        hostname: "{{ mgmt_ip_address }}"
        username: "{{ vcsa_user }}"
        password: "{{ vcsa_pass }}"
        datacenter_name: "{{ mgmt_vdc }}"
        cluster_name: "{{ mgmt_cluster }}"
        esxi_hostname: "{{ hostvars[item].hostname }}"
        esxi_username: "{{ esxi_username }}"
        esxi_password: "{{ site_passwd }}"
        state: present
      with_items: groups['foundation_esxi']
```

Module Testing
```
TASK [Add Host] ****************************************************************
task path: /opt/autodeploy/projects/emmet/site_deploy.yml:214
ESTABLISH LOCAL CONNECTION FOR USER: root
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693866.1-87710459703937 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693866.1-87710459703937 )" )
localhost PUT /tmp/tmppmr9i9 TO /root/.ansible/tmp/ansible-tmp-1454693866.1-87710459703937/vmware_host
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454693866.1-87710459703937/vmware_host; rm -rf "/root/.ansible/tmp/ansible-tmp-1454693866.1-87710459703937/" > /dev/null 2>&1
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693943.8-75870536677834 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693943.8-75870536677834 )" )
localhost PUT /tmp/tmpVB81f2 TO /root/.ansible/tmp/ansible-tmp-1454693943.8-75870536677834/vmware_host
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454693943.8-75870536677834/vmware_host; rm -rf "/root/.ansible/tmp/ansible-tmp-1454693943.8-75870536677834/" > /dev/null 2>&1
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693991.56-163414752982563 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693991.56-163414752982563 )" )
localhost PUT /tmp/tmpFB7VQB TO /root/.ansible/tmp/ansible-tmp-1454693991.56-163414752982563/vmware_host
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454693991.56-163414752982563/vmware_host; rm -rf "/root/.ansible/tmp/ansible-tmp-1454693991.56-163414752982563/" > /dev/null 2>&1
changed: [foundation-vcsa -> localhost] => (item=foundation-esxi-01) => {"changed": true, "invocation": {"module_args": {"cluster_name": "Foundation", "datacenter_name": "Test-Lab", "esxi_hostname": "cscesxtmp001", "esxi_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "esxi_username": "root", "hostname": "172.27.0.100", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "state": "present", "username": "root"}, "module_name": "vmware_host"}, "item": "foundation-esxi-01", "result": "'vim.HostSystem:host-15'"}
changed: [foundation-vcsa -> localhost] => (item=foundation-esxi-02) => {"changed": true, "invocation": {"module_args": {"cluster_name": "Foundation", "datacenter_name": "Test-Lab", "esxi_hostname": "cscesxtmp002", "esxi_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "esxi_username": "root", "hostname": "172.27.0.100", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "state": "present", "username": "root"}, "module_name": "vmware_host"}, "item": "foundation-esxi-02", "result": "'vim.HostSystem:host-20'"}
changed: [foundation-vcsa -> localhost] => (item=foundation-esxi-03) => {"changed": true, "invocation": {"module_args": {"cluster_name": "Foundation", "datacenter_name": "Test-Lab", "esxi_hostname": "cscesxtmp003", "esxi_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "esxi_username": "root", "hostname": "172.27.0.100", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "state": "present", "username": "root"}, "module_name": "vmware_host"}, "item": "foundation-esxi-03", "result": "'vim.HostSystem:host-21'"}

```
2016-12-08 11:33:53 -05:00
Joseph Callen fb3bb746b2 Resolves issue with vmware_dvs_portgroup module for v2.0
When this module was written back in May 2015 we were using 1.9.x. Being lazy, I added the objects that the other functions would need to the module params. What I have noticed is that in 2.0, exit_json tries to jsonify those complex objects and fails. This PR resolves that issue for the vmware_dvs_portgroup module.

@kamsz reported this issue in https://github.com/ansible/ansible-modules-extras/pull/1568

Playbook
```
    - name: Create Management portgroup
      local_action:
        module: vmware_dvs_portgroup
        hostname: "{{ mgmt_ip_address }}"
        username: "{{ vcsa_user }}"
        password: "{{ vcsa_pass }}"
        portgroup_name: Management
        switch_name: dvSwitch
        vlan_id: "{{ hostvars[groups['foundation_esxi'][0]].mgmt_vlan_id }}"
        num_ports: 120
        portgroup_type: earlyBinding
        state: present
```

Module Testing
```
TASK [Create Management portgroup] *********************************************
task path: /opt/autodeploy/projects/emmet/tasks/deploy/dvs_network.yml:17
ESTABLISH LOCAL CONNECTION FOR USER: root
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693809.13-142252676354410 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693809.13-142252676354410 )" )
localhost PUT /tmp/tmpeQ8M1U TO /root/.ansible/tmp/ansible-tmp-1454693809.13-142252676354410/vmware_dvs_portgroup
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454693809.13-142252676354410/vmware_dvs_portgroup; rm -rf "/root/.ansible/tmp/ansible-tmp-1454693809.13-142252676354410/" > /dev/null 2>&1
changed: [foundation-vcsa -> localhost] => {"changed": true, "invocation": {"module_args": {"hostname": "172.27.0.100", "num_ports": 120, "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "portgroup_name": "Management", "portgroup_type": "earlyBinding", "state": "present", "switch_name": "dvSwitch", "username": "root", "vlan_id": 2700}, "module_name": "vmware_dvs_portgroup"}, "result": "None"}
```
2016-12-08 11:33:52 -05:00
Joseph Callen 9bdc59a6ac Resolves issue with vmware_cluster module for v2.0
When this module was written back in May 2015 we were using 1.9.x. Being lazy, I added the objects that the other functions would need to the module params. What I have noticed is that in 2.0, exit_json tries to jsonify those complex objects and fails. This PR resolves that issue for the vmware_cluster module.

@kamsz reported this issue in https://github.com/ansible/ansible-modules-extras/pull/1568

Playbook
```
    - name: Create Cluster
      local_action:
        module: vmware_cluster
        hostname: "{{ mgmt_ip_address }}"
        username: "{{ vcsa_user }}"
        password: "{{ vcsa_pass }}"
        datacenter_name: "{{ mgmt_vdc }}"
        cluster_name: "{{ mgmt_cluster }}"
        enable_ha: True
        enable_drs: True
        enable_vsan: True
```

Module testing
```
TASK [Create Cluster] **********************************************************
task path: /opt/autodeploy/projects/emmet/site_deploy.yml:188
ESTABLISH LOCAL CONNECTION FOR USER: root
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693788.92-14097560271233 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693788.92-14097560271233 )" )
localhost PUT /tmp/tmpAJfdPb TO /root/.ansible/tmp/ansible-tmp-1454693788.92-14097560271233/vmware_cluster
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454693788.92-14097560271233/vmware_cluster; rm -rf "/root/.ansible/tmp/ansible-tmp-1454693788.92-14097560271233/" > /dev/null 2>&1
changed: [foundation-vcsa -> localhost] => {"changed": true, "invocation": {"module_args": {"cluster_name": "Foundation", "datacenter_name": "Test-Lab", "enable_drs": true, "enable_ha": true, "enable_vsan": true, "hostname": "172.27.0.100", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "state": "present", "username": "root"}, "module_name": "vmware_cluster"}}
```
2016-12-08 11:33:52 -05:00
Joseph Callen 0aa4f867de Resolves issue with vmware_dvs_host module for v2.0
When this module was written back in May 2015 we were using 1.9.x. Being lazy, I added the objects that the other functions would need to the module params. What I have noticed is that in 2.0, exit_json tries to jsonify those complex objects and fails. This PR resolves that issue for the vmware_dvs_host module.

@kamsz reported this issue in https://github.com/ansible/ansible-modules-extras/pull/1568

Playbook
```
    - name: Add Host to dVS
      local_action:
        module: vmware_dvs_host
        hostname: "{{ mgmt_ip_address }}"
        username: "{{ vcsa_user }}"
        password: "{{ vcsa_pass }}"
        esxi_hostname: "{{ hostvars[item].hostname }}"
        switch_name: dvSwitch
        vmnics: "{{ dvs_vmnic }}"
        state: present
      with_items: groups['foundation_esxi']
```
Module Testing
```
TASK [Add Host to dVS] *********************************************************
task path: /opt/autodeploy/projects/emmet/site_deploy.yml:234
ESTABLISH LOCAL CONNECTION FOR USER: root
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454694039.6-259977654985844 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454694039.6-259977654985844 )" )
localhost PUT /tmp/tmpGrHqbd TO /root/.ansible/tmp/ansible-tmp-1454694039.6-259977654985844/vmware_dvs_host
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454694039.6-259977654985844/vmware_dvs_host; rm -rf "/root/.ansible/tmp/ansible-tmp-1454694039.6-259977654985844/" > /dev/null 2>&1
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454694058.76-121920794239796 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454694058.76-121920794239796 )" )
localhost PUT /tmp/tmpkP7DPu TO /root/.ansible/tmp/ansible-tmp-1454694058.76-121920794239796/vmware_dvs_host
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454694058.76-121920794239796/vmware_dvs_host; rm -rf "/root/.ansible/tmp/ansible-tmp-1454694058.76-121920794239796/" > /dev/null 2>&1
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454694090.2-33641188152663 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454694090.2-33641188152663 )" )
localhost PUT /tmp/tmp216NwV TO /root/.ansible/tmp/ansible-tmp-1454694090.2-33641188152663/vmware_dvs_host
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454694090.2-33641188152663/vmware_dvs_host; rm -rf "/root/.ansible/tmp/ansible-tmp-1454694090.2-33641188152663/" > /dev/null 2>&1
changed: [foundation-vcsa -> localhost] => (item=foundation-esxi-01) => {"changed": true, "invocation": {"module_args": {"esxi_hostname": "cscesxtmp001", "hostname": "172.27.0.100", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "state": "present", "switch_name": "dvSwitch", "username": "root", "vmnics": ["vmnic2"]}, "module_name": "vmware_dvs_host"}, "item": "foundation-esxi-01", "result": "None"}
changed: [foundation-vcsa -> localhost] => (item=foundation-esxi-02) => {"changed": true, "invocation": {"module_args": {"esxi_hostname": "cscesxtmp002", "hostname": "172.27.0.100", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "state": "present", "switch_name": "dvSwitch", "username": "root", "vmnics": ["vmnic2"]}, "module_name": "vmware_dvs_host"}, "item": "foundation-esxi-02", "result": "None"}
changed: [foundation-vcsa -> localhost] => (item=foundation-esxi-03) => {"changed": true, "invocation": {"module_args": {"esxi_hostname": "cscesxtmp003", "hostname": "172.27.0.100", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "state": "present", "switch_name": "dvSwitch", "username": "root", "vmnics": ["vmnic2"]}, "module_name": "vmware_dvs_host"}, "item": "foundation-esxi-03", "result": "None"}
```
2016-12-08 11:33:52 -05:00
Rene Moser fb4c299f13 cloudstack: new module cs_zone 2016-12-08 11:33:52 -05:00
Rene Moser 3a6fd536ab cloudstack: new module cs_cluster 2016-12-08 11:33:52 -05:00
Rene Moser 2b21212dc6 cloudstack: new module cs_pod 2016-12-08 11:33:52 -05:00
Rene Moser 7d1a4db9ee cloudstack: new module cs_instance_facts 2016-12-08 11:33:52 -05:00
Rene Moser b609250cfd cloudstack: add new module cs_resourcelimit 2016-12-08 11:33:52 -05:00
Rene Moser 595eb1f8f1 cloudstack: new module cs_configuration 2016-12-08 11:33:52 -05:00
Matt Martz 4842758fd1 Choices should be a list of true/false not the string BOOLEANS 2016-12-08 11:33:51 -05:00
Matt Martz 402a996430 Don't call sys.exit in sns_topic, use HAS_BOTO to fail 2016-12-08 11:33:51 -05:00
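A minimal sketch of the import-guard pattern referenced in the commit above, which reports a missing boto through fail_json() instead of calling sys.exit() at import time; the trimmed argument_spec is illustrative:
```
from ansible.module_utils.basic import AnsibleModule

try:
    import boto.sns  # noqa: F401
    HAS_BOTO = True
except ImportError:
    HAS_BOTO = False

def main():
    module = AnsibleModule(argument_spec=dict())  # argument_spec trimmed for brevity
    if not HAS_BOTO:
        # Fail cleanly through Ansible rather than exiting the interpreter.
        module.fail_json(msg='boto is required for this module')

if __name__ == '__main__':
    main()
```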
Matt Martz 27be34ef9d DOCUMENTATION fixes for a few modules 2016-12-08 11:33:51 -05:00
Joseph Callen 9ab5b367bd Resolves issue with vmware_dvswitch module for v2.0
When this module was written back in May 2015 we were using 1.9.x. Being lazy, I added the objects that the other functions would need to the module params. What I have noticed is that in 2.0, exit_json tries to jsonify those complex objects and fails. This PR resolves that issue for the vmware_dvswitch module.

@kamsz reported this issue in https://github.com/ansible/ansible-modules-extras/pull/1568

Playbook
```
    - name: Create dvswitch
      local_action:
        module: vmware_dvswitch
        hostname: "{{ mgmt_ip_address }}"
        username: "{{ vcsa_user }}"
        password: "{{ vcsa_pass }}"
        datacenter_name: "{{ mgmt_vdc }}"
        switch_name: dvSwitch
        mtu: 1500
        uplink_quantity: 2
        discovery_proto: lldp
        discovery_operation: both
        state: present
```
Module Testing
```
TASK [Create dvswitch] *********************************************************
task path: /opt/autodeploy/projects/emmet/tasks/deploy/dvs_network.yml:3
ESTABLISH LOCAL CONNECTION FOR USER: root
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693792.01-113207408596014 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693792.01-113207408596014 )" )
localhost PUT /tmp/tmptb3e2c TO /root/.ansible/tmp/ansible-tmp-1454693792.01-113207408596014/vmware_dvswitch
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454693792.01-113207408596014/vmware_dvswitch; rm -rf "/root/.ansible/tmp/ansible-tmp-1454693792.01-113207408596014/" > /dev/null 2>&1
changed: [foundation-vcsa -> localhost] => {"changed": true, "invocation": {"module_args": {"datacenter_name": "Test-Lab", "discovery_operation": "both", "discovery_proto": "lldp", "hostname": "172.27.0.100", "mtu": 1500, "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "state": "present", "switch_name": "dvSwitch", "uplink_quantity": 2, "username": "root"}, "module_name": "vmware_dvswitch"}, "result": "'vim.dvs.VmwareDistributedVirtualSwitch:dvs-9'"}
```
2016-12-08 11:33:51 -05:00
Rene Moser 5344701557 cloudstack: cs_instance: implement updating security groups
The ACS API for this was implemented in 4.8; it has no effect on versions < 4.8.
2016-12-08 11:33:51 -05:00
Rene Moser 51393a0e0f cloudstack: use CS_HYPERVISORS from cloudstack utils 2016-12-08 11:33:51 -05:00