#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright (c) 2016 Red Hat, Inc.
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#

ANSIBLE_METADATA = {'metadata_version': '1.1',
                    'status': ['preview'],
                    'supported_by': 'community'}

DOCUMENTATION = '''
---
module: ovirt_storage_domains
short_description: Module to manage storage domains in oVirt/RHV
version_added: "2.3"
author: "Ondra Machacek (@machacekondra)"
description:
    - "Module to manage storage domains in oVirt/RHV"
options:
    id:
        description:
            - "Id of the storage domain to be imported."
        version_added: "2.4"
    name:
        description:
            - "Name of the storage domain to manage. (Not required when state is I(imported))"
    state:
        description:
- "Should the storage domain be present/absent/maintenance/unattached/imported/update_ovf_store"
|
2017-08-23 12:44:02 +00:00
|
|
|
- "I(imported) is supported since version 2.4."
|
- "I(update_ovf_store) is supported since version 2.5, currently if C(wait) is (true), we don't wait for update."
|
|
|
|
choices: ['present', 'absent', 'maintenance', 'unattached', 'update_ovf_store']
|
2016-12-05 17:35:13 +00:00
|
|
|
default: present
    description:
        description:
            - "Description of the storage domain."
    comment:
        description:
            - "Comment of the storage domain."
    data_center:
        description:
            - "Data center name where storage domain should be attached."
            - "This parameter isn't idempotent, it's not possible to change data center of storage domain."
    domain_function:
        description:
            - "Function of the storage domain."
            - "This parameter isn't idempotent, it's not possible to change domain function of storage domain."
        choices: ['data', 'iso', 'export']
        default: 'data'
        aliases: ['type']
    host:
        description:
            - "Host to be used to mount storage."
    localfs:
        description:
            - "Dictionary with values for localfs storage type:"
            - "C(path) - Path of the mount point. E.g.: /path/to/my/data"
            - "Note that these parameters are not idempotent."
        version_added: "2.4"
    nfs:
        description:
            - "Dictionary with values for NFS storage type:"
            - "C(address) - Address of the NFS server. E.g.: myserver.mydomain.com"
            - "C(path) - Path of the mount point. E.g.: /path/to/my/data"
            - "C(version) - NFS version. One of: I(auto), I(v3), I(v4) or I(v4_1)."
            - "C(timeout) - The time in tenths of a second to wait for a response before retrying NFS requests. Range 0 to 65535."
            - "C(retrans) - The number of times to retry a request before attempting further recovery actions. Range 0 to 65535."
            - "C(mount_options) - Option which will be passed when mounting storage."
            - "Note that these parameters are not idempotent."
    iscsi:
        description:
            - "Dictionary with values for iSCSI storage type:"
            - "C(address) - Address of the iSCSI storage server."
            - "C(port) - Port of the iSCSI storage server."
            - "C(target) - The target IQN for the storage device."
            - "C(lun_id) - LUN id(s)."
            - "C(username) - A CHAP user name for logging into a target."
            - "C(password) - A CHAP password for logging into a target."
            - "C(override_luns) - If I(True) ISCSI storage domain luns will be overridden before adding."
            - "C(target_lun_map) - List of dictionaries containing targets and LUNs."
            - "Note that these parameters are not idempotent."
            - "Parameter C(target_lun_map) is supported since Ansible 2.5."
    posixfs:
        description:
            - "Dictionary with values for PosixFS storage type:"
            - "C(path) - Path of the mount point. E.g.: /path/to/my/data"
            - "C(vfs_type) - Virtual File System type."
            - "C(mount_options) - Option which will be passed when mounting storage."
            - "Note that these parameters are not idempotent."
    glusterfs:
        description:
            - "Dictionary with values for GlusterFS storage type:"
            - "C(address) - Address of the Gluster server. E.g.: myserver.mydomain.com"
            - "C(path) - Path of the mount point. E.g.: /path/to/my/data"
            - "C(mount_options) - Option which will be passed when mounting storage."
            - "Note that these parameters are not idempotent."
    fcp:
        description:
            - "Dictionary with values for fibre channel storage type:"
            - "C(address) - Address of the fibre channel storage server."
            - "C(port) - Port of the fibre channel storage server."
            - "C(lun_id) - LUN id."
            - "C(override_luns) - If I(True) FCP storage domain luns will be overridden before adding."
            - "Note that these parameters are not idempotent."
    wipe_after_delete:
        description:
            - "Boolean flag which indicates whether the storage domain should wipe the data after delete."
        version_added: "2.5"
    backup:
        description:
            - "Boolean flag which indicates whether the storage domain is configured as backup or not."
        version_added: "2.5"
    critical_space_action_blocker:
        description:
            - "Indicates the minimal free space the storage domain should contain in percentages."
        version_added: "2.5"
    warning_low_space:
        description:
            - "Indicates the minimum percentage of a free space in a storage domain to present a warning."
        version_added: "2.5"
    destroy:
        description:
            - "Logical remove of the storage domain. If I(true) retains the storage domain's data for import."
            - "This parameter is relevant only when C(state) is I(absent)."
    format:
        description:
            - "If I(True) storage domain will be formatted after removing it from oVirt/RHV."
            - "This parameter is relevant only when C(state) is I(absent)."
    discard_after_delete:
        description:
            - "If I(True) storage domain blocks will be discarded upon deletion. Enabled by default."
            - "This parameter is relevant only for block based storage domains."
        version_added: 2.5

extends_documentation_fragment: ovirt
'''

EXAMPLES = '''
# Examples don't contain auth parameter for simplicity,
# look at ovirt_auth module to see how to reuse authentication:

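# A sketch of that pattern (values are illustrative): log in once with ovirt_auth,
# then pass the registered token to this module via the auth parameter:
# - ovirt_auth:
#     url: https://engine.example.com/ovirt-engine/api
#     username: admin@internal
#     password: secret
# - ovirt_storage_domains:
#     auth: "{{ ovirt_auth }}"
#     name: data_nfs
#     state: present
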
# Add data NFS storage domain
- ovirt_storage_domains:
    name: data_nfs
    host: myhost
    data_center: mydatacenter
    nfs:
      address: 10.34.63.199
      path: /path/data

# Add data NFS storage domain with id for data center
- ovirt_storage_domains:
    name: data_nfs
    host: myhost
    data_center: 11111
    nfs:
      address: 10.34.63.199
      path: /path/data
      mount_options: noexec,nosuid

# Add data localfs storage domain
- ovirt_storage_domains:
    name: data_localfs
    host: myhost
    data_center: mydatacenter
    localfs:
      path: /path/to/data

# Add data iSCSI storage domain:
- ovirt_storage_domains:
    name: data_iscsi
    host: myhost
    data_center: mydatacenter
    iscsi:
      target: iqn.2016-08-09.domain-01:nickname
      lun_id:
        - 1IET_000d0001
        - 1IET_000d0002
      address: 10.34.63.204
    discard_after_delete: True
    backup: False
    critical_space_action_blocker: 5
    warning_low_space: 10

# Since Ansible 2.5 you can specify multiple targets for storage domain,
# Add data iSCSI storage domain with multiple targets:
- ovirt_storage_domains:
    name: data_iscsi
    host: myhost
    data_center: mydatacenter
    iscsi:
      target_lun_map:
        - target: iqn.2016-08-09.domain-01:nickname
          lun_id: 1IET_000d0001
        - target: iqn.2016-08-09.domain-02:nickname
          lun_id: 1IET_000d0002
      address: 10.34.63.204
    discard_after_delete: True

# Add data glusterfs storage domain
- ovirt_storage_domains:
    name: glusterfs_1
    host: myhost
    data_center: mydatacenter
    glusterfs:
      address: 10.10.10.10
      path: /path/data

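# Illustrative sketches for the remaining documented storage types
# (all values below are placeholders):

# Add data PosixFS storage domain
- ovirt_storage_domains:
    name: data_posixfs
    host: myhost
    data_center: mydatacenter
    posixfs:
      path: /path/data
      vfs_type: nfs

# Add data FCP storage domain
- ovirt_storage_domains:
    name: data_fcp
    host: myhost
    data_center: mydatacenter
    fcp:
      lun_id: 3600a09038006
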
# Create export NFS storage domain:
- ovirt_storage_domains:
    name: myexportdomain
    domain_function: export
    host: myhost
    data_center: mydatacenter
    nfs:
      address: 10.34.63.199
      path: /path/export
    wipe_after_delete: False
    backup: True
    critical_space_action_blocker: 2
    warning_low_space: 5

# Import export NFS storage domain:
- ovirt_storage_domains:
    state: imported
    domain_function: export
    host: myhost
    data_center: mydatacenter
    nfs:
      address: 10.34.63.199
      path: /path/export

# Update OVF_STORE:
- ovirt_storage_domains:
    state: update_ovf_store
    name: domain

# Create ISO NFS storage domain
- ovirt_storage_domains:
    name: myiso
    domain_function: iso
    host: myhost
    data_center: mydatacenter
    nfs:
      address: 10.34.63.199
      path: /path/iso

# Remove storage domain
- ovirt_storage_domains:
    state: absent
    name: mystorage_domain
    format: true
'''

RETURN = '''
id:
    description: ID of the storage domain which is managed
    returned: On success if storage domain is found.
    type: str
    sample: 7de90f31-222c-436c-a1ca-7e655bd5b60c
storage_domain:
    description: "Dictionary of all the storage domain attributes. Storage domain attributes can be found on your oVirt/RHV instance
                  at following url: http://ovirt.github.io/ovirt-engine-api-model/master/#types/storage_domain."
    returned: On success if storage domain is found.
    type: dict
'''

try:
    import ovirtsdk4.types as otypes

    from ovirtsdk4.types import StorageDomainStatus as sdstate
    from ovirtsdk4.types import HostStatus as hoststate
    from ovirtsdk4.types import DataCenterStatus as dcstatus
except ImportError:
    pass

import traceback

from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.ovirt import (
    BaseModule,
    check_sdk,
    create_connection,
    equal,
    get_entity,
    get_id_by_name,
    ovirt_full_argument_spec,
    search_by_name,
    search_by_attributes,
    wait,
)


class StorageDomainModule(BaseModule):

    def _get_storage_type(self):
        for sd_type in ['nfs', 'iscsi', 'posixfs', 'glusterfs', 'fcp', 'localfs']:
            if self.param(sd_type) is not None:
                return sd_type

    def _get_storage(self):
        for sd_type in ['nfs', 'iscsi', 'posixfs', 'glusterfs', 'fcp', 'localfs']:
            if self.param(sd_type) is not None:
                return self.param(sd_type)

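    # Log the host in to the iSCSI target(s) before the storage domain is added,
    # so the LUNs are visible on that host; this is a no-op for other storage types.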
    def _login(self, storage_type, storage):
        if storage_type == 'iscsi':
            hosts_service = self._connection.system_service().hosts_service()
            host_id = get_id_by_name(hosts_service, self.param('host'))
            if storage.get('target'):
                hosts_service.host_service(host_id).iscsi_login(
                    iscsi=otypes.IscsiDetails(
                        username=storage.get('username'),
                        password=storage.get('password'),
                        address=storage.get('address'),
                        target=storage.get('target'),
                    ),
                )
            elif storage.get('target_lun_map'):
                for target in [m['target'] for m in storage.get('target_lun_map')]:
                    hosts_service.host_service(host_id).iscsi_login(
                        iscsi=otypes.IscsiDetails(
                            username=storage.get('username'),
                            password=storage.get('password'),
                            address=storage.get('address'),
                            target=target,
                        ),
                    )

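    # Normalize both supported iSCSI layouts (a single C(target) with one or more
    # C(lun_id)s, or a C(target_lun_map) list) into a list of (lun_id, target) pairs.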
    def __target_lun_map(self, storage):
        if storage.get('target'):
            lun_ids = storage.get('lun_id') if isinstance(storage.get('lun_id'), list) else [storage.get('lun_id')]
            return [(lun_id, storage.get('target')) for lun_id in lun_ids]
        elif storage.get('target_lun_map'):
            return [(target_map.get('lun_id'), target_map.get('target')) for target_map in storage.get('target_lun_map')]
        else:
            # FCP storage provides only LUN id(s), without an iSCSI target.
            lun_ids = storage.get('lun_id') if isinstance(storage.get('lun_id'), list) else [storage.get('lun_id')]
            return [(lun_id, None) for lun_id in lun_ids]

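    # Build the otypes.StorageDomain entity sent to the engine; for block storage
    # the LUNs are described via logical_units built from the normalized mapping.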
    def build_entity(self):
        storage_type = self._get_storage_type()
        storage = self._get_storage()
        self._login(storage_type, storage)

        return otypes.StorageDomain(
            name=self.param('name'),
            description=self.param('description'),
            comment=self.param('comment'),
            wipe_after_delete=self.param('wipe_after_delete'),
            backup=self.param('backup'),
            critical_space_action_blocker=self.param('critical_space_action_blocker'),
            warning_low_space_indicator=self.param('warning_low_space'),
            import_=True if self.param('state') == 'imported' else None,
            id=self.param('id') if self.param('state') == 'imported' else None,
            type=otypes.StorageDomainType(self.param('domain_function')),
            host=otypes.Host(name=self.param('host')),
            discard_after_delete=self.param('discard_after_delete'),
            storage=otypes.HostStorage(
                type=otypes.StorageType(storage_type),
                logical_units=[
                    otypes.LogicalUnit(
                        id=lun_id,
                        address=storage.get('address'),
                        port=int(storage.get('port', 3260)),
                        target=target,
                        username=storage.get('username'),
                        password=storage.get('password'),
                    ) for lun_id, target in self.__target_lun_map(storage)
                ] if storage_type in ['iscsi', 'fcp'] else None,
                override_luns=storage.get('override_luns'),
                mount_options=storage.get('mount_options'),
                vfs_type=(
                    'glusterfs'
                    if storage_type in ['glusterfs'] else storage.get('vfs_type')
                ),
                address=storage.get('address'),
                path=storage.get('path'),
                nfs_retrans=storage.get('retrans'),
                nfs_timeo=storage.get('timeout'),
                nfs_version=otypes.NfsVersion(
                    storage.get('version')
                ) if storage.get('version') else None,
            ) if storage_type is not None else None
        )

    def _find_attached_datacenter_name(self, sd_name):
        """
        Finds the name of the datacenter that a given
        storage domain is attached to.

        Args:
            sd_name (str): Storage Domain name

        Returns:
            str: Data Center name

        Raises:
            Exception: In case storage domain is not attached to
                an active Datacenter
        """
        dcs_service = self._connection.system_service().data_centers_service()
        dc = search_by_attributes(dcs_service, storage=sd_name)
        if dc is None:
            raise Exception(
                "Can't bring storage to state `%s`, because it seems that "
                "it is not attached to any datacenter"
                % self.param('state')
            )
        else:
            if dc.status == dcstatus.UP:
                return dc.name
            else:
                raise Exception(
                    "Can't bring storage to state `%s`, because Datacenter "
                    "%s is not UP" % (self.param('state'), dc.name)
                )

    def _attached_sds_service(self, dc_name):
        # Get data center object of the storage domain:
        dcs_service = self._connection.system_service().data_centers_service()

        # Search by data center name; if it does not exist, try to search by GUID.
        dc = search_by_name(dcs_service, dc_name)
        if dc is None:
            dc = get_entity(dcs_service.service(dc_name))
            if dc is None:
                return None

        dc_service = dcs_service.data_center_service(dc.id)
        return dc_service.storage_domains_service()

    def _attached_sd_service(self, storage_domain):
        dc_name = self.param('data_center')
        if not dc_name:
            # Find the DC, where the storage resides:
            dc_name = self._find_attached_datacenter_name(storage_domain.name)
        attached_sds_service = self._attached_sds_service(dc_name)
        attached_sd_service = attached_sds_service.storage_domain_service(storage_domain.id)
        return attached_sd_service

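    # Deactivate the storage domain in its data center and wait until it reaches
    # the MAINTENANCE state (skipped when it is already in maintenance).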
    def _maintenance(self, storage_domain):
        attached_sd_service = self._attached_sd_service(storage_domain)
        attached_sd = get_entity(attached_sd_service)

        if attached_sd and attached_sd.status != sdstate.MAINTENANCE:
            if not self._module.check_mode:
                attached_sd_service.deactivate()
            self.changed = True

            wait(
                service=attached_sd_service,
                condition=lambda sd: sd.status == sdstate.MAINTENANCE,
                wait=self.param('wait'),
                timeout=self.param('timeout'),
            )

    def _unattach(self, storage_domain):
        attached_sd_service = self._attached_sd_service(storage_domain)
        attached_sd = get_entity(attached_sd_service)

        if attached_sd and attached_sd.status == sdstate.MAINTENANCE:
            if not self._module.check_mode:
                # Detach the storage domain:
                attached_sd_service.remove()
            self.changed = True
            # Wait until storage domain is detached:
            wait(
                service=attached_sd_service,
                condition=lambda sd: sd is None,
                wait=self.param('wait'),
                timeout=self.param('timeout'),
            )

    def pre_remove(self, storage_domain):
        # In case the user chose to destroy the storage domain there is no need to
        # move it to maintenance or detach it, it should simply be removed from the DB.
        # Also if the storage domain is already unattached, skip this step.
        if storage_domain.status == sdstate.UNATTACHED or self.param('destroy'):
            return
        # Before removing storage domain we need to put it into maintenance state:
        self._maintenance(storage_domain)

        # Before removing storage domain we need to detach it from data center:
        self._unattach(storage_domain)

    def post_create_check(self, sd_id):
        storage_domain = self._service.service(sd_id).get()
        dc_name = self.param('data_center')
        if not dc_name:
            # Find the DC, where the storage resides:
            dc_name = self._find_attached_datacenter_name(storage_domain.name)
        self._service = self._attached_sds_service(dc_name)

        # If storage domain isn't attached, attach it:
        attached_sd_service = self._service.service(storage_domain.id)
        if get_entity(attached_sd_service) is None:
            self._service.add(
                otypes.StorageDomain(
                    id=storage_domain.id,
                ),
            )
            self.changed = True
            # Wait until the storage domain is active:
            wait(
                service=attached_sd_service,
                condition=lambda sd: sd.status == sdstate.ACTIVE,
                wait=self.param('wait'),
                timeout=self.param('timeout'),
            )

    def unattached_pre_action(self, storage_domain):
        dc_name = self.param('data_center')
        if not dc_name:
            # Find the DC, where the storage resides:
            dc_name = self._find_attached_datacenter_name(storage_domain.name)
        self._service = self._attached_sds_service(dc_name)
        self._maintenance(storage_domain)

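    # Return True when the existing storage domain already matches the requested
    # parameters, so the module does not issue a needless update.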
    def update_check(self, entity):
        return (
            equal(self.param('comment'), entity.comment) and
            equal(self.param('description'), entity.description) and
            equal(self.param('backup'), entity.backup) and
            equal(self.param('critical_space_action_blocker'), entity.critical_space_action_blocker) and
            equal(self.param('discard_after_delete'), entity.discard_after_delete) and
            equal(self.param('wipe_after_delete'), entity.wipe_after_delete) and
            equal(self.param('warning_low_space'), entity.warning_low_space_indicator)
        )


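# A storage domain in UNKNOWN or INACTIVE state cannot be managed further.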
def failed_state(sd):
    return sd.status in [sdstate.UNKNOWN, sdstate.INACTIVE]


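# Wait out transient states (LOCKED, ACTIVATING, DETACHING, PREPARING_FOR_MAINTENANCE)
# before managing the storage domain, and fail early if it ends up in a failed state.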
def control_state(sd_module):
    sd = sd_module.search_entity()
    if sd is None:
        return

    sd_service = sd_module._service.service(sd.id)
    if sd.status == sdstate.LOCKED:
        wait(
            service=sd_service,
            condition=lambda sd: sd.status != sdstate.LOCKED,
            fail_condition=failed_state,
        )

    if failed_state(sd):
        raise Exception("Not possible to manage storage domain '%s'." % sd.name)
    elif sd.status == sdstate.ACTIVATING:
        wait(
            service=sd_service,
            condition=lambda sd: sd.status == sdstate.ACTIVE,
            fail_condition=failed_state,
        )
    elif sd.status == sdstate.DETACHING:
        wait(
            service=sd_service,
            condition=lambda sd: sd.status == sdstate.UNATTACHED,
            fail_condition=failed_state,
        )
    elif sd.status == sdstate.PREPARING_FOR_MAINTENANCE:
        wait(
            service=sd_service,
            condition=lambda sd: sd.status == sdstate.MAINTENANCE,
            fail_condition=failed_state,
        )


def main():
    argument_spec = ovirt_full_argument_spec(
        state=dict(
            choices=['present', 'absent', 'maintenance', 'unattached', 'imported', 'update_ovf_store'],
            default='present',
        ),
        id=dict(default=None),
        name=dict(default=None),
        description=dict(default=None),
        comment=dict(default=None),
        data_center=dict(default=None),
        domain_function=dict(choices=['data', 'iso', 'export'], default='data', aliases=['type']),
        host=dict(default=None),
        localfs=dict(default=None, type='dict'),
        nfs=dict(default=None, type='dict'),
        iscsi=dict(default=None, type='dict'),
        posixfs=dict(default=None, type='dict'),
        glusterfs=dict(default=None, type='dict'),
        fcp=dict(default=None, type='dict'),
        wipe_after_delete=dict(type='bool', default=None),
        backup=dict(type='bool', default=None),
        critical_space_action_blocker=dict(type='int', default=None),
        warning_low_space=dict(type='int', default=None),
        destroy=dict(type='bool', default=None),
        format=dict(type='bool', default=None),
        discard_after_delete=dict(type='bool', default=None),
    )
    module = AnsibleModule(
        argument_spec=argument_spec,
        supports_check_mode=True,
    )
    check_sdk(module)

    try:
        auth = module.params.pop('auth')
        connection = create_connection(auth)
        storage_domains_service = connection.system_service().storage_domains_service()
        storage_domains_module = StorageDomainModule(
            connection=connection,
            module=module,
            service=storage_domains_service,
        )

        state = module.params['state']
        control_state(storage_domains_module)
        if state == 'absent':
            # Pick random available host when host parameter is missing
            host_param = module.params['host']
            if not host_param:
                host = search_by_attributes(connection.system_service().hosts_service(), status='up')
                if host is None:
                    raise Exception(
                        "Not possible to remove storage domain '%s' "
                        "because no host found with status `up`." % module.params['name']
                    )
                host_param = host.name
            ret = storage_domains_module.remove(
                destroy=module.params['destroy'],
                format=module.params['format'],
                host=host_param,
            )
        elif state == 'present' or state == 'imported':
            sd_id = storage_domains_module.create()['id']
            storage_domains_module.post_create_check(sd_id)
            ret = storage_domains_module.action(
                action='activate',
                action_condition=lambda s: s.status == sdstate.MAINTENANCE,
                wait_condition=lambda s: s.status == sdstate.ACTIVE,
                fail_condition=failed_state,
                search_params={'id': sd_id} if state == 'imported' else None
            )
        elif state == 'maintenance':
            sd_id = storage_domains_module.create()['id']
            storage_domains_module.post_create_check(sd_id)
            ret = storage_domains_module.action(
                action='deactivate',
                action_condition=lambda s: s.status == sdstate.ACTIVE,
                wait_condition=lambda s: s.status == sdstate.MAINTENANCE,
                fail_condition=failed_state,
            )
        elif state == 'unattached':
            ret = storage_domains_module.create()
            storage_domains_module.pre_remove(
                storage_domain=storage_domains_service.service(ret['id']).get()
            )
            ret['changed'] = storage_domains_module.changed
        elif state == 'update_ovf_store':
            ret = storage_domains_module.action(
                action='update_ovf_store'
            )

        module.exit_json(**ret)
    except Exception as e:
        module.fail_json(msg=str(e), exception=traceback.format_exc())
    finally:
        connection.close(logout=auth.get('token') is None)


if __name__ == "__main__":
    main()