* Gather device information on Solarish systems
* Gather uptime information on Solarish systems
* Fix typo in variable name
* Add comments and example output from kstat command
Use frozenset instead of set
Make parsing of line a little bit safer
* updates the deprecated ios_template module to use network_cli
* adds unit test cases for ios_template
* adds check for provider argument and displays warning message
The `except` block that matches exceptions through
`if 'connection refused' in str(e).lower():` is funny,
but it is not user-friendly (see the sketch after the issue list below).
Probably related issues:
- #15679
- #12161
- #9966
- #8221
- #7218
... and more
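For illustration only, a hedged sketch of why matching on exception text is brittle, and a friendlier alternative; the helper below is hypothetical, not the actual connection code:

    import errno
    import socket

    def open_connection(host, port):
        try:
            return socket.create_connection((host, port), timeout=5)
        except socket.error as e:
            # Brittle: the 'connection refused' wording varies across
            # platforms, Python versions and locales, so string matching
            # misses cases. Friendlier: inspect errno and report a clear,
            # actionable error.
            if e.errno == errno.ECONNREFUSED:
                raise RuntimeError('Connection refused by %s:%s' % (host, port))
            raise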
This allows the ios_* modules to take advantage of the new network_cli
connection plugin by refactoring the ios shared module. Individual modules
need to be updated as well
* moves parse() into the instance
* removes old Config instance and supporting code
* adds net_common shared module
* minor tweaks to NetworkConfig class for parsing config files
When becoming an unprivileged user using non-sudo on a platform where
getlogin() failed in our situation we were not able to detect that the
user had switched. This meant that all of our logic to use move vs copy
if the user had switched was attempting the wrong thing. This change
tries to do the right thing but then falls back to an acceptable
second choice if it doesn't work.
The bug wasn't easily detected because:
* sudo was not affected because sudo records that the user has been
switched, so we were able to detect that.
* getlogin() works on most platforms. RHEL5 with python-2.4 seems to be
the only platform we still care about where getlogin() fails for this
case.
* It had to be becoming an unprivileged user. When becoming
a privileged user, the user would be able to successfully perform the
best case tasks.
os.write() needs bytes objects on python3 while python2 can work with
either a byte or unicode string. Mark the DUMMY_CA_CERT string as
a byte string so it will work.
Fixes #19265
Fixes #19266
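A minimal sketch of the fix; the certificate body is elided and the temp-file usage is illustrative:

    import os
    import tempfile

    # The b prefix makes os.write() happy on both Python 2 (where str
    # is already bytes) and Python 3 (which requires a bytes object).
    DUMMY_CA_CERT = b"""-----BEGIN CERTIFICATE-----
    (certificate body elided)
    -----END CERTIFICATE-----"""

    fd, path = tempfile.mkstemp()
    os.write(fd, DUMMY_CA_CERT)
    os.close(fd)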
When the same role is listed consecutively in a play, the previous role
completion detection failed to mark it as complete as it only checked to
see if the role changed.
This patch addresses that by also keeping track of which task in the role
we are on, so that even if the same role is encountered during later passes
the task number will be less than or equal to the last noted task position.
Related to #15409
Support for the Google API and GCloud-Python clients has been added.
The three libraries:
* GCloud-Python: A new function, get_google_cloud_credentials, should be used. The credentials object returned can be passed to any gcloud-python client. Using this client library requires the installation of gcloud-python. This is the preferred library for new modules.
* Google API: A new function, gcp_api_auth, should be used to take advantage of services requiring this client. This client library should be used if the desired functionality is not available in GCloud-Python. Using this library requires the installation of google-api-python-client.
* libcloud: Existing function, gcp_connect, should be used. The interface and return values have not changed, and existing modules (such as gce, gce_pd and gce_net) should work without modification. Note that the credentials-fetching code has been refactored out of gcp_connect so that it can be reused by all connection functions. To use this function, apache-libcloud must be installed.
Import guards have been added and will only be triggered if a user tries to use a function that is missing dependencies.
Credential-specifying mechanisms (i.e., Ansible module params, env vars and libcloud secrets.py) have not changed. They have been refactored, and unit tests have been added to allow for changes going forward. We are deprecating (and removing in a subsequent release) the ability to specify credentials via the libcloud secrets file. We have also deprecated (and likewise plan to remove in a subsequent release) the ability to use a p12 pem file for a key; the JSON format is strongly preferred. Deprecation warnings have been added for both of these issues (see the Ansible docs on how to disable deprecation warnings).
* Initial Commit for Infinidat Ansible Modules
Skip tests for python 2.4 as infinisdk doesn't support python 2.4
Move common code and arguments into module_utils/infinibox.py
Move common documentation to documentation_fragments. Cleanup Docs and Examples
Fix formatting in module descriptions
Add check mode support for all modules
Import AnsibleModule only from ansible.module_utils.basic in all modules
Skip python 2.4 tests for module_utils/infinibox.py
Documentation and code cleanup
Rewrite examples in multiline format
Misc Changes
Test
* Add Infinibox modules to CHANGELOG.md
* Add ANSIBLE_METADATA to all modules
* Add update parameter to the junos_config module, which supports
configuration actions like merge, replace and overwrite.
* Add support for replace along with the update
argument
* adds new error AnsibleModuleExit to handle module returns
* adds new action plugin network for attaching connection to network modules
* adds new shared module local to receive connection
* splits out function to update task_args with common updates
This commit provides a mechanism for running local modules that require
a connection object for interactive commands typically implemented for
network devices. It provides a way to locally import modules (post fork)
and run them using exception handling to exit.
The process to poll for data in the stdout and/or stderr pipes during a
low-level command execution was repetitive. Factoring this out into a
function DRYs out the code.
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
The overwrite parameter is forcibly set to false, meaning a module
passing that parameter will have no effect. The overwrite facility
is necessary to ensure that conflicting options can be written to the
configuration (which, in replace mode, they cannot).
This change ensures that if overwrite is set, it will not be changed
to False in the logic.
* Fixes #18663. Bad handling of existing config in dellos9 module.
The dellos9 module doesn't correctly build the internal
structures used to represent the existing config of the managed
network device. This causes changes to be applied every time the
playbook is run, even if the existing config is the same as the
one you are trying to push into the device.
This problem probably also exists in the dellos6 and dellos10
modules, but I only fixed it in the dellos9 module.
The fix modifies two methods. The first one is `get_config`,
where the return clause didn't work correctly when the flow
doesn't enter the `if` block. In that case the `contents`
variable is not an array, and this must be handled.
The second fix is in the `get_sublevel_config` method. In this
case the indentation whitespace of the parents must be rebuilt,
because later functions and methods require it to correctly
handle the comparisons used to check whether changes should be
pushed to the device.
* Fixes #18663 for dellos10 module with the same patches as dellos9.
* Moved JSON-RPC client IPAClient class to ansible.module_utils.ipa, which is extended by all ipa modules
* IPAClient: Changed to 2-clause BSD license
* IPAClient (lines 37-39): Added some additional imports for use with Python 3
* IPAClient (line 41): Explicitly extend Python base object
* IPAClient (line 57): Properly URL quoted the username/password form data as per https://www.w3.org/TR/html401/interact/forms.html#h-17.13.4.1
* IPAClient (line 62): Data should be bytes or bytearray in Python 3 (still str in Python 2)
* IPAClient (line 65): Print error message, not returned body
* IPAClient (line 70): getheader() is not present in Python 3 version of HTTPMessage; get() is present in both Python 2/3
* IPAClient (line 88): Convert form data to bytes for Python 3 again
* IPAClient (line 91): Print error message, not returned body
* IPAClient (line 96-104): json.loads() requires a string; HTTPResponse.read() returns bytes in Python 3 and str in Python 2, so decode the bytes/string using the HTTPResponse returned charset (default to 'latin-1')
* Add author/copyright notice
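As a Python 3 sketch of the charset-aware decode described in the json.loads() bullet above (the helper name is illustrative):

    import json

    def parse_json_response(resp):
        # HTTPResponse.read() returns bytes on Python 3 and json.loads()
        # wants str, so decode with the charset the server declared,
        # defaulting to 'latin-1'.
        charset = resp.headers.get_content_charset('latin-1')
        return json.loads(resp.read().decode(charset))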
* Add RHEV host detection support
This adds RHEV host detection support based on a running 'vdsm' process and the existence of _/rhev/_ (which are both part of the vdsm RPM package in a RHEV installation). Without this change, a RHEV host would be reported as a kvm host (which is also true, but often not specific enough).
This closes #17058
* Only scan the process list when we determined /rhev/ exists
Small performance improvement, so we do not have to scan the process list if /rhev/ does not exist.
* fixed detection of ansible_virtualization_(role|path) facts for VMs running in
OpenStack instances
Fixes #15165
* NOTE: this will break detection of ansible_virtualization_(role|path) facts
if you are using OpenStack instances with nested virtualization
* log on target based on nolog, not verbosity
Fixes #18569
* initialize module name
removing verbosity exposed a missing name at certain stages; initialize to the file name
and update later once module args are parsed
Commit 8b08a28c89 removed a
call to get_exception() that was needed. Without it, the fail_json
references an undefined variable ('exception') and throws an exception.
Add the get_exception() back in where needed and update references.
Now the proper module failure is returned.
Fixes#18628
The timeout param was exposed to the socket connection but was not
enforced for commands. This update will now cause a command to time out
based on the module parameter.
For Solaris, add get_dmi_facts to get the product_name fact, and update memtotal_mb to an integer for consistency.
For HP-UX, use machinfo to get the product_serial fact.
VMs in a VPC and VMs not in a VPC can have identical names, so VMs in a VPC must be filtered out if no VPC is given.
Due to an API limitation, the only way to do this is to check whether the network of the VM is in a VPC.
* Refactor OpenBSD sysctl based detection in a separate class
The idea is to later reuse this code for NetBSD and FreeBSD, which
use a different sysctl key for vendor and product.
* Add detection of virtualisation on NetBSD
* Add support to detect running as a Xen guest
tested on NetBSD 7 on Rackspace.
* Add support for OpenBSD dmi fact gathering
* Refactor get_sysctl in the Hardware class
Due to differences between Darwin/NetBSD and OpenBSD, we
have to change the regexp used to split the key/value pairs
* Add support for dmi facts on NetBSD
Text strings and byte strings both have a translate method but the byte
string version is harder to use. It requires a mapping of all 256 bytes
to a translation value. Text strings only require a mapping from the
characters that are changing to the new string. Switching to text
strings on both py2 and py3 allows us to state what we're getting rid of
simply without having to rely on the maketrans() helper function.
The traceback is the following:
Traceback (most recent call last):
  File "/tmp/ansible_8s0bj604/ansible_module_setup.py", line 134, in <module>
    main()
  File "/tmp/ansible_8s0bj604/ansible_module_setup.py", line 126, in main
    data = get_all_facts(module)
  File "/tmp/ansible_8s0bj604/ansible_modlib.zip/ansible/module_utils/facts.py", line 3641, in get_all_facts
  File "/tmp/ansible_8s0bj604/ansible_modlib.zip/ansible/module_utils/facts.py", line 3584, in ansible_facts
  File "/tmp/ansible_8s0bj604/ansible_modlib.zip/ansible/module_utils/facts.py", line 1600, in populate
  File "/tmp/ansible_8s0bj604/ansible_modlib.zip/ansible/module_utils/facts.py", line 1649, in get_memory_facts
TypeError: translate() takes exactly one argument (2 given)
And the swapctl output is this:
# /sbin/swapctl -sk
total: 83090 1K-blocks allocated, 0 used, 83090 available
The only use of the code is to remove prefixes when they are present, so
simply replacing them with an empty string is sufficient.
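A hedged sketch of that cleanup against the sample line above; the field positions and prefix letters are assumptions for illustration:

    line = 'total: 83090 1K-blocks allocated, 0 used, 83090 available'
    fields = line.split()

    # A size may carry a unit prefix; dropping it with replace() when
    # present is all the old two-argument translate() call was doing.
    swap_total_kb = int(fields[1].replace('K', '').replace('k', ''))
    swap_used_kb = int(fields[4].replace('K', '').replace('k', ''))

    # For reference, text-string translate() takes a sparse mapping on
    # both py2 and py3, unlike the byte-string form that needed a full
    # maketrans() table:
    assert u'83090K'.translate({ord(u'K'): None}) == u'83090'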
smbios -i 256 returns:
# smbios -i 256
ID SIZE TYPE
256 77 SMB_TYPE_SYSTEM (system information)
Manufacturer: Red Hat
Product: KVM
Version: RHEL 6.4.0 PC
UUID: 8a3b8b1a-ba59-1a4b-5f85-ab53a5a885a9
Wake-Up Event: 0x6 (power switch)
SKU Number:
Family: Red Hat Enterprise Linux
So to get the type of the Python interpreter, we need to look at
sys.implementation.name, which returns 'cpython' rather than 'CPython',
but that's upstream breakage, so there is not much we can do.
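A sketch of the check; the pre-3.3 fallback branch is an assumption for illustration:

    import sys

    if hasattr(sys, 'implementation'):
        # Python >= 3.3; note the lowercase 'cpython', unlike
        # platform.python_implementation(), which reports 'CPython'.
        interpreter = sys.implementation.name
    else:
        import platform
        interpreter = platform.python_implementation().lower()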
While testing on NetBSD 6.0, the ansible setup module failed with:
Traceback (most recent call last):
  File "/tmp/ansible_m2ieeq/ansible_module_setup.py", line 134, in <module>
    main()
  File "/tmp/ansible_m2ieeq/ansible_module_setup.py", line 126, in main
    data = get_all_facts(module)
  File "/tmp/ansible_m2ieeq/ansible_modlib.zip/ansible/module_utils/facts.py", line 3609, in get_all_facts
  File "/tmp/ansible_m2ieeq/ansible_modlib.zip/ansible/module_utils/facts.py", line 3552, in ansible_facts
  File "/tmp/ansible_m2ieeq/ansible_modlib.zip/ansible/module_utils/facts.py", line 2500, in populate
  File "/tmp/ansible_m2ieeq/ansible_modlib.zip/ansible/module_utils/facts.py", line 2584, in get_interfaces_info
  File "/tmp/ansible_m2ieeq/ansible_modlib.zip/ansible/module_utils/facts.py", line 2644, in parse_inet_line
socket.error: illegal IP address string passed to inet_aton
The cause is having aliases on lo like this:
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 33184
inet 127.0.0.1 netmask 0xff000000
inet alias 127.1.1.1 netmask 0xff000000
So when the word in the address position is 'alias', we have to skip it.
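A minimal sketch of the skip, using the alias line from the output above (the surrounding parser is elided):

    line = 'inet alias 127.1.1.1 netmask 0xff000000'
    words = line.split()

    # NetBSD may insert the literal word 'alias' before the address;
    # drop it so the address position always holds the IP that is
    # later handed to inet_aton().
    if words[1] == 'alias':
        del words[1]
    address = words[1]  # '127.1.1.1'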
- Remove shebangs from:
- ini files
- unit tests
- module_utils
- plugins
- module_docs_fragments
- non-executable Makefiles
- Change non-modules from '/usr/bin/python' to '/usr/bin/env python'.
- Change '/bin/env' to '/usr/bin/env'.
Also removed main functions from unit tests (since they no longer
have a shebang) and fixed a python 3 compatibility issue with
update_bundled.py so it does not need to specify a python 2 shebang.
A script was added to check for unexpected shebangs in files.
This script is run during CI on Shippable.
If there is an intermittent network failure, we might be trying to reach
a URL multiple times. Without this patch, we would be re-adding the same
certificate to the OpenSSL default context multiple times.
Normally, this is no big issue, as OpenSSL will just silently ignore them,
after registering the error in its own error stack.
However, when python-cryptography initializes, it verifies that the current
error stack of the default OpenSSL context is empty, which it no longer is
due to us adding the certificates multiple times.
This results in cryptography throwing an Unknown OpenSSL Error with details:
OpenSSLErrorWithText(code=185057381L, lib=11, func=124, reason=101,
reason_text='error:0B07C065:x509 certificate routines:X509_STORE_add_cert:cert already in hash table'),
Signed-off-by: Patrick Uiterwijk <puiterwijk@redhat.com>
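A hedged sketch of the guard, assuming a module-level cache of PEM data already handed to the shared context; the names are illustrative, not the patched code:

    _loaded_certs = set()

    def add_cert_once(ctx, pem_text):
        # ctx is the shared ssl.SSLContext. Re-adding a certificate
        # pushes 'cert already in hash table' onto OpenSSL's error
        # stack, which python-cryptography rejects when it initializes.
        if pem_text not in _loaded_certs:
            ctx.load_verify_locations(cadata=pem_text)
            _loaded_certs.add(pem_text)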
* Changes to be committed:
modified: lib/ansible/module_utils/nxos.py
- added configurable timeout to module parameters
modified: lib/ansible/utils/module_docs_fragments/nxos.py
- added documentation for timeout
* Changes to be committed:
modified: ansible/module_utils/nxos.py
- added timeout option for nxapi transport and added documentation
- option works with CLI or NXAPI transport
* Changes to be committed:
modified: lib/ansible/utils/module_docs_fragments/nxos.py
- Changed per comments in PR 18074
* Changes to be committed:
modified: lib/ansible/module_utils/nxos.py
- added try/except block to test for timeout
* Changes to be committed:
modified: lib/ansible/module_utils/nxos.py
- tweaked timeout
These ENV vars are:
- CLOUDSTACK_ZONE
- CLOUDSTACK_DOMAIN
- CLOUDSTACK_ACCOUNT
- CLOUDSTACK_PROJECT
These help to keep every task DRY; task args still take precedence.
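A sketch of the precedence rule; the helper name is illustrative:

    import os

    def get_or_fallback(params, key, env_var):
        # Task arguments still take precedence; the environment only
        # fills in what the play leaves unset.
        return params.get(key) or os.environ.get(env_var)

    # e.g. zone = get_or_fallback(module.params, 'zone', 'CLOUDSTACK_ZONE')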
As neon is derived from Ubuntu, ansible_os_family should have the value
"Debian" instead of "Neon". Add a test case for KDE neon and set
os_family correctly for it.
This limitation of python-3.4 mkstemp() is the final reason we made
python-3.5 our minimum version. Since we know about it, give a nice
error to the user with a hint that Python3.4 could be the issue.
Fixes#18160
When the client certificate is already stored, lxd returns a JSON error with message "Certificate already in trust store". This "error" will occur on every task run after the initial run. The cert should be in the trust store after the first run and this error message should really only be viewed as informational as it does not indicate a real problem.
Fixes:
ansible/ansible-modules-extras#2750
Slightly better handling of HTTP headers from an HTTP (CONNECT) proxy. Buffers up to 128 KiB of headers and raises an exception if this size is exceeded.
This could be optimized further, but for the time being it does the trick.
- When there is no file at the destination yet, we have no modification time for the `If-Modified-Since` header. In this case, trust the cache to make the right decision to either serve a cached version or to refresh from origin. This should help with mass-deployment scenarios where you want to use a local cache to relieve your uplink (see the sketch after this list).
- If you don't trust the cache to make the right decision you can still force it to refresh by providing the `force: yes` option.
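A sketch of the header logic from the first point, assuming dest is the download target; the helper name is illustrative:

    import os
    from email.utils import formatdate

    def conditional_headers(dest):
        headers = {}
        if os.path.exists(dest):
            # We have a local copy: ask origin/cache to skip the body
            # if that copy is still current.
            mtime = os.path.getmtime(dest)
            headers['If-Modified-Since'] = formatdate(mtime, usegmt=True)
        # No file yet: send no validator and let the cache decide
        # whether to serve its copy or refresh from origin.
        return headers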
Since ifconfig/ip are not present on the system, and there is no /proc
to be parsed, the only way to get information is by looking at the
arguments of the pfinet translator, the process in charge of networking.
In turn, this is done with fsysopts on the appropriate path, which returns
something like this:
# fsysopts -L /servers/socket/inet
/hurd/pfinet --interface=/dev/eth0 --address=192.168.122.130
--netmask=255.255.255.0 --gateway=192.168.122.1 --address6=fe80::5254:12:ced/10
--address6=fe80::5054:ff:fe12:ced/10 --gateway6=::
So to get the IP addresses, one has to parse that string and fill the appropriate
structure.
More information on the system and its limitations can be found at:
- https://www.gnu.org/software/hurd/hurd/translator/pfinet.html
- https://www.gnu.org/software/hurd/hurd/translator/pfinet/implementation.html
- https://www.debian.org/ports/hurd/hurd-install
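A sketch of parsing that translator command line into address facts; the structure names are illustrative:

    out = ('/hurd/pfinet --interface=/dev/eth0 --address=192.168.122.130 '
           '--netmask=255.255.255.0 --gateway=192.168.122.1 '
           '--address6=fe80::5054:ff:fe12:ced/10 --gateway6=::')

    facts = {'ipv4': {}, 'ipv6': []}
    for arg in out.split():
        key, sep, value = arg.lstrip('-').partition('=')
        if not sep:
            continue  # skips the '/hurd/pfinet' token itself
        if key == 'address':
            facts['ipv4']['address'] = value
        elif key == 'netmask':
            facts['ipv4']['netmask'] = value
        elif key == 'address6':
            facts['ipv6'].append(value)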
* Fall back to /proc/mounts if /etc/mtab does not exist
On modern systems, the file is just a compatibility symlink, and
some systems (like GNU Hurd) do not have it but provide /proc/mounts
* Add support for uptime, memory and mount facts on GNU Hurd
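A minimal sketch of the fallback:

    import os

    # /etc/mtab is often just a compatibility symlink these days, and
    # GNU Hurd does not ship it at all, so use /proc/mounts instead.
    mtab_path = '/etc/mtab'
    if not os.path.exists(mtab_path):
        mtab_path = '/proc/mounts'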
On openSUSE Tumbleweed, lsb-release -a currently reports
the distributor ID as "openSUSE Tumbleweed". On openSUSE
Leap, the distributor ID is "SUSE LINUX".
Add them to the OS_FAMILY dict as Suse family systems.
Also add an entry to TESTSETS in test_distribution_version.py
for openSUSE Tumbleweed.
The network module will now log a message when it connects to a remote host
successfully and specify the transport used. It will also log a message
when the module disconnect() method is called.
Earlier versions of EOS that do not support config sessions would
raise an exception. This fix will now check if the device supports
sessions and if it doesn't, it will fall back to not using sessions
In order for the config to be returned with vpn passwords, the get_config()
method now supports a keyword arg include=passwords to return the desired
configuration. This replaces the show_command argument
foo.split('\n') is picky about the type of 'foo'.
If 'foo' is a bytes type, then foo.split('\n')
will fail on py3 with:
TypeError: a bytes-like object is required, not 'str'
The foo.split('\n') change isn't strictly required
when run_command returns native str types, but it
is more idiomatic and conceptually also supports other
line endings.
* Specify run_command decode error style as arg
Instead of getting the stdout/stderr text from
run_command, and then decoding to utf-8 with a
particular error scheme, use the 'errors' arg
to run_command so it does that itself.
* Use 'surrogate_or_replace' instead of 'replace'
For the text decoding error scheme in run_command calls.
* Let the local_facts run_command use default errors
* fix typo
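A hedged sketch pulling the bullets above together; the command and wrapper are illustrative:

    def gather_local_facts(module):
        # run_command decodes stdout/stderr itself when given an
        # 'errors' scheme, so the caller no longer decodes by hand.
        rc, out, err = module.run_command(['uname', '-a'],
                                          errors='surrogate_or_replace')
        # out is now a native str on both py2 and py3, so splitting
        # just works; on py3, bytes.split('\n') would raise TypeError.
        return out.splitlines()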
The kickstart kwarg should be set to False for eos based devices and
was set to True. This change cleans up problems loading json output
from cli commands
All eos_command test cases are now passing successfully
Fixes #17441
When adding condition statements, the Conditional instance will now raise
an AddConditionError if it is unable to map the condition to a function in the
instance
When the conditional could not extract a value from the result string,
an unhandled exception was raised. This fix now gracefully handles
the exception.
An unhandled exception was raised when using the nxapi transport with
the save argument set to true. This fix will allow the configuration to be
saved regardless of the transport.
Fixes ansible/ansible-modules-core#5094
The conditional processing was failing for two reasons:
1) The XML to JSON string conversion was not happening before the runner
processed the results
2) The Conditional instance was not parsing conditionals encoded with []
This fix addresses both issues.
The junos load_config() method supports operations of overwrite, replace
and merge. This adds the missing overwrite keyword arg to load_config()
so that action in junos_template can be processed correctly.
The Conditional class now raises a ValueError with message if it cannot
correctly parse the passed in conditional. This makes it easier to
detect issues in modules that specify conditionals.
The arguments for the regex search() function were transposed in the
netcli match() method, which caused conditionals to fail. Switched the
arguments to fix the bug.
Fixes #17749
This is really a placeholder for common code for the separate service modules; it was a copy of the current service module, which seemed to confuse people, so this update should clear that up.
The raw kwarg was added to return raw output from devices when the
attempt to convert to json failed. The change was causing all json
output to be returned raw. This fixes that issue.
This fixes a problem with the Netconf transport in which the ssh keyfile
wasn't being used if it was defined. The ref issue is filed against 2.1.1,
but I have been unable to replicate the problem in that version.
ref: ansible/ansible-modules-core#4966
* fixes issue #13981: unsafe_writes block appeared too late in the atomic_move
workflow. This led to errno.EBUSY not being managed in the context of
issue #13981
* Reduce changes to fix #13981
* Abstract the unsafe_writes fallback into a helper method.
Explicitly try/except os.rename part of the code and call this helper method.
If the code fails in shutil.copy2 or shutil.move this should not be related to issue #13981
since they write to b_tmp_dest_name.
(as suggested by @abadger)
* Check unsafe_writes in the caller, not in _unsafe_writes.
That way the function call reads as "Do an unsafe write"
and not as "I think we should do an unsafe write".
* Add oVirt utility module
This patch adds the oVirt utility module, which contains helper functions
for oVirt modules, and also a shared documentation fragment for oVirt.
* Adjust to Python 2.4
* Fixups
* Add support for poll interval and fixes
When using the Cli transport, if the session hung on a command and the
socket timed out, the config session would be left behind. This change
will allow the shell to try to get control back and remove the config
session, assuming the channel is still open.
Fixes ansible/ansible-modules-core#4945
The Conditional instance will cause a stack trace if the provided conditional
does not map properly to the response. This fixes that issue so that the
Conditional instance will now raise a FailedConditionalError with the
conditional that caused the failure.
The *_command modules (and any other modules that create an instance
of Conditional) should be updated to catch the FailedConditionalError
exception.
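A sketch of the catch that such modules would add; the import path and evaluation shape are assumptions based on the commit text:

    from ansible.module_utils.netcli import Conditional, FailedConditionalError

    def evaluate(module, conditionals, response):
        try:
            return all(conditional(response) for conditional in conditionals)
        except FailedConditionalError as exc:
            # Raised with the offending conditional instead of a stack
            # trace; surface it as a proper module failure.
            module.fail_json(msg='Failed conditional: %s' % exc)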
* By default, ansible_distribution is not set on DragonFly systems,
preventing some distribution-specific tests from being written
* This commit fixes the issue by returning the quite logical value
of "DragonFly" when appropriate
Change linux fact gathering to correctly gather ansible_processor_count
and ansible_processor_vcpus on systems without vendor_id/model_name in
/proc/cpuinfo (for ex, ppc64/POWER)
* Added aws_retry decorator function with unit tests
* Restructured the code to be used with a base class.
This base class, CloudRetry, can be reused by any other cloud provider.
This decorator should be used in situations where you need to implement
a backoff algorithm and want to retry based on the status code from the
exception; see the sketch after this list.
* updated documentation
* fixed tabs
* added botocore and boto3 to requirements.txt
* removed cloud.py from py24 tests, as it depends on boto3
* fix relative imports
* updated test to be 2.6 compat
* updated method name from retry to backoff
* readded lxd
* Updated default backoff from 2 seconds to 1.1s.
This will be about a total of 48 seconds in 10 tries. This is
configurable.
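A hedged sketch of how the decorator is meant to be used; the AWSRetry subclass and method names follow the pattern described above but are assumptions, not necessarily the shipped code:

    from botocore.exceptions import ClientError
    from ansible.module_utils.cloud import CloudRetry

    class AWSRetry(CloudRetry):
        base_class = ClientError

        @staticmethod
        def status_code_from_exception(error):
            return error.response['Error']['Code']

        @staticmethod
        def found(response_code):
            # Retry only on throttling-style status codes.
            return response_code in ('Throttling', 'RequestLimitExceeded')

    # Retries up to 10 times, delays growing by the 1.1 backoff factor
    # (roughly 48 seconds in total with a 3 second initial delay).
    @AWSRetry.backoff(tries=10, delay=3, backoff=1.1)
    def describe_instances(client):
        return client.describe_instances()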