* dynamic role_include
* more fixes for dynamic include roles
* set play from iterator when dynamic
* changes from jimi-c
* avoid modules that break ad hoc
TODO: this should really be a config option
* adds squashing to objects, which allows them to be squashed down
to a final "view" before post_validate to avoid expensive evaluations
of parent attributes
* attempt #11 at role_include
* fixes from jimi-c
* do not override load_data, move all to load
* removed debugging
* implemented tasks_from parameter, must break cache
* fixed issue with cache and tasks_from
* make resolution of tasks_from prioritize literal
* avoid role dependency dedupe when include_role
* fixed role deps; handlers are now loaded
* simplified code, enabled k=v parsing; used example from jimi-c
* load role defaults for task when include_role
* fixed issue with tasks_from overriding all subdirs
* corrected priority order of main candidates
* made tasks_from a more generic interface to roles
* fix block inheritance and handler order
* allow vars: clause into included role
* pull vars already processed vs from raw data
* fix from jimi-c for blocks I broke
* added back append for dynamic includes
* only allow a basename in the from parameter
* fix for docs when no default
* fixed notes
* added include_role to changelog
Ran task_executor through python-modernize and then made the changes to the
code it pointed out:
* Most places where we looped through dict.keys() were changed to
for key in dict:
Calling keys() in Python 2 creates a list() of keys. For iterating, we
can loop over the dict itself and be handed back each key. In Python 3,
doing it this way also does not create a new list and is thus more
memory efficient. (Both patterns are sketched after this list.)
* In one place, use:
for key in list(dict.keys()):
because we're deleting elements from the dictionary inside the loop,
so we really do need to iterate over a separate list of the keys to
avoid modifying the dictionary that we're iterating over.
(Fixes a Python 3 bug)
* In one place, change the order of an if-elif-else tree so that the
most frequent cases are evaluated first. (Optimization)
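A minimal sketch of the two iteration patterns above (the dict contents are illustrative):
```
counts = {'a': 1, 'b': 0, 'c': 2}

# Iterating over the dict itself hands back each key without building
# an intermediate list (which dict.keys() would allocate in Python 2).
for key in counts:
    print(key, counts[key])

# When deleting entries inside the loop, a separate list of the keys is
# required so the dict being iterated over is not modified mid-iteration.
for key in list(counts.keys()):
    if counts[key] == 0:
        del counts[key]
```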
The calculation for max_fail_percentage was moved into the linear
strategy a while back, and works better in the strategy layer than
at the PlaybookExecutor (PBE) layer. This patch removes it from the PBE layer
and tweaks the logic controlling whether or not the next batch is run.
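Schematically, the per-batch gate now lives with the strategy results; a hedged sketch (the names and exact comparison are illustrative, not the real implementation):
```
def batch_within_failure_limit(num_failed, batch_size, max_fail_percentage):
    # Decide whether the next serial batch may run, based on how many
    # hosts failed in the batch that just completed.
    return (num_failed / float(batch_size)) * 100 <= max_fail_percentage
```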
Fixes #15954
The new_pb_basedir flag is not used in Inventory._get_hostgroup_vars,
leading to a situation where an inventory with no playbook basedir set will
read host/group vars from the $CWD, regardless of the relative location of
the inventory and/or playbook. This patch corrects that by not using the
playbook basedir if it is unset (None).
This patch also corrects a bug in which the VariableManager would accumulate
host/group vars files, which could lead to incorrect vars files being used when
playbooks are run from different directories containing their own group/host vars
directories.
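A minimal sketch of the guard, with hypothetical names:
```
def hostgroup_vars_dirs(inventory_basedir, playbook_basedir):
    # Only treat the playbook basedir as a vars source when it is
    # actually set; an unset (None) basedir must not silently fall
    # back to the current working directory.
    dirs = [inventory_basedir]
    if playbook_basedir is not None:
        dirs.append(playbook_basedir)
    return dirs
```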
Fixes #16953
We want to NOT consider the async task as failed if the result is
not parsed, which was the intent of:
https://github.com/ansible/ansible/pull/16458
However, the logic doesn't actually do that because we default
the 'parsed' value to True. It should default to False so that
we continue waiting, as intended.
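A minimal sketch of the corrected default (the result dict shape is illustrative):
```
def still_waiting(async_result):
    # Defaulting 'parsed' to False means an unparsed result is treated
    # as "not finished yet", so polling continues instead of the task
    # being marked failed.
    return not async_result.get('parsed', False)
```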
Instead of immediately returning a failed code (indicating a break in
the play execution), we internally 'or' that failure code with the result
(now an integer flag instead of a boolean) so that we can properly handle
the rescue/always portions of blocks and still remember that the break
condition was hit.
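Schematically, with illustrative flag names:
```
RUN_OK = 0
RUN_FAILED = 1  # break-the-play condition, remembered rather than returned

def run_block(tasks, run_task):
    result = RUN_OK
    for task in tasks:
        # 'or' the failure flag into the running result instead of
        # returning immediately, so the rescue/always portions of
        # blocks still execute and the break condition is preserved.
        result |= run_task(task)
    return result
```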
Fixes #16937
Rather than repeatedly searching for tasks by uuid via iterating over
all known blocks, cache the tasks when they are added to the PlayIterator
so the lookup becomes a simple key check in a dict.
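A hedged sketch of the caching idea (the class and method names are illustrative, not the actual PlayIterator API):
```
class CachedIterator:
    def __init__(self):
        self._task_cache = {}

    def add_tasks(self, tasks):
        # Populate the cache as tasks are added to the iterator.
        for task in tasks:
            self._task_cache[task._uuid] = task

    def get_task(self, uuid):
        # Lookup is a dict key check instead of a scan over all blocks.
        return self._task_cache.get(uuid)
```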
When a task result has an empty results list, the
list should be ignored when determining the results
of `_check_key`. Here the empty list is treated the
same as a non-existent list.
This fixes a bug that manifests itself with squashed
items - namely the task result contains the correct
value for the key, but an empty results list. The
empty results list was treated as zero failures
when deciding which handler to call - so the task
shows as a success in the output, but is deemed to
have failed when deciding whether to continue.
This also demonstrates a mismatch between task
result processing and play iteration.
A test is also added for this case, but it would not
have caught the bug - because the bug is really in
the display, and not the success/failure of the
task (visually the test is more accurate).
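A simplified sketch of the intended behavior (not the real _check_key implementation):
```
def check_key(task_result, key):
    results = task_result.get('results')
    if not results:
        # A missing 'results' list and an empty one are treated the
        # same: fall back to the top-level value for the key.
        return task_result.get(key, False)
    return any(item.get(key, False) for item in results)
```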
Fixes ansible/ansible-modules-core#4214
This feature extends `serial:` to accept a list of values in addition
to a scalar, so batches can be ramped
up (commonly called "canary" setups):
- hosts: all
  serial: [1, 5, 10, "100%"]
  tasks:
    ...
I'm not sure why that would be desirable -- we really want __version__
to come from the controller whereas importing will come from the client
node. If it turns out there was a reason to do that, please be sure to
use an exception handler that catches all exceptions instead of only
catching ImportError:
```
try:
from ansible.release import __version__, __author__
except:
__version__ = [...]
```
Fixes #16523
* fixed lookup search path
added an ansible_search_path var that contains the proper list, in order
removed the roledir var, which was only used by first_found (the rest used role_path)
added a needle function for lookups that mirrors the action plugin one, so now
both types of plugins use the same pathing (sketched after this list).
* added missing os import
* renamed as per feedback
* fixed missing rename in first_found
* also fixed first_found
* fixed import to match new error class
* fixed getattr ref
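An illustrative sketch of the needle idea, assuming an ordered list of directories like the one ansible_search_path exposes:
```
import os

def find_needle(search_path, relative_path):
    # Walk the ordered search path and return the first existing match,
    # mirroring how the action plugins resolve files.
    for basedir in search_path:
        candidate = os.path.join(basedir, relative_path)
        if os.path.exists(candidate):
            return candidate
    return None
```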
Since Ansiballz, we no longer need to import basic directly into
a new-style module. Some modules, like the Networking modules, may
import basic in their own module_utils files and the module will import
that specialized module_utils file rather than basic.
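For illustration, a minimal new-style module just imports what it needs from module_utils (this is the standard import, not the full module boilerplate):
```
from ansible.module_utils.basic import AnsibleModule

def main():
    module = AnsibleModule(argument_spec=dict(name=dict(type='str', required=True)))
    module.exit_json(changed=False, msg='hello %s' % module.params['name'])

if __name__ == '__main__':
    main()
```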
* Don't treat parsing problems as async task timeout
If there is a problem reading/writing the status file that manifests as
not being able to parse the data, that doesn't mean the task timed out;
it means there was likely a temporary problem. Move on and
keep polling for success. The only things that should cause the async
status to not be parseable are bugs in the async_runner.
* Add comment explaining not bailing out of loop
* Return different error when result is unparseable
* Remove extraneous else
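A hedged sketch of reading the status file under these rules (the error types and returned shape are illustrative):
```
import json

def read_async_status(path):
    try:
        with open(path) as f:
            return json.load(f)
    except (IOError, OSError, ValueError):
        # Unreadable or unparseable is not a timeout: report the result
        # as not parsed so the caller keeps polling, and can surface a
        # distinct error if it never becomes parseable.
        return {'parsed': False}
```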
* Instead of rebuilding the handler list all over the place, we now
compile the handlers at the point the play is post-validated so that
the view of the play in the PlayIterator contains the definitive list
* Assign the dep_chain to the handlers as they're compiling, just as we
do for regular tasks
* Clean up the logic used to find a given handler, which is greatly
simplified by the above changes
Fixes #15418
6eefc11c converted task.loop_control into an object, but while the other
callers were updated to use .loop_var instead of .get('loop_var'), this
site was overlooked.
This can be reproduced by including with loop_control a file that does
set_fact; a simple regression test along these lines is included.
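Schematically, where LoopControl is a hypothetical stand-in for the object introduced by 6eefc11c:
```
class LoopControl:
    def __init__(self, loop_var='item'):
        self.loop_var = loop_var

lc = LoopControl()
print(lc.loop_var)    # correct: attribute access on the object
# lc.get('loop_var')  # AttributeError: the old dict-style access
```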
Again, as we're carrying failed/unreachable hosts forward from play to play via
internal structures, we need to remember which ones had previously failed so that
unrelated host failures don't inflate the numbers for a given serial batch in the
PlaybookExecutor, causing a premature exit.
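Schematically, with illustrative names:
```
def batch_failure_count(current_failed, previously_failed):
    # Only hosts that newly failed in this batch should count toward
    # the batch's failure numbers; failures carried forward from
    # earlier plays must not inflate them.
    return len(set(current_failed) - set(previously_failed))
```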
Fixes #16364
The listen statement on handlers should have supported a list, but it
was broken in the revision of the pub/sub feature based on the handler
revamp. This patch corrects the bug, so this works again:
- name: some handler
  ...
  listen:
    - some target
    - another target
Fixes #16378
Due to the fact that roles may be instantiated with different sets of
params (multiple inclusions of the same role or via role dependencies),
simply tracking notified handlers by name does not work. This patch
changes the way we track handler notifications by using the handler
object itself instead of just the name, allowing for multiple internal
instances. Normally this would be bad, but we also modify the way we
search for handlers by first looking at the notifying task's dependency
chain (ensuring that roles find their own handlers first) and then at
the main list of handlers, using the first match it finds.
This patch also modifies the way we setup the internal list of handlers,
which should allow us to correctly identify if a notified handler exists
more easily.
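A hedged sketch of the search order (the names are illustrative):
```
def find_handler(name, dep_chain_handlers, play_handlers):
    # Search the notifying task's role dependency chain first, so a role
    # finds its own handler instance, then fall back to the play's main
    # handler list; the first match wins.
    for handler in list(dep_chain_handlers) + list(play_handlers):
        if handler.name == name:
            return handler
    return None
```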
Fixes #15084
When setuptools installs a Python module (as is done via python setup.py
install), it puts the module into a subdirectory of site-packages and
then creates an entry in easy-install.pth to load that directory. This
makes it difficult for Ansiballz to function correctly as the .pth file
overrides the sys.path that the wrapper constructs. Using
sitecustomize.py fixes this because sitecustomize overrides the
directories handled in .pth files.
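Illustratively, the wrapper can drop in a sitecustomize.py along these lines (the payload path is a placeholder):
```
# sitecustomize.py, shipped alongside the wrapper; it runs at interpreter
# startup, letting the payload directory take precedence over entries
# added via easy-install.pth.
import sys
sys.path.insert(0, '/tmp/ansiballz_payload')  # placeholder payload dir
```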
Fixes #16187