In 159aa26b36
the result of _get_serialized_branches when no hosts were matched
changed from [[]] to []. Thus, the v2_playbook_on_no_hosts_matched
callback would not fire. Change the check so we get the error message
again. Fixes #17706
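A minimal sketch of the behaviour change, assuming only what is stated above about the return value:

    def no_hosts_matched(batches):
        # after 159aa26b36 an empty list (not [[]]) signals "no hosts matched",
        # so that is what the guard has to test before firing the callback
        return len(batches) == 0

    print(no_hosts_matched([[]]))  # old return value -> False
    print(no_hosts_matched([]))    # new return value -> True, callback fires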
* Pass the absolute path to dirname when assigning basedir
If no path is specified when calling the playbook, os.path.dirname(playbook_path) returns ''
This will cause failure when creating the retry file.
Fixes #17456
* Updated to use os.path.dirname(os.path.abspath())
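For illustration, the difference when only a bare filename is given on the command line:

    import os

    playbook_path = 'site.yml'  # hypothetical call with no directory component
    print(os.path.dirname(playbook_path))                   # '' -> retry file creation fails
    print(os.path.dirname(os.path.abspath(playbook_path)))  # absolute path of the CWD instead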
* Add a new config option to cache the check for controlpersist on the
control machine.
Fixes #15844
* Remove the option and make the behavior the default
* Make the check for controlpersist cache its status per-ssh executable
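A sketch of the per-executable cache, with a hypothetical helper name and the probe reduced to its essentials:

    import subprocess

    _HAS_CONTROLPERSIST = {}  # keyed by ssh executable path

    def ssh_supports_controlpersist(ssh_executable):
        """Run the probe once per ssh binary and remember the answer."""
        if ssh_executable not in _HAS_CONTROLPERSIST:
            proc = subprocess.Popen([ssh_executable, '-o', 'ControlPersist'],
                                    stdout=subprocess.PIPE, stderr=subprocess.PIPE)
            _, err = proc.communicate()
            # an ssh too old for ControlPersist complains about the option
            _HAS_CONTROLPERSIST[ssh_executable] = b'Bad configuration option' not in err
        return _HAS_CONTROLPERSIST[ssh_executable]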
We couldn't copy to_unicode, to_bytes, to_str into module_utils because
of licensing. So once we created it, we had two sets of functions that did
the same things but had different implementations. To remedy that, this
change removes the ansible.utils.unicode versions of those functions.
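Assuming the module_utils text helpers are the surviving implementation, call sites end up looking like this:

    from ansible.module_utils._text import to_bytes, to_native, to_text

    print(to_text(b'caf\xc3\xa9'))  # u'café'
    print(to_bytes(u'café'))        # b'caf\xc3\xa9'
    print(to_native(u'café'))       # the native str type of the running Python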
The calculation for max_fail_percentage was moved into the linear
strategy a while back, and works better in the strategy layer than at
the PBE layer. This patch removes it from the PBE layer and tweaks the
logic controlling whether or not the next batch is run.
Fixes #15954
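A sketch of the per-batch check that now lives in the strategy layer (function name is illustrative):

    def batch_exceeded_max_fail(failed_in_batch, batch_size, max_fail_percentage):
        """The batch fails once the share of failed hosts goes above the limit."""
        if max_fail_percentage is None or batch_size == 0:
            return False
        return (failed_in_batch / float(batch_size)) * 100 > max_fail_percentage

    print(batch_exceeded_max_fail(1, 4, 30))  # 25% failed, limit 30% -> False
    print(batch_exceeded_max_fail(2, 4, 30))  # 50% failed, limit 30% -> True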
The flag new_pb_basedir is not being utilized in Inventory._get_hostgroup_vars,
leading to the situation where an inventory with no playbook basedir set will
read host/group vars from the $CWD, regardless of the inventory and/or playbook
relative location. This patch corrects that by not using the playbook basedir
if it is unset (None).
This patch also corrects a bug in which the VariableManager would accumulate
host/group vars files, which could lead to incorrect vars files being used when
playbooks are run from different directories containing their own group/host vars
directories.
Fixes #16953
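A sketch of the guard with hypothetical names; the point is that the playbook basedir is only consulted when it has actually been set:

    import os

    def vars_dirs(inventory_dir, playbook_basedir, new_pb_basedir):
        """Hypothetical helper: directories searched for group_vars/host_vars."""
        basedirs = [inventory_dir]
        if new_pb_basedir and playbook_basedir is not None:  # no CWD fallback when unset
            basedirs.append(playbook_basedir)
        return [os.path.join(b, sub) for b in basedirs
                for sub in ('group_vars', 'host_vars')]

    print(vars_dirs('/etc/ansible', None, True))         # inventory dirs only
    print(vars_dirs('/etc/ansible', '/srv/play', True))  # plus the playbook's own dirs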
Instead of immediately returning a failed code (indicating a break in
the play execution), we internally 'or' that failure code with the result
(now an integer flag instead of a boolean) so that we can properly handle
the rescue/always portions of blocks and still remember that the break
condition was hit.
Fixes #16937
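A sketch of the flag arithmetic (the constant names are illustrative):

    RUN_OK = 0
    RUN_FAILED_HOSTS = 2
    RUN_FAILED_BREAK_PLAY = 8

    result = RUN_OK
    result |= RUN_FAILED_HOSTS  # a failure inside the block body is remembered...
    # ...while the rescue/always portions of the block still run here
    if result != RUN_OK:
        print('break condition was hit; stop play iteration after cleanup')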
This feature changes the scalar value of `serial:` to a list, which
allows users to specify a list of values, so batches can be ramped
up (commonly called "canary" setups):
- hosts: all
  serial: [1, 5, 10, "100%"]
  tasks:
    ...
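A self-contained sketch of how such a list can be turned into batches (helper name and rounding details are illustrative; the last value is reused for any remaining hosts):

    def serialize_batches(hosts, serial_values):
        batches, remaining, idx = [], list(hosts), 0
        while remaining:
            value = serial_values[min(idx, len(serial_values) - 1)]
            if isinstance(value, str) and value.endswith('%'):
                size = int(len(hosts) * float(value[:-1]) / 100) or 1  # never 0
            else:
                size = int(value)
            batches.append(remaining[:size])
            remaining = remaining[size:]
            idx += 1
        return batches

    print(serialize_batches(['h%d' % i for i in range(20)], [1, 5, 10, "100%"]))
    # batches of 1, 5, 10 and then the remaining 4 hosts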
Again, as we're carrying failed/unreachable hosts forward from play to play via
internal structures, we need to remember which ones had previously failed so that
unrelated host failures don't inflate the numbers for a given serial batch in the
PlaybookExecutor, causing a premature exit.
Fixes #16364
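A sketch of the accounting with illustrative names: hosts already failed before the batch started are not counted again:

    def new_failures_in_batch(failed_hosts_now, previously_failed):
        return len(set(failed_hosts_now) - set(previously_failed))

    previously_failed = ['db1']         # failed in an earlier play
    failed_hosts_now = ['db1', 'web3']  # state after this batch
    print(new_failures_in_batch(failed_hosts_now, previously_failed))  # 1, not 2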
* Fix: create retry_files_save_path if it doesn't exist
Ansible documentation states that retry_files_save_path directory will be
created if it does not already exist. It currently doesn't, so this patch
fixes it :)
* Use makedirs_safe to ensure thread-safe dir creation
@bcoca suggested to use the makedirs_safe helper function :)
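Usage of the helper is along these lines (the paths shown are just examples):

    import os
    from ansible.utils.path import makedirs_safe

    retry_dir = os.path.expanduser('~/.ansible/retry-files')  # example value
    makedirs_safe(retry_dir)                                   # create it if missing
    with open(os.path.join(retry_dir, 'site.retry'), 'w') as f:
        f.write('web1\nweb2\n')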
This allows the PlaybookExecutor to receive more information regarding
what happened internal to the TaskQueueManager and strategy, to determine
things like whether or not the play iteration should stop.
Fixes #15523
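An illustrative check on the PlaybookExecutor side (the flag name and value are assumptions):

    RUN_FAILED_BREAK_PLAY = 8  # illustrative flag carried in the TQM return value

    def should_stop_iteration(tqm_result):
        return bool(tqm_result & RUN_FAILED_BREAK_PLAY)

    print(should_stop_iteration(0))                          # False
    print(should_stop_iteration(RUN_FAILED_BREAK_PLAY | 2))  # True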
rm __del__ as it might leak memory
renamed to tmp file cleanup
added exception handling when traversing the file list, so even if one fails the rest are tried
added cleanup to finally to ensure removal in most cases
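A sketch of the cleanup pattern described above (names are illustrative):

    import os

    def cleanup_tmp_files(paths):
        """Try every path, even if removing one of them fails."""
        for path in paths:
            try:
                os.unlink(path)
            except OSError as e:
                print("could not remove %s: %s" % (path, e))

    tmp_paths = []
    try:
        pass  # work that may register temp files in tmp_paths
    finally:
        cleanup_tmp_files(tmp_paths)  # runs on normal exit and on exceptions alike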
When a percentage for a rolling update worked out to less than 1.0,
Ansible would run all hosts in a single batch. The error is due to the
fact that Python truncates a float under 1.0 to 0 on int conversion,
which triggers the 0-host case, and the 0-host case tells Ansible to
run all hosts. The fix checks whether the percentage calculation is 0
after int conversion and falls back to 1 host.
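For example, with the fix the truncated-to-zero case is bumped to a single host:

    serial_pct, num_hosts = 20, 3                  # serial: "20%" with 3 hosts
    batch = int((serial_pct / 100.0) * num_hosts)  # int(0.6) -> 0, the "all hosts" trap
    batch = batch if batch > 0 else 1              # fix: fall back to 1 host
    print(batch)                                   # 1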
- moved to the base CLI class to handle signals centrally and duplicate less code
- now avoids duplicating and re-registering the signal handler by reassigning it
  (a rough sketch follows this list)
- left a note on how to do non-graceful termination in case we add it in the future,
  as I won't remember everything I did here and don't want to 'relearn' it
- adhoc now terminates gracefully
- avoid a race condition on termination by ignoring errors when a worker
  might have been reaped between the active check and the termination call
- ansible-playbook now properly exits on SIGINT/SIGTERM
- adhoc and playbook now let exceptions we should not normally capture propagate,
  and rely on the top-level finally to reap children
- handle SystemExit breaks in workers
- added debug output to see at which frame we exit
partial fix for #14346
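A rough sketch of graceful signal handling of this kind (handler and message are illustrative, not the actual implementation):

    import signal

    def _graceful_exit(signum, frame):
        # raise instead of exiting outright so the top-level finally can reap children
        raise SystemExit("terminated by signal %s" % signum)

    # registered once, centrally, rather than per subcommand
    signal.signal(signal.SIGINT, _graceful_exit)
    signal.signal(signal.SIGTERM, _graceful_exit)

    try:
        pass  # run the adhoc/playbook command here
    finally:
        pass  # reap worker processes here, even when SystemExit was raised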
pushed it to use the existing prompt from display and moved the vars prompt code there also for uniformity
changed vars_prompt to check extra vars vs the empty play.vars to restore 1.9 behaviour
simplified the code as it didn't need to check for syntax again (tqm is made None prior based on that)
Fixes #13770
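A sketch of the precedence rule with hypothetical names: a variable already supplied via --extra-vars suppresses its prompt:

    def resolve_prompted_vars(vars_prompt_entries, extra_vars, ask):
        """Hypothetical helper: extra vars win over vars_prompt (1.9 behaviour)."""
        resolved = {}
        for name, prompt_text in vars_prompt_entries:
            if name in extra_vars:
                continue  # already provided on the command line
            resolved[name] = ask(prompt_text)
        return resolved

    print(resolve_prompted_vars([('release', 'Release tag? ')],
                                {'release': '1.2.3'}, input))  # {} -- nothing to ask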
* Move self._tqm.load_callbacks() earlier to ensure that v2_playbook_on_start can fire
* Pass the playbook instance to v2_playbook_on_start
* Add a _file_name instance attribute to the playbook
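For illustration, a callback plugin relying on these changes might look like this (assuming the standard v2_playbook_on_start hook and the new _file_name attribute):

    from ansible.plugins.callback import CallbackBase

    class CallbackModule(CallbackBase):
        CALLBACK_VERSION = 2.0
        CALLBACK_TYPE = 'notification'
        CALLBACK_NAME = 'log_playbook_start'

        def v2_playbook_on_start(self, playbook):
            # fires early now that load_callbacks() happens before the play loop
            self._display.display("starting playbook %s" % playbook._file_name)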
I inadvertently introduced it in
ca826508d9 and didn't notice, because
there are no unit tests for playbook_executor.py. Sorry!
(The "from ansible.errors import *" was used *only* to get the 'os'
module, which makes one go "what?")