* add validated content plugin support
Signed-off-by: Rohit Thakur <rohitthakur2590@outlook.com>
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
Signed-off-by: Rohit Thakur <rohitthakur2590@outlook.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* Fix filter and test documentation syntax errors.
Most cases were strings that were YAML-parsable as dictionaries due to colons,
sometimes of the form `str: str`, and sometimes of the form `str: list`.
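A hypothetical illustration of this class of fix (the key and text are invented for this example, not taken from the actual diff):

```yaml
# Before: the unquoted colon makes YAML parse the value as a one-key mapping
# short_description: Test whether an address is valid: a boolean check
# After: quoting keeps the whole value a single string
short_description: "Test whether an address is valid: a boolean check"
```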
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* Add Unit Tests To Capture Failures From 'subnet' Generator
The netaddr library returns a generator for the 'subnet' call. This works well until larger networks are used: while it is uncommon to exhaust the generator in IPv4 usage, it is trivial to hit the problem in IPv6.
* Switch To Calculating Networking Information Directly For Performance
This replaces the inefficient generator for 'subnet' with direct arithmetic to determine the result. Since a list is not returned directly to the client in the implemented cases, this works well and is fast.
A further optimization of this logic might be to break the different cases the filter implements out into separate functions. I did not do this yet because I wanted feedback on this direction.
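The direct-calculation idea can be sketched with the standard-library `ipaddress` module (the actual code uses netaddr; `nth_subnet` is an invented helper name for illustration, not the collection's implementation):

```python
import ipaddress


def nth_subnet(network: str, new_prefix: int, index: int) -> str:
    """Compute the index-th subnet of `network` at `new_prefix` arithmetically,
    without iterating a generator over every possible subnet."""
    net = ipaddress.ip_network(network)
    if new_prefix < net.prefixlen:
        raise ValueError("new prefix must not be shorter than the network prefix")
    total = 2 ** (new_prefix - net.prefixlen)
    if index >= total:
        raise IndexError("subnet index out of range")
    # Each subnet spans a fixed number of addresses, so the nth network
    # address is a single multiplication away from the base address.
    step = 2 ** (net.max_prefixlen - new_prefix)
    base = int(net.network_address) + index * step
    return str(ipaddress.ip_network((base, new_prefix)))
```

For an IPv6 network like 2001:db8::/32 split into /64s, the generator approach may have to walk billions of entries to reach a given index, while the arithmetic form is constant time.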
* Changelog Fragment For PR / Bugfix
Adding changelog fragment that references source issue.
* Dropping Python 3.7 Bypass Removes Need For 'sys' Module
A test for ipsubnet was bypassed under Python 3.7 because its return value was inconsistent with 3.6 and 2.7. I removed the bypass and changed the behavior of the filter to raise an AnsibleFilterError in all versions of Python.
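A minimal sketch of the version-independent error behavior described above, using a stand-in exception class so the snippet is self-contained (the real filter raises `ansible.errors.AnsibleFilterError`; `subnet_count` is an illustrative helper, not the collection's code):

```python
import ipaddress


class AnsibleFilterError(Exception):
    """Stand-in for ansible.errors.AnsibleFilterError."""


def subnet_count(network: str, new_prefix: int) -> int:
    """Return how many `new_prefix` subnets fit in `network`, raising the
    same error type on every Python version instead of returning a
    version-dependent sentinel value."""
    try:
        net = ipaddress.ip_network(network)
    except ValueError as exc:
        raise AnsibleFilterError(f"invalid network {network!r}: {exc}") from exc
    if new_prefix < net.prefixlen:
        raise AnsibleFilterError(
            f"prefix /{new_prefix} is shorter than the network prefix /{net.prefixlen}"
        )
    return 2 ** (new_prefix - net.prefixlen)
```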
* Add A Pair of Integration Tests
These demonstrate the issue with the current implementation and would normally stall out while building the list of possible subnets from the generator.
* Address Changelog Feedback
I kept the performance item as a bugfix but bumped the typing to a minor change.
* Add 'netaddr' To Integration Test 'requirements.txt'
* Running `ansible-test integration --docker` requires this line in requirements.txt for the 'netaddr'-related tests to pass
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Replace str -> to_text
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Kate Case <this.is@katherineca.se>
* Enable update docs
* Remove foo
* Changelog
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Move check_mode setting to the proper location
ISSUE TYPE
Test Pull Request
Reviewed-by: Nilashish Chakraborty <nilashishchakraborty8@gmail.com>
* Fix validate-module sanity
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Bradley A. Thornton <bthornto@redhat.com>
Add basic configurations for pytest, flake8 and black
Move the flake8 config out of the tox file and into the flake8 config file; this allows flake8 to be used from an IDE
Add a gitignore entry for collections; this will allow for GHA CI pytest testing of unit tests in the future
Add a pyproject.toml file for the black and pytest config
Ignore several pylance and flake8 issues
Remove duplicate config from tox.ini
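An illustrative sketch of the pyproject.toml shape this describes; the section names follow each tool's documented conventions, but the concrete values here are assumptions, not the PR's actual settings:

```toml
# black reads its settings from [tool.black]
[tool.black]
line-length = 100

# pytest reads its settings from [tool.pytest.ini_options]
[tool.pytest.ini_options]
testpaths = ["tests"]
```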
Reviewed-by: GomathiselviS <None>
Add `__init__.py` files to tests directory
The 60 new exclusions are because sanity currently doesn't allow for a docstring in an init: ansible/ansible#77506
Reviewed-by: Nilashish Chakraborty <nilashishchakraborty8@gmail.com>
Reviewed-by: Sagar Paul <sagpaul@redhat.com>
Consolidate filter plugin
SUMMARY
Consolidate filter plugin
This plugin presents structured data that consolidates all supplied facts, grouped on the common attributes specified.
ISSUE TYPE
New Module Pull Request
COMPONENT NAME
ansible.utils.consolidate
ADDITIONAL INFORMATION
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Define some test data
      ansible.builtin.set_fact:
        values:
          - name: a
            value: 1
          - name: b
            value: 2
          - name: c
            value: 3
        colors:
          - name: a
            color: red
          - name: b
            color: green
          - name: c
            color: blue
    - name: Define some test data
      ansible.builtin.set_fact:
        base_data:
          - data: "{{ values }}"
            match_key: name
            name: values
          - data: "{{ colors }}"
            match_key: name
            name: colors
    - name: Consolidate the data source using the name key
      ansible.builtin.set_fact:
        consolidated: "{{ data_sources|ansible.utils.consolidate }}"
      vars:
        sizes:
          - name: a
            size: small
          - name: b
            size: medium
          - name: c
            size: large
        additional_data_source:
          - data: "{{ sizes }}"
            match_key: name
            name: sizes
        data_sources: "{{ base_data + additional_data_source }}"
    # consolidated:
    #   a:
    #     colors:
    #       color: red
    #       name: a
    #     sizes:
    #       name: a
    #       size: small
    #     values:
    #       name: a
    #       value: 1
    #   b:
    #     colors:
    #       color: green
    #       name: b
    #     sizes:
    #       name: b
    #       size: medium
    #     values:
    #       name: b
    #       value: 2
    #   c:
    #     colors:
    #       color: blue
    #       name: c
    #     sizes:
    #       name: c
    #       size: large
    #     values:
    #       name: c
    #       value: 3
    - name: Consolidate the data source using different keys
      ansible.builtin.set_fact:
        consolidated: "{{ data_sources|ansible.utils.consolidate }}"
      vars:
        sizes:
          - title: a
            size: small
          - title: b
            size: medium
          - title: c
            size: large
        additional_data_source:
          - data: "{{ sizes }}"
            match_key: title
            name: sizes
        data_sources: "{{ base_data + additional_data_source }}"
    # consolidated:
    #   a:
    #     colors:
    #       color: red
    #       name: a
    #     sizes:
    #       size: small
    #       title: a
    #     values:
    #       name: a
    #       value: 1
    #   b:
    #     colors:
    #       color: green
    #       name: b
    #     sizes:
    #       size: medium
    #       title: b
    #     values:
    #       name: b
    #       value: 2
    #   c:
    #     colors:
    #       color: blue
    #       name: c
    #     sizes:
    #       size: large
    #       title: c
    #     values:
    #       name: c
    #       value: 3
    - name: Consolidate the data source using the name key (fail_missing_match_key)
      ansible.builtin.set_fact:
        consolidated: "{{ data_sources|ansible.utils.consolidate(fail_missing_match_key=True) }}"
      ignore_errors: true
      vars:
        sizes:
          - size: small
          - size: medium
          - size: large
        additional_data_source:
          - data: "{{ sizes }}"
            match_key: name
            name: sizes
        data_sources: "{{ base_data + additional_data_source }}"
    # fatal: [localhost]: FAILED! => {
    #     "msg": "Error when using plugin 'consolidate': 'fail_missing_match_key'
    #     reported Missing match key 'name' in data source 2 in list entry 0,
    #     Missing match key 'name' in data source 2 in list entry 1,
    #     Missing match key 'name' in data source 2 in list entry 2"
    # }
    - name: Consolidate the data source using the name key (fail_missing_match_value)
      ansible.builtin.set_fact:
        consolidated: "{{ data_sources|ansible.utils.consolidate(fail_missing_match_value=True) }}"
      ignore_errors: true
      vars:
        sizes:
          - name: a
            size: small
          - name: b
            size: medium
        additional_data_source:
          - data: "{{ sizes }}"
            match_key: name
            name: sizes
        data_sources: "{{ base_data + additional_data_source }}"
    # fatal: [localhost]: FAILED! => {
    #     "msg": "Error when using plugin 'consolidate': 'fail_missing_match_value'
    #     reported Missing match value c in data source 2"
    # }
    - name: Consolidate the data source using the name key (fail_duplicate)
      ansible.builtin.set_fact:
        consolidated: "{{ data_sources|ansible.utils.consolidate(fail_duplicate=True) }}"
      ignore_errors: true
      vars:
        sizes:
          - name: a
            size: small
          - name: a
            size: small
        additional_data_source:
          - data: "{{ sizes }}"
            match_key: name
            name: sizes
        data_sources: "{{ base_data + additional_data_source }}"
    # fatal: [localhost]: FAILED! => {
    #     "msg": "Error when using plugin 'consolidate': 'fail_duplicate' reported Duplicate values in data source 2"
    # }
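The grouping behavior shown in the examples above can be sketched in a few lines of Python. This is an illustrative reduction only, not the plugin's actual implementation: the real filter also validates its inputs and honors the fail_* options.

```python
def consolidate(data_sources):
    """Group entries from several data sources on each source's match_key.

    Each source is a dict with 'name', 'match_key', and 'data' (a list of
    dicts); entries sharing the same match-key value are merged under that
    value, keyed by the source name."""
    result = {}
    for source in data_sources:
        key = source["match_key"]
        for entry in source["data"]:
            result.setdefault(entry[key], {})[source["name"]] = entry
    return result
```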
Reviewed-by: Ashwini Mhatre <mashu97@gmail.com>
Reviewed-by: Sagar Paul <sagpaul@redhat.com>
Reviewed-by: Bradley A. Thornton <bthornto@redhat.com>