* Change example syntax on supervisorctl module
* Change example syntax on _ec2_ami_search module
* Change example syntax on cloudformation module
* Change example syntax on ec2 module
* Change example syntax on ec2_facts module
* Change example syntax on ec2_eip module
* Change example syntax on rds module
* Change example syntax on route53 module
* Change example syntax on s3 module
* Change example syntax on digital_ocean module
* Change example syntax on docker_service module
* Change example syntax on cloudformation module
* Change example syntax on gc_storage module
* Change example syntax on gce module
* Change example syntax on gce_mig module
* Change example syntax on _glance_image module
* Change example syntax on _keystone_user module
* Change example syntax on _nova_keypair module
* Change example syntax on _quantum_floating module
* Change example syntax on _quantum_floating_ip_associate module
* Change example syntax on _quantum_network module
* Change example syntax on _quantum_router module
* Change example syntax on _quantum_router_gateway module
* Change example syntax on _quantum_router_interface module
* Change example syntax on _quantum_subnet module
* SQUASH _quantum_subnet
* Add missing quotes
The docker container module's `exposed_ports` description was slightly ambiguous.
Use the official Docker documentation to define what an `exposed port`
is.
Resolves: ansible/ansible-modules-core#5303
Signed-off-by: Daniel Andrei Minca <mandrei17@gmail.com>
- Removed required_if.
- Fixed doc strings.
- Removed debug output being appended to actions.
- Put import of basics at the bottom to be consistent with other docker modules.
- Added 'containers' alias to 'connected' param.
- Put facts in ansible_facts.ansible_docker_network.
The "Developing Modules" documentation states:
Include a minimum of dependencies if possible. If there are
dependencies, document them at the top of the module file, and have
the module raise JSON error messages when the import fails.
When docker_service runs on a remote host without PyYAML it crashes with
ImportError.
This patch raises a JSON error message when import fails, but only if
the PyYAML module is actually used. It's only needed when the
"definition" parameter is given.
Signed-off-by: Stefan Hajnoczi <stefanha@gmail.com>
Reading the entire tar file into memory can result in out-of-memory
conditions such as this traceback:
Traceback (most recent call last):
File "/tmp/ansible_YELTSu/ansible_module_docker_image.py", line 486, in load_image
self.client.load_image(image_data)
File "/usr/local/lib/python2.7/dist-packages/docker/api/image.py", line 147, in load_image
res = self._post(self._url("/images/load"), data=data)
...
File "/usr/lib/python2.7/httplib.py", line 997, in endheaders
self._send_output(message_body)
File "/usr/lib/python2.7/httplib.py", line 848, in _send_output
msg += message_body
MemoryError
Luckily docker-py's load_image(), which calls requests post(), accepts a
file-like object instead of a string. Pass in the file object to avoid
reading the full file into memory. This allows larger tar files to load
successfully.
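A sketch of the streaming approach this commit describes; the helper name and the omitted error handling are illustrative:

```python
import docker


def load_image_from_tar(archive_path, base_url='unix://var/run/docker.sock'):
    client = docker.Client(base_url=base_url)
    # Pass the file object itself: docker-py's load_image() accepts a
    # file-like object, so the tarball is streamed instead of being read
    # into one large in-memory string.
    with open(archive_path, 'rb') as image_tar:
        client.load_image(image_tar)
```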
Signed-off-by: Stefan Hajnoczi <stefanha@gmail.com>
* This moves the lines in the code that parse the `env` and `env_file` options for docker to the end of the `__init__()` function.
This is needed because the `_check_capabilities` function needs both a working `self.client` and a proper `self.docker_py_versioninfo`.
`_check_capabilities` is used by `ensure_capabilities`, which is, in turn, used by `get_environment`.
This means that before this commit, the environment variables could not be loaded, because neither `self.client` nor `self.docker_py_versioninfo` had been set at that point.
This commit fixes that by parsing the environment variables after those two are set (see the sketch below).
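A minimal, self-contained sketch of the initialization order this change enforces; the class and helper bodies are stand-ins, not the docker module's actual code:

```python
class DockerManagerSketch(object):
    """Illustrates why env parsing must come after client/version setup."""

    def __init__(self, params):
        # Set up the attributes that capability checks depend on first.
        self.client = object()                   # stand-in for docker.Client(...)
        self.docker_py_versioninfo = (1, 10, 6)  # stand-in for the detected version
        # Parse env/env_file last: get_environment() -> ensure_capability()
        # -> _check_capabilities() reads both attributes set above.
        self.environment = self.get_environment(params.get('env'),
                                                params.get('env_file'))

    def _check_capabilities(self, min_version):
        return self.client is not None and self.docker_py_versioninfo >= min_version

    def ensure_capability(self, name, min_version):
        if not self._check_capabilities(min_version):
            raise RuntimeError("docker-py is too old for %s" % name)

    def get_environment(self, env, env_file):
        self.ensure_capability('env_file', (0, 6, 0))
        environment = {}
        if env_file:
            pass  # the real module parses the env file here
        if env:
            environment.update(env)
        return environment


manager = DockerManagerSketch({'env': {'FOO': 'bar'}, 'env_file': None})
print(manager.environment)  # {'FOO': 'bar'}
```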
* Adding docker_container
* If state absent, stop the container before attempting to remove. Fixed status running check.
* If container absent, stop before removing. Fix container status check.
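A rough docker-py sketch of the "stop before remove" behaviour described above; the function name and the omitted error handling are illustrative:

```python
import docker


def ensure_absent(client, name):
    # Stop a running container first, then remove it.
    container = client.inspect_container(name)
    if container['State'].get('Running'):
        client.stop(name)
    client.remove_container(name)

# client = docker.Client(base_url='unix://var/run/docker.sock')
# ensure_absent(client, 'my-container')
```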
Apologies, but I no longer use this module day-to-day myself, and I don't have the bandwidth right now to effectively triage changes in any kind of timely fashion.
Hello!
I wanted to stop containers matched only by image name, but I can't do this unless I also set cmd in the playbook.
This behavior confused me.
If cmd or entrypoint is defined for a running container but not defined in the playbook, matching behaves as in this sample:
https://github.com/ansible/ansible-modules-core/blob/devel/cloud/docker/docker.py#L463
restart_containers(containers.running) may try to restart containers
that are deleted when looping through get_differing_containers().
Fix this by refreshing the list after the first loop.
Ulimits are specified as a list of colon-separated values (for example,
nofile:1024:2048). The hard limit is optional; if omitted, it is equal to
the soft limit. The ulimits are compared to the ulimits of the container
and added or adjusted accordingly by a reload.
The module ensures that ulimits are available in the capabilities
if and only if ulimits is passed as a parameter.
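A small sketch of the colon-separated format described above; the parser and the dict layout are illustrative, not the module's exact implementation:

```python
def parse_ulimit(spec):
    # spec looks like "nofile:1024:2048" or "nofile:1024" (hard defaults to soft)
    parts = spec.split(':')
    name, soft = parts[0], int(parts[1])
    hard = int(parts[2]) if len(parts) > 2 else soft
    return {'name': name, 'soft': soft, 'hard': hard}


print(parse_ulimit('nofile:1024'))       # {'name': 'nofile', 'soft': 1024, 'hard': 1024}
print(parse_ulimit('nofile:1024:2048'))  # {'name': 'nofile', 'soft': 1024, 'hard': 2048}
```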
Previously the module's logging handling hard-coded the default logging driver. This means
that if the docker daemon is started with a different logging driver, the ansible
module would continually restart the container when run.
This fix adds a call to docker.Client.info(), which is inspected when a logging
driver is not supplied in the playbook; the container is only restarted if
the logging driver in use differs from the daemon's configured default.
In practice, this has solved issues with using alternative logging drivers.
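A sketch of the comparison this fix introduces; the helper name and the surrounding structure are illustrative:

```python
import docker


def log_driver_changed(client, container_name, requested_driver=None):
    # Fall back to the daemon's own default (docker.Client.info()) instead of
    # a hard-coded driver name when the playbook does not request one.
    expected = requested_driver or client.info().get('LoggingDriver')
    container = client.inspect_container(container_name)
    current = container['HostConfig']['LogConfig'].get('Type')
    # Only report a difference (and thus trigger a restart) when drivers differ.
    return current != expected
```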