If sftp fails, fall back to scp by default. This saves users
from having to know about the scp_if_ssh setting when sftp is broken
on the remote host.
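A minimal sketch of the fallback idea (illustrative only, not the plugin's actual implementation; the helper name and use of subprocess are assumptions):

    import subprocess

    def transfer(host, in_path, out_path):
        # Try the preferred sftp transfer first.
        batch = ('put %s %s\n' % (in_path, out_path)).encode()
        sftp = subprocess.run(['sftp', '-b', '-', host], input=batch)
        if sftp.returncode == 0:
            return
        # sftp failed (e.g. the sftp subsystem is disabled on the remote
        # host), so retry the same copy with scp before giving up.
        subprocess.run(['scp', in_path, '%s:%s' % (host, out_path)], check=True)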
We couldn't copy to_unicode, to_bytes, to_str into module_utils because
of licensing. So once we created them, we had two sets of functions that did
the same things but had different implementations. To remedy that, this
change removes the ansible.utils.unicode versions of those functions.
This fix prevents a broken pipe exception from occurring when password-less
SSH is configured and the sshpass process exits and closes the pipe before
the password is written to the pipe.
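A minimal sketch of the guard, assuming a pipe fd that the sshpass process reads the password from (names here are illustrative):

    import errno, os

    def write_password(fd, password):
        # sshpass may exit (and close its end of the pipe) before we get
        # around to writing the password, e.g. when password-less SSH is
        # already configured; treat the resulting EPIPE as harmless.
        try:
            os.write(fd, password + b'\n')
            os.close(fd)
        except OSError as e:
            if e.errno != errno.EPIPE:
                raise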
AIX ssh does not seem to like compression, so it has been moved into
ssh_args to make it configurable. Note that those already setting
ssh_args will need to add it explicitly to keep compression.
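For example, users who override ssh_args and still want compression would need something like the following in ansible.cfg (-C is ssh's compression flag; the other options shown are just the usual ControlPersist defaults):

    [ssh_connection]
    ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s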
Due to an apparent race condition while using ptys on a heavily loaded
system, a request to create a temp directory occasionally returns an empty
string rather than the newly created path, causing an error. Disabling
forced ptys appears to resolve the issue, so this patch modifies the
mkdtemp remote call not to use -tt, as we're not escalating privileges and
thus no pty is required.
Fixes #13876
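As a rough sketch of the idea (not the plugin's actual code; names are hypothetical), the forced-tty flag is only added when the remote side actually needs it:

    def build_ssh_args(host, remote_cmd, needs_tty):
        # Only force pseudo-tty allocation (-tt) when something on the
        # remote side (e.g. sudo with 'Defaults requiretty') needs it;
        # housekeeping calls like mkdtemp can do without one.
        args = ['ssh']
        if needs_tty:
            args.append('-tt')
        args += [host, remote_cmd]
        return args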
We were logging the command to be executed many times, which made debug
logs very hard to read. Now we do it only once.
This also makes the logged ssh command line cut-and-paste-able (the lack of
which has confused a number of people by now; the problem is that we
pass the command as a single argument to execve(), so it doesn't need the
extra level of quoting it would need if you tried to run it by hand).
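A minimal sketch of how the displayed command can be made paste-able while the executed command stays a plain argument list (illustrative, not the plugin's exact code):

    import shlex

    def display_command(args):
        # Quote each argument only for the human-readable log line; the
        # executed command is still passed as an argument list, so it
        # needs no extra quoting itself.
        return ' '.join(shlex.quote(a) for a in args)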
Pipelining is a *significant* performance benefit, because each task can
be completed with a single SSH connection (vs. one ssh connection at the
start to mkdir, plus one sftp and one ssh per task).
Pipelining is disabled by default in Ansible because it conflicts with
the use of sudo if 'Defaults requiretty' is set in /etc/sudoers (as it
is on Red Hat) and su (which always requires a tty).
We can (and already do) make sudo/su happy by using "ssh -t" to allocate
a tty, but then the python interpreter goes into interactive mode and is
unhappy with module source being written to its stdin, per the following
comment from connections/ssh.py:
# we can only use tty when we are not pipelining the modules.
# piping data into /usr/bin/python inside a tty automatically
# invokes the python interactive-mode but the modules are not
# compatible with the interactive-mode ("unexpected indent"
# mainly because of empty lines)
Instead of the (current) drastic solution of turning off pipelining when
we use a tty, we can instead use a tty but suppress the behaviour of the
Python interpreter to switch to interactive mode. The easiest way to do
this is to make its stdin *not* be a tty, e.g. with cat|python.
This works, but there's a problem: ssh will ignore -t if its input isn't
really a tty. So we could open a pseudo-tty and use that as ssh's stdin,
but if we then write Python source into it, it's all echoed back to us
(because we're a tty). So we have to use -tt to force tty allocation; in
that case, however, ssh puts the tty into "raw" mode (~ICANON), so there
is no good way for the process on the other end to detect EOF on stdin.
So if we do:
echo -e "print('hello world')\n"|ssh -tt someho.st "cat|python"
…it hangs forever, because cat keeps on reading input even after we've
closed our pipe into ssh's stdin. We can get around this by writing a
special __EOF__ marker after writing in_data, and doing this:
echo -e "print('hello world')\n__EOF__\n"|ssh -tt someho.st "sed -ne '/__EOF__/q' -e p|python"
This works fine, but in fact I use a clever python one-liner by mgedmin
to achieve the same effect without depending on sed (at the expense of a
much longer command line, alas; Python really isn't one-liner-friendly).
We also enable pipelining by default as a consequence.
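The actual one-liner isn't reproduced here; purely as an illustration of the idea (the __EOF__ marker is from the examples above, the rest is a hypothetical sketch), the remote wrapper does roughly this instead of sed|python:

    import sys
    # Read module source from stdin line by line and stop at the __EOF__
    # marker (needed because the raw-mode tty never delivers a real EOF),
    # then hand the accumulated source to the interpreter.
    source = []
    for line in sys.stdin:
        if line.strip() == '__EOF__':
            break
        source.append(line)
    exec(''.join(source))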
If we request escalation with a password, we start in expecting_prompt
state. If the escalation then succeeds without the password, i.e., the
become_success response arrives, we must explicitly move into the next
state (awaiting_escalation, which immediately goes into ready_to_send),
so that we no longer try to apply the timeout.
Otherwise, we would leak the success notification and eventually
time out. But if the module response did arrive before the timeout
expired, the "process has already exited" test would do the right
thing by accident (which is why it didn't fail more often).
Fixes #13289
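A rough sketch of the state handling described above (the state names follow the commit message; the code itself is an illustration, not the plugin's implementation):

    # States: expecting_prompt -> awaiting_escalation -> ready_to_send
    def handle_output(state, output, become_success_marker):
        # If escalation succeeds without ever asking for the password
        # (e.g. passwordless sudo), advance past expecting_prompt so the
        # prompt timeout no longer applies.
        if state == 'expecting_prompt' and become_success_marker in output:
            state = 'awaiting_escalation'
        if state == 'awaiting_escalation':
            # Nothing further to wait for; the module payload can be sent.
            state = 'ready_to_send'
        return state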
It was set to match the SSH connect timeout. Unfortunately, the two would
race when ssh failed to connect, and the connect timeout usually failed first.
This led to some misleading error messages.
Fixes #12916
Also remove the condition that bypassed setting the user when it matched
the current user. This makes it possible to force the user even when it is
set to the same value as the current user (ignoring .ssh/config), while
still honouring .ssh/config for the current user when nothing is specified.
The earlier code behaved exactly as though this default had been set,
but it was actually handled as a(n unnecessary) special case inside the
connection plugin, rather than set as an explicit default.
If the default is overridden either in ansible.cfg or the environment,
the new code will continue to work (in fact, it won't know or care,
since it just uses the value set in the PlayContext).
This is submitted as a separate commit for easier review to address
backwards-compatibility concerns.
Using set_host_overrides() in the connection plugin to access the ssh
argument variables from the inventory didn't see group_vars/host_vars
settings, as noted earlier. Instead, we can set the correct values in
the PlayContext, which has access to all command-line options, task
settings, and variables.
The only downside of doing so is that the source of the settings is no
longer available in ssh.py, and therefore can't be logged. But the code
is simpler, and it actually works.
This change was suggested by @jimi-c in response to the FIXME in the
earlier commit.
Now we have the following ways to set additional arguments:
1. [ssh_connection]ssh_args in ansible.cfg: global setting, prepended to
every command line for ssh/scp/sftp. Overrides default ControlPersist
settings.
2. ansible_ssh_common_args inventory variable. Appended to every command
line for ssh/scp/sftp. Used in addition to ssh_args, if set above, or
the default settings.
3. ansible_{sftp,scp,ssh}_extra_args inventory variables. Appended to
every command line for the relevant binary only. Used in addition to
#1 and #2, if set above, or the default settings.
4. Using the --ssh-common-args or --{sftp,scp,ssh}-extra-args command
line options (which are overridden by #2 and #3 above).
This preserves backwards compatibility (for ssh_args in ansible.cfg),
but also permits global settings (e.g. ProxyCommand via _common_args) or
ssh-specific options (e.g. -R via ssh_extra_args).
Fixes #12576
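For example, a common use of #2 above (not taken from this change, just an illustration) is to route a group of hosts through a jump host via ProxyCommand in the inventory:

    [gatewayed_hosts:vars]
    ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p -q jumphost.example.com"'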
This is also peripheral to what _build_command needs, can be improved
and tested independently, and so makes more sense in a separate method.
This commit doesn't change any functionality (and I've verified that it
works with the various combinations: control_path set in ansible.cfg,
ssh_args adding or not adding ControlMaster/ControlPersist, etc.).
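As a rough sketch of the kind of helper meant here (the name and exact checks are illustrative, not necessarily the actual method):

    def persistence_controls(ssh_args):
        # Report whether the configured ssh_args already set up
        # ControlMaster/ControlPersist, so the caller knows whether it
        # still needs to add its own ControlPath and persistence options.
        controlmaster = any('controlmaster' in a.lower() for a in ssh_args)
        controlpersist = any('controlpersist' in a.lower() for a in ssh_args)
        return controlmaster, controlpersist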
There doesn't appear to be anything that actually uses tmp_path in the
connection plugins so we don't need to pass that in to exec_command.
That change also means that we don't need to pass tmp_path around in
many places in the action plugins any more. There may be more cleanup
that can be done there as well (the action plugin's public run() method
takes tmp as a keyword arg, but that may not be necessary).
As a side effect of this patch, some potential problems with chmod in
the patch, assemble, copy, and template modules have been fixed (those
modules called _remote_chmod() with their parameters in the wrong order;
removing the tmp parameter fixed them).
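Sketching the shape of the change (a simplified illustration, not the plugin API verbatim):

    class Connection:
        # exec_command no longer takes a tmp_path parameter, since no
        # connection plugin ever used it; only the command, optional
        # stdin data, and the sudoable flag remain.
        def exec_command(self, cmd, in_data=None, sudoable=True):
            # ... run cmd over ssh, optionally feeding in_data on stdin ...
            return 0, b'', b''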
The process is already gone, so there's not going to be any new data
showing up on its stderr; we only want to make sure that we haven't
missed something that was already written. So polling once is enough.
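A minimal sketch of that single, non-blocking check of the dead process's stderr (variable names are illustrative; proc is assumed to be a subprocess.Popen with stderr=PIPE):

    import os, select

    def drain_stderr(proc):
        # The child has already exited, so anything it ever wrote to
        # stderr is either in the pipe now or never coming; one
        # zero-timeout poll is therefore enough.
        data = b''
        rfds, _, _ = select.select([proc.stderr], [], [], 0)
        if proc.stderr in rfds:
            data = os.read(proc.stderr.fileno(), 65536)
        return data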