# ssh-agent for remote to remote connections

## paradigm-X

I am using a work-around to get past a small problem with ssh-agent on a LAN, and I am hoping that someone who has already figured out a solution can point me to information on how to fix it properly. For the sake of discussion I will use the designations H1, R1, and R2 to refer to the three machines involved. The goal is to use ssh-agent running on H1 to connect first to R1 and from there to R2. Only public key authentication is used for connecting.

The reason for using the agent is to hold all the keys on H1 and to avoid having to put keys on R1 in order to connect to R2. Presently, my '.ssh/config' on H1 is set up so that I can use the agent on H1 to connect to either R1 or R2 simply by running this sort of command from an xterm on H1:

```
ssh R1
```

or

```
ssh R2
```

The designations R1 and R2 are merely names I use as aliases for these machines. For example, in the file '.ssh/config' on H1, I put this:

```
Host R1
    User R1-user
    Hostname <IP-for-R1>
    IdentityFile ~/.ssh/id_rsa_R1
    PubkeyAuthentication yes
    UserKnownHostsFile ~/.ssh/known_hosts
    BindAddress IP1
    ForwardAgent yes

Host R2
    User R2-user
    Hostname <IP-for-R2>
    IdentityFile ~/.ssh/id_rsa_R2
    PubkeyAuthentication yes
    UserKnownHostsFile ~/.ssh/known_hosts
    BindAddress IP2
    ForwardAgent yes
```

With these settings in the upper part of H1's '.ssh/config', I can use the ssh-agent to open a connection to either machine with a simple "ssh R1" or "ssh R2" in an xterm. What I would like to do, additionally, is use the same procedure to connect from R1 to R2 after having connected to R1 with the same agent. In other words, first run "ssh R1" to connect to R1, and then run "ssh R2" in the xterm on R1 (now displayed on H1) to connect onward to R2.

This does not work as simply as I would like: I have to change the command to make the subsequent connection to R2 after I have connected to R1. Instead of running merely "ssh R2" while already connected to R1 and letting the agent handle the negotiations, I have to issue the command as: ssh R2-user@<IP-for-R2>. Formatted that way, the command works and the agent does its job, since I have no keys on R1 and do not have to provide additional authentication.

When I attempt the connection from R1 with the simpler format, i.e., ssh R2, I get an error message blaming pubkey authentication. I get no such message with the longer format, ssh R2-user@<IP-for-R2>, where the agent handles the negotiations. So I am not inclined to believe the error message about pubkey authentication, because the very same authentication works through the agent when I change the command format. Since it does in fact work when I avoid the alias, I am more inclined to think something is set up wrong in the section of '.ssh/config' on R1 corresponding to R2.
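One way to narrow down whether the alias itself is the problem (assuming a reasonably recent OpenSSH client, which supports the -G flag) is to have ssh print the configuration it would actually use for the alias, without connecting, on both H1 and R1, and compare the output:

```shell
# Print the options ssh would resolve for the alias "R2" without connecting.
# Run this on H1 and again on R1; any difference in the user, hostname,
# identityfile, or bindaddress lines points at the stanza that is not
# being applied as expected.
ssh -G R2
```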

Currently, in '.ssh/config' on R1 I have the very same settings as I now have on H1, specifically, as shown above:

```
Host R2
    User R2-user
    Hostname <IP-for-R2>
    IdentityFile ~/.ssh/id_rsa_R2
    PubkeyAuthentication yes
    UserKnownHostsFile ~/.ssh/known_hosts
    BindAddress IPX
    ForwardAgent yes
```

I have tried removing the "IdentityFile" line, thinking it may confuse the issue because there are no private keys on R1. It is not clear to me what, if anything, I should use to refer to the location of this private key from the perspective of R1, since it is the agent that already holds it. I have also tried the procedure after removing 'PubkeyAuthentication yes' from this section of R1's file. Either way, removing both lines or only one of them, I got the same message about a pubkey-authentication error.

I hope that someone can provide insight into a possible solution, or a link to more information discussing these config files. I spent a fair amount of time researching the matter, but I did not find any illustrations of properly configured files for remote machines that are meant to use an ssh-agent from a single, different machine, even for a case as simple as this one seems to be.

----------

## paradigm-X

Judging from the lack of responses and views, I am not sure anyone else is much interested in this problem, but I have now solved it, and I hope sharing the information may be helpful to someone. Surely I am not the only one who has faced this problem.

Let me recap the issue in question. The .ssh/config file lets one set a number of specific options that apply to individual remote hosts, each within its own named section of the file, so that one can connect to any of those hosts merely by referring to an identifying name put into the file. So, I could use something like this on the command line:

```
ssh server-name1
```

...where server-name1 is nothing more than a convenient name I made up to remember this particular server. Doing it this way allows me to set all the connection, protocol, and authentication options, etc., ahead of time in the .ssh/config file. This is especially handy when using an ssh-agent to connect to a remote host. Why? Among other reasons, I can avoid putting the remote hosts' keys on any machine other than the local one I am working from, while still being able to connect from one remote host to another.
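As a concrete illustration (the name, address, and user here are made up), the stanza behind such an alias might look like this in ~/.ssh/config:

```
Host server-name1
    Hostname 192.0.2.10
    User some-user
    ForwardAgent yes
```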

The problem I was having is that I was able to use this technique only for the first remote host; for each subsequent host I had to enter much of the information on the command line instead of having it picked up from the originating .ssh/config. In other words,

I could not do this:

```
ssh server-name1
```

...and, from server-name1, then do this:

```
ssh server-name2
```

...but instead I had to do this:

```
ssh server-name1
ssh user@server2-IP
```

When I did it that way, the cached ssh-agent keys worked, but the configuration-file information was not being used as I intended, even though I had sections for all the servers configured in the original '.ssh/config'. At first I thought I must not have set up the information correctly in the .ssh/config of the intermediate remote host, the one from which I was making the secondary connection to the next sshd host. However, I had been careful to configure the remote hosts' .ssh/config files properly, as well as their /etc/ssh/sshd_config files.

Why was the setting accepted from a locally originating .ssh/config but not from a remote one, when the two were identically configured? I was stumped, so I put the problem aside to let my brain cogitate on it in the background while I kept my feelers out.

Then, while reading discussions about the changes in the latest version of OpenSSH, I came across one that prompted me to consider something new. Someone was having trouble connecting to a host and ended up getting booted after making too many connection attempts. That prevention measure built into ssh had caused me some grief earlier, too, when I was fiddling with different settings. Because I was using the ssh-agent instead of a password or the like, I had figured I would not need more than one attempt, and so had configured "ConnectionAttempts 1" in .ssh/config. I mean, how could that go wrong when using an agent? And it sounded like a good idea from a security standpoint as well. Does it not?

However, here is the rub. When using an ssh-agent with multiple cached keys, so as not to put any keys on remote hosts, "all" the keys have to be cached up front. It turns out that the "order" in which you commit these keys to the agent is the same order in which they get presented to subsequent hosts during authentication. So, if you have more keys cached than the value you have set for "ConnectionAttempts", it becomes possible, even probable, that the later hosts' keys never get a chance to be used, because the attempts made with the first several keys in the cache exceed the number allowed by your setting, and you get disconnected. This happens even though you may have named the specific key needed for each host in the corresponding .ssh/config section on the preceding host.
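The offer order can be seen directly with ssh-add. The sketch below uses a disposable agent and two freshly generated throwaway keys (the /tmp paths and key names are illustrative only): `ssh-add -l` lists keys in the order they were added, which is the order the agent offers them. Note also that the server-side sshd_config setting MaxAuthTries (default 6) independently caps how many keys may be tried per connection before the server disconnects you.

```shell
# Clean up any leftovers, start a disposable agent, and generate two
# throwaway keys (names and paths are made up for this demonstration).
rm -f /tmp/key_R1 /tmp/key_R1.pub /tmp/key_R2 /tmp/key_R2.pub
eval "$(ssh-agent -s)" >/dev/null
ssh-keygen -q -t ed25519 -N '' -C key_R1 -f /tmp/key_R1
ssh-keygen -q -t ed25519 -N '' -C key_R2 -f /tmp/key_R2
# Keys are offered to servers in the order they are added here:
ssh-add /tmp/key_R1 /tmp/key_R2 2>/dev/null
# The listing order matches the addition order.
ssh-add -l
```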

In fact, as far as I could see, you cannot simply tell the program to use one specific identity, hoping to bypass the counting, by naming its key with "IdentitiesOnly" or "IdentityFile", because using these parameters makes the program look at the local machine's filesystem instead of drawing on the keys held in the agent's memory. At least, that is how it appeared to function from what I could see.
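For what it is worth, the OpenSSH ssh_config documentation describes one way around that filesystem lookup: IdentityFile may point at a public key file alone, and with IdentitiesOnly set, ssh will offer only that key, obtaining the private half from the (forwarded) agent. A sketch of such a stanza on the intermediate host, assuming only the .pub file has been copied there, might look like:

```
Host R2
    Hostname <IP-for-R2>
    User R2-user
    IdentitiesOnly yes
    # Only the public half lives on this host; the private key
    # stays in the forwarded agent on H1.
    IdentityFile ~/.ssh/id_rsa_R2.pub
```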

Therefore, the order in which you cache keys with ssh-add does in fact make a difference when using multiple keys with multiple hosts, if you intend to have the keys used by a sequence of hosts in succession (as opposed to connecting to all of these hosts from the one originating machine) while also limiting "ConnectionAttempts".

----------

