I can use the ssh configuration file to enable the forwarding of ssh keys added to ssh-agent. How can I do the same with gpg keys?
-
Both answers suggest running socat to expose the GPG agent unix socket on a TCP port. However, unlike unix sockets, TCP ports do not have the same level of access control. In particular, _every_ user on the same host can now connect to your GPG agent. This is probably ok if you have a single-user laptop, but if any other users can also log into the same system (the system where the GPG agent is running), they can also access your GPG agent, posing a significant security problem. Letting socat directly start SSH using the EXEC address type is probably the best way to fix this. – Matthijs Kooijman Aug 04 '14 at 10:02
-
For another presentation of the openssh 6.7+ solution, see https://2015.rmll.info/IMG/pdf/an-advanced-introduction-to-gnupg.pdf – phs Oct 05 '16 at 21:21
-
[This](http://www.gossamer-threads.com/lists/gnupg/users/77816) was useful to me. – phs Dec 05 '16 at 18:54
6 Answers
OpenSSH's new Unix Domain Socket Forwarding can do this directly starting with version 6.7.
You should be able to do something like:
ssh -R /home/bminton/.gnupg/S.gpg-agent:/home/bminton/.gnupg/S-gpg-agent -o "StreamLocalBindUnlink=yes" -l bminton 192.168.1.9
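Since the question asks about doing this via the ssh configuration file, the same command can also be expressed as a config entry, roughly like this (a sketch mirroring the command above; the host alias, user and socket paths are just the example values):
Host gpg-remote
HostName 192.168.1.9
User bminton
StreamLocalBindUnlink yes
RemoteForward /home/bminton/.gnupg/S.gpg-agent /home/bminton/.gnupg/S-gpg-agent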
-
I found a required critical detail: on the remote (private key-less) machine, the _public_ key of the signing identity must be present. Local gpg version 2.1.15 OS X, remote 2.1.11 linux. – phs Oct 06 '16 at 04:06
-
If you have your public keys published on the keyservers, you can set up the remote keyring with something like `gpg -K --keyid-format long | grep '\[SC\]' | grep -v expired | sed 's#sec \+[^/]\+/\([0-9A-F]\+\).*#\1#' | ssh user@target 'xargs -n 1 gpg --recv-key'`. We can figure out the socket names automatically: `ssh -A user@target -R "$(ssh user@target 'gpgconf --list-dirs agent-socket')":"$(gpgconf --list-dirs agent-extra-socket)" 'gpg -K'` – pkoch Mar 21 '20 at 20:56
-
This does not work with OpenSSH 8.1 client (mac) and OpenSSH server 7.6 (ubuntu), even after exporting and importing the public key. Maybe I'm doing something wrong, but `passwordstore` decryption fails with `gpg: decryption failed: No secret key`. Is it intentional that the remote gpg agent file has the dot replaced by a hyphen? – oarfish Mar 11 '21 at 13:04
-
It seems that on Ubuntu 18.04, systemd owns the file with the host's extra socket, and the ssh daemon cannot bind to it (journalctl tells me address already in use and ` error: unix_listener: cannot bind to path: /run/user/1001/gnupg/S.gpg-agent.extra`). Is this answer still current? – oarfish Mar 11 '21 at 13:41
EDIT: This answer is obsolete now that proper support has been implemented in OpenSSH, see Brian Minton's answer.
SSH is only capable of forwarding TCP connections within the tunnel.
You can, however, use a program like socat to relay the unix socket over TCP, with something like this (you will need socat on both the client and the server hosts):
# Get the path of gpg-agent socket:
GPG_SOCK=$(echo "$GPG_AGENT_INFO" | cut -d: -f1)
# Forward some local tcp socket to the agent
(while true; do
socat TCP-LISTEN:12345,bind=127.0.0.1 UNIX-CONNECT:$GPG_SOCK;
done) &
# Connect to the remote host via ssh, forwarding the TCP port
ssh -R12345:localhost:12345 host.example.com
# (On the remote host)
(while true; do
socat UNIX-LISTEN:$HOME/.gnupg/S.gpg-agent,unlink-close,unlink-early TCP4:localhost:12345;
done) &
Test if it works out with gpg-connect-agent. Make sure that GPG_AGENT_INFO is undefined on the remote host, so that it falls back to the $HOME/.gnupg/S.gpg-agent socket.
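For example, a quick sanity check on the remote host could be the following (a hedged example; the version string it prints will differ on your system). If the forwarding works, the agent should answer instead of the command failing to connect:
gpg-connect-agent 'getinfo version' /bye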
Now hopefully all you need is a way to run all this automatically!
-
Well the ssh agent keys are forwarded automatically when the forwarding is set in the configuration file. I will try this out. – txwikinger Jul 19 '10 at 14:18
-
You're right, ssh-agent uses a unix socket too, but has special support for it (little bit tired here :) Nevertheless, the solution should still work. – b0fh Jul 19 '10 at 14:32
-
For this solution, my gpg-agent would be publicly accessible via port 12345 if I was not behind a firewall/NAT. This should be mentioned in the answer, please. – Jonas Schäfer Apr 30 '12 at 14:46
-
I'm guessing your last edit fixed that issue, Jonas? it's only binding to `localhost` now. – jmtd May 01 '12 at 08:19
-
This fails for me with the following argument from the remote host's `gpg-connect-agent`: `can't connect to server: ec=31.16383 gpg-connect-agent: error sending RESET command: Invalid value passed to IPC`. The remote `socat` then dies. The local `socat` dies and utters `socat[24692] E connect(3, AF=1 "", 2): Invalid argument`. [This page](http://snafu.priv.at/interests/crypto/remotegpg.html) leads me to believe that this will never work, because the agent doesn't store the key (just the passphrase). Has this been confirmed to work by anyone? – jmtd May 01 '12 at 08:24
-
@jmtd yes, this fixes the privacy issue. However, I was unable to get it to work with socat, which is why I hacked up a python script which does the trick: http://fpaste.org/Um0D/ (this may need improvement). Other issues I had with socat was lingering tcp sockets and stuff. – Jonas Schäfer May 01 '12 at 13:25
-
@JonasWielicki - the fpaste.org link is now broken. Can you provide your script via a new fpaste.org link or better yet as an A to this Q? – slm Aug 19 '14 at 13:12
-
@slm I used fpaste as it was an informational comment. In fact, I cannot even recall what it was, although the comments make me believe that it was a simple socat-like utility binding to localhost and forwarding the traffic between the tcp and the unix socket. – Jonas Schäfer Aug 22 '14 at 09:29
In newer versions of GnuPG or on newer Linux distributions the socket paths can differ. They can be found with
$ gpgconf --list-dirs agent-extra-socket
and
$ gpgconf --list-dirs agent-socket
Then add these paths to your SSH configuration:
Host remote
RemoteForward <remote socket> <local socket>
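For example, on a typical systemd-based system where both commands report sockets under /run/user/<uid>/gnupg (an assumption - use whatever paths gpgconf printed for you), the entry could look like:
Host remote
RemoteForward /run/user/1000/gnupg/S.gpg-agent /run/user/1000/gnupg/S.gpg-agent.extra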
Quick solution for copying the public keys:
scp .gnupg/pubring.kbx remote:~/.gnupg/
On the remote machine, activate GPG agent:
echo use-agent >> ~/.gnupg/gpg.conf
On the remote machine, also modify the SSH server configuration and add this parameter (/etc/ssh/sshd_config):
StreamLocalBindUnlink yes
Restart SSH server, reconnect to the remote machine - then it should work.
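On a systemd-based remote, restarting the SSH server is typically one of the following; the unit name depends on the distribution, so treat this as a hint rather than a guarantee:
sudo systemctl restart sshd
# on Debian/Ubuntu the unit is usually called ssh:
sudo systemctl restart ssh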
-
A more detailed tutorial including some troubleshooting can be found here: https://mlohr.com/gpg-agent-forwarding/ – MaLo Jun 07 '18 at 07:39
-
In case the remote host runs a current version of Debian, it seems running `systemctl --global mask --now gpg-agent.service gpg-agent.socket gpg-agent-ssh.socket gpg-agent-extra.socket gpg-agent-browser.socket` is required to prevent systemd from launching a socket-stealing remote gpg-agent. According to https://bugs.debian.org/850982 this is the intended behavior. – sampi Jul 16 '18 at 10:13
-
You might need to prevent the remote `gpg-agent` from starting and removing the forwarded socket. This is also described in the gpg wiki. What they don't say is how to do that. `echo no-autostart >> ~/.ssh/gpg-agent.conf` on the remote machine worked for me. – magiconair Mar 16 '20 at 15:14
-
`use-agent` is no longer necessary; nowadays it is a dummy option: https://gnupg.org/documentation/manuals/gnupg/GPG-Configuration-Options.html#index-use_002dagent – Augusto Hack Aug 29 '22 at 12:06
-
When trying to forward the local gpg-agent's `extra` socket to the remote host, I always get `error fetching identities: invalid format` when checking `ssh-add -L` on the remote machine after connecting. I had to forward the actual `S.gpg-agent.ssh` socket from the local to the remote and after doing that, everything works great. – dephekt Nov 07 '22 at 05:38
As an alternative to modifying /etc/ssh/sshd_config with StreamLocalBindUnlink yes, you can instead prevent the creation of the socket files that need replacing:
systemctl --global mask --now \
gpg-agent.service \
gpg-agent.socket \
gpg-agent-ssh.socket \
gpg-agent-extra.socket \
gpg-agent-browser.socket
Note that this affects all users on the host.
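If you later want the agent sockets back for all users on that host, masking should be reversible with the matching unmask call (untested here, but it is the documented counterpart of mask):
systemctl --global unmask \
gpg-agent.service \
gpg-agent.socket \
gpg-agent-ssh.socket \
gpg-agent-extra.socket \
gpg-agent-browser.socket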
Bonus: How to test GPG agent forwarding is working:
- Local: ssh -v -o RemoteForward=${remote_sock}:${local_sock} ${REMOTE}
- Check that ${remote_sock} is shown in the verbose output from ssh
- Remote: ls -l ${remote_sock}
- Remote: gpg --list-secret-keys
- You should see lots of debug1 messages from ssh showing the forwarded traffic
If that doesn't work (as it didn't for me) you can trace which socket GPG is accessing:
strace -econnect gpg --list-secret-keys
Sample output:
connect(5, {sa_family=AF_UNIX, sun_path="/run/user/14781/gnupg/S.gpg-agent"}, 35) = 0
In my case the path being accessed perfectly matched ${remote_sock}, but that socket was not created by sshd when I logged in, despite adding StreamLocalBindUnlink yes to my /etc/ssh/sshd_config. It was created by systemd upon login.
(Note I was too cowardly to restart sshd, since I've no physical access to the host right now. service reload sshd clearly wasn't sufficient...)
Tested on Ubuntu 16.04
I had to do the same, and based my script on the solution by b0fh, with a few tiny modifications: It traps exits and kills background processes, and it uses the "fork" and "reuseaddr" options to socat, which saves you the loop (and makes the background socat cleanly kill-able).
The whole thing sets up all forwards in one go, so it probably comes closer to an automated setup.
Note that on the remote host, you will need:
- The keyrings you intend to use to sign/en/decrypt stuff.
- Depending on the version of gpg on the remote, a fake GPG_AGENT_INFO variable. I prefill mine with ~/.gnupg/S.gpg-agent:1:1 - the first 1 is a PID for the gpg agent (I fake it as "init"'s, which is always running), the second is the agent protocol version number. This should match the one running on your local machine.
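For instance, on the remote host something along these lines should do (only an illustration; recent gpg versions ignore GPG_AGENT_INFO entirely):
export GPG_AGENT_INFO=$HOME/.gnupg/S.gpg-agent:1:1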
#!/bin/bash -e
# Port used for the local TCP relay (override with the first argument).
FORWARD_PORT=${1:-12345}
# Kill the background socat when this script exits.
trap '[ -z "$LOCAL_SOCAT" ] || kill -TERM $LOCAL_SOCAT' EXIT
# Path of the local gpg-agent socket.
GPG_SOCK=$(echo "$GPG_AGENT_INFO" | cut -d: -f1)
if [ -z "$GPG_SOCK" ] ; then
echo "No GPG agent configured - this won't work out." >&2
exit 1
fi
# Relay the local agent socket to a local TCP port.
socat TCP-LISTEN:$FORWARD_PORT,bind=127.0.0.1,reuseaddr,fork UNIX-CONNECT:$GPG_SOCK &
LOCAL_SOCAT=$!
# Forward the TCP port and relay it back into a unix socket on the remote.
# Replace host.example.com with your host; \$HOME expands remotely, $FORWARD_PORT locally.
ssh -R $FORWARD_PORT:127.0.0.1:$FORWARD_PORT host.example.com socat "UNIX-LISTEN:\$HOME/.gnupg/S.gpg-agent,unlink-close,unlink-early,fork,reuseaddr TCP4:localhost:$FORWARD_PORT"
I believe there's also a solution that involves just one SSH command invocation (connecting back from the remote host to the local one) using -o LocalCommand, but I couldn't quite figure out how to conveniently kill that upon exit.
-
Aren't you missing some 'user@host' argument before socat, in the last command? Anyhow even after fixing that, this fails for me with "socat[6788] E connect(3, AF=2 127.0.0.1:0, 16): Connection refused" popping up locally, when trying gpg-connect-agent remotely. – David Faure Aug 07 '16 at 18:56
According to the GnuPG Wiki, you have to forward your local socket S.gpg-agent.extra to the remote socket S.gpg-agent.
Furthermore you need to enable StreamLocalBindUnlink on the server.
Keep in mind that you also need the public part of your key available in the remote GnuPG keyring.
Use gpgconf --list-dirs agent-socket on the remote and gpgconf --list-dirs agent-extra-socket on your local machine to get the actual paths.
Summary
- Added configuration on remote /etc/ssh/sshd_config:
StreamLocalBindUnlink yes
- Import your public key on remote:
gpg --export <your-key> > /tmp/public
scp /tmp/public <remote-host>:/tmp/public
ssh <remote-host> gpg --import /tmp/public
- Command to connect through SSH with gpg-agent forwarding enabled (paths for my Debian):
ssh -R /run/user/1000/gnupg/S.gpg-agent:/run/user/1000/gnupg/S.gpg-agent.extra <remote-host>
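If you prefer not to hard-code the socket paths, they can be filled in from gpgconf on both ends, along the lines of a comment above (a sketch; it runs an extra SSH connection just to query the remote path):
ssh -R "$(ssh <remote-host> gpgconf --list-dirs agent-socket)":"$(gpgconf --list-dirs agent-extra-socket)" <remote-host>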
-
@brian minton: It does not work for me if not forwarding to the extra socket. – doak Jun 06 '18 at 12:48