
My question is a bit different from serve ssh & https at the same time or redirect http requests to ssh. I don't want to multiplex the stream. I do have a virtual server to "spare" instead.

What I am wondering is whether it is possible to have different virtual server(s), so that I can always connect to one through ssh, while others always connect through https. No multiplexing; a clean/direct solution, with no decision taking. So, for example, the server web.myserver.com would be an https-only server and ssh.myserver.com an ssh-only server.

Avoiding extra "stream overhead" would also be desirable, if possible. In my eyes the stream top-level directive seems to have this overhead.

EDIT: to add more info about what I have tried so far:

In the stream {} part of the configuration I put this code:

    upstream ssh {
        server 192.168.1.5:22;   # backend sshd
    }
    server {
        listen 443;
        listen [::]:443;
        proxy_pass ssh;          # raw TCP pass-through, no TLS termination
    }

and when I tried to connect to ssh I got this log information:

debug2: resolving "***" port 443
debug3: resolve_host: lookup ***:443
debug3: ssh_connect_direct: entering
debug1: Connecting to *** [***] port 443.
debug3: set_sock_tos: set socket 3 IP_TOS 0x48
debug1: Connection established.
...
debug1: Local version string SSH-2.0-OpenSSH_9.2
debug1: kex_exchange_identification: banner line 0: HTTP/1.1 400 Bad Request
debug1: kex_exchange_identification: banner line 1: Server: nginx/1.18.0
debug1: kex_exchange_identification: banner line 2: Date: Thu, 09 Mar 2023 15:10:38 GMT
debug1: kex_exchange_identification: banner line 3: Content-Type: text/html
debug1: kex_exchange_identification: banner line 4: Content-Length: 157
debug1: kex_exchange_identification: banner line 5: Connection: close
debug1: kex_exchange_identification: banner line 6: 
debug1: kex_exchange_identification: banner line 7: <html>
debug1: kex_exchange_identification: banner line 8: <head><title>400 Bad Request</title></head>
debug1: kex_exchange_identification: banner line 9: <body>
debug1: kex_exchange_identification: banner line 10: <center><h1>400 Bad Request</h1></center>
debug1: kex_exchange_identification: banner line 11: <hr><center>nginx/1.18.0</center>
debug1: kex_exchange_identification: banner line 12: </body>
debug1: kex_exchange_identification: banner line 13: </html>
kex_exchange_identification: Connection closed by remote host
Panayotis
  • Are you sure the configuration you pasted is being used, and you don't have e.g. something else listening to port 443 instead? I just tried the exact same configuration, and I can ssh connect normally using port 443. In contrast, if I define a normal (http!) server listening on port 443, I get the same debug output as you, which indicates something else must be listening on port 443 and responding with plain http without tls. – gepa Mar 09 '23 at 17:45
  • @gepa It's nginx itself that listens on port 443, in the specific `server {}` blocks that are included in the main `http` section (as is the default). But I don't want nginx to be "smart" and listen to the stream; I just want to redirect the stream as appropriate based on the hostname. – Panayotis Mar 13 '23 at 14:15
  • It is strange that the default configuration in the `http` section would listen on 443 instead of 80 (since it seems to reply unencrypted anyway). In any case, you will need to disable port 443 in the `http` section if you want the `stream` section to take effect; you cannot have the same port in both the `http` and the `stream` section. See also this: https://stackoverflow.com/questions/65033538/how-to-combine-nginx-stream-and-http-for-the-same-servername – gepa Mar 13 '23 at 17:20
  • Regarding your actual question, I am not sure I fully understand what you are trying to do, but if you want nginx to handle both ssh and https, you have to put both in the `stream` section and differentiate by protocol (something similar to this: https://serverfault.com/a/1081363). On connection establishment there is no hostname information, only IP and port; you cannot do anything with the hostname yet. – gepa Mar 13 '23 at 17:31
  • Your last answer is exactly what I was missing. The missing information is that, "On connection establishment there is no hostname information, only IP and port". So I can't do what I was thinking to do. The rest is similar to what I have found. If you formulate this as an answer I'd make it the accepted answer :) – Panayotis Mar 14 '23 at 23:37

1 Answer


Based on your latest comment it appears that you want to split the traffic at a very early stage, directly at connection establishment.

At that point, the receiving server has no information about the hostname you are trying to connect to, so there is not much it can do that early. The way it works is: your client does a DNS lookup to get the IP of web.myserver.com or ssh.myserver.com, and then opens a TCP connection to that IP (on port 443 in this case). Whatever you have listening on that IP and port then goes through the normal TCP connection establishment, which carries only IP/port information and no hostnames.

Since it seems to be a requirement that both respond on the same port (443), and depending on your setup (that is, if you have the possibility to add a second IP to your proxy server), you could differentiate by IP: have DNS map web.myserver.com and ssh.myserver.com to different IPs, and then do the split very early depending on the IP the client is connecting to (e.g. with some iptables rules).
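To make that concrete, here is a rough sketch of the iptables variant. All addresses are assumptions for illustration: DNS maps web.myserver.com to 192.0.2.10 and ssh.myserver.com to 192.0.2.11, both assigned to the proxy host, with sshd on the backend 192.168.1.5 as in your config. It also assumes IP forwarding is enabled (`sysctl net.ipv4.ip_forward=1`):

    # Port 443 on the "ssh" IP is rewritten straight to the backend's sshd;
    # nginx never sees this traffic.
    iptables -t nat -A PREROUTING -d 192.0.2.11 -p tcp --dport 443 \
             -j DNAT --to-destination 192.168.1.5:22
    # Rewrite the source so replies come back through the proxy host.
    iptables -t nat -A POSTROUTING -d 192.168.1.5 -p tcp --dport 22 \
             -j MASQUERADE
    # Port 443 on 192.0.2.10 is left alone, so the nginx http section
    # can serve TLS there as usual.

With this split there is no per-packet userspace proxying for ssh at all; the kernel does the NAT rewrite, which is about as close to "no overhead" as it gets.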

If there is no possibility of adding a second IP, I don't really see a solution that avoids inspecting at least the first few packets after connection establishment, which is exactly what the mentioned solutions involving nginx stream sections do. The "stream overhead" you mention should be minimal, since the stream section forwards all packets without further inspection after initially figuring out the destination. But it seems the "stream overhead" is not the only thing you are trying to avoid, as you say:
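For completeness, a minimal sketch of that nginx stream approach (it requires nginx built with ngx_stream_ssl_preread_module; the backend addresses are assumptions, and the `http` section's TLS listener would have to move off 443, e.g. to 127.0.0.1:8443):

    stream {
        upstream ssh {
            server 192.168.1.5:22;     # backend sshd
        }
        upstream https {
            server 127.0.0.1:8443;     # where the http{} section now listens
        }
        # $ssl_preread_protocol is empty when the first client bytes are not
        # a TLS ClientHello -- an OpenSSH client sends its own banner first.
        map $ssl_preread_protocol $backend {
            ""      ssh;
            default https;
        }
        server {
            listen 443;
            listen [::]:443;
            ssl_preread on;
            proxy_pass $backend;
        }
    }

Note that this only has to peek at the first bytes of each connection; everything after that is forwarded as-is.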

> Not giving extra "stream overhead" would also be desired

(emphasis mine...). If you share any other requirements that make the nginx solutions infeasible for your use case, I can update my answer with other possibilities that come to mind (or somebody else from the community may think of something better).

gepa