Multiple TCP services on the same port - Linux

I'm working on a project where some clients (embedded Linux systems) need to connect to a main server using, so far, at least two protocols: HTTPS and SSH. One of the requirements is that only one flow is allowed from each client to the server, so I have to find a way to make the two services work on the same port.
One solution I was thinking about is to use iptables packet marks: on the client side, mark the packets for SSH with 0x1 and the packets for HTTPS with 0x2, and then on the server side, based on the mark, redirect the packets to the right service listening on the local interface. Is that an acceptable solution? Are the marks preserved by the network routers, or do they only exist locally on the machine running iptables?
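Roughly, the client-side marking I have in mind would be something like this (just a sketch, not tested; the ports and mark values are only examples):
iptables -t mangle -A OUTPUT -p tcp --dport 22 -j MARK --set-mark 0x1
iptables -t mangle -A OUTPUT -p tcp --dport 443 -j MARK --set-mark 0x2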
And anyway, if you can advise a different solution, it's of course welcome!

More for other users finding this question in the future:
https://github.com/yrutschle/sslh has what you might need. I haven't used it (yet) but I'm planning to.
From the Github site:
sslh -- A ssl/ssh multiplexer
sslh accepts connections on specified ports, and forwards them further based on tests performed on the first data packet sent by the remote client.
Probes for HTTP, SSL, SSH, OpenVPN, tinc, XMPP are implemented, and any other protocol that can be tested using a regular expression, can be recognised. A typical use case is to allow serving several services on port 443 (e.g. to connect to SSH from inside a corporate firewall, which almost never block port 443) while still serving HTTPS on that port.
Hence sslh acts as a protocol demultiplexer, or a switchboard. Its name comes from its original function to serve SSH and HTTPS on the same port.
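For illustration, a typical invocation might look like the following (not tested by me; option names vary a bit between sslh versions, e.g. older releases use --ssl instead of --tls, and the backend addresses/ports here are just placeholders):
sslh --user sslh --listen 0.0.0.0:443 --ssh 127.0.0.1:22 --tls 127.0.0.1:4443
The real HTTPS server would then listen only on 127.0.0.1:4443, while sslh owns the public port 443 and dispatches each incoming connection to SSH or HTTPS based on the first bytes the client sends.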

Related

How to create a secure internal route in the CloudFoundry environment (Swisscom AppCloud)

I would like to create a secure internal route between two applications within the same space/organization. It should never be possible to reach the Node.js application from the outside. My Java application connects via HTTP to the Node application (running on express).
I have now tried to set up the desired configuration by creating a route called example-route.apps.internal and assigning it to the Node application. As a next step, I've opened the port (I've tried 443, 80, 8080) in the network configuration of the Java application (with the destination being the Node app). I restaged both applications.
Then, I opened a Java connection to the link http://example-route.apps.internal/test123. I've also tried to use https. The result was the same: Java refused to connect to this URL.
Now, the following questions:
How can I properly set up this communication? Should I resolve this internal DNS somehow? Which port is the correct one if I just use the port of the env variable? How should I read this port from the other application?
How secure is the communication, if HTTP is used instead of HTTPS? (I assume HTTPS is not possible internally). Is it as safe as an HTTPS connection from the outside? Which devices are between, how far out does the connection go?
Thank you!
I think you're almost there.
Then, I opened a Java connection to the link http://example-route.apps.internal/test123. I've also tried to use https. The result was the same: Java refused to connect to this URL.
You should use http://example-route.apps.internal:8080/test123. Your app is set to listen on $PORT, which is always 8080 in current versions of CF.
Normally you don't need to worry about this because your traffic goes in through Gorouter which translates for you (maps external port 80 -> internal 8080). With internal routes, traffic is direct so there is no transformation. That's why you need to use port 8080 in your URL.
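For example (a sketch only; the app names java-app and node-app are placeholders, and the exact add-network-policy syntax depends on your cf CLI version):
cf add-network-policy java-app node-app --protocol tcp --port 8080
cf ssh java-app
curl http://example-route.apps.internal:8080/test123
The same URL (host example-route.apps.internal, port 8080) is what your Java HTTP client should use.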
Alternatively, you could use a service discovery mechanism like Eureka or Consul, but it's not a requirement. In this case, the service would know it's listening on 8080 and register that in the registry.
As for HTTPS, that's tricky. Your app is only listening for plain HTTP. You would have to change it to listen on 443/HTTPS, but then you need certs and a different server configuration. It's technically possible, but it's a whole can of worms.
In some newer versions, Envoy is present and accepts HTTPS traffic into the container, which can make HTTPS easier, but it's still not a slam dunk (at the time of writing, at least). I expect this will get better in the future.
Should I resolve this internal DNS somehow?
Internal DNS helps with locating your other apps, not the port. Otherwise you'd need to manage IP addresses, which change often, and that would require something like Eureka or Consul.
Which port is the correct one if I just use the port of the env variable?
See above.
How should I read this port from the other application?
It's always 8080 at the moment, and has been for multiple years. It's unlikely to change, so you could probably hard code or set it in a config file safely.
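If you'd rather not scatter the literal 8080 through your code, one sketch is to hand the whole internal URL to the calling app via its environment (app name and variable name here are placeholders):
cf set-env java-app NODE_APP_URL http://example-route.apps.internal:8080
cf restage java-app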
How secure is the communication, if HTTP is used instead of HTTPS? (I assume HTTPS is not possible internally).
Is it as safe as an HTTPS connection from the outside? Which devices are between, how far out does the connection go?
Traffic would not be accessible externally, as in some cases it wouldn't leave the Cell, and at worst it goes between two Cells; but traffic would be visible internally since it's not encrypted. That means you need to place more trust in your CF provider, who would have access to the internal traffic.
If it were HTTPS, only someone with the key would be able to decrypt it. You would still have to trust your provider, though, as they could likely get the key and use it to decrypt traffic. It would just be more work for them than if the traffic were unencrypted.
Hope that helps!

How do tunneling services like 'Localtunnel' work without SSH?

I want to understand how my local IP address (localhost) can be exposed to the Internet. For that I've read [here] about a method of port forwarding using SSH, which basically routes traffic from a publicly available server to our localhost over SSH.
But I wonder how a service like 'LocalTunnel' works. In the above article it's written as follows:
These services (localtunnel for example) create a tunnel from their server back to port 3000 on your dev box. It functions pretty much exactly like an SSH tunnel, but doesn't require that you have a server.
I've tried reading the code from its GitHub repository, and what I understood is:
These services have a server which is publicly available and which generates different URLs, and when we hit such a URL, it forwards the request to the localhost corresponding to that URL.
So basically it works like a proxy server.
Is this understanding correct? If yes, then what I don't understand is how this server has access to some localhost running on my computer. How does it perform requests to it? What am I missing here? Here is the code which I referred to.
The Short Answer (TL;DR)
The Remote (i.e. the localtunnel software on your computer) initiates the connection to the Relay (i.e. localtunnel.me), which acts as a multiplexing proxy. When Clients (i.e. web browsers) connect, the relay multiplexes the connections between remotes and clients by adding special headers with network information.
Browser <--\ /--> Device
Browser <---- M-PROXY Service ----> Device
Browser <--/ \--> Device
Video Presentation
If you prefer a video preso, I just gave a talk on this at UtahJS Conf 2018, in which I talk a little about all of the other potential solutions as well: SSH Socksv5 proxies (which you mentioned), VPN, UPnP, DHT, Relays, etc:
Access Ability: Access your Devices, Share your Stuff
Slides: http://telebit.cloud/utahjs2018
How localtunnel, ngrok, and Telebit work (Long Answer)
I'm the author of Telebit, which provides a service with features similar to what ngrok, localtunnel, and libp2p provide (as well as open source code for both the remote/client and the relay/server so you can run it yourself).
Although I don't know the exact internals of how localtunnel is implemented, I can give you an explanation of how it's generally done (and how we do it), and it's most likely nearly identical to how they do it.
The magic that you're curious about happens in two places: the remote socket and the multiplexer.
How does a remote client access the server on my localhost?
1. The Remote Socket
This is pretty simple. When you run a "remote" (telebit, ngrok, and localtunnel all work nearly the same in this regard), it's actually your computer that initiates the request.
So imagine that the relay (the localtunnel proxy in your case) uses port 7777 to receive traffic from "remotes" (like your computer) and your socket number (a randomly chosen source port on your computer) is 1234:
Devices: [Your Computer] (tcp 1234:7777) [Proxy Server]
Software: [Remote] -----------------------> [Relay]
(auth & request 5678)
However, the clients (such as browsers, netcat, or other user agents) that connect to you actually also initiate requests with the relay.
Devices: [Proxy Service] (tcp 5678) [Client Computer]
Software: [Relay] <------------------------ [netcat]
If you're using TCP ports, then the relay service keeps an internal mapping, much like a NAT:
Internal Relay "Routing Table"
Rule:
Socket remote[src:1234] --- Captures ------> ALL_INCOMING[dst:5678]
Condition:
Incoming client[dst:5678] --- MATCHES -------> ALL_INCOMING[dst:5678]
Therefore:
Incoming client[dst:5678] --- Forwards To ---> remote[src:1234]
Both connections are "incoming" connections, but the remote connection on the "south end" is authorized to receive traffic coming from another incoming source (without some form of authorized session anyone could claim use of that port or address and hijack your traffic).
[Client Computer] (tcp 5678) [Proxy Service] (tcp 1234) [Your Computer]
[netcat] --------------> <--[Relay]--> <------------ [remote]
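For comparison, the SSH tunnel mentioned in the question does the exact same outbound-initiated trick. Something like the following (relay.example.com and the ports are placeholders) asks the relay to accept connections on its port 8080 and hand them back down the tunnel to port 3000 on your dev box:
ssh -R 8080:localhost:3000 user@relay.example.com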
2. The Multiplexer
You may have noticed that there's a critical flaw in the description above. If you just route the traffic as-is, your computer (the remote) could only handle one connection at a time. If another client (browser, netcat, etc) hopped on the connection, your computer wouldn't be able to tell which packets came from where.
Browser <--\ /--> Device
Browser <---- M-PROXY Service ----> Device
Browser <--/ \--> Device
Essentially what the relay (i.e. the localtunnel proxy) and the remote (i.e. your computer) do is place a header in front of all data that is to be received by the client. It needs to be something very similar to HAProxy's PROXY protocol, but one that works for non-local traffic as well. It could look like this:
<src-address>,<src-port>,<sni>,<dst-port>,<protocol-guess>,<datalen>
For example
172.2.3.4,1234,example.com,443,https,1024
That info could be sent immediately before, or appended to, each data packet that's routed.
Likewise, when the remote responds to the relay, it has to include that information so that the relay knows which client the data packet is responding to.
See https://www.npmjs.com/package/proxy-packer for the full details.
Sidenote/Rant: Ports vs TLS SNI
The initial explanation I gave uses TCP ports because that's easy to understand. However, localtunnel, ngrok, and telebit all have the ability to use the TLS Server Name Indication (SNI) instead of relying on port numbers.
[Client Computer] (https 443) [Proxy Service] (wss 443) [Your Computer]
[netcat+openssl] --------------------> <--[Relay]--> <------------ [remote]
(or web browser) (sni:xyz.somerelay.com) (sni:somerelay.com)
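If you want to see the SNI part in action, you can send a request through a relay with an explicit servername (the hostnames here are the placeholder names from the diagram, and -quiet just suppresses the certificate dump):
openssl s_client -connect somerelay.com:443 -servername xyz.somerelay.com -quiet
The relay never needs to look at a port number; the servername in the TLS ClientHello is enough to pick the right remote.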
MITM vs p2p
There are still a few different ways you can go about this (and this is where I want to give a shameless plug for telebit because if you're into decentralization and peer-to-peer, this is where we shine!)
If you only use the TLS SNI for routing (which is how localtunnel and ngrok both work by default, last time I checked), all of the traffic is decrypted at the relay.
Another way requires ACME/Let's Encrypt integration (i.e. Greenlock.js) so that the traffic remains encrypted end-to-end, routing the TLS traffic to the client without decrypting it. This method functions as a peer-to-peer channel for all practical purposes (the relay acts as just another opaque "switch" on the network of the Internet, unaware of the contents of the traffic).
Furthermore, if HTTPS is used both for remotes (for example, via Secure WebSockets) and for clients, then those connections look just like any other HTTPS request and won't be hindered by poorly configured firewalls or other harsh / unfavorable network conditions.
Now, solutions built on libp2p also give you a virtualized peer connection, but it's far more indirect and requires routing through untrusted parties. I'm not a fan of that because it's typically slower and I see it as more risky. I'm a big believer that network federation will win out over anonymization (like libp2p) in the long run. (For our use case we needed something that could be federated - run by independently trusted parties - which is why we built our solution the way that we did.)

Is there a way to test if a computer's connection is firewalled?

I'm writing a piece of P2P software, which requires a direct connection to the Internet. It is decentralized, so there is no always-on server that it can contact with a request for the server to attempt to connect back to it in order to observe if the connection attempt arrives.
Is there a way to test the connection for firewall status?
I'm thinking in my dream land where wishes were horses, there would be some sort of 3rd-party, public, already existent servers to whom I could send some sort of simple command, and they would send a special ping back. Then I could simply listen to see if that arrives and know whether I'm behind a firewall.
Even if such a thing does not exist, are there any alternative routes available?
Nantucket - does your service listen on UDP or TCP?
For UDP - what you are sort of describing is something the STUN protocol was designed for. It matches your definition of "some sort of simple command, and they would send a special ping back"
STUN is a very "ping like" (UDP) protocol for a server to echo back to a client what IP and port it sees the client as. The client can then use the response from the server and compare the result with what it thinks its locally enumerated IP address is. If the server's response matches the locally enumerated IP address, the client host can determine for itself that it is directly connected to the Internet. Otherwise, the client must assume it is behind a NAT - but for the majority of routers, you have just created a port mapping that can be used for other P2P connection scenarios.
Further, you can use the RESPONSE-PORT attribute in the STUN binding request to have the server respond back to a different port. This will effectively allow you to detect whether you are firewalled or not.
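As a rough sketch of that check from the command line (assuming the Stuntman stunclient tool mentioned below; the server name is a placeholder and exact options vary by client):
stunclient stun.example.org 3478    (prints the mapped address the server sees)
ip -4 addr show                     (compare with your locally enumerated address)
If the two addresses differ, you are behind a NAT; if the server can also reach you on a different port (the RESPONSE-PORT idea), you are most likely not firewalled.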
TCP - this gets a little tricky. STUN can partially be used to determine if you are behind a NAT. Or you can simply make an HTTP request to whatismyip.com and parse the result to see if there's a NAT. But it gets tricky, as there's no service on the Internet that I know of that will test a TCP connection back to you.
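A quick-and-dirty version of that HTTP check on Linux (api.ipify.org is just one example of a "what is my IP" style service):
curl -s https://api.ipify.org ; echo    (the address the outside world sees)
hostname -I                             (the locally enumerated address(es))
If the two don't match, you're almost certainly behind a NAT, though it still tells you nothing about whether unsolicited inbound TCP would get through.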
With all the above in mind, the vast majority of broadband users are likely behind a NAT that also acts as a firewall, either provided by their ISP or by their own wireless router. And even if they are not, most operating systems have some sort of minimal firewall to block unsolicited traffic. So it's very limiting to have a P2P client out there that can only work on direct connections.
With that said, on Windows (and likely others), you can have your app's install package register with the Windows Firewall so your app is not blocked. But if you aren't targeting Windows, you may have to ask the user to manually adjust their firewall software.
Oh shameless plug. You can use this open source STUN server and client library which supports all of the semantics described above. Follow up with me offline if you need access to a stun service.
You might find this article useful
http://msdn.microsoft.com/en-us/library/aa364726%28v=VS.85%29.aspx
I would start with each OS and ask if firewall services are turned on. Secondly, I would attempt the socket connections and determine from the error codes whether connections are being reset or timing out. I'm only familiar with Winsock coding, so I can't really say much for Linux or macOS.

Remote port blocking in firewalls?

Some guys use a firewall on their laptops which not only blocks their own local incoming ports (except those they need for their application) but also blocks messages unless they are issued from a specific port number. We're talking about a local UDP server which is listening for UDP broadcasts.
The problem is that the remote client uses a random port, say 1024, which is blocked unless they tell the firewall to accept it.
What puzzles me is that, as far as I know from using sockets in my programs, the client usually gets its port number from the OS, whereas only when you have a server do you bind your socket to a specific port, right?
In my literature and in tutorials and code snippets on the web I haven't found any clue that clients should be using fixed port numbers at all.
So how is this in reality? Am I probably missing a point?
Are there client applications around using fixed ports?
Is it actually useful to block remote ports with a firewall?
And if yes, what level of added security does this give to you?
Thanks in advance for enlightenment...
Although the default APIs allow the network stack to select a local port for client connections, clients may specify a fixed port for various reasons.
Some specifications (FTP) specify a fixed port for clients. Most servers don't care if clients get this correct.
Some clients use a fixed pool of ports for egress from a LAN to the Internet. This allows firewall rules to more completely lock down outbound traffic.
Source ports are sometimes used as a weak form of "security through obscurity".
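As a small illustration of a client pinning its source port (5555 and example.com are arbitrary; the -p flag behaves this way in OpenBSD netcat and most traditional netcats):
nc -p 5555 example.com 80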
You always get a random address and/or port when you have not explicitly bound to one before sending.
Daemons are usually bound to a fixed port, so that:
you can actually contact them without having to try all possible ports or utilize a secondary resolver (remember the SUNRPC portmapping crap?)
and because a TCP socket is not allowed to listen() if it has not bound to a port, IIRC.
Are there client applications around using fixed ports?
Some can be configured so, like BIND9.
useful to block remote ports with a firewall?
No, because your peer may choose any port of his. Block him and you'll lose a customer, so to speak.

Should a web server's firewall block outbound HTTP traffic over port 80?

I understand the need for putting a web server in a DMZ and blocking inbound traffic to all ports except 80 and 443. I can also see why you should probably also block most outbound traffic in case the server is compromised.
But is it necessary to block outbound HTTP traffic over port 80? If so, why? A lot of web applications these days rely on sending/retrieving data from external web services and APIs, so blocking outbound traffic over port 80 would prevent this capability. Is there a security concern that's valid enough to justify this?
The only reason I can think of is that if your machine is somehow compromised remotely, then it won't be able to DDoS another website on port 80. It's not something I normally do, though.
Rather than blocking it, throttle it. Use iptables -m limit.
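For example, a sketch of such a throttle (the rates are arbitrary placeholders):
iptables -A OUTPUT -p tcp --dport 80 -m limit --limit 50/minute --limit-burst 100 -j ACCEPT
iptables -A OUTPUT -p tcp --dport 80 -j DROP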
I have several web apps that invoke external web services, so I would say it's a bad idea to block outbound HTTP traffic. If you're concerned about security, you could block it and allow only certain destinations.
Depending on your SQL version, you could have certificate authentication timeout issues with SQL Server 2005.
First - I agree with #vartec on throttling ("Rather than blocking it, throttle it. Use iptables -m limit") as at least part of the solution.
However, I can offer another reason not to block port 80 outbound at all times. If you have automatic security updates turned on, the server can't reach out to PPAs over port 80 to initiate a security update, so if you have automatic security updates set up, they won't run. On Ubuntu, automatic security updates are turned on in 14.04 LTS with:
sudo apt-get install unattended-upgrades update-notifier-common && \
sudo dpkg-reconfigure -plow unattended-upgrades
(then select "YES")
More graceful solutions would be Ansible scripts opening the port automatically, possibly also modifying an AWS security group rule via the CLI in addition to iptables if you are on AWS. I prefer modifying my outbound rules temporarily via the AWS CLI, initiated from a stealth box. This forces the update to be logged in my AWS S3 log buckets but it never shows up in the logs on the server itself. Further, the server that initiates the update doesn't even have to be in the private subnet ACL.
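A sketch of that temporary opening via the AWS CLI (the security group ID is a placeholder, and you'd normally restrict the CIDR to your mirror's ranges rather than 0.0.0.0/0):
aws ec2 authorize-security-group-egress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0
(run unattended-upgrades / apt-get update here)
aws ec2 revoke-security-group-egress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0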
Maybe do both? You have to figure that at times an attack is going to relay off an internal IP in your subnet, so there is merit to doubling down while preserving the ability to automate backups and security updates.
I hope this helps. If not, reply and provide more code examples so I can be more specific and exact. #staysafe!
If the machine is compromised and outbound traffic on port 80 is allowed, it is easier for intruders to send harvested data back to themselves. Allowing outbound traffic means you can initiate a connection from your machine to the outside world. A better approach would be to allow outbound traffic only to certain websites/addresses that you trust (e.g. Microsoft Windows Update, Google reCAPTCHA) rather than to any destination in the world.
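A sketch of that allow-list approach with iptables (203.0.113.0/24 is a documentation placeholder; substitute the address ranges you actually trust):
iptables -A OUTPUT -p tcp --dport 80 -d 203.0.113.0/24 -j ACCEPT
iptables -A OUTPUT -p tcp --dport 80 -j DROP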
What do you mean by blocking outbound traffic over port 80?
You have two possibilities. Generate dynamic rules which allow communication from the client to your web server for this session (search for stateful firewall rules).
Or you generally allow established connections to communicate in both directions.
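A sketch of what such stateful rules could look like with iptables (assuming a web server on port 80; conntrack does the session tracking):
iptables -A INPUT -p tcp --dport 80 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp --sport 80 -m conntrack --ctstate ESTABLISHED -j ACCEPT
Combined with a default-deny OUTPUT policy, these two rules let the server answer clients without being able to open new outbound connections itself.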
If you block all outbound traffic over port 80 in general, your web server could not reply to any client.
The other way around: if your web server needs to fetch something from an API, e.g. a jQuery library, it won't use port 80 as its own port to communicate with the web server that hosts the API.
Your web server would normally choose a port > 1024 and use it as the source port for its request to fetch the API from the remote server.
So blocking all traffic over port 80 (as the port you are connecting from) would not prevent your server from sending requests for APIs and such things, because it doesn't use source port 80 when it acts as a client.
