Any tips from a book on traffic analysis?

I'm studying to become a pentester, so I'm basically starting with networking, programming, and pentest methods. In a book called The Basics of Hacking and Penetration Testing, in the section on port scanning, the author says to "try to log in to any remote access services that were discovered during your port scan". My problem starts here: I began to have doubts about the relationship between ports and their services, and how to deeply understand output like:
80/tcp   open  http        openresty
443/tcp  open  ssl/https
8080/tcp open  http-proxy?
So I was searching Google about SSL and OpenResty; in the case of HTTP and HTTPS I've studied some networking, but the deeper knowledge I found was scattered across the internet. I would like one or more books that deal specifically with the practical side of network traffic analysis, plus recommendations for other books that would help me understand these processes well enough to answer questions like "oh cool, but which port is associated with remote access?".
I've researched remote access and found that port 3389 is commonly used, but I feel that's not what the quoted part of the book meant.
Thanks, and sorry for my bad English!

First off, I have not read the book, so I cannot comment on it.
However, I can give some direction on what the nmap scan means and what your next steps would likely be.
The line
80/tcp open http openresty
means that TCP port 80 is open, the protocol name is "http" and the server is running "openresty".
For port 80, your next step would be to run some sort of fingerprinting on that server. For example, run an "nmap -sV -p 80" scan, or use a tool like Sparta or Nikto.
This should be done to identify the version of openresty on the server. The next step would be to see if there are any login pages exposed on the server. If you do find login pages, you could try brute-forcing the login to get access.
Or, using the version of openresty, you could search for exploits for that version (usually via a Google search, or an exploit database like "exploit-db.com"), then use those exploits to gain access to the server, or run a DoS, for example.
However, when you do any of this, make sure you get permission from your target to do this, or be aware that there can be consequences. If you are just starting to learn about this stuff, it would be better to create your own VM with DVWA deployed on it. That would be your target for practice.
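As a small illustration of the fingerprinting step, here is a rough Python sketch of a basic HTTP banner grab (the target address is a placeholder; point it at something you are allowed to test, such as your own DVWA VM):

import http.client

# Hypothetical target address; only probe hosts you have permission to test,
# e.g. your own DVWA virtual machine.
TARGET = "192.0.2.10"

conn = http.client.HTTPConnection(TARGET, 80, timeout=5)
conn.request("HEAD", "/")
response = conn.getresponse()

# Many web servers (OpenResty included) identify themselves in the Server header.
print("status :", response.status, response.reason)
print("Server :", response.getheader("Server"))
conn.close()

This is the same information "nmap -sV" tries to gather, just done by hand so you can see where it comes from.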
Hope that helps.

Related

Is there a valid rationale for blocking HTTP traffic to ports other than 80?

I installed Tomcat using the default port, 8080. The hosted application started to get some significant use. Today I got an email from someone who says that their network security rules do not allow them to access any web server that is not on port 80.
I resolved the issue by running Tomcat on both port 80 and 8080; however, I keep thinking how silly this "security rule" is. Clearly, harmful servers can be run on any port, including port 80. Does running a server on port 80 make it in any way more trustworthy? I am assuming that this rule was created long ago, when someone found a rogue server running on a non-80 port and decided the best way to prevent this was to block all HTTP servers that are not on port 80. In other words: an inappropriate over-generalization.
Perhaps I simply am not aware of it, but is there some valid rationale for restricting users to only access web servers on port 80?
In the past there was an elevated risk of phishing/pharming attacks using high-port web servers, though these days it really isn't an issue. The reason was that in order to bind a listening socket to a low-numbered port on a *nix machine, you had to be root. Attackers without root could phish or otherwise direct users to point their browser at the evil listening socket on a high port and receive a payload. These days the scenario described rarely, if ever, occurs.
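A quick Python sketch of that restriction, run as a non-root user on a typical Linux box (the port numbers are just for illustration):

import socket

def can_bind(port: int) -> bool:
    """Try to bind a TCP listening socket to the given port on all interfaces."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.bind(("", port))
        return True
    except PermissionError:
        return False
    finally:
        sock.close()

# On a typical Linux box, an unprivileged user can bind 8080 but not 80.
print("can bind port 80  :", can_bind(80))
print("can bind port 8080:", can_bind(8080))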
It is also possible that an admin in the past sought to prevent users from connecting to a proxy. Proxies commonly listen on port 8080 for web requests. If your organization has a policy that this rule helps satisfy, that may be the reason for its existence.
I personally see that rule as an annoyance rather than an effective filter. Nowadays there are much better tools that do smart filtering so you don't have to rely on security through obscurity tricks like restricting HTTP to port 80.

How would one connect two clients (one of them a browser) behind firewalls

I know P2P software like Skype uses UDP hole punching for that. But what if one of the clients is a web browser which needs to download a file from another client (a TCP connection instead of UDP)? Is there any technique for such a case?
I can have an intermediate public server which can introduce the clients to each other, but I can't afford to have all the traffic between these clients go through this server. The public server can only establish the connection between the clients, like Skype does, and that's all. And this must work over TCP (more exactly, HTTP) so that the downloading client can be a web browser.
Both clients must not be required to setup anything in their routers or anything like that.
I plan to code this in C/C++, but at this point I'm wondering whether this idea is possible at all.
I previously wrote up a consolidated rough answer on how P2P works, with some discussion of various protocols and corresponding open-source libraries. You can read it here.
The reliability of P2P is ultimately a result of how much you invest in it, from both a client coding perspective and a service configuration perspective (i.e., signaling servers and relays). You can settle for easy NAT traversal of UDP with no firewall support. With a little more effort you get TCP connectivity. And you can go "all the way" and have relays with HTTPS listeners for clients behind the hardest firewalls to traverse.
As for your question about firewalls: it depends on how the firewall is configured. Many firewalls are just glorified NATs with security rules to restrict traffic to certain ports and block unsolicited incoming connections. Others are extremely restrictive and allow only HTTP/HTTPS traffic over a proxy.
Video conferencing apps will ultimately fall back to emulating an HTTPS connection over the PC's configured proxy server to port 443 (or 80) of a remote relay server if they can't get directly connected. (And in some cases, the remote client will try to listen on port 80 or port 443 so it can accept a direct connection.)
You are absolutely right to assume that having all the clients go through a relay will be expensive to maintain. If your goal is 100% connectivity no matter what type of firewall the clients are behind, some relay solution will have to exist. If you don't support a relay solution, you can invest heavily in getting direct connectivity to work reliably and have only a small percentage of clients blocked.
Hope this helps.
PeerConnection, part of WebRTC solves this in modern browsers.
Under the hood it uses ICE, an IETF-standardized protocol for NAT hole-punching.
For older browsers, it is possible to use the P2P support in Flash.

Is there a way to test if a computer's connection is firewalled?

I'm writing a piece of P2P software, which requires a direct connection to the Internet. It is decentralized, so there is no always-on server that it can contact with a request for the server to attempt to connect back to it in order to observe if the connection attempt arrives.
Is there a way to test the connection for firewall status?
I'm thinking in my dream land where wishes were horses, there would be some sort of 3rd-party, public, already existent servers to whom I could send some sort of simple command, and they would send a special ping back. Then I could simply listen to see if that arrives and know whether I'm behind a firewall.
Even if such a thing does not exist, are there any alternative routes available?
Nantucket - does your service listen on UDP or TCP?
For UDP - what you are sort of describing is something the STUN protocol was designed for. It matches your definition of "some sort of simple command, and they would send a special ping back".
STUN is a very "ping-like" (UDP) protocol in which a server echoes back to a client the IP and port it sees the client as. The client can then compare the server's response with what it thinks its locally enumerated IP address is. If the server's response matches the locally enumerated IP address, the client host can self-determine that it is directly connected to the Internet. Otherwise, the client must assume it is behind a NAT; but for the majority of routers, you have just created a port mapping that can be used for other P2P connection scenarios.
Further, you can use the RESPONSE-PORT attribute in the STUN binding request to have the server respond back to a different port. This will effectively allow you to detect whether you are firewalled or not.
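To make the binding request concrete, here is a rough Python sketch against a public STUN server (the server hostname is just an example; any RFC 5389 server reachable over UDP should do):

import os
import socket
import struct

MAGIC_COOKIE = 0x2112A442
# Example public STUN server; substitute your own if needed.
STUN_SERVER = ("stun.l.google.com", 19302)

def stun_check(server=STUN_SERVER, timeout=3.0):
    """Send a STUN Binding Request; return (local_ip, mapped_ip, mapped_port)."""
    # 20-byte STUN header: type 0x0001 (Binding Request), length 0, magic cookie, transaction ID.
    request = struct.pack("!HHI", 0x0001, 0, MAGIC_COOKIE) + os.urandom(12)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.connect(server)              # "connecting" a UDP socket just fixes the destination
        local_ip = sock.getsockname()[0]  # the locally enumerated address used for this route
        sock.send(request)
        data = sock.recv(2048)
    finally:
        sock.close()

    # Walk the response attributes looking for XOR-MAPPED-ADDRESS (type 0x0020).
    pos = 20
    while pos + 4 <= len(data):
        attr_type, attr_len = struct.unpack_from("!HH", data, pos)
        if attr_type == 0x0020:
            _family, xport = struct.unpack_from("!xBH", data, pos + 4)
            xaddr = struct.unpack_from("!I", data, pos + 8)[0]
            mapped_port = xport ^ (MAGIC_COOKIE >> 16)
            mapped_ip = socket.inet_ntoa(struct.pack("!I", xaddr ^ MAGIC_COOKIE))
            return local_ip, mapped_ip, mapped_port
        pos += 4 + attr_len + (-attr_len % 4)  # attribute values are padded to 4 bytes
    raise RuntimeError("no XOR-MAPPED-ADDRESS in the STUN response")

local_ip, mapped_ip, mapped_port = stun_check()
print("local address :", local_ip)
print("server sees   :", mapped_ip, mapped_port)
print("directly connected" if local_ip == mapped_ip else "probably behind a NAT")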
TCP - this gets a little tricky. STUN can partially be used to determine if you are behind a NAT. Or simply making an http request to whatismyip.com and parsing the result to see if there's a NAT. But it gets tricky, as there's no service on the internet that I know of that will test a TCP connection back to you.
With all the above in mind, the vast majority of broadband users are likely behind a NAT that also acts as a firewall, either provided by their ISP or their own wireless router. And even if they are not, most operating systems have some sort of minimal firewall to block unsolicited traffic. So it's very limiting to have a P2P client out there that can only work on direct connections.
With that said, on Windows (and likely other platforms), you can program your app's install package to register with the Windows Firewall so your app is not blocked. But if you aren't targeting Windows, you may have to ask the user to manually adjust their firewall software.
Oh, shameless plug: you can use this open source STUN server and client library, which supports all of the semantics described above. Follow up with me offline if you need access to a STUN service.
You might find this article useful
http://msdn.microsoft.com/en-us/library/aa364726%28v=VS.85%29.aspx
I would start with each OS and ask if firewall services are turned on. Secondly, I would attempt the socket connections and determine from the error codes whether connections are being reset or are timing out. I'm only familiar with Winsock coding, so I can't really say much for Linux or macOS.
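In Python rather than Winsock (the host and port below are placeholders), the error-code idea looks roughly like this:

import socket

# Hypothetical target; substitute a host/port you actually want to test.
HOST, PORT = "203.0.113.7", 8080

def probe(host, port, timeout=3.0):
    """Classify a TCP connect attempt by the error it produces."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout)
    try:
        sock.connect((host, port))
        return "open (connection accepted)"
    except ConnectionRefusedError:
        return "closed (connection refused/reset, so the host answered)"
    except socket.timeout:
        return "filtered (no response at all, likely dropped by a firewall)"
    finally:
        sock.close()

print(probe(HOST, PORT))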

NAT, P2P and Multiplayer

How can an application be designed such that two peers can communicate directly with each other (assuming both know each other's IPs), but without outgoing connections? That is, no ports will be opened. BitTorrent, for example, does it, but multiplayer games (as far as I know) require port forwarding.
I'm not sure what you mean by "no outgoing connections"; like everyone else, I'm going to assume you meant no incoming connections (the peers are behind a NAT/firewall/etc.).
The most common one mentioned so far is UPnP, which in this context is a protocol that allows your computer to talk to the gateway and say "forward me this port, because I want someone on the outside to be able to talk to me". UPnP is also designed for other things, but this is the common use in home networking (it's actually one of many things the standard defines).
There are also more common and slightly more reliable ways if you don't own the network. The most common is called STUN, and if I recall correctly there are a few variants. Basically, you use a third-party server that allows incoming connections to coordinate a communication channel. What you do is send a UDP packet to your peer, which opens up your NAT for a response, but it gets dropped at your peer's NAT (since no forwarding rule exists there yet). Through the connection to the intermediary, your peer is then told to do the same, which opens up their NAT and matches the rule that now exists in your NAT. From then on the communication can proceed. There is a variant of this that allows a TCP/IP connection as well, by sending SYN and SYN-ACK messages with some coordination.
The Wikipedia articles I've linked to have links to the relevant RFCs describing precisely how these protocols work. Essentially it comes down to this: there isn't an easy answer, as this is a very network-centric problem.
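For what it's worth, here is a stripped-down Python sketch of the UDP hole-punching described above, assuming both peers have already learned each other's public IP:port from the intermediary (the peer address shown is a documentation placeholder):

import socket

# Placeholders: in practice both peers learn these via the third-party/rendezvous server.
LOCAL_PORT = 40000
PEER_PUBLIC = ("198.51.100.23", 40000)   # the other peer's public IP:port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", LOCAL_PORT))
sock.settimeout(1.0)

# Our outgoing packets create/refresh the mapping in our own NAT; the first ones may be
# dropped by the peer's NAT, but once the peer starts doing the same, traffic flows both ways.
for attempt in range(10):
    sock.sendto(b"punch", PEER_PUBLIC)
    try:
        data, addr = sock.recvfrom(1024)
        print("hole punched, received", data, "from", addr)
        break
    except socket.timeout:
        continue
else:
    print("no reply; the peer may not be punching, or the NATs are too restrictive")
sock.close()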
You need a "meeting point" in the network somewhere: the participants "meet" at a "gateway" of some sort, and that "gateway function" takes care of the forwarding.
At least that's one way of doing it. I won't try to comment on the details of BitTorrent... I am sure you can google for links.
UPnP has mostly dealt with this in recent years, but the need to open ports exists because the application has been coded to listen on a specific port for a response.
Ports below 1024 are the "well-known" ports: they have been assigned to specific services by IANA. This doesn't mean you couldn't use port 53 for a web server or SSH, just that most people who see it will assume they are dealing with DNS. Higher ports have no such fixed association: your web browser, be it Internet Explorer/Firefox/etc., uses an ephemeral high port to send its request to the StackOverflow webserver(s) on port 80. You can use:
netstat -a
...on Windows hosts to see what network connections are currently established, including the ports involved.
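A tiny Python sketch shows the same thing from the client side (using example.com as a stand-in server):

import socket

# Connect out to a web server on its well-known port and inspect both ends of the socket:
# the destination is port 80, but the operating system picks an ephemeral high port for our side.
sock = socket.create_connection(("example.com", 80), timeout=5)
print("local endpoint :", sock.getsockname())   # e.g. ('192.168.1.20', 52734), an ephemeral port
print("remote endpoint:", sock.getpeername())   # (<server IP>, 80), the well-known port
sock.close()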
UPnP can be used to negotiate with the router to open and forward a port to your application. Even BitTorrent needs at least one of the peers to have an open port to enable P2P connections. There is no need for both peers to have an open port, however, since they both communicate with the same server (the tracker) that lets them negotiate and determine who has an open port.
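As a rough sketch of that negotiation, assuming the third-party miniupnpc Python bindings are installed and the router has UPnP/IGD enabled (the port number is arbitrary, and method names follow miniupnpc's example scripts; treat them as an assumption if your version differs):

import miniupnpc

# Ask the gateway (if it speaks UPnP IGD) to forward external TCP port 25565 to this machine.
PORT = 25565  # hypothetical port for your game or P2P application

upnp = miniupnpc.UPnP()
upnp.discoverdelay = 200          # milliseconds to wait for SSDP replies
ndevices = upnp.discover()        # broadcast an M-SEARCH and collect answers
print("devices discovered:", ndevices)

upnp.selectigd()                  # pick the Internet Gateway Device
print("external IP:", upnp.externalipaddress())

# arguments: external port, protocol, internal host, internal port, description, remote host
upnp.addportmapping(PORT, "TCP", upnp.lanaddr, PORT, "example P2P app", "")
print("mapping added for port", PORT)

# When the application exits, it is polite to remove the mapping again:
# upnp.deleteportmapping(PORT, "TCP")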
An alternative is an echo server / relay server somewhere on the internet that both peers trust, and having it relay all the traffic.
The "problem" with this solution is that the echo server needs to have lots of bandwidth to accommodate all connected peers, since it relays all the traffic rather than establishing P2P connections.
Check out EchoWare: http://www.echogent.com/tech.htm

Avoid traffic shaping by using ssh on port 443

I heard that if you use port 443 (the port usually used for HTTPS) for SSH, the encrypted packets look the same to your ISP.
Could this be a way to avoid traffic shaping/throttling?
I'm not sure it's true that any given SSH packet "looks" the same as any given HTTPS packet.
However, over their lifetimes they don't behave the same way. The session setup and teardown don't look alike (SSH offers a plain-text banner during the initial connection, for one thing). Also, wouldn't an HTTPS session typically be short-lived? Connect, get your data, disconnect; whereas an SSH session would connect and persist for long periods of time. I think using 443 instead of 22 might get past naive filters, but I don't think it would fool someone specifically looking for active attempts to bypass their filters.
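As a rough illustration of that difference, a small Python sketch (the host is a placeholder) checks whether whatever is listening on 443 speaks first like an SSH server, or stays silent like a TLS server waiting for a ClientHello:

import socket

HOST, PORT = "203.0.113.5", 443   # placeholder host running either sshd or an HTTPS server

sock = socket.create_connection((HOST, PORT), timeout=5)
sock.settimeout(3)
try:
    # An SSH server speaks first, sending a plain-text banner such as "SSH-2.0-OpenSSH_...".
    # A TLS server stays silent until the client sends a ClientHello, so this read times out.
    banner = sock.recv(64)
    if banner.startswith(b"SSH-"):
        print("server sent an SSH banner:", banner)
    else:
        print("server sent something else first:", banner)
except socket.timeout:
    print("no banner; the server is waiting for a ClientHello, so it is probably TLS/HTTPS")
finally:
    sock.close()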
Is throttling ssh a common occurrence? I've experienced people blocking it, but I don't think I've experienced throttling. Heck, I usually use ssh tunnels to bypass other blocks since people don't usually care about it.
443, when used for HTTPS, relies on SSL (not SSH) for its encryption. SSH looks different than SSL, so it would depend on what your ISP was actually looking for, but it is entirely possible that they could detect the difference. In my experience, though, you'd be more likely to see some personal firewall software block that sort of behavior since it's nonstandard. Fortunately, it's pretty easy to write an SSL tunnel using a SecureSocket of some type.
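For example, on the client side the Python standard library's ssl module can wrap an ordinary socket in TLS, which is the basic building block of such a tunnel (the endpoint shown is hypothetical):

import socket
import ssl

# Wrap a plain TCP connection in TLS; anything written to tls_sock now looks like
# ordinary HTTPS-style traffic on the wire, whatever the payload actually is.
HOST, PORT = "tunnel.example.net", 443   # hypothetical tunnel endpoint you control

context = ssl.create_default_context()
raw_sock = socket.create_connection((HOST, PORT), timeout=5)
tls_sock = context.wrap_socket(raw_sock, server_hostname=HOST)

print("negotiated:", tls_sock.version(), tls_sock.cipher())
tls_sock.sendall(b"payload that should not be inspected in transit")
tls_sock.close()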
In general, they can see how much bandwidth you are using, whether or not the traffic is encrypted. They'll still know the endpoints of the connection, how long it's been open, and how many packets have been sent, so if they base their shaping metrics on this sort of data, there's really nothing you can do to prevent them from throttling your connection.
Your ISP is probably more likely to traffic-shape port 443 than port 22, seeing as port 22 requires more real-time responsiveness.
Not really a programming question, though; maybe you'll get a more accurate response somewhere else.
