I too feel that this is a stupid question, but I'm unsure whether capturing traffic will work if the host is offline. I was actually discussing Man-in-the-Middle attacks and thought: let's suppose https://example.com is offline (down, or blocked on a network) and someone makes a request to http://example.com/example-category/example. Will an attacker be able to capture this complete GET request on a local network?
I think yes, because the request will be sent from the client towards the host anyway, and it should be capturable there. If that is the case, can HTTPS traffic also be captured (talking only GET based) if the host is offline or blocked intentionally on a local network?
If the man in the middle is located in the network before the host is found unreachable (for example in the local network before the router), then yes, the request would go to the MITM.
Yet the MITM might be in a bit of a situation if he finds the destination host unreachable from his network, too.
If the router/gateway that is blocking the request is before the MITM, the request will be blocked and not received by the MITM.
If there is no MITM, but just traffic monitoring, there will be no connection made and thus no request transmitted to be monitored.
As to HTTPS: If the MITM cannot provide a valid certificate for the domain name (usually, MITM cannot), the connection would fail on the TLS part.
No. Before the http request itself can be sent, the initial TCP connection has to be established. And if the host is offline, then the TCP connection CANNOT be established.
No connection, no request, therefore nothing to sniff. The only thing that COULD be sniffed/intercepted would therefore be the initial TCP SYN packet, and that by itself is essentially useless.
It's like dialing a phone number that doesn't exist - the attempt to dial can be monitored, but since the call can never be established, there's no voice chatter to intercept.
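To make that concrete, here is a minimal Node.js sketch (a hypothetical illustration, not taken from any tool mentioned here): the GET line is only written after the TCP handshake succeeds, so against an offline host nothing beyond the SYN ever leaves the client.

    // Minimal sketch: the HTTP request is only sent once TCP connects.
    // Against an offline host, 'connect' never fires, so there is nothing
    // beyond the initial SYN for an on-path observer to capture.
    const net = require('net');

    const socket = net.connect({ host: 'example.com', port: 80, timeout: 5000 });

    socket.on('connect', () => {
      // Reached only if the three-way handshake completed.
      socket.write('GET /example-category/example HTTP/1.1\r\nHost: example.com\r\n\r\n');
    });

    socket.on('timeout', () => {
      console.log('No connection, no request - nothing was transmitted.');
      socket.destroy();
    });

    socket.on('error', (err) => {
      console.log('Connection failed before any request data was sent:', err.code);
    });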
I want to understand how my local IP address (localhost) can be exposed to the Internet. For that I've read [here] about a method of port forwarding using SSH, which basically routes traffic from a publicly available server to our localhost over SSH.
But I wonder how a service like 'LocalTunnel' works. In the above article it's written as follows:
These services (localtunnel for example) create a tunnel from their server back to port 3000 on your dev box. It functions pretty much exactly like an SSH tunnel, but doesn't require that you have a server.
I've tried reading code from its GitHub repository and what I understood is:
These services have a publicly available server, which generates different URLs, and when we hit one of those URLs, it forwards the request to the localhost corresponding to that URL.
So basically it works like a proxy server.
Is this understanding correct? If yes, then what I don't understand is how their server has access to some localhost running on my computer. How does it perform requests to it? What am I missing here? Here is the code which I referred to.
The Short Answer (TL;DR)
The Remote (i.e. the localtunnel software on your computer) initiates the connection to the Relay (i.e. localtunnel.me), which acts as a multiplexing proxy. When Clients (i.e. web browsers) connect, the relay multiplexes the connections between remotes and clients by prepending special headers with network information.
Browser <--\ /--> Device
Browser <---- M-PROXY Service ----> Device
Browser <--/ \--> Device
Video Presentation
If you prefer a video presentation, I just gave a talk on this at UtahJS Conf 2018, in which I talk a little about all of the other potential solutions as well: SSH SOCKSv5 proxies (which you mentioned), VPN, UPnP, DHT, relays, etc.:
Access Ability: Access your Devices, Share your Stuff
Slides: http://telebit.cloud/utahjs2018
How localtunnel, ngrok, and Telebit work (Long Answer)
I'm the author of Telebit, which provides a service with features similar to what ngrok, localtunnel, and libp2p provide (as well as open source code for both the remote/client and relay/server so you can run it yourself).
Although I don't know the exact internals of how localtunnel is implemented, I can give you an explanation of how it's generally done (and how we do it), and it's most likely nearly identical to how they do it.
The magic that you're curious about happens in two places: the remote socket and the multiplexer.
How does a remote client access the server on my localhost?
1. The Remote Socket
This is pretty simple. When you run a "remote" (telebit, ngrok, and localtunnel all work nearly the same in this regard), it's actually your computer that initiates the request.
So imagine that the relay (the localtunnel proxy in your case) uses port 7777 to receive traffic from "remotes" (like your computer) and your socket number (the randomly chosen source port on your computer) is 1234:
Devices: [Your Computer] (tcp 1234:7777) [Proxy Server]
Software: [Remote] -----------------------> [Relay]
(auth & request 5678)
However, the clients (such as browsers, netcat, or other user agents) that connect to you actually also initiate requests with the relay.
Devices: [Proxy Service] (tcp 5678) [Client Computer]
Software: [Relay] <------------------------ [netcat]
If you're using TCP ports, then the relay service keeps an internal mapping, much like NAT:
Internal Relay "Routing Table"
Rule:
Socket remote[src:1234] --- Captures ------> ALL_INCOMING[dst:5678]
Condition:
Incoming client[dst:5678] --- MATCHES -------> ALL_INCOMING[dst:5678]
Therefore:
Incoming client[dst:5678] --- Forwards To ---> remote[src:1234]
Both connections are "incoming" connections, but the remote connection on the "south end" is authorized to receive traffic coming from another incoming source (without some form of authorized session anyone could claim use of that port or address and hijack your traffic).
[Client Computer] (tcp 5678) [Proxy Service] (tcp 1234) [Your Computer]
[netcat] --------------> <--[Relay]--> <------------ [remote]
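As a rough sketch (my own simplification, not localtunnel's actual code), a naive single-tenant relay in Node.js using the hypothetical ports from the diagrams above could look like this:

    // Naive relay sketch: one remote, piped to whichever client connects.
    // A real relay would authenticate the remote before trusting it.
    const net = require('net');

    let remoteSocket = null;

    // The remote (your computer) dials OUT to this port, so no inbound
    // firewall rule is needed on your side.
    net.createServer((socket) => {
      remoteSocket = socket;
      socket.on('close', () => { remoteSocket = null; });
    }).listen(7777);

    // Clients (browsers, netcat, ...) connect here and get forwarded.
    net.createServer((client) => {
      if (!remoteSocket) { client.destroy(); return; }
      client.pipe(remoteSocket);
      remoteSocket.pipe(client);
    }).listen(5678);

Note that this naive version can only keep one client conversation straight at a time, which is exactly the flaw the multiplexer below addresses.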
2. The Multiplexer
You may have noticed that there's a critical flaw in the description above. If you just route the traffic as-is, your computer (the remote) could only handle one connection at a time. If another client (browser, netcat, etc) hopped on the connection, your computer wouldn't be able to tell which packets came from where.
Browser <--\ /--> Device
Browser <---- M-PROXY Service ----> Device
Browser <--/ \--> Device
Essentially what the relay (i.e. localtunnel proxy) and the remote (i.e. your computer) do is place a header in front of all data that is to be received by the client. It needs to be something very similar to HAProxy's PROXY protocol, but one that works for non-local traffic as well. It could look like this:
<src-address>,<src-port>,<sni>,<dst-port>,<protocol-guess>,<datalen>
For example
172.2.3.4,1234,example.com,443,https,1024
That info could be sent immediately before, or prepended to, each data packet that's routed.
Likewise, when the remote responds to the relay, it has to include that information so that the relay knows which client the data packet is responding to.
See https://www.npmjs.com/package/proxy-packer for the full details.
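As an illustration only (this is not the actual proxy-packer wire format), the framing idea boils down to something like:

    // Simplified framing sketch - not the real proxy-packer format.
    // Each chunk routed between relay and remote is prefixed with a small
    // header identifying which client connection it belongs to.
    function pack(meta, data) {
      const header = [meta.srcAddress, meta.srcPort, meta.sni,
                      meta.dstPort, meta.protocolGuess, data.length].join(',');
      return Buffer.concat([Buffer.from(header + '\n'), data]);
    }

    function unpack(buffer) {
      const newline = buffer.indexOf('\n');
      const [srcAddress, srcPort, sni, dstPort, protocolGuess, datalen] =
        buffer.slice(0, newline).toString().split(',');
      return {
        meta: { srcAddress, srcPort: Number(srcPort), sni,
                dstPort: Number(dstPort), protocolGuess },
        data: buffer.slice(newline + 1, newline + 1 + Number(datalen)),
      };
    }

    // Example: the relay tags a chunk before handing it to the remote.
    const framed = pack(
      { srcAddress: '172.2.3.4', srcPort: 1234, sni: 'example.com',
        dstPort: 443, protocolGuess: 'https' },
      Buffer.from('hello'));
    console.log(unpack(framed));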
Sidenote/Rant: Ports vs TLS SNI
The initial explanation I gave uses TCP ports, because that's easy to understand. However, localtunnel, ngrok, and telebit all have the ability to use the TLS Server Name Indication (SNI) instead of relying on port numbers.
[Client Computer] (https 443) [Proxy Service] (wss 443) [Your Computer]
[netcat+openssl] --------------------> <--[Relay]--> <------------ [remote]
(or web browser) (sni:xyz.somerelay.com) (sni:somerelay.com)
MITM vs p2p
There are still a few different ways you can go about this (and this is where I want to give a shameless plug for telebit because if you're into decentralization and peer-to-peer, this is where we shine!)
If you only use the tls sni for routing (which is how localtunnel and ngrok both work by default last time I checked) all of the traffic is decrypted at the relay.
Another way requires ACME/Let's Encrypt integration (i.e. Greenlock.js) so that the traffic remains encrypted end-to-end, routing the TLS traffic to the client without decrypting it. This method functions as a peer-to-peer channel for all practical purposes (the relay acts as just another opaque "switch" on the network of the Internet, unaware of the contents of the traffic).
Furthermore, if https is used both for remotes (for example, via Secure WebSockets) and for the clients, then the connections look just like any other type of https request and won't be hindered by poorly configured firewalls or other harsh / unfavorable network conditions.
Now, solutions built on libp2p also give you a virtualized peer connection, but it's far more indirect and requires routing through untrusted parties. I'm not a fan of that because it's typically slower and I see it as more risky. I'm a big believer that network federation will win out over anonymization (like libp2p) in the long run. (For our use case we needed something that could be federated - run by independently trusted parties - which is why we built our solution the way that we did.)
I'm just wondering if we could send HTTP requests to an API / web server anonymously? Right now, after some googling, I cannot find any answer on whether it is possible.
I'm writing code that will scrape data from their server, but I think they might have an API monitoring feature for their data.
Right now I am using Node with Axios, and my script is making almost ~10k requests per minute, which I think is bad because their server could blow up.
I tried googling but I didn't find any answer to my problem.
Sending HTTP requests to servers anonymously
The HTTP protocol uses TCP as the underlying transport protocol. The TCP protocol uses the three-way handshake to establish connections. In theory you could send packets without your source address, or with someone else's address - just like you could write someone else's address as a sender on an envelope in traditional mail.
Now, the three-way handshake works like this: You send the first SYN packet, then the server sends a SYN-ACK packet - to whom? If your address was not in the first SYN packet then the server cannot send you the second packet. And if you cannot get the SYN-ACK packet then you cannot even establish the connection. This is all before you can even think about sending the HTTP request on the TCP connection because there is no connection.
So, the answer is: No. You cannot send HTTP requests anonymously because you cannot establish a TCP connection anonymously.
Of course you could use a proxy, VPN, a tunnel, NAT or something like that so that the requests appear not to originate from you. But keep in mind that the proxy needs to know your address to pass responses back to you, so you are not completely anonymous; someone else simply knows who you are, and that someone else will not hesitate to reveal your identity as soon as you cause any trouble.
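Since the question mentions Node with Axios, here is a minimal sketch of the proxy approach (the proxy host and port are placeholders): the target server sees the proxy's IP, but the proxy still sees yours.

    // Minimal sketch: requests leave via a proxy, so the target server sees
    // the proxy's IP instead of yours. The proxy, however, still knows you.
    const axios = require('axios');

    axios.get('https://api.example.com/data', {
      proxy: {
        host: 'proxy.example.net', // placeholder proxy address
        port: 8080,
        // auth: { username: 'user', password: 'pass' }, // if the proxy requires it
      },
    }).then((res) => console.log(res.status))
      .catch((err) => console.error(err.message));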
I have a question: what is the difference between sniffing and forwarding?
I mean that when I am in the MITM position (the gateway of a client), I can access all the HTTPS websites from this client's browser.
In addition, I can check the generated traffic on the gateway side (including HTTPS requests/answers - encrypted of course!).
But as soon as I use tools called "sniffers" (ettercap for instance) on the gateway side, I get certificate errors and cannot even access those HTTPS websites on the client side.
I am thus wondering what the difference is between sniffing and forwarding the traffic; in both cases we have access to the exact same information on the gateway side (the generated traffic).
Finally, when sending HTTPS requests, those requests have to go through numerous routers to reach the destination server. A router is not a sniffer, and I suppose that is why we don't get the SSL certificate errors, right?
Sniffing is passive, whereas forwarding (MITM) is active.
When forwarding (MITM), you are part of the route. The traffic goes from the client to your IP address, then on to the server.
When sniffing, you're simply on the same physical network as the client and are able to receive a copy of the packets that the client is sending to the server.
If sniffing is causing HTTPS to fail, then there's something wrong. Perhaps you have mixed up the two terms?
If it were possible to retrieve the remote IP from a packet received by my Apache2 server (through a custom plugin perhaps), would it always be guaranteed to be accurate? Or is this value as easy to spoof as the referrer header?
My intended use case is to rate-limit unauthenticated API calls.
If it's a TCP packet, then it'll be accurate as to the sending host. IPs in TCP packets cannot be spoofed unless you've got control of the routers involved. With spoofed source packets, only the initial SYN packet will get through: the SYN+ACK response from the server will go to the spoofed address, not wherever the forgery came from - i.e. you cannot do the full 3-way handshake unless you can control packet routing from the targeted machine.
UDP packets, on the other hand, can be trivially forged and you cannot trust anything about them.
As well, even simple things like proxy servers and NAT gateways can cloak the 'real' ip from where the packet originated. You'll get an IP, but it'll be the IP of the gateway/proxy, not the original machine.
It is not reliable. Not only because it can be spoofed, but also because a network element can make your server see a different IP address.
For example, it is very typical in a company to access the Internet through a proxy. Depending on the configuration, from your server point of view, all the different users come from the same IP address.
In any case, it is a filter you can use in many scenarios. For example, show a captcha when you detect too many login requests from the same IP address.
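Whether you do this in the web server itself or in application code, the logic is the same; here is a minimal in-memory sketch in Node/Express with arbitrary limits (with the usual caveat that one IP may hide many users behind a proxy or NAT):

    // Minimal per-IP rate-limiting sketch (arbitrary limits, in-memory only).
    const express = require('express');
    const app = express();

    const hits = new Map(); // ip -> { count, windowStart }
    const WINDOW_MS = 60 * 1000;
    const MAX_REQUESTS = 100;

    app.use((req, res, next) => {
      const ip = req.socket.remoteAddress; // the gateway/proxy IP if the client sits behind one
      const now = Date.now();
      const entry = hits.get(ip) || { count: 0, windowStart: now };
      if (now - entry.windowStart > WINDOW_MS) {
        entry.count = 0;
        entry.windowStart = now;
      }
      entry.count += 1;
      hits.set(ip, entry);
      if (entry.count > MAX_REQUESTS) return res.status(429).send('Too Many Requests');
      next();
    });

    app.get('/api/unauthenticated', (req, res) => res.json({ ok: true }));
    app.listen(3000);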
If your intention is to rate-limit invalid API calls, you might want to consider using a service like Spamhaus. They list IPs that are likely bots and probes. There are other companies and lists as well. But if your intention is to ever ID the 'bad guy', the source IP is very unlikely to be correct.
I have an interesting network security challenge that I can't figure out the best way to attack.
I need to provide a way to allow two computers (A and B) that are behind firewalls to make a secure connection to each other using only a common "broker" untrusted server on the Internet (somewhere like RackSpace). (The server is considered untrusted because the customers behind the firewalls won't trust it, since it is on an open server.) I cannot adjust the firewall settings to allow the networks to connect directly to each other, because the connections are not known ahead of time.
This is very similar to a NAT to NAT connection problem like that handled by remote desktop help tools (crossloop, copilot, etc).
What I would really like to find is a way to open an SSL connection between the two hosts and have the public server broker the connection. Preferably when host A tries to connect to host B, it should have to provide a token that the broker can check with host B before establishing the connection.
To add another wrinkle to this, the connection mechanism needs to support two types of communication. First, HTTP request/response to a REST web service and second persistent socket connection(s) to allow for real-time message passing.
I have looked at the techniques I know about like OpenSSL using certificates, OAuth, etc, but I don't see anything that quite does what I need.
Has anyone else handled something like this before? Any pointers?
You can solve your problem with plain SSL.
Just have the untrusted server forward connections between the client hosts as opaque TCP connections. The clients then establish an end-to-end SSL connection over that forwarded TCP tunnel - with OpenSSL, one client calls SSL_accept() and the other calls SSL_connect().
Use certificates, probably including client certificates, to verify that the other end of the SSL connection is who you expect it to be.
(This is conceptually similar to the way that HTTPS connections work over web proxies - the browser just says "connect me to this destination", and establishes an SSL connection with the desired endpoint. The proxy just forwards encrypted SSL data backwards and forwards, and since it doesn't have the private key for the right certificate, it can't impersonate the desired endpoint).
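The same idea as a rough Node.js sketch (file names and the servername are placeholders): each host already has a plain TCP socket to the broker's forwarded tunnel and simply wraps it with TLS, so certificate verification happens end to end and the broker only ever relays ciphertext.

    // Rough sketch: end-to-end TLS over an already-established, untrusted
    // TCP tunnel. The broker in the middle only ever sees encrypted records.
    const tls = require('tls');
    const fs = require('fs');

    // Host B: takes the brokered TCP socket and acts as the TLS server.
    function acceptSecure(tcpSocket) {
      return new tls.TLSSocket(tcpSocket, {
        isServer: true,
        key: fs.readFileSync('b.key'),            // placeholder file names
        cert: fs.readFileSync('b.crt'),
        ca: [fs.readFileSync('client-ca.crt')],
        requestCert: true,                         // demand a client certificate from Host A
        rejectUnauthorized: true,
      });
    }

    // Host A: takes its brokered TCP socket and acts as the TLS client.
    function connectSecure(tcpSocket) {
      return tls.connect({
        socket: tcpSocket,                         // reuse the existing tunnel connection
        ca: [fs.readFileSync('b-ca.crt')],
        key: fs.readFileSync('a.key'),
        cert: fs.readFileSync('a.crt'),
        servername: 'host-b.example',              // must match Host B's certificate
      });
    }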
In general, SSL is a packet-based protocol (for the purpose of solving your task). If you can have the host forward the packets back and forth, you can easily have an SSL-secured communication channel. One thing you need is something like our SSL/TLS components, which allow any transport and not just sockets, i.e. the component tells your code "send this packet to the other side" or asks "do you have anything for me to receive?", and your code communicates with your intermediate server.