Sending HTTP requests to servers anonymously - node.js

I'm just wondering whether we can send an HTTP request to an API / web server anonymously. After some googling I cannot find any answer on whether it is possible.
I'm writing code that will scrape data from a server, but I think they might have an API monitoring feature for their data.
Right now I am using Node with Axios, and the script I am using is making almost 10k requests per minute, which I think is bad because their server could blow up.
I tried googling but didn't find any answer to my problem.

The HTTP protocol uses TCP as the underlying transport protocol. The TCP protocol uses the three-way handshake to establish connections. In theory you could send packets without your source address, or with someone else's address - just like you could write someone else's address as a sender on an envelope in traditional mail.
Now, the three-way handshake works like this: You send the first SYN packet, then the server sends a SYN-ACK packet - to whom? If your address was not in the first SYN packet then the server cannot send you the second packet. And if you cannot get the SYN-ACK packet then you cannot even establish the connection. This is all before you can even think about sending the HTTP request on the TCP connection because there is no connection.
So, the answer is: No. You cannot send HTTP requests anonymously because you cannot establish a TCP connection anonymously.
Of course you could use a proxy, VPN, tunnel, NAT or something like that so that the requests appear not to originate from you. But keep in mind that the proxy needs to know your address to pass the responses back to you, so you are not completely anonymous - someone else simply knows who you are, and that someone else will not hesitate to reveal your identity as soon as you cause any trouble.
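To illustrate the proxy option mentioned above, here is a minimal sketch using Axios' built-in proxy config. The proxy host/port and the target URL are hypothetical placeholders; the target server then sees the proxy's address, but the proxy itself still sees yours.

```js
const axios = require('axios');

async function fetchViaProxy(url) {
  const response = await axios.get(url, {
    proxy: {
      protocol: 'http',
      host: 'proxy.example.com', // hypothetical proxy address
      port: 8080,
      // auth: { username: 'user', password: 'pass' }, // if the proxy requires it
    },
    timeout: 10000,
  });
  return response.data;
}

fetchViaProxy('https://example.com/api/data')
  .then((data) => console.log(data))
  .catch((err) => console.error(err.message));
```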

Related

Close connection from proxy server to target server and not from client to proxy server

Objective:
Never close the connection between the client and the SOCKS proxy, and reuse it to send multiple HTTPS requests to different targets (example targets: google.com, cloudflare.com) without closing the socket while switching to a different target.
Step 1:
So I have a client which connects to a SOCKS proxy server over a TCP connection. That is the client socket (and the only socket (file descriptor) used in this project).
client -> proxy
Step 2:
Then, after the connection is established and verified, it does a TLS connect to the target server, which can be for example google.com (the DNS lookup is done before this).
Now we have connection:
client -> proxy -> target
Step 3:
Then the client sends an HTTPS request over it and receives the response successfully.
Issue appears:
After that I want to explicitly close the connection between the proxy and the target so I can send a request to another target. For this it is required to close the TLS connection, and I don't know how to do that without closing the connection between the client and the proxy, which is not acceptable.
Possible solutions?:
1:
Would sending a Connection: close\r\n header to the current target close the connection only between the proxy and the target, without closing the socket?
2:
If I added Connection: close\r\n to the headers of every request, would that close the socket and thus make it an invalid solution?
Question:
(Node.js) I made a custom https Agent which handles the Agent's method -> callback(req, opts), where the opts argument contains the request options from what the client sent to the target (through the proxy). This callback returns the TLS socket after it is connected; I built the TLS socket connection outside of the callback and passed it to the agent. Is it possible to use this to close the connection between the proxy and the target using req.close() - would this close the socket? Also, what is the point of req in the Agent's callback; can it be used in this case?
Any help is appreciated.
If you spin up Wireshark and look at what is happening through your proxy, you should quickly see that HTTP/S requests are connection-oriented, end-to-end (for HTTPS) and also time-boxed. If you stop and think about it, they are necessarily so, to avoid issues such as the confused deputy problem etc.
So the first bit to note is that for HTTPS, the proxy will only see the initial CONNECT request, and then from there on everything is just a TCP stream of TLS bytes. Which means that the proxy won't be able to see the headers (that is, unless your proxy is a MITM that intercepts the TLS handshake, and you haven't mentioned this, so I've assumed not).
The next bit is that the agent/browser will open connections in parallel (typically a half-dozen for a browser) and will also use pipelining and keep-alive to send multiple requests down the same connection.
Then there are connection limits imposed by the browser, and servers. These typically cap the number of requests, and the duration that they are held open, before speculatively closing them. If they didn't, any reasonably busy server would quickly exhaust all their TCP sockets.
So all-in, what you are looking to achieve isn't going to work.
That said, if you are looking to improve performance, the node client has a few things you can enable and tweak:
Enable TLS session reuse, which will make connections much more efficient to establish.
Enable keep-alive, which will funnel multiple requests through the same connection.
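As a rough illustration of those two tweaks, here is a minimal sketch using Node's built-in https.Agent. The host and the numbers are illustrative, and TLS session caching (maxCachedSessions) is in fact already enabled by default.

```js
const https = require('https');

const agent = new https.Agent({
  keepAlive: true,        // reuse established TCP connections for later requests
  maxSockets: 6,          // cap parallel connections per host
  maxCachedSessions: 100, // size of the TLS session cache used for session reuse
});

https.get('https://example.com/', { agent }, (res) => {
  res.resume(); // drain the body so the socket can go back into the pool
  res.on('end', () => console.log('status:', res.statusCode));
});
```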

Client security using UDP

Introduction
I am currently trying to build a networking layer for Unity from scratch. At the moment I am testing the communication via UDP, using Node.js for the server and the client. However, I guess the implementation language will not matter for what I am asking.
Current approach
The current approach using Node.js for the server and the client is pretty basic. I simply send a packet from a client to my server while the client and the server are not in the same local network. Both are behind a router and therefore also behind a NAT.
The server then sends back an answer to the IP and port received within the UDP packet that was sent from the client.
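For reference, here is a minimal sketch of that request/response flow with Node's dgram module. The server port (41234) and the hostname server.example.com are placeholders; the server simply replies to whatever source address/port the packet arrived from, and the client binds to port 0 so the OS picks a random local port.

```js
const dgram = require('dgram');

// --- server side ---
const server = dgram.createSocket('udp4');
server.on('message', (msg, rinfo) => {
  // rinfo holds the source address/port as the server sees it
  // (i.e. the client's public, NAT-translated endpoint)
  server.send(`got: ${msg}`, rinfo.port, rinfo.address);
});
server.bind(41234); // port forwarded to the server machine in its router

// --- client side ---
const client = dgram.createSocket('udp4');
client.on('message', (msg) => console.log('reply:', msg.toString()));
client.bind(0, () => {
  // the OS has now picked a random local port for this socket
  client.send('hello', 41234, 'server.example.com');
});
```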
Problem
I am curious about security on the client side with regard to the ports being opened on the client machines and routers. So far I have assumed that I don't need to do anything to secure the client from attackers or anyone else who could do something with the ports used by my application. The following assumption shows why I think I don't need to do anything to secure the clients.
Assumption
1. Server sets up its callbacks.
2. Server starts listening on a specific port, which is also forwarded to the server's machine within the router.
3. Server will call a callback when a UDP message is received. The server then sends a UDP message to the address and port of the client obtained from the received message.
4. Client sets up its callbacks.
5. Client starts listening on port 0, which for Node.js's dgram means:
   "For UDP sockets, causes the dgram.Socket to listen for datagram messages on a named port and optional address. If port is not specified or is 0, the operating system will attempt to bind to a random port." - https://nodejs.org/api/dgram.html#dgram_socket_bind_port_address_callback
   So the operating system now knows that packets sent to this port belong to my application.
   Nobody can use this for something malicious.
6. Client, which knows the server's address and port, starts sending a UDP message to the server.
7. Client's router receives the UDP message. NAT creates a random port (used on the public side) and maps it to the client's (local) address and port.
   So the router now knows that packets sent to the public address and the newly generated port belong to the local address and port.
   Nobody can use this for something malicious.
8. Client's router sends the UDP message containing the public address and the NAT-generated port to the server.
   The worst thing that can happen is that a man-in-the-middle attacker can read the data the client is sending. Since only game data such as positions is sent, this is not a big problem while developing the basics.
   Nobody can use this for something malicious.
9. Server receives the message and calls the callback described in 3, so the server sends to the public address and the NAT-generated port of the client.
   The worst thing that can happen is that a man-in-the-middle attacker can read the data the server is sending. Since only game data such as positions is sent, this is not a big problem while developing the basics.
   Nobody can use this for something malicious.
10. Same as 7, with the server's router and the server's local address and port.
11. Same as 8, with the server's router.
12. Client receives the UDP message from the server and calls a callback which processes the message contents.
    Since the client's local port is bound only to my application, nobody can use this for something malicious, because I simply ignore the contents if they do not come from the real server.
Question
So is my assumption correct, and do I really not need to secure the client from any attacks that would harm the clients in any way?

Can I capture HTTP traffic even if the host is offline?

I feel that this is a stupid question, but I'm unsure whether capturing traffic will work if the host is offline. I was discussing man-in-the-middle attacks and thought: let's suppose https://example.com is offline (down or blocked on a network) and someone makes a request to http://example.com/example-category/example - will an attacker be able to capture this complete GET request on a local network?
I think yes, because the request will be sent from the client towards the host anyway, and that is where it should be captured. If that is the case, can HTTPS traffic also be captured (talking only about GET requests) if the host is offline or intentionally blocked on a local network?
If the man in the middle is located in the network before the host is found unreachable (for example in the local network before the router), then yes, the request would go to the MITM.
Yet the MITM might be in a bit of a situation if he finds destination host unreachable from his network, too.
If the router/gateway that is blocking the request is before the MITM, the request will be blocked and not received by the MITM.
If there is no MITM, but just traffic monitoring, there will be no connection made and thus no request transmitted to be monitored.
As to HTTPS: If the MITM cannot provide a valid certificate for the domain name (usually, MITM cannot), the connection would fail on the TLS part.
No. Before the http request itself can be sent, the initial TCP connection has to be established. And if the host is offline, then the TCP connection CANNOT be established.
No connection, no request, therefore nothing to sniff. The only thing that COULD be sniffed/intercepted would therefore be the initial TCP SYN packet, and that by itself is essentially useless.
It's like dialing a phone number that doesn't exist - the attempt to dial can be monitored, but since the call can never be established, there's no voice chatter to intercept.
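To make the "no connection, no request" point concrete, here is a minimal Node sketch (host and path borrowed from the example in the question) showing that the GET line is only ever written after the TCP handshake succeeds; if the host is offline, the request never exists on the wire.

```js
const net = require('net');

// try to open the TCP connection that any HTTP request would need first
const socket = net.connect({ host: 'example.com', port: 80, timeout: 5000 });

socket.on('connect', () => {
  // only reached if the three-way handshake succeeded - the earliest
  // point at which the GET request could actually be sent
  socket.write('GET /example-category/example HTTP/1.1\r\nHost: example.com\r\n\r\n');
});

socket.on('timeout', () => {
  // host offline / packets dropped: only our SYN ever left the machine,
  // so there is no request on the wire for anyone to capture
  console.error('connect timed out - no request was ever sent');
  socket.destroy();
});

socket.on('error', (err) => {
  // e.g. ECONNREFUSED or EHOSTUNREACH - again, no HTTP request was sent
  console.error('connect failed:', err.code);
});
```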

Sniff over HTTPS

I have a question: what is the difference between sniffing and forwarding?
I mean that when I am in the MITM position (acting as the gateway of a client), I can access all the HTTPS websites with this client's browser.
In addition, I can check the generated traffic on the gateway side (including HTTPS requests/answers - encrypted of course!).
But as soon as I use tools called "sniffers" (ettercap for instance) on the gateway side, I get certificate errors and cannot even access those HTTPS websites on the client side.
I am thus wondering what the difference is between sniffing and forwarding the traffic; in both cases we have access to the exact same information on the gateway side (the generated traffic).
Finally, when sending HTTPS requests, those requests have to go through numerous routers to reach the destination server. A router is not a sniffer, I suppose, and that is why we don't get the SSL certificate errors, right?
Sniffing is passive, whereas forwarding (MITM) is active.
When forwarding (MITM), you are part of the route. The traffic goes from the client to your IP address, then on to the server.
When sniffing, you're simply on the same physical network as the client and are able to receive a copy of the packets that the client is sending to the server.
If sniffing is causing HTTPS to fail, then there's something wrong. Perhaps you have mixed up the two terms?

Unable to exchange UDP packets with node.js AWS server

I would like to use my AWS instance to exchange UDP packets with various client applications.
When I run the server-side code locally, everything works as expected. However, when the code is run from AWS, I can only receive packets, not send. The logs tell me that, at least, the server-side send() is being invoked, but nothing else can be discerned.
Edit:
I'm not using a load balancer; I only have one instance [SO post]
I've enabled all UDP inbound/outbound traffic [AWS post]
I created a second AWS instance, and I am able to exchange packets between my two instances.
Wireshark doesn't detect incoming packets on my client, even when its firewall is disabled.
I've successfully sent UDP packets to my instance (where they've been detected). The problem of outbound traffic remains.
Advice?
The issue was that I was sending from a private IP. This explains why I could contact the server, but it could not contact me; various application logic prevented this. In order for this to work, you must use the server to reply to the non-private IP.
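A minimal sketch of one way to apply that fix on the server side with Node's dgram module (the port number is illustrative): reply to the source endpoint the packet actually arrived from, rather than any private address the client might report.

```js
const dgram = require('dgram');

const server = dgram.createSocket('udp4');

server.on('message', (msg, rinfo) => {
  // reply to the observed source of the packet (rinfo), i.e. the client's
  // public, NAT-translated endpoint - not to any private IP carried in msg
  server.send('ack', rinfo.port, rinfo.address, (err) => {
    if (err) console.error('send failed:', err);
  });
});

server.bind(41234, () => console.log('listening on UDP 41234'));
```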
