Python 3 Specifying Socks5 Proxy In Specified Request Only - python-3.x

The answer here states how to use a SOCKS5 proxy in Python 3. Unfortunately, that answer doesn't work when you want to send some requests through the SOCKS proxy and others outside it at virtually the same time.
I want to be able to do this where the first request uses the proxy and the second uses my own IP, at virtually the same time, without having to switch the default socket around:
import concurrent.futures
import requests

threadPoolTasks1 = []
with concurrent.futures.ThreadPoolExecutor(max_workers=7) as executor:
    # submit takes a callable plus its arguments; passing print(...) would
    # run the request immediately and submit None instead of a task
    # this request should use the proxy. I should see the IP reflecting that
    threadPoolTasks1.append(executor.submit(requests.get, "http://checkip.amazonaws.com/"))
    # this request won't use the proxy.
    threadPoolTasks1.append(executor.submit(requests.get, "http://checkip.amazonaws.com/"))

for task in threadPoolTasks1:
    print(task.result().text)
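One possible approach (an assumption on my part, not what the linked answer does, which patches the default socket globally): requests accepts a per-call proxies mapping, so each request can decide on its own whether to use the SOCKS proxy. The socks5:// scheme needs the requests[socks] extra (PySocks) installed, and the proxy address below is a placeholder:

```python
import concurrent.futures
import requests

# placeholder address for a local SOCKS5 proxy
SOCKS_PROXIES = {
    "http": "socks5://127.0.0.1:1080",
    "https": "socks5://127.0.0.1:1080",
}

def fetch(proxies=None):
    # the proxies dict is scoped to this single call; None means a direct connection
    return requests.get("http://checkip.amazonaws.com/", proxies=proxies, timeout=10)

with concurrent.futures.ThreadPoolExecutor(max_workers=7) as executor:
    proxied = executor.submit(fetch, SOCKS_PROXIES)  # should show the proxy's IP
    direct = executor.submit(fetch)                  # should show your own IP
```

After the with block both futures have completed; if a SOCKS5 proxy is actually listening on the placeholder address, proxied.result().text and direct.result().text should print two different IPs.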

Related

Make all http requests in Node follow machine's proxy settings

I am investigating some background HTTP requests made by a third-party module. I want to route all such requests (I don't know where in the code they originate) through a proxy (Fiddler) configured at the PC level, so that I can inspect the payloads and responses.
Is there a way to transparently make all network requests in a Node app follow the PC's proxy settings?

WSGIServer not working with https and python3

We have an old package written in Python 2 and are working on upgrading it to Python 3. It's a web app and we are using the WSGIServer.
from gevent import pywsgi
from geventwebsocket.handler import WebSocketHandler

def apphandler(request, start_response):
    log.info("ATTENTION: request is {}".format(request))
    # do something

# apphandler must be defined before it is referenced here
server = pywsgi.WSGIServer(('', port), apphandler, handler_class=WebSocketHandler)
server.serve_forever()
However, when I try to use the web app with https://website, it would show this error:
This site can’t provide a secure connection
website.com sent an invalid response.
ERR_SSL_PROTOCOL_ERROR
And in my server log, I would see things like:
Invalid HTTP method: '\x16\x03\x01\x02\x00\x01\x00\x01ü\x03\x03Èæ\x01\x92É\x16ÁW»P½\x1aBÐa\x83ÆÊa]ãíDp¥¥¥\x12Ç\x82|\x1f E5\x03aAv\x99¢Nª\x93ÅÏ:Ð\x9d÷3£\x80çÍïÌÑÿC\rÏUÄ\x8b\x00 ªª\x13\x01\x13\x02\x13\x03À+À/À,À0̨̩À\x13À\x14\x00\x9c\x00\x9d\x00/\x005\x01\x00\x01\x93ZZ\x00\x00\x00\x00\x005\x003\x00\x000website.com\x00\x17\x00\x00ÿ\x01\x00\x01\x00\x00\n'
When I access the site as http://website.com, it works. And under Python 2, the server responded correctly to https.
I'm guessing it might be an encoding issue, but the request never even reaches the log call in the apphandler function. Does anyone know how I can fix this? Do I need to change the server or the Python 3 encoding setup?
Looks like you are starting up a plain TCP web server on port 443 and then hitting it from a web browser using an https: URL, so the TLS ClientHello bytes get parsed as an HTTP request (that is the "Invalid HTTP method: '\x16\x03\x01..." line in your log).
If you check out the documentation:
https://www.gevent.org/api/gevent.pywsgi.html#gevent.pywsgi.WSGIServer
You need to supply the SSL arguments (keyfile, certfile, etc.) in order to get the socket to run under TLS instead of plain TCP. Just supplying a port number of 443 is not sufficient.
You can find a few examples of how to set ssl_args here:
https://www.programcreek.com/python/example/78007/gevent.pywsgi.WSGIServer
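For example (a sketch, assuming gevent is installed; the handler body and the certificate/key file names are placeholders), passing keyfile and certfile through to the server makes it terminate TLS itself:

```python
from gevent import pywsgi

def apphandler(environ, start_response):
    # minimal placeholder WSGI handler
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello over TLS\n']

# keyfile/certfile are forwarded to the SSL layer; the paths are placeholders
server = pywsgi.WSGIServer(('', 443), apphandler,
                           keyfile='server.key', certfile='server.crt')
# server.serve_forever()  # blocks; binding port 443 usually requires privileges
```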

node server placed behind proxy server failed to GET request to https://localhost:<port>

On my machine, I'm hosting a node server that listens on port 5000. Before setting up a forward proxy (squid), I was able to perform a GET on https://localhost:<port>. However, after setting up the forward proxy and setting the environment variable http_proxy=<ip addr:port>, this GET request no longer works.
The error that shows up is: tunnelling socket could not be established, statusCode=503
Some additional information:
The proxy server works as I am able to connect to the internet via it.
Performing the same GET with curl against https://localhost:5000/api works.
I am using request.js for the requests, with agentOptions to specify the TLS protocols and the CA cert.
I am hoping to understand how the traffic differs once I add the proxy. From my understanding, because it is an HTTPS request, the client now has to establish a CONNECT tunnel through the proxy first, before the request comes back to my localhost. But without the proxy, why does it work?
Any help would be greatly appreciated. Thanks!
You must add:
export no_proxy='localhost,127.0.0.1'
The HTTPS request currently works because you are not using the proxy for HTTPS at all; to route HTTPS through the proxy you must also set https_proxy='<your_proxy>'.
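Putting the answer together (the proxy address placeholder is copied from the question):

```shell
# route HTTP and HTTPS through the forward proxy...
export http_proxy='http://<ip addr:port>'
export https_proxy='http://<ip addr:port>'
# ...but bypass it for local traffic, so https://localhost:5000 is reached directly
export no_proxy='localhost,127.0.0.1'
```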

Identify Internal Request Nginx + NodeJS Express

I have a NodeJS API using the Express framework.
I use Nginx for load balancing between my NodeJS instances, and PM2 to spawn the instances.
I noticed in the logs that Nginx makes some "dummy/internal" requests, probably to check whether an instance is up ("heartbeat requests" might be the appropriate name for these).
My question is: what is the right way to identify these "dummy/internal" requests in my API?
I'm fairly certain that open-source nginx only uses passive health checks for upstream servers. In other words, because every HTTP request is expected to produce a response, nginx reasons: "if I send this server a bunch of requests and don't get responses for them, I'll consider the server to be unhealthy."
Can you share some access logs of the requests you're seeing?
As far as I know, nginx does not send any requests to upstream servers that are not ultimately initiated by a client.

Router blocking response from external networks

I'm working on a company network right now, and I've come across a problem where my scripts cannot connect to external networks. I'm just wondering if anyone knows common practices in network security that may cause this?
Ex. I can visit www.example.com on firefox, but my python script will get a timeout error if it tries to connect.
These scripts work perfectly fine on another network or if I change the URL to something on the local network.
import urllib.request
f = urllib.request.urlopen('http://www.python.org/')
print(f.read(300))
ANSWER: the browser uses the network's proxy; scripts must also use that proxy in order to work.
import urllib.request

# replace with your network's proxy address, e.g. 'http://proxyhost:8080'
proxy = urllib.request.ProxyHandler({'http': '127.0.0.1'})
opener = urllib.request.build_opener(proxy)
# install_opener makes every subsequent urlopen call go through the proxy
urllib.request.install_opener(opener)
req = urllib.request.urlopen('http://www.google.com')
print(req.read())
It is very likely that your browser is configured to use a proxy. If so, you will need to augment your Python script with a ProxyHandler (see Proxy with urllib2).
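To find out which proxy address to put in the ProxyHandler, urllib can report the system's proxy settings directly:

```python
import urllib.request

# getproxies() reads http_proxy/https_proxy etc. from the environment
# (and the registry/system configuration on Windows and macOS)
print(urllib.request.getproxies())
```

If this prints an empty dict on your machine, the proxy is configured only inside the browser and you will have to copy its address out of the browser's network settings.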
