I installed mitmproxy with pip (pip(3) install mitmproxy). I created a script that performs HTTP requests - using https://requests.readthedocs.io/en/master/, obviously - on a specific trigger (e.g. an image or file passing through the reverse proxy).
Versions: Python 3.9.1 on 64-bit Windows 10, pip 20.2.3, and mitmproxy 6.0.2.
@staticmethod
def _file_exists(file_name: str) -> bool:
    # requires: import requests
    request_path = "https://<url>/{}".format(file_name)
    req = requests.get(request_path)
    return req.status_code == 200
This blocks forever when I run it with mitmdump -s script.py. Adding a timeout instead results in a TCP timeout exception - for both HTTP and TLS.
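For reference, the timeout variant is just (the 5-second value is an arbitrary choice):

# Raises requests.exceptions.ConnectTimeout / ReadTimeout instead of
# blocking forever; the 5-second value is arbitrary.
req = requests.get(request_path, timeout=5)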
I tried the following:
- Re-installing the SSL certificate of mitmproxy
- Using a clean Windows installation
- Connecting to an IP address instead
- Connecting without HTTPS
I'm stuck. Any ideas?
The code is as follows:
import requests
proxy = 'username:password@ip:9999'
print(requests.get('https://api.ipify.org/', proxies={'http': f'http://{proxy}', 'https': f'http://{proxy}'}).text)
(username, password and ip have been omitted)
OS = Ubuntu 20.04.5 LTS
Python version = 3.8.10
Requests version = 2.28.1
When run on said server, Proxy-Authorization is not passed in the connection headers. However, when running the exact same script on my Windows device, it is passed.
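One way to narrow this down is to ask requests which proxy settings it actually resolves on each machine; a sketch (assuming nothing beyond the script above) - environment variables such as HTTPS_PROXY, ALL_PROXY, or NO_PROXY are merged in when trust_env is enabled and may differ between the Ubuntu server and the Windows device. Note also that for an https:// URL the Proxy-Authorization header travels in the CONNECT request to the proxy, not in the final request headers:

import requests

session = requests.Session()
# merge_environment_settings reports the proxies/verify/cert values that
# requests will really use for this URL after merging environment variables.
settings = session.merge_environment_settings(
    'https://api.ipify.org/', {}, None, None, None)
print(settings['proxies'])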
Before opening an issue on GitHub, I wanted to see if anyone knows why this might be.
Update:
I have found that the issue is that create-react-app never changes the host to 0.0.0.0. This is due to the network I'm running this on at work. If I remove 0.0.0.0 from the Flask server, it gives the same errors; it only works with 0.0.0.0 as the hostname in Flask, but I can't find any way to run the React server on it. It only ever runs on localhost:3001 and ip:3001, both of which give me errors with https.
I am having trouble launching the sample app from create-react-app; TLS 1.2+ are the only protocols supported on my work network.
I've made an .env file for React as below. However, Chrome says it's not a trusted certificate and won't load the page. Is there any way to force the TLS protocol used with an env variable?
HTTPS=true
HOST=0.0.0.0
PORT=3001
SSL_CRT_FILE=file.cer
SSL_KEY_FILE=file.key
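One quick sanity check (a sketch; it assumes file.cer and file.key sit in the working directory) to rule out a mismatched certificate/key pair before blaming the dev server:

import ssl

# load_cert_chain raises ssl.SSLError when the private key does not
# match the certificate, so loading the pair doubles as a pair check.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain('file.cer', 'file.key')
print('certificate and key match')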
I had similar problems with my Flask server but eventually got it to work as below:
import ssl

def ssl_setup():
    cer_file = 'file.cer'
    key_file = 'file.key'
    context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
    context.load_cert_chain(cer_file, key_file)
    return context  # hand the configured context to app.run()

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=5001, ssl_context=ssl_setup())
Chrome dev tools show that Flask is using AES256 while React is using AES128. Maybe this is the issue?
Forcing 256-bit is still not working; it keeps saying my certificate is invalid even though it works just fine in Python. To force it I used export NODE_OPTIONS=--tls-cipher-list='ECDHE-RSA-AES128-GCM-SHA256:!RC4'
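To verify the cipher observation from dev tools outside the browser, a small diagnostic sketch (it assumes both dev servers are running locally on the ports used above, and skips verification because the certificates are self-signed):

import socket
import ssl

def negotiated_cipher(host, port):
    ctx = ssl.create_default_context()
    ctx.check_hostname = False   # self-signed dev certificate
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.cipher()  # (cipher name, protocol, secret bits)

print(negotiated_cipher('localhost', 5001))  # Flask
print(negotiated_cipher('localhost', 3001))  # React dev server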
The Splash browser does not send anything through the HTTP proxy. The pages are fetched even when the proxy is not running.
I am using scrapy with splash in Python 3 to fetch pages after authentication for an Angular.js website. The script can fetch pages, authenticate, and fetch pages after authentication. However, it does not use the proxy set up at localhost:8090, and wireshark confirms that traffic coming from port 8050 goes to some port in the 50k range.
The setup is:
- splash running locally on a docker image (latest) on port 8050
- python 3 running locally on a mac
- Zap proxy running locally on a mac at port 8090
- Web page accessed through VPN
I have tried to specify the proxy host:port through the server using Chrome with a Lua script. The page is fetched without the proxy.
I have tried to specify the proxy in the python script, both with Lua and with the API (args={'proxy': 'host:port'}), and the page is fetched without using the proxy.
I have tried using the proxy-profile file and I get status 502.
Proxy set through Lua on Chrome (no error, not proxied):
function main(splash, args)
  splash:on_request(function(request)
    request:set_proxy{
      host = "127.0.0.1",
      port = 8090,
      username = "",
      password = "",
      type = "HTTP"
    }
  end)
  assert(splash:go(args.url))
  assert(splash:wait(0.5))
  return {
    html = splash:html(),
    png = splash:png(),
    har = splash:har(),
  }
end
req = SplashRequest("http://mysite/home", self.log_in,
endpoint='execute', args={'lua_source': script})
Proxy set through api (status 502):
req = SplashRequest("http://mysite/home",
self.log_in, args={'proxy': 'http://127.0.0.1:8090'})
Proxy set through Lua in Python (no error, not proxied):
def start_requests(self):
    script = """
    function main(splash, args)
      assert(splash:go(args.url))
      assert(splash:wait(0.5))
      splash:on_request(function(request)
        request:set_proxy{
          host = "127.0.0.1",
          port = 8090,
          username = "",
          password = "",
          type = "HTTP"
        }
      end)
      return {
        html = splash:html(),
        png = splash:png(),
        har = splash:har(),
      }
    end
    """
    req = SplashRequest("http://mysite/home", self.log_in,
                        endpoint='execute', args={'lua_source': script})
    # req.meta['proxy'] = 'http://127.0.0.1:8090'
    yield req
Proxy set through proxy file in docker image (status 502):
proxy file:
[proxy]
; required
host=127.0.0.1
port=8090
Shell command:
docker run -it -p 8050:8050 \
    -v ~/Documents/proxy-profile:/etc/splash/proxy-profiles \
    scrapinghub/splash --proxy-profiles-path=/etc/splash/proxy-profiles
All of the above should display the page in the ZAP proxy at port 8090.
Some of the above seem to set the proxy, but Splash can't reach the proxy at localhost:8090 (status 502). Some don't work at all (no error, not proxied). I think this may be related to the fact that a docker image is being used.
I am not looking to use Selenium because that is what this is replacing.
All the methods returning status 502 were actually configured correctly. The reason for this issue is that Docker containers cannot access localhost on the host. To resolve it, use http://docker.for.mac.localhost:8090 as the proxy host:port on a mac host, or use docker run -it --network host scrapinghub/splash on linux and keep localhost:port. On linux, -p is no longer needed, since with --network host all of the container's services are already on localhost.
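For example, method 2 from the question then becomes (a sketch for a mac host, reusing the hypothetical URL and callback from the question):

req = SplashRequest("http://mysite/home", self.log_in,
                    args={'proxy': 'http://docker.for.mac.localhost:8090'})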
Method 2 is best for a single proxy without rules. Method 4 is best for multiple proxies with rules.
I did not retry the other methods to see what they would return with these changes, or why.
Alright, I have been struggling with the same problem for a while now, but I found the solution for your first method on GitHub, and it is based on what the Docker docs state:
The host has a changing IP address (or none if you have no network access). From 18.03 onwards our recommendation is to connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host.
The gateway is also reachable as gateway.docker.internal.
This means you should/could use "host.docker.internal" as the host for your proxy instead, e.g.:
splash:on_request(function(request)
  request:set_proxy{
    host = "host.docker.internal",
    port = 8090
  }
end)
Here is the link to the explanation: https://github.com/scrapy-plugins/scrapy-splash/issues/99#issuecomment-386158523
A websocket client (using Autobahn/Python and Twisted) needs to connect to a websocket server: the client needs to present its client certificate to the server and the client needs to check the server's certificate. These certificates have been created, for instance, during setup of a Kubernetes minikube installation. In particular:
server certificate ~/.minikube/ca.crt (in X509 format from what I understand).
client certificate ~/.minikube/client.crt with key ~/.minikube/client.key.
I've checked that I can successfully use these certificates+key to issue Kubernetes remote API calls using curl.
From Autobahn's echo_tls/client.py example I understand that I may need to use an ssl.ClientContextFactory(); ssl here refers to the pyopenssl package that Twisted automatically imports.
However, I cannot figure out how to pass the certificates to the factory.
How do I tell the websocket factory to present the client certificate to the server?
How do I tell the websocket to check the server's certificate in order to detect MITM attacks?
After some trial and error I've now arrived at this solution below. To help others I'll not only show code, but also a reference setup to test drive the example code.
First, install minikube, then start a minikube instance; I've tested with minikube 1.0.0, which runs Kubernetes 1.14 (current at the time of writing). Then start a simple websocket server that shows what is sent to it and echoes any input back to the connected websocket client.
minikube start
kubectl run wsserver --generator=run-pod/v1 --rm -i --tty \
--image ubuntu:disco -- bash -c "\
apt-get update && apt-get install -y wget && \
wget https://github.com/vi/websocat/releases/download/v1.4.0/websocat_1.4.0_ssl1.1_amd64.deb && \
dpkg -i webso*.deb && \
websocat -vv -s 0.0.0.0:8000"
Next comes the Python code. It attempts to connect to the wsserver we've just started, going through the minikube's Kubernetes remote API, which acts as a reverse proxy. The minikube setup usually uses mutual SSL/TLS authentication of client and server, so this is a "hard" test here. Please note that there are also other methods, such as server certificate and bearer token (instead of a client certificate).
import sys
from urllib.parse import urlparse

import kubernetes.client.configuration
import kubernetes.config
from twisted.internet import reactor
from twisted.internet import ssl
from twisted.python import log
from autobahn.twisted.websocket import WebSocketClientFactory, \
    WebSocketClientProtocol, connectWS

if __name__ == '__main__':
    log.startLogging(sys.stdout)

    class EchoClientProto(WebSocketClientProtocol):
        def onOpen(self):
            print('onOpen')
            self.sendMessage('testing...\n'.encode('utf8'))

        def onMessage(self, payload, isBinary):
            print('onMessage')
            if not isBinary:
                print('message %s' % payload.decode('utf8'))

        def onClose(self, wasClean, code, reason):
            print('onClose', wasClean, code, reason)
            print('stopping reactor...')
            reactor.stop()

    # Select the Kubernetes cluster context of the minikube instance,
    # and see what client and server certificates need to be used in
    # order to talk to the minikube's remote API instance...
    kubernetes.config.load_kube_config(context='minikube')
    ccfg = kubernetes.client.configuration.Configuration._default
    print('Kubernetes API server CA certificate at %s' % ccfg.ssl_ca_cert)
    with open(ccfg.ssl_ca_cert) as ca_cert:
        trust_root = ssl.Certificate.loadPEM(ca_cert.read())
    print('Kubernetes client key at %s' % ccfg.key_file)
    print('Kubernetes client certificate at %s' % ccfg.cert_file)
    with open(ccfg.key_file) as cl_key:
        with open(ccfg.cert_file) as cl_cert:
            client_cert = ssl.PrivateCertificate.loadPEM(
                cl_key.read() + cl_cert.read())

    # Now for the real meat: construct the secure websocket URL that connects
    # us with the example wsserver inside the minikube cluster, via the
    # remote API proxy verb.
    ws_url = 'wss://%s/api/v1/namespaces/default/pods/wsserver:8000/proxy/test' \
        % urlparse(ccfg.host).netloc
    print('will contact: %s' % ws_url)
    factory = WebSocketClientFactory(ws_url)
    factory.protocol = EchoClientProto

    # We need to attach the client and server certificates to our websocket
    # factory so it can successfully connect to the remote API.
    context = ssl.optionsForClientTLS(
        trust_root.getSubject().commonName.decode('utf8'),
        trustRoot=trust_root,
        clientCertificate=client_cert
    )
    connectWS(factory, context)

    print('starting reactor...')
    reactor.run()
    print('reactor stopped.')
The tricky part here when attaching the client and server certificates using optionsForClientTLS is that Twisted/SSL expects to be told the name of the server we're going to talk to. This is also needed so that virtual servers can decide which of their multiple server certificates to present - before any HTTP headers are exchanged!
Unfortunately, this is now ugly territory -- and I would be glad to get feedback here! Simply using urlparse(ccfg.host).hostname works on some minikube instances, but not on others. I haven't yet figured out why seemingly similar instances behave differently.
My current workaround here is to simply use the CN (common name) of the subject from the server's certificate. Maybe a more robust way might be to only resort to such tactics when the URL for the remote API server uses an IP address literal and not a DNS name (or at least a label).
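A sketch of that idea (tls_server_name is a hypothetical helper name; it only falls back to the certificate's CN when the API server address is an IP literal):

import ipaddress
from urllib.parse import urlparse

def tls_server_name(api_host, trust_root):
    name = urlparse(api_host).hostname
    try:
        ipaddress.ip_address(name)  # raises ValueError for DNS names
    except ValueError:
        return name  # a proper DNS name: use it directly
    # IP literal: fall back to the CN of the CA certificate, as above
    return trust_root.getSubject().commonName.decode('utf8')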
Finally, run the Python 3 code above with python3 wssex.py. If the script connects correctly, then you should see a log message similar to 2019-05-03 12:34:56+9600 [-] {"peer": "tcp4:192.168.99.100:8443", "headers": {"sec-websocket-accept": ...
Additionally, the websocket server that you've started before should show log messages such as [INFO websocat::net_peer] Incoming TCP connection from Some(V4(172.17.0.1:35222)), and some more.
This then is proof that the client script has successfully connected to minikube's remote API via a secure websocket, passing authentication and access control, and is now connected to the (insecure) websocket demo server inside minikube.
I have a Python 3 script that makes requests through a SOCKS5 proxy. I want to be able to run this script from an Azure VM. But when the request is being made I get the following error:
Not supported proxy scheme SOCKS5
I'm running Python 3.5.2 with requests 2.9.1 on an Ubuntu 16.10 LTS VM. I also installed pysocks to have requests works with SOCKS5.
The code that does the request is as follows:
server = 'socks5://u:p@proxy.server.com:1080'
proxies = { 'https': server, 'all': None }
response = requests.get(request_url, proxies=proxies)
The script runs fine locally, so it seems that Azure won't allow me to make use of SOCKS5 proxies.
I've also added port 1080 as allowed outbound connection to the networking interface of the VM.
How do I configure my VM in such a way that it will allow SOCKS5 connections from the script?
Ok, it turns out that installing pysocks isn't enough.
When you use the following command:
pip3 install -U requests[socks]
It installs the required packages for SOCKS support to work properly.
-U is the same as --upgrade. The flag is also required here: SOCKS proxy support was only added in requests 2.10.0, so without upgrading from 2.9.1 you still won't be able to connect through SOCKS5.
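To confirm the extra landed correctly, a quick check (a sketch): requests delegates SOCKS proxies to urllib3's SOCKSProxyManager, which in turn requires PySocks, so importing it shows whether SOCKS5 will work:

# The import fails if the socks extra (PySocks) is missing or if
# requests/urllib3 are too old to provide SOCKS support.
try:
    from urllib3.contrib.socks import SOCKSProxyManager  # noqa: F401
except ImportError as exc:
    print('SOCKS support missing:', exc)
else:
    print('SOCKS support available')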