Right now I am using the following code to route my Python requests through Tor:
import socks
import socket

socks.set_default_proxy(socks.SOCKS5, "127.0.0.1", 9450)
socket.socket = socks.socksocket
I put this at the top of my code and then start sending requests, and all of them go through Tor.
My code needs to send a lot of requests, but only one of them actually has to go through Tor; the rest do not.
Is there a way to configure my code so that I can choose which requests go through Tor and which are sent directly?
Thanks
Instead of monkey-patching sockets globally, you can pass a proxies dict to requests for just the Tor request.
import requests

proxies = {
    'http': 'socks5h://127.0.0.1:9050',
    'https': 'socks5h://127.0.0.1:9050'
}

data = requests.get("http://example.com", proxies=proxies).text
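As a quick sanity check (my own addition, not part of the original answer), you can compare the IP reported with and without the proxies argument; api.ipify.org is just one example of an IP-echo endpoint:
import requests

tor_proxies = {
    'http': 'socks5h://127.0.0.1:9050',
    'https': 'socks5h://127.0.0.1:9050'
}

# Sent directly, no Tor: prints your real public IP
print(requests.get("https://api.ipify.org").text)

# Sent through Tor: should print a Tor exit node's IP instead
print(requests.get("https://api.ipify.org", proxies=tor_proxies).text)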
Or, if you must monkey-patch, save the original socket.socket class before replacing it with the SOCKS socket, so you can set it back when you're done using Tor.
import socks
import socket

socks.set_default_proxy(socks.SOCKS5, "127.0.0.1", 9450)
default_socket = socket.socket        # keep a reference to the original socket class
socket.socket = socks.socksocket      # from here on, new connections go through Tor

# do stuff with Tor

socket.socket = default_socket        # restore direct connections
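If you do go the monkey-patching route, wrapping the swap in a context manager keeps it contained. This is a sketch of my own (not from the original answer), assuming the same PySocks setup and the 9450 SOCKS port from the question:
import socket
from contextlib import contextmanager

import socks

@contextmanager
def tor_socket(port=9450):
    """Temporarily route new socket connections through the local Tor SOCKS proxy."""
    default_socket = socket.socket
    socks.set_default_proxy(socks.SOCKS5, "127.0.0.1", port)
    socket.socket = socks.socksocket
    try:
        yield
    finally:
        socket.socket = default_socket  # restore direct connections

# Usage: only requests made inside the with-block go through Tor
# with tor_socket():
#     ...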
I am trying to understand why the proxy is not being used for the connection and why the website sees my own IP instead.
import httpx
import asyncio

proxies = {"http": "http://34.91.135.38:80"}

async def main():
    async with httpx.AsyncClient(proxies=proxies) as client:
        s = await client.get('https://api.ipify.org')
        print(s.text)

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
When I run this, it displays my own IP, not the IP of the proxy I selected. I want the connection to the website to go through my proxy.
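No answer is quoted here, but the likely cause mirrors the requests answer further down: only the plain http scheme is mapped, while the request itself is HTTPS. Here is a sketch of what the mapping could look like in httpx (the proxy address is the asker's example and may simply not be a working proxy; newer httpx releases replace the proxies argument with proxy/mounts):
import asyncio
import httpx

# Map both schemes; httpx matches proxy keys as URL patterns, so a bare
# "http" entry does not cover https:// requests.
proxies = {
    "http://": "http://34.91.135.38:80",
    "https://": "http://34.91.135.38:80",
}

async def main():
    async with httpx.AsyncClient(proxies=proxies) as client:
        r = await client.get("https://api.ipify.org")
        print(r.text)  # should print the proxy's IP if it is actually forwarding

asyncio.run(main())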
We are using Selenium with Python, and as part of our automation we need to capture the messages that a sample website sends and receives after the page has loaded completely.
I have checked here, where it is stated that what we want is achievable with BrowserMobProxy, but after testing it, the websocket connection did not work on the website, and the certificate errors were also cumbersome.
Another post here states that this can be done using Chrome's loggingPrefs, but it seems we only get the logs up to the point the website loads, not the data after that.
Is it possible to capture websocket traffic using only Selenium?
It turned out that this can be done using pyppeteer; in the following code, all the live websocket traffic of a sample website is captured:
import asyncio
from pyppeteer import launch

async def main():
    browser = await launch(
        headless=True,
        args=['--no-sandbox'],
        autoClose=False
    )
    page = await browser.newPage()
    await page.goto('https://www.tradingview.com/symbols/BTCUSD/')

    cdp = await page.target.createCDPSession()
    await cdp.send('Network.enable')
    await cdp.send('Page.enable')

    def printResponse(response):
        print(response)

    cdp.on('Network.webSocketFrameReceived', printResponse)  # called whenever a websocket frame is received
    cdp.on('Network.webSocketFrameSent', printResponse)      # called whenever a websocket frame is sent

    await asyncio.sleep(100)

asyncio.get_event_loop().run_until_complete(main())
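If you only want the frame contents rather than the whole event, the callback can dig out the payload. The event layout below (response -> payloadData) follows the Chrome DevTools Protocol documentation and is my assumption, not part of the original answer:
def print_payload(event):
    # DevTools Protocol webSocketFrame events carry the frame under
    # event['response'], with the text or base64 data in 'payloadData' (assumed layout).
    frame = event.get('response', {})
    print(frame.get('payloadData', ''))

# cdp.on('Network.webSocketFrameReceived', print_payload)
# cdp.on('Network.webSocketFrameSent', print_payload)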
I've been trying to send an HTTP POST request from a client script to send a ZPL string to our local IP printer. I keep getting "The host you requested null is unknown or cannot be found."
What am I doing wrong? Do I need to use a RESTlet?
I can also do POST prints with this printer outside of NetSuite; the problem only occurs through NetSuite.
function print(tag) {
    var zpl = "^XA^CF0,30^FO20,20^FD" + tag.company + "^FS^FO20,50^FDPO # " + tag.poNum + "^FS^FO325,50^FDORD # " + tag.ordNum + "^FS^FO630,50^FDQTY^FS^CF0,50^FO700,40^FD" + tag.quantity + "^FS^BY2,10,100^FO" + calculateBarcodeDistance(tag.item.length) + ",175^BY3^BCN,50,Y,N,N,N^FD" + tag.item + "^FS^XZ";
    var printUrl = "http://192.168.0.0/pstprnt";
    var response = http.request({
        method: http.Method.POST,
        url: printUrl,
        body: zpl
    });
}
According to SuiteAnswers #44853, the N/http module is only supported in server-side scripts.
So you would probably have to use a suitelet and call it from your client. Unfortunately, this would mean you have to expose a port on your public IP address and port-forward from there to the printer.
Alternatively, you could use the fetch API, as suggested by Jon in the comments, or an XMLHttpRequest, to achieve the same thing through the browser.
About the easiest thing to do would be to run ngrok on a box that has access to the internet. You'd post to the public ngrok address, and it would forward to your printer. There's a way to set a credential on the tunnel so you don't get random print jobs from port scanners.
Then you'd use your current sample to send to that remote HTTPS address, and ngrok on your server would forward the request to your printer.
You would need to create a simple Suitelet that uses the N/http module. So your client script would post to the printer suitelet, which would make the actual post and then return the data back to your client script.
You can use jQuery.post(...) in your client script.
Another "easy" way to do this would be to install apache/nginx/other https capable server on the machine you are generating your labels on.
You'd map some useful name to one of the loop back addresses
You'd generate a self signed certificate for the host name above and have your server use that host and listen on the loop back address.
You'd configure the server as a reverse proxy to forward requests to the printer's IP address
You could then use send print requests to the local server (Use an iframe post hack) and though it's more work than the ngrok solution there's no direct cost associated.
I would say that in order to do this you need to POST to a server with a publicly reachable, static IP address, whereas the address in your example looks like a private one handed out by a home router, which is not reachable from outside your network.
The first line of the NetSuite help for the http module states that it can be used in both client and server scripts. I have successfully sent a POST request via the http module to an HTTP server and got a success response back. I did not have to use HTTPS.
You can test this by changing the URL to http://httpbin.org:80/post; that service simply echoes back whatever you post to it.
I have a question, and my classmate can't solve it either:
How do I set up a proxy using the requests module?
I thought it would be easy and that I could fix it quickly. I can use:
proxy = {
    'http': 'http://74.125.204.103'  # just an example
}
and
requests.get('http://www.youtube.com', proxies=proxy)
And we thought it would connect through 74.125.204.103.
But WE ARE WRONG!
It still connects with my own IP address. We opened a YouTube video, but the view count did not change; we also tried Grabify, and it still shows my own IP. So how can I set up the proxy in another way?
I believe you're getting this because you only have the proxy specified for HTTP, and YouTube will redirect you to HTTPS, at which point requests has no proxy information.
proxies = {
    'http': 'http://proxy:port',
    'https': 'http://proxy:port'
}
Try adding the line for the https scheme.
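To confirm the proxy is actually being used (my own addition; the proxy address is a placeholder and api.ipify.org is just an example IP-echo service), request a page that returns the caller's IP with both schemes mapped:
import requests

proxies = {
    'http': 'http://proxy:port',   # replace with a real, working proxy
    'https': 'http://proxy:port',
}

# If the proxy is used, this prints the proxy's IP rather than your own.
print(requests.get('https://api.ipify.org', proxies=proxies, timeout=10).text)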
I'm trying to debug an HTTPS connection using mitmproxy and Python 3. The idea is the following: the proxy is running locally, and I tell Python to use it for an HTTPS connection; Python must accept the self-signed certificate created by mitmproxy for this to work, and if all goes fine, the console of mitmproxy will show me the decoded request/response pair.
I am able to do what I want with Python 3 and requests, but I have to do it using the standard urllib, alas, and I'm failing. Here is the code:
#!/usr/bin/env python3
import urllib.request
import ssl
proxy = urllib.request.ProxyHandler({'https': 'localhost:8080'})
opener = urllib.request.build_opener(proxy)
urllib.request.install_opener(opener)
ssl_ctx = ssl.create_default_context()
ssl_ctx.check_hostname = False
ssl_ctx.verify_mode = ssl.CERT_NONE
#ssl_ctx = ssl._create_unverified_context()
req = urllib.request.Request('https://github.com')
with urllib.request.urlopen(req) as res:
#with urllib.request.urlopen(req, context=ssl_ctx) as res:
    print(res.read().decode('utf8'))
When the above code is executed, I get a
ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:645)
error, which means the proxy is used, but its certificate doesn't validate. This is exactly what I expect to happen at this point.
When I use the commented-out line to pass the ssl_ctx (created in either of the ways shown), I get the contents of the page I request, but the proxy console never shows the decoded request information; it stays blank. It is as though using the ssl_ctx circumvents the proxy altogether.
Could someone please help me out and tell me what I'm doing wrong?
EDIT: just to make sure, I changed the proxy port to 8081, and now the code not using the ssl_ctx variable fails with 'connection refused' (as expected), while the code using ssl_ctx works fine, which confirms my assumption that the proxy is not being used at all.
Thanks for asking this and posting the code you tried. I'm not sure why the documentation claims that fetching HTTPS through a proxy is not supported, because it does work.
Instead of explicitly passing the SSL context, I created and installed an HTTPSHandler in addition to the ProxyHandler. This worked for me (Python 3.5 + mitmproxy):
import urllib.request
import ssl
ssl_ctx = ssl.create_default_context()
ssl_ctx.check_hostname = False
ssl_ctx.verify_mode = ssl.CERT_NONE
ssl_handler = urllib.request.HTTPSHandler(context=ssl_ctx)
proxy_handler = urllib.request.ProxyHandler({'https': 'localhost:8080'})
opener = urllib.request.build_opener(ssl_handler, proxy_handler)
urllib.request.install_opener(opener)
if __name__ == '__main__':
    req = urllib.request.Request('https://...')
    with urllib.request.urlopen(req) as res:
        print(res.read().decode('utf8'))
Once this opener is installed, it's used as the default for all requests using urllib, even in third-party libraries -- which is what I needed.
I'm not sure, though, why your commented-out line didn't work. As far as I can tell, when urlopen is given a context argument it builds a temporary opener containing just an HTTPSHandler for that context, so the opener you installed (and its ProxyHandler) is bypassed for that call.