Python pproxy: make SOCKS5 HTTP request after creating proxy using SSH - python-3.x

I am using the python-proxy package to turn my SSH connection into a SOCKS5 proxy, and I am running into issues making a SOCKS5 HTTP request after the server is created.
Python package: https://github.com/qwj/python-proxy & reference example: https://github.com/qwj/python-proxy/blob/master/tests/api_server.py
import asyncio
import pproxy

loop = asyncio.get_event_loop()

async def make_request():
    import requests
    import socks
    import socket
    socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS5, "localhost", 8081)
    socket.socket = socks.socksocket
    proxies = {'http': "socks5://127.0.0.1:8081"}
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36',
    }
    url = u'https://api.ipgeolocationapi.com/geolocate/5.152.122.170'
    print(requests.get(url, verify=False, headers=headers).text)
    return "test"

async def ssh_handle():
    print("1")
    server = pproxy.Server('socks5://127.0.0.1:8081')
    remote = pproxy.Connection('ssh://185.110.12.11/#root:test')
    args = dict(rserver=[remote],
                verbose=print)
    await server.start_server(args)
    print("server started now")
    await asyncio.sleep(1)
    await make_request()  # after creating the server, call the function making the HTTP request through the proxy
    return "done"

try:
    loop.run_until_complete(ssh_handle())
    loop.run_forever()
except Exception as e:
    print(e)
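One thing worth checking: requests is a blocking library, so calling it inside a coroutine stalls the same event loop that is running the pproxy server, which can prevent the proxy from ever answering. A minimal sketch of moving the blocking call onto a thread executor, assuming requests[socks] (PySocks) is installed so requests can speak SOCKS5:

import asyncio
import requests

async def make_request():
    loop = asyncio.get_event_loop()
    # socks5h:// (rather than socks5://) would also resolve DNS through the proxy
    proxies = {'http': 'socks5://127.0.0.1:8081',
               'https': 'socks5://127.0.0.1:8081'}
    url = 'https://api.ipgeolocationapi.com/geolocate/5.152.122.170'
    # requests blocks the thread it runs on; running it in the default
    # executor keeps the pproxy server on this loop responsive
    resp = await loop.run_in_executor(
        None,
        lambda: requests.get(url, proxies=proxies, verify=False))
    print(resp.text)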

Related

Getting a 403 response when trying to crawl; user agent doesn't work in Python 3

I'm trying to crawl this website and get the message:
"You don't have permission to access"
Is there a way to bypass this? I already tried user agents and urlopen.
Here is my code:
import requests
from bs4 import BeautifulSoup
import json
import pandas as pd
from urllib.request import Request, urlopen
url = 'https://www.oref.org.il/12481-he/Pakar.aspx'
header = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.76 Safari/537.36'}
res = requests.get(url, headers=header)
soup = BeautifulSoup(res.content, 'html.parser')
print(res)
output:
<Response [403]>
I also tried this:
req = Request(url, headers={'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36'})
webpage = urlopen(req).read()
output:
HTTP Error 403: Forbidden
Still blocked with a 403 response. Can anyone help?
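Sites that return 403 to scripts often check more than the User-Agent. A minimal sketch that sends a fuller, browser-like header set through a session; the extra header values here are typical browser defaults, not something confirmed to unblock this particular site:

import requests

url = 'https://www.oref.org.il/12481-he/Pakar.aspx'
# a fuller set of headers that a real browser would send
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en-US,en;q=0.9,he;q=0.8',
    'Referer': 'https://www.oref.org.il/',
}

with requests.Session() as session:
    res = session.get(url, headers=headers)
    print(res.status_code)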

Aiohttp+Asyncio Seems To Be Inconsistent on Tripadvisor Travel Site

I was trying to asynchronously request page data from the Tripadvisor travel site using aiohttp+asyncio, but on multiple occasions the get() method hangs for almost a minute and then raises a TimeoutError.
I created a similar script using the requests library and confirmed that there are times when the code with the requests library works while the code with aiohttp+asyncio does not.
Here are the codes:
Using aiohttp + asyncio
from aiohttp import ClientSession
import asyncio

home_url = 'https://www.tripadvisor.com'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) '
                  'Chrome/93.0.4577.63 Safari/537.36'
}

async def main():
    async with ClientSession(headers=headers) as session:
        tourist_sites_url = home_url + '/Attractions-g294245-Activities-a_allAttractions.true-Philippines.html'
        async with session.get(tourist_sites_url) as response:
            print(f'{response.status=}\n')
            print(await response.text())

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
Using requests
from requests import Session

home_url = 'https://www.tripadvisor.com'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) '
                  'Chrome/93.0.4577.63 Safari/537.36'
}

def main():
    with Session() as session:
        tourist_sites_url = home_url + '/Attractions-g294245-Activities-a_allAttractions.true-Philippines.html'
        response = session.get(tourist_sites_url, headers=headers)
        print(f'{response.status_code=}\n')
        print(response.text)

if __name__ == '__main__':
    main()
What should I do to get the aiohttp+asyncio code to work on the Tripadvisor website?
Thank you very much!
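One observable difference between the two clients is the header set they send by default, and aiohttp's default total timeout is five minutes, which would explain the long hang before the TimeoutError. A sketch, under the assumption (not confirmed for Tripadvisor) that a fuller header set helps, with a short explicit timeout so failures surface quickly:

import asyncio
import aiohttp

home_url = 'https://www.tripadvisor.com'
# browser-like headers beyond just the User-Agent
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) '
                  'Chrome/93.0.4577.63 Safari/537.36',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en-US,en;q=0.9',
}

async def main():
    # fail fast instead of hanging for aiohttp's default 5-minute total timeout
    timeout = aiohttp.ClientTimeout(total=15)
    async with aiohttp.ClientSession(headers=headers, timeout=timeout) as session:
        url = home_url + '/Attractions-g294245-Activities-a_allAttractions.true-Philippines.html'
        async with session.get(url) as response:
            print(response.status)

asyncio.run(main())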

aiohttp: Trying to connect to a site

I'm making a Discord Bot in Python to scrape Hack The Box data.
This is already functional, but I want to use async with aiohttp to increase speed when requesting each member's profile.
So in the synchronous version, I made a login function that first makes a GET request to grab the token from the login page, then makes a POST request with the token, email, and password.
In the asynchronous version with aiohttp, when I make my POST request, my session is not logged in.
I shortened it a little bit just for performance testing:
import requests
import re
import json
from os import path  # missing in the original snippet, but needed for path.exists below
from scrapy.selector import Selector
import config as cfg
from timeit import default_timer

class HTBot():
    def __init__(self, email, password, api_token=""):
        self.email = email
        self.password = password
        self.api_token = api_token
        self.session = requests.Session()
        self.headers = {
            "user-agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.85 Safari/537.36"
        }
        self.payload = {'api_token': self.api_token}
        if path.exists("users.txt"):
            with open("users.txt", "r") as f:
                self.users = json.loads(f.read())
        else:
            self.users = []

    def login(self):
        req = self.session.get("https://www.hackthebox.eu/login", headers=self.headers)
        html = req.text
        csrf_token = re.findall(r'type="hidden" name="_token" value="(.+?)"', html)
        if not csrf_token:
            return False
        data = {
            "_token": csrf_token[0],
            "email": self.email,
            "password": self.password
        }
        req = self.session.post("https://www.hackthebox.eu/login", data=data, headers=self.headers)
        if req.status_code == 200:
            print("Connected to HTB!")
            self.session.headers.update(self.headers)
            return True
        print("Unable to connect.")
        return False

    def extract_user_info(self, htb_id):
        infos = {}
        req = self.session.get("https://www.hackthebox.eu/home/users/profile/" + str(htb_id), headers=self.headers)
        if req.status_code == 200:
            body = req.text
            html = Selector(text=body)
            infos["username"] = html.css('div.header-title > h3::text').get().strip()
            infos["avatar"] = html.css('div.header-icon > img::attr(src)').get()
            infos["points"] = html.css('div.header-title > small > span[title=Points]::text').get().strip()
            infos["systems"] = html.css('div.header-title > small > span[title="Owned Systems"]::text').get().strip()
            infos["users"] = html.css('div.header-title > small > span[title="Owned Users"]::text').get().strip()
            infos["respect"] = html.css('div.header-title > small > span[title=Respect]::text').get().strip()
            infos["country"] = Selector(text=html.css('div.header-title > small > span').getall()[4]).css('span::attr(title)').get().strip()
            infos["level"] = html.css('div.header-title > small > span::text').extract()[-1].strip()
            infos["rank"] = re.search(r'position (\d+) of the Hall of Fame', body).group(1)
            infos["challs"] = re.search(r'has solved (\d+) challenges', body).group(1)
            infos["ownership"] = html.css('div.progress-bar-success > span::text').get()
            return infos
        return False

    def refresh_user(self, htb_id, new=False):
        users = self.users
        for user in users:
            if user["htb_id"] == htb_id:
                infos = self.extract_user_info(htb_id)

    def refresh_all_users(self):
        users = self.users
        for user in users:
            self.refresh_user(user["htb_id"])
            elapsed = default_timer() - START_TIME
            time_completed_at = "{:5.2f}s".format(elapsed)
            print("{0:<30} {1:>20}".format(user["username"], time_completed_at))
        print("All users have been updated!")

htbot = HTBot(cfg.HTB['email'], cfg.HTB['password'], cfg.HTB['api_token'])
htbot.login()
START_TIME = default_timer()
htbot.refresh_all_users()
Then, my async rewrite of just the login function:
import asyncio
import aiohttp  # missing in the original snippet, but used below
import re
import config as cfg

headers = {
    "user-agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.85 Safari/537.36"
}
LOGIN_LOCK = asyncio.Lock()

async def login():
    async with LOGIN_LOCK:
        async with aiohttp.TCPConnector(share_cookies=True) as connector:
            async with aiohttp.ClientSession(connector=connector, headers=headers) as session:
                async with session.get("https://www.hackthebox.eu/login") as req:
                    html = await req.text()
                csrf_token = re.findall(r'type="hidden" name="_token" value="(.+?)"', html)
                if not csrf_token:
                    return False
                payload = {
                    "_token": csrf_token[0],
                    "email": cfg.HTB['email'],
                    "password": cfg.HTB['password']
                }
                async with session.post('https://www.hackthebox.eu/login', data=payload) as req:
                    print(await req.text())
                exit()

async def main():
    await login()

asyncio.run(main())
I think I'm going too far with this BaseConnector, Locks, etc., but I've been working on it for two days now and I'm running out of ideas; at this point I'm just trying to get logged in with this POST request.
I also compared the two requests made with Requests and aiohttp in Wireshark.
The only difference is that the aiohttp one doesn't send keep-alive and has cookies. (I already tried to manually set the "connection: keep-alive" header, but it doesn't change anything.)
However, according to the documentation, keep-alive should be active by default, so I don't understand.
(In the screenshot, the 301 status codes are normal; to see my HTTP requests I had to use http instead of https.)
Wireshark screenshot: https://files.catbox.moe/bignh0.PNG
Thank you if you can help me!
Since I'm new to asynchronous programming, I'll take all your advice.
Unfortunately, almost everything I read about it on the internet is deprecated for Python 3.7+ and doesn't use the new syntax.
Okay, I have finally switched to httpx and it worked like a charm.
I really don't know why aiohttp wouldn't work.
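For anyone landing here, a minimal sketch of what that login flow can look like with httpx, assuming the same config module as above (follow_redirects is the httpx >= 0.20 spelling; older versions used allow_redirects). An AsyncClient carries cookies from the GET over to the POST automatically:

import asyncio
import re
import httpx
import config as cfg

headers = {
    "user-agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.85 Safari/537.36"
}

async def login():
    # cookies set by the GET are reused automatically on the POST
    async with httpx.AsyncClient(headers=headers, follow_redirects=True) as client:
        resp = await client.get("https://www.hackthebox.eu/login")
        csrf_token = re.findall(r'type="hidden" name="_token" value="(.+?)"', resp.text)
        if not csrf_token:
            return False
        payload = {
            "_token": csrf_token[0],
            "email": cfg.HTB['email'],
            "password": cfg.HTB['password'],
        }
        resp = await client.post("https://www.hackthebox.eu/login", data=payload)
        return resp.status_code == 200

asyncio.run(login())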

Instagram Scraping with endpoints requires authentication for all requests

As you know, Instagram announced that they changed their endpoint APIs this month.
It looks like, in the wake of Cambridge Analytica, Instagram has changed their endpoint formats and now requires a logged-in user session for all requests.
I'm not sure which endpoints need updating, but I was specifically using the media/comments endpoints, which are now as follows:
Media OLD:
https://www.instagram.com/graphql/query/?query_id=17888483320059182&id={0}&first=100&after={1}
Media NEW:
https://www.instagram.com/graphql/query/?query_hash=42323d64886122307be10013ad2dcc44&variables=%7B%22id%22%3A%2221575514%22%2C%22first%22%3A12%2C%22after%22%3A%22AQAHXuz1DPmI3FFLOzy5iKEhHOLKw3lt_ozVR40TphSdns0Vp5j_ZEU6Qj0CW6IqNtVGO5pmLCQoX0Y8RVS9aRTT2lWPp6vf8vFqjo1QfxRYmA%22%7D
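For context, the new-style URL is just a query_hash plus the percent-encoded variables JSON; a small sketch of how those pieces combine (the id and after values here are illustrative placeholders):

import urllib.parse

query_hash = '42323d64886122307be10013ad2dcc44'
# the GraphQL variables are a JSON string that gets percent-encoded into the URL
variables = '{"id":"21575514","first":12,"after":"PLACEHOLDER"}'
url = ('https://www.instagram.com/graphql/query/?query_hash=%s&variables=%s'
       % (query_hash, urllib.parse.quote(variables)))
print(url)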
The script that I used to work around this problem is as follows:
#!/usr/bin/env python3
import requests
import urllib.parse
import hashlib
import json

#CHROME_UA = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36'
CHROME_UA = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36'

def getSession_old(rhx_gis, csrf_token, variables):
    """ Get session preconfigured with required headers & cookies. """
    # signature input: "rhx_gis:csrf_token:user_agent:variables"
    print(variables)
    values = "%s:%s:%s:%s" % (
        rhx_gis,
        csrf_token,
        CHROME_UA,
        variables)
    x_instagram_gis = hashlib.md5(values.encode()).hexdigest()
    session = requests.Session()
    session.headers = {
        'user-agent': CHROME_UA,
        'x-instagram-gis': x_instagram_gis
    }
    print(x_instagram_gis)
    session.cookies.set('ig_pr', '2')
    session.cookies.set('csrftoken', csrf_token)
    return session

def getSession(rhx_gis, variables):
    """ Get session preconfigured with required headers & cookies. """
    # signature input: "rhx_gis:variables"
    values = "%s:%s" % (
        rhx_gis,
        variables)
    x_instagram_gis = hashlib.md5(values.encode()).hexdigest()
    session = requests.Session()
    session.headers = {
        'x-instagram-gis': x_instagram_gis
    }
    return session

if __name__ == '__main__':
    session = requests.Session()
    session.headers = { 'user-agent': CHROME_UA }
    response = session.get("https://www.instagram.com/selenagomez")
    data = json.loads(response.text.split("window._sharedData = ")[1].split(";</script>")[0])
    csrf = data['config']['csrf_token']
    rhx_gis = data['rhx_gis']
    variables = '{"id":"460563723","first":10,"after":"AQBf8puhlt8nU2JzmYdMMTuH0FbMgUM1fnIOZIH7n94DM4VLWkVILUAKVB-5dqvxQEI-Wd0ttlEDzimaaqwC98jccQaDQT4tSF56c_NlWi_shg"}'
    session = getSession(rhx_gis, variables)
    query_hash = '42323d64886122307be10013ad2dcc44'
    encoded_vars = urllib.parse.quote(variables, safe='"')
    url = 'https://www.instagram.com/graphql/query/?query_hash=%s&variables=%s' % (query_hash, encoded_vars)
    print(url)
    print(session.get(url).text)
I am sure this script was working well until 11 days ago, but it is not working now.
Does anyone know how to get user posts without authenticating?

Python 3.6.4, Scraping a website that requires login

Login address: https://joffice.jeunesseglobal.com/login.asp
Two fields need to be posted: Username and pw.
Using cookies to access: https://joffice.jeunesseglobal.com/members/back_office.asp
I can't log in.
#-*-coding:utf8-*-
import urllib.parse
import urllib.request  # `import urllib` alone does not expose urllib.request/urllib.parse
import http.cookiejar

url = 'https://joffice.jeunesseglobal.com/members/back_office.asp'
login_url = "https://joffice.jeunesseglobal.com/login.asp"
login_username = "jianghong181818"
login_password = "Js#168168!"
login_data = {
    "Username": login_username,
    "pw": login_password,
}
post_data = urllib.parse.urlencode(login_data).encode('utf-8')
headers = {'User-agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36'}
req = urllib.request.Request(login_url, headers=headers, data=post_data)
cookie = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cookie))
resp = opener.open(req)
print(resp.read().decode('utf-8'))
Use requests.
Simple way:
>>> import requests
>>> page = requests.get("https://joffice.jeunesseglobal.com/login.asp", auth=('username', 'password'))
Making requests with HTTP Basic Auth:
>>> from requests.auth import HTTPBasicAuth
>>> requests.get("https://joffice.jeunesseglobal.com/login.asp", auth=HTTPBasicAuth('user', 'pass'))
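That said, login.asp appears to be a form-based login rather than HTTP Basic Auth, so posting the two form fields named in the question through a session may be closer to what the site expects. A minimal sketch, untested against this site, with placeholder credentials:

import requests

login_url = 'https://joffice.jeunesseglobal.com/login.asp'
back_office_url = 'https://joffice.jeunesseglobal.com/members/back_office.asp'
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36'}

with requests.Session() as session:
    # the session keeps any cookies the login sets and sends them on the next request
    session.post(login_url, data={'Username': 'username', 'pw': 'password'}, headers=headers)
    resp = session.get(back_office_url, headers=headers)
    print(resp.status_code)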
