I buy web traffic from several sources (including the major names in the industry) and recently got reports from advertisers that there's quite a bit of "invalid" traffic. They won't share which filter they use so that I can block it on my end. I tested all the navigator properties, resolution, window size, Modernizr features, etc., and the bad traffic seems to be spoofing everything.
After some testing, I found that using this code:
document.addEventListener('click', function () {
  // Log the user agent this page sees by opening a tracking URL in a new window.
  window.open('/save?' + encodeURIComponent(navigator.userAgent), '_blank');
});
In some cases, the saved user agent is different from the one seen in the top window. That is, a visit hits a page where the user agent is something like:
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.140 Safari/537.36 Edge/18.17763
Then that page uses window.open() to open a new window and reads the user agent again, and it will read something like this:
Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) HeadlessChrome/72.0.3617.0 Safari/537.36
I tried all the usual methods: window.chrome, webdriver, permissions, plugins, fonts, reading those variables in an iframe, and so on. The traffic passes all of those tests. The only thing that works is the window.open() trick, but I obviously can't open a popup just to filter the traffic.
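For completeness, the popup check that does catch it looks roughly like this (only a sketch; /collect is a placeholder endpoint on my side, and the popup is closed immediately):

document.addEventListener('click', function () {
  // Open a blank window and compare its user agent with the top window's.
  var w = window.open('about:blank', '_blank');
  if (!w) return; // popup blocked
  var topUA = navigator.userAgent;
  var popupUA = w.navigator.userAgent;
  w.close();
  if (topUA !== popupUA) {
    // Report the mismatch without keeping the popup around.
    navigator.sendBeacon('/collect', JSON.stringify({ top: topUA, popup: popupUA }));
  }
});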
Is there any way to detect this type of traffic?
In the past few years I've sometimes run into websites that don't work in Firefox on Linux, and I'm trying to understand why, so I can notify the owners with more than just a vague "it doesn't work".
Of course, this does happen: while most web developers do test in Firefox, not many will have tested their products in Firefox on Linux, and some really don't care. Some target only Chrome/WebKit and don't bother with Firefox at all. That is not what this question is about, though.
Something here makes me suspect an underlying cause repeated across seemingly unrelated websites: some shared bit of configuration, or a web-content-serving library or application, that does this. Something is fishy.
The problem
The websites affected return only a plain HTML message with a 403 HTTP status code for any resource requested; it looks like this:
Forbidden
You don't have permission to access / on this server.
These websites do work when:
The operating system is not a Linux distribution
or
The browser is not Firefox
Example websites
While I normally wouldn't include a link to someone else's website, in this case I do, because it is the website of a doctor's office. Such websites should be available to any patient at all times, for anything short of an imminently life-threatening emergency (in which case the national emergency number should be called, of course), to provide contact information in times of need.
This website displays the symptoms described above: https://www.huisartsenpraktijkdehaan.nl/
There are more websites, but the pattern is always the same.
The user-agent string
Figuring out what is actually causing this seems simple enough, though: if I change the user-agent string to that of Chrome, it works.
So my tentative conclusion is that this is purely a user-agent driven bug/feature.
Some further testing yields this:
These work
Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.150 Safari/537.36
Foo
Mozilla/5.0 (X11; Ubuntu; x86_64; rv:85.0) Gecko/20100101 Firefox/85.0
Mozilla/5.0 (X11; Ubuntu; inux x86_64; rv:85.0) Gecko/20100101 Firefox/85.0
Mozilla/5.0 (X11; Ubuntu; Linu x86_64; rv:85.0) Gecko/20100101 Firefox/85.0
Mozilla/5.0 (X11; Xubuntu; Linux x86_64; rv:85.0) Gecko/20100101 Firefox/85.0
X11; Ubuntu; Linu
X11;Ubuntu;Linux
11; Ubuntu; Linux
These do not work
Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:85.0) Gecko/20100101 Firefox/85.0
X11; Ubuntu; Linux
x11; ubuntu; linux
Hypothesis
Having the literal string X11; Ubuntu; Linux (case insensitive but including spaces and semi-colons as-is) in the user-agent HTTP header of your request triggers the broken behaviour.
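A quick way to double-check the hypothesis outside the browser is to replay the same request with different User-Agent headers; roughly (a sketch assuming Node 18+, which ships fetch):

// Sketch: request the same URL with a blocked and an allowed user-agent string.
const agents = [
  'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:85.0) Gecko/20100101 Firefox/85.0',                              // does not work
  'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.150 Safari/537.36', // works
];

(async () => {
  for (const ua of agents) {
    const res = await fetch('https://www.huisartsenpraktijkdehaan.nl/', {
      headers: { 'User-Agent': ua },
    });
    console.log(res.status, ua); // expect 403 for the first, 200 for the second
  }
})();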
The conundrum
I could, of course, reach out to the owners of these websites (and eventually I will), but there is a catch. They likely don't use Firefox on Linux (because you would notice your own website being broken), and if they pass the message on to whoever maintains or built the website, the response may very well be "well, it works for you, and it works for me; that user must have some weird virus-ridden computer and an ancient browser with a BonziBuddy toolbar", or something similar.
So I want some more ammunition, and preferably a cause I can explain to anyone with a website like this. Even better would be to find out why this happens, and fix it at the source.
So what is happening here? Some Apache or Nginx module/config/plugin written by someone who really hates people who use Firefox on Linux? Some weird bug repeated on multiple sites?
Does anyone recognize this peculiar website behaviour?
I saw the link to this post in forwarded emails.
ErrorDocument 503 "Your connection was refused"
SetEnvIfNoCase User-Agent "X11; Ubuntu; Linux" bad_user
Deny from env=bad_user
This was set in the .htaccess to block "wp-login bots". Since I use Ubuntu myself, this was rather easy to replicate.
Is there a way for Passport to check whether a request came from a mobile app or the web app when doing authentication? If the request came from the web I want to return a view; otherwise I want to return a JSON payload.
In my opinion, you can check the user-agent in the request header. It looks like this (from Windows):
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36
This one came from my iPhone:
User-Agent: Mozilla/5.0 (iPhone; CPU iPhone OS 10_3 like Mac OS X) AppleWebKit/602.1.50 (KHTML, like Gecko) CriOS/56.0.2924.75 Mobile/14E5239e Safari/602.1
And this is Android:
User-Agent: Mozilla/5.0 (Linux; Android 5.0; SM-G900P Build/LRX21T) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Mobile Safari/537.36
So you can figure out from the user-agent whether a request came from a mobile device or a PC.
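If you go this route, in Express it could look roughly like the following (a sketch; the regular expression is illustrative, not exhaustive):

const express = require('express');
const app = express();

// Sketch: naive user-agent sniffing inside a route handler.
app.post('/login', (req, res) => {
  const ua = req.headers['user-agent'] || '';
  const isMobile = /Mobile|Android|iPhone|iPad/i.test(ua);
  if (isMobile) {
    res.json({ ok: true });   // mobile app gets a JSON payload
  } else {
    res.render('dashboard');  // browser gets a view
  }
});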
If you have two different clients expecting different results, then you should explicitly send different requests, not try to guess which response is wanted from some header that isn't necessarily reliable. Plus, there's nothing keeping a mobile device from also accessing the web interface. You can either vary the path or vary a query string.
So, from web, you might use /login and from mobile, you might use /login-json or some different path that indicates you want json.
Or from web, you might use /login and from mobile, you might use /login?type=json.
I would NOT recommend using the user-agent header to detect the intent of the request. Instead, specify the intent directly in the request.
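For example (a sketch assuming an Express app with Passport's local strategy; the route names are just illustrative):

const express = require('express');
const passport = require('passport');
const app = express();

// Web client: authenticate, then send the user to a view.
app.post('/login', passport.authenticate('local'), (req, res) => {
  res.redirect('/dashboard');
});

// Mobile client: same strategy, but return a JSON payload.
app.post('/login-json', passport.authenticate('local', { session: false }), (req, res) => {
  res.json({ user: req.user });
});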
I am writing a web scraper that I am trying to proxy, but can't quite figure out how to do it in Elixir.
I am using Hound running on top of headless ChromeDriver. I purchased some proxy IPs through https://luminati.io, which offers both a Chrome extension and a username/password-based proxy server.
The web scraper's actions are handled by a GenServer that represents a user scraping the web. The app has no front end; it accepts commands sent to it through a bot I built on Telegram, so when a user sends the login command, for instance, it triggers the login function of the GenServer.
At that point the GenServer will change the ChromeDriver session using Hound.change_session_to/2 and then log the user in.
This works great, but now I want to send every request through the proxy server using the username and password. When changing the session, Hound allows the chromeOptions to be set as well:
ua = "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.120 Safari/537.36"
change_session_to(String.to_atom(account.username), %{browserName: "chrome", chromeOptions: %{"args" => ["--user-agent=#{ua}", "--proxy-server=http://user:password#proxy.luminati.io:22225"]}})
navigate_to "https://www.website.com/"
Another thing I have tried is loading Luminati's Chrome extension and proxying the traffic through that, but I can't get the extension to load for each session. I downloaded the packed CRX extension and placed it in my priv folder. When the session loads, the user agent is set just fine, but the extension never starts. When I try to load the extension I am not running headless.
ua = "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.120 Safari/537.36"
priv_dir = :code.priv_dir(:boost_buddy)
change_session_to(String.to_atom(account.username), %{browserName: "chrome",
chromeOptions: %{"extensions" => ['#{priv_dir}/luminati/3.2_1'], "args" => ["-
-user-agent=#{ua}", "--proxy-server=http://user:password#proxy.luminati.io:22225"]}})
navigate_to "https://www.website.com/"
Does anyone have experience using chrome driver with Elixir? With Ruby and Java setting up the extension is typically no problem.
https://github.com/GoogleChrome/puppeteer/issues/659
Regarding sending each request through the proxy, I think you either need to interface with ChromeDriver yourself (hijacking Hound), or skip Hound and use Chrome directly or through a Selenium grid.
I think the issue stems from the fact that Hound initiates one single Chrome instance, where the proxy settings are defined; further requests are made through that proxy.
So, in order to get different proxy connections for different sessions, you either need a way to set them through navigational steps (visiting a proxy website that then serves as a hard proxy) or use different browser instances altogether. (I might be wrong, though, and perhaps there's an easier way of proxying the requests.)
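If you do end up driving Chrome directly, the equivalent in Puppeteer (the project linked above) looks roughly like this (a sketch; the proxy host and credentials are the placeholders from the question, and proxy auth goes through page.authenticate because --proxy-server does not accept user:password in the URL):

const puppeteer = require('puppeteer');

(async () => {
  // One browser instance per proxy endpoint.
  const browser = await puppeteer.launch({
    args: ['--proxy-server=http://proxy.luminati.io:22225'],
  });
  const page = await browser.newPage();
  // Answer the proxy's authentication challenge.
  await page.authenticate({ username: 'user', password: 'password' });
  await page.setUserAgent('Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.120 Safari/537.36');
  await page.goto('https://www.website.com/');
  await browser.close();
})();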
I have a weird bot pummeling my site. It COULD be some sort of low-level denial-of-service attack, but I think that's unlikely. I'm looking for suggestions on blocking it because it's rapidly chewing through all of my CPU and bandwidth allotments.
Here's what it does:
Roughly 650 page requests per minute, like clockwork, constantly, for weeks
Large list of IPs -- hundreds, rotating, with Geolocations randomly scattered all around the world
Rotating user agent strings, many of which are for legit browsers
HTTP_REFERER is often, but not always, filled with a spam site
And weirdest of all, the GET requests almost always generate 404 errors because most are for fully-qualified URLs which are NOT MY SITE. When they are not full URLs, they are for pages or resources that don't exist, never have, and don't even appear to be exploit attempts.
Here are some sample records from my server logs:
80.84.53.26 - - [24/Feb/2015:06:15:43 -0600] "GET http://www.proxy-listen.de/azenv.php HTTP/1.1" 404 - "http://www.google.co.uk/search?q=HTTP_HOST" "Opera/9.20 (Windows NT 6.0; U; en)"
54.147.200.126 - - [24/Feb/2015:06:15:44 -0600] "GET http://www.pinterest.com/jadajuicy07/ HTTP/1.1" 404 - "-" "Mozilla/4.0 (compatible; Ubuntu; MSIE 9.0; Trident/5.0; zh-CN)"
91.121.161.167 - - [24/Feb/2015:06:15:44 -0600] "GET http://78.37.100.242/search?tbo=d&filter=0&nfpr=1&source=hp&num=100&btnG=Search&q=%221%22+%2b+intitle%3a%22contact%22+%7efossil HTTP/1.1" 404 - "http://78.37.100.242/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"
185.2.101.78 - - [24/Feb/2015:06:15:43 -0600] "GET http://mail.yahoo.com/ HTTP/1.1" 200 269726 "-" "Mozilla/4.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 3.5.21022; .NET CLR 3.5.30729; MS-RTC LM 8; .NET CLR 3.0.30729)"
142.0.140.68 - - [24/Feb/2015:06:15:44 -0600] "GET http://ib.adnxs.com/ttj?id=4311122&cb=[CACHEBUSTER]&referrer=[REFERRER_URL] HTTP/1.0" 404 - "http://www.monetaryback.com/?p=1419" "Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US) AppleWebKit/532.0 (KHTML, like Gecko) Chrome/4.0.206.1 Safari/532.0"
This is the third time I've dealt with these same conditions. It last happened about six months ago. For reference, my site is a blog about baseball (on a blogging platform I built myself) with a few hundred regular visitors. I'm in the US, but my site contains no state secrets!
For now I've redirected all 404 errors to a script which dynamically modifies my .htaccess file to instantly ban IPs that make incoherent requests. That works, but I don't think it's sustainable.
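The idea behind that script is roughly the following (sketched here in Node/Express rather than my actual platform code; the signature it keys on is the fully-qualified-URL requests shown in the logs above):

const fs = require('fs');
const express = require('express');
const app = express();

// A proxy probe sends a request line like "GET http://other-site/ HTTP/1.1",
// so req.url arrives as an absolute URL instead of a path.
function looksLikeProxyProbe(req) {
  return /^https?:\/\//i.test(req.url);
}

app.use((req, res, next) => {
  if (looksLikeProxyProbe(req)) {
    // Append an Apache 2.2-style deny rule for this IP to .htaccess.
    fs.appendFileSync('.htaccess', `Deny from ${req.ip}\n`);
    return res.status(403).end();
  }
  next();
});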
What is this thing? And what's the best practice method of blocking it? Thanks.
This question already has answers here: Why do all browsers' user agents start with "Mozilla/"?
When sending many requests to the server myself, I found it surprising that in IE, if I choose the Opera user string, the value of the user string is:
User-Agent Opera/9.80 (Windows NT 6.1; U; en) Presto/2.2.15 Version/10.00
But if I choose another browser in Internet Explorer, it puts Mozilla/5.0 first in the user string.
When I send an AJAX request from Chrome, I see the same thing; the user string it sends is:
Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.20 (KHTML, like Gecko) Chrome/11.0.672.2 Safari/534.20
I found that Mozilla is an organization that doesn't have anything to do with Google or Microsoft; if anything, it is a competitor of both. Why do Microsoft and Google both put Mozilla in their user agents?
In other words, why do Chrome and IE both put Mozilla in the user-agent string when they send a request? Is there any specific reason for that?
See: user-agent-string-history
It all goes back to browser sniffing and making sure that the browsers are not blocked from getting content they can support. From the above article:
And Internet Explorer supported frames, and yet was not Mozilla, and so was not given frames. And Microsoft grew impatient, and did not wish to wait for webmasters to learn of IE and begin to send it frames, and so Internet Explorer declared that it was “Mozilla compatible” and began to impersonate Netscape, and called itself Mozilla/1.22 (compatible; MSIE 2.0; Windows 95), and Internet Explorer received frames, and all of Microsoft was happy, but webmasters were confused.