We're looking at creating a web app that users can access via QR codes placed in semi-public places. We fear the risk of fraudsters replacing our QR codes with their own, which might result in a user scanning the fake code, being taken through the fraudster's site for some fraudulent purpose, and then being forwarded to our site without ever noticing the harm.
Is there a way we can detect these kinds of redirects (e.g. by the type of request)?
And if so, how hard would that detection be to circumvent in turn?
Thanks in advance
I think my website has been injected with some script that is using rawgit.com. Recently my website has been running very slowly, with the browser's status bar showing "Transferring data from rawgit.com..." or "Read rawgit.com...". I have never used RawGit to serve raw files directly from GitHub, yet I can see they are using the https://cdn.rawgit.com/ domain to serve files.
I would like my website to block everything related to this domain. How can I achieve that?
As I said in the comments, you are going about this problem in the wrong way. If your site already includes sources you do not recognise or allow, you are already compromised and your main focus should be on figuring out how you got compromised, and how much access an attacker may have gotten. Based on how much access they have gotten, you may need to scrap everything and restore a backup.
The safest thing to do is to take the server offline while you investigate. Make sure that you still have access to the systems you need to access (e.g. SSH), but block every other remote IP. Just "blocking rawgit.com" only removes one of the symptoms you can see, and allows an attacker to change their attack while you are fumbling with that.
I do not recommend only blocking rawgit.com, not even as your first move to counter this problem, but if you want to, you can use the Content-Security-Policy header. You can whitelist the URLs you do expect and thereby block the ones you do not; see MDN for more information.
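As a rough illustration (cdn.example.com is just a placeholder for whatever CDN your site actually relies on, not something taken from your setup):

Content-Security-Policy: default-src 'self'; script-src 'self' https://cdn.example.com; object-src 'none'

With a policy like that, the browser refuses to load scripts from any domain not listed, so anything pulled from cdn.rawgit.com would be blocked; but again, this only masks a symptom of the compromise.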
I inherited some code that was recently attacked by repeated remote form submissions.
Initially I implemented some protection by setting a unique session auth token (not the session ID). While I realize this specific attack is not CSRF, I adapted my solution from these posts (albeit dated):
https://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29
http://tyleregeto.com/a-guide-to-nonce
http://shiflett.org/articles/cross-site-request-forgeries
I've also read existing posts on SO, such as Practical non-image based CAPTCHA approaches?
However, the attacker now requests the form page first, starting a valid session, and then passes the session cookie in the following POST request, thereby obtaining a valid session token. So, a fail on my part.
I need to put some additional preventative measures in place. I'd like to avoid CAPTCHA (due to poor user experience) and JavaScript solutions if possible. I've also considered referrer checks (can be faked), honeypots (hidden fields), and rate limiting (which the attacker can evade by throttling their own requests). This attacker is persistent.
With that said, what would be a more robust solution?
If a human is attacking your page specifically, then you need to find out what makes this attacker different from a regular user.
If he spams you with certain URLs, text, or the like, block those after they are submitted.
You can also quarantine submissions: don't let them go live for, say, 5 minutes. If within those 5 minutes you receive another submission to the same form from the same IP, discard both posts and block the IP. A sketch of this idea follows below.
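A rough sketch of that quarantine logic in Python (in-memory for illustration only; in practice the pending posts and the block list would live in your database, and the names here are made up):

import time

QUARANTINE_SECONDS = 5 * 60

pending = {}          # (ip, form_id) -> (timestamp, submission)
blocked_ips = set()   # IPs we no longer accept posts from

def submit(ip, form_id, submission):
    # Hold each post for 5 minutes; a second post to the same form from the
    # same IP within that window discards both and blocks the IP.
    if ip in blocked_ips:
        return "rejected"
    key = (ip, form_id)
    now = time.time()
    if key in pending and now - pending[key][0] < QUARANTINE_SECONDS:
        del pending[key]          # discard the earlier post as well
        blocked_ips.add(ip)
        return "rejected"
    pending[key] = (now, submission)
    return "quarantined"          # publish later if nothing else shows up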
CAPTCHA is good if you use a good CAPTCHA, because many custom home-made CAPTCHAs are now recognized automatically by specially crafted software.
To summarize: your problem calls not just for technical solutions but also for social ones, aimed at neutralizing the botmaster rather than merely preventing the bot from posting.
CAPTCHAs were invented for this exact reason, because there is NO WAY to differentiate 100% between a human and a bot.
You can throttle your users by incrementing a server-side counter, and when it reaches X, treat it as a bot attack and lock the site out. Then, once some time has elapsed (save the time of the attack as well), allow entry again. A rough sketch follows below.
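A crude sketch of that counter idea (plain Python; X and the lockout window are arbitrary numbers you would tune):

import time

MAX_REQUESTS = 20          # the "X" above
LOCKOUT_SECONDS = 15 * 60  # how long to lock out after an attack

counter = 0
attack_time = None         # saved time of the attack

def allow_request():
    global counter, attack_time
    if attack_time is not None:
        if time.time() - attack_time < LOCKOUT_SECONDS:
            return False                   # still locked out
        counter, attack_time = 0, None     # enough time elapsed, allow entry again
    counter += 1
    if counter > MAX_REQUESTS:
        attack_time = time.time()          # consider it a bot attack
        return False
    return True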
I've thought a little about this myself.
I had an idea to extend the session auth token to also store a set of randomized form field names. So instead of
<input name="title" ... >
you'd get
<input name="aZ5KlMsle2" ... >
and then additionally add a bunch of trap fields, which are hidden via CSS.
If any of the traps are filled out, then it was not a normal user but a bot examining your HTML source. A rough sketch of the randomized-names part follows below.
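Roughly what I mean, sketched in Python against a generic session dict (the helper names are made up and not tied to any framework):

import secrets

def scramble_fields(session, real_names):
    # Map each real field name to a random token and remember the mapping
    # in the user's session so the POST can be translated back later.
    mapping = {name: secrets.token_hex(5) for name in real_names}
    session["field_map"] = mapping
    return mapping          # use mapping["title"] as the name="" attribute when rendering

def unscramble_post(session, posted):
    # Translate the randomized keys back; unknown keys suggest a bot replaying old names.
    mapping = session.get("field_map", {})
    reverse = {rand: real for real, rand in mapping.items()}
    return {reverse[k]: v for k, v in posted.items() if k in reverse}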
How about a hidden form field? If it gets filled in automatically by a bot, you accept the request but silently dismiss it. For example:
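(The field name "website" and the CSS class here are made up; any field a real user never sees will do.)

<input type="text" name="website" class="hp" autocomplete="off">

with .hp { display: none; } in your stylesheet. On the server, any submission that arrives with "website" filled in gets a normal-looking success response, but the data is quietly thrown away.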
I'm currently developing a web application that has one feature which allows input from anonymous users (no authorization required). I realize that this may pose security risks, such as repeated arbitrary input (e.g. spam) or users posting malicious content, so to remedy this I'm trying to create a sort of system that keeps track of what each anonymous user has posted.
So far all I can think of is tracking by IP, but that may not be viable due to dynamic IPs. Are there any other solutions for tracking anonymous users?
I would recommend requiring them to answer a CAPTCHA before posting, or after an unusual number of posts from a single IP address.
"A CAPTCHA is a program that protects websites against bots by generating and grading tests >that humans can pass but current computer programs cannot. For example, humans can read >distorted text as the one shown below, but current computer programs can't"
That way the spammers have to be actual humans, which will slow the firehose to a level where you can weed out anything that does get through.
http://www.captcha.net/
There are two main ways: client-side and server-side. Tracking the IP is all I can think of server-side; client-side there are more accurate options, but they are all under the user's control, and he can re-anonymise himself (it's his machine, after all): cookies and storage come to mind.
Drop a cookie with an ID in it. Sure, cookies can be deleted, but this at least gives you something. For example:
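A minimal sketch with Flask, purely to show the shape (the cookie name and lifetime are arbitrary; the same idea works in any framework):

import secrets
from flask import Flask, request, make_response

app = Flask(__name__)

@app.route("/post", methods=["POST"])
def post():
    # Reuse the visitor's ID if the cookie is present, otherwise mint a new one.
    visitor_id = request.cookies.get("anon_id") or secrets.token_urlsafe(16)
    # ... record the submission against visitor_id here ...
    resp = make_response("ok")
    resp.set_cookie("anon_id", visitor_id, max_age=60 * 60 * 24 * 365, httponly=True)
    return resp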
My suggestion is:
Use cookies for tracking user identity. As you yourself have said, due to dynamic IP addresses, you can't reliably use IPs for tracking identity.
To detect and curb spam, use the IP + user agent combination; a small sketch follows below.
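Something like this, in plain Python (the limit is arbitrary, and in production the counter would sit in your database or cache rather than in memory):

from collections import Counter

post_counts = Counter()

def looks_like_spam(ip, user_agent, limit=20):
    # Dynamic IPs make the address alone unreliable, so combine it with the
    # user agent string to get a slightly more stable key.
    key = (ip, user_agent)
    post_counts[key] += 1
    return post_counts[key] > limit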
I was looking for a way to protect a web service from "synthetic queries". Read this Security Stack Exchange question for more details.
It seemed that I had little alternative until I came across NSE India's website, which implements a certain kind of measure against such synthetic queries.
I would like to know how they could have implemented a protection that works something like this: you go to their website and search for a quote, let's say RELIANCE, and you get a page displaying the latest quote.
On analysis we find that the query being sent across is something like:
http://www.nseindia.com/marketinfo/equities/ajaxGetQuote.jsp?symbol=RELIANCE&series=EQ
But when we copy and paste the query directly into the browser, it returns "referral denied".
I guess such a procedure might also help me. Any ideas on how I could implement something similar?
It won't help you. Faking a referrer is trivial. The only protection against queries the attacker constructs is server side validation.
Referrers can sometimes be used to prevent hotlinking from other websites, but even that is rather hard to do, since certain programs create fake referrers and you don't want to block those users.
Where referrer validation could help is against other websites trying to manipulate the user's browser into accessing your site, as in some kinds of cross-site request forgery. But it never protects against malicious users; against those, the only thing that helps is server-side validation. You can never trust the client if you don't trust the user of that client. A small example of how trivially a referrer is faked follows below.
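To see how little the Referer check buys, here is everything it takes to defeat it (Python with the requests library, using the URL from the question):

import requests

resp = requests.get(
    "http://www.nseindia.com/marketinfo/equities/ajaxGetQuote.jsp",
    params={"symbol": "RELIANCE", "series": "EQ"},
    headers={"Referer": "http://www.nseindia.com/"},  # claim we came from their own site
)
print(resp.status_code, resp.text[:200])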
Our team has built a web application using Ruby on Rails. It currently doesn't restrict users from making excessive login requests. We want to ignore a user's login requests for a while after she has made several failed attempts, mainly to defend against automated robots.
Here are my questions:
How do I write a program or script that can make excessive requests to our website? I need it because it will help me test our web application.
How do I restrict a user who has made some unsuccessful login attempts within a given period? Does Ruby on Rails have built-in solutions for identifying a requester and tracking whether she has made any recent requests? If not, is there a general way (not specific to Ruby on Rails) to identify a requester and keep track of her activity? Can I identify a user by IP address, cookies, or some other information I can gather from her machine? We also hope to distinguish normal users (who make infrequent requests) from automated robots (which make requests frequently).
Thanks!
One trick I've seen is to include form fields on the login form that, through CSS hacks, are invisible to the user.
Automated systems/bots will still see these fields and may attempt to fill them with data. If you see any data in such a field, you immediately know it's not a legitimate user and can ignore the request.
This is not a complete security solution but one trick that you can add to the arsenal.
In regards to #1, there are many automation tools out there that can simulate large-volume posting to a given URL. Depending on your platform, something as simple as wget might suffice, or something as complex (relatively speaking) as a script that asks a UserAgent to post a given request multiple times in succession (again, depending on the platform and your language of choice for task 1, this can be simple). A minimal example follows below.
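For example, a few lines of Python with the requests library will hammer a login form (the URL and field names are placeholders; point it at a test instance, not production):

import requests

LOGIN_URL = "https://staging.example.com/login"

with requests.Session() as s:      # a Session keeps cookies, like a browser would
    for i in range(100):
        r = s.post(LOGIN_URL, data={"email": "test@example.com", "password": "wrong-password"})
        print(i, r.status_code)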
In regards to #2, consider first the lesser issue of someone just firing multiple attempts manually. Such instances usually share a session (that being the actual webserver session), so you should be able to track failed logins based on these session IDs and force an early failure if the volume of failed attempts breaks some threshold. I don't know of any plugins or gems that do this specifically, but even if there isn't one, it should be simple enough to create a solution.
If the session ID does not work, then a combination of IP and UserAgent is also a pretty safe means, although individuals who use a proxy may find themselves blocked unfairly by such a practice (whether that is an issue or not depends largely on your business needs). A sketch of this kind of threshold tracking follows below.
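A sketch of that tracking in plain Python rather than Rails, purely to show the logic (key it by session ID, or fall back to IP + UserAgent; the thresholds are arbitrary):

import time
from collections import defaultdict

MAX_FAILURES = 5
WINDOW_SECONDS = 15 * 60

failed_logins = defaultdict(list)    # key -> timestamps of failed attempts

def record_failure(key):
    failed_logins[key].append(time.time())

def too_many_failures(key):
    # key is the session ID, or (ip, user_agent) when a session isn't usable
    now = time.time()
    failed_logins[key] = [t for t in failed_logins[key] if now - t < WINDOW_SECONDS]
    return len(failed_logins[key]) >= MAX_FAILURES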
If the attacker is malicious, you may need to look at using firewall rules to block their access, as they are likely going to: a) use a proxy (so IP rotation occurs), b) not use cookies during probing, and c) not play nice with UserAgent strings.
RoR provides means for testing your applications, as described in A Guide to Testing Rails Applications. A simple solution is to write such a test containing a loop that sends 10 (or whatever value you define as excessive) login requests. The framework provides means for sending HTTP requests or faking them.
Not many people will abuse your login system, so just remembering the IP addresses of failed logins (for an hour, or whatever period you think is sufficient) would be enough and not too much data to store. Unless some hacker has access to a great many IP addresses... but in such a situation you'd need more serious security measures, I guess.