How do I block curl downloading of a specific page? - .htaccess

I read that spammers may be downloading a specific registration page on my site using curl. Is there any way to block that specific page from being CURLed, either through htaccess or other means?

I don't think it's possible to block curl outright, as curl can send arbitrary user agents, cookies, etc. As far as I understand, it can completely emulate a normal user.
If you are worried about protecting a form, you can generate a random token, embed it in the form, and check that it comes back when the form is submitted. That way, anyone who writes a script to automate registration has to worry about scraping the token first.
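For illustration, here is a minimal sketch of that token idea in PHP; the file names and field names are placeholders, and it assumes PHP 7+ for random_bytes:
<?php
// form.php -- renders the registration form with a one-time token
// stored in the session.
session_start();
$_SESSION['form_token'] = bin2hex(random_bytes(16));
?>
<form method="post" action="register.php">
  <input type="hidden" name="form_token"
         value="<?= htmlspecialchars($_SESSION['form_token']) ?>">
  <!-- ...the real registration fields... -->
</form>

<?php
// register.php -- rejects any submission whose token is missing or wrong.
session_start();
if (!isset($_POST['form_token'], $_SESSION['form_token'])
    || !hash_equals($_SESSION['form_token'], $_POST['form_token'])) {
    http_response_code(403);
    exit('Invalid form token.');
}
// ...process the registration...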

There is one weakness in curl that you can exploit: it cannot run JavaScript like a browser can. So take advantage of that fact. On the first landing on the registration page, have your server-side code check for a cookie. If it isn't there, send some JavaScript to the browser; that code sets the cookie and does a redirect/reload. After the reload, the server-side code checks for the cookie again: a browser will have it, while with curl the cookie generation and the reload/redirect never happen in the first place.
I hope that made some sense. Bottom line: use JavaScript to differentiate between curl and a browser.
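Roughly, the server-side half of that check might look like this in PHP (the cookie name is just a placeholder):
<?php
// reg.php -- only render the real registration page once the
// JavaScript-set cookie exists; a plain curl fetch never gets that far.
if (!isset($_COOKIE['js_check'])) {
    // No cookie yet: send a tiny page whose script sets the cookie
    // and reloads. A client that cannot run JavaScript stops here.
    echo '<html><body><script>'
       . 'document.cookie = "js_check=1; path=/";'
       . 'window.location.reload();'
       . '</script></body></html>';
    exit;
}
// The cookie is present, so a script-capable browser made the request:
// render the actual registration form below.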

As Oren says, spammers can forge user agents, so you can't just block the curl user-agent string. The typical solution here is some kind of CAPTCHA: usually a distorted image (though non-visual forms exist) that sites (including Stack Overflow) have you transcribe to prove you're human.

Related

What is cross site scripting?

On this site (archived snapshot) under “The Theory of XSS”, it says:
the hacker infects a legitimate web page with his malicious client-side script
My first question on reading this is: if the application is deployed on a server that is secure (as is the case with a bank for example), how can the hacker ever get access to the source code of the web page? Or can he/she inject the malicious script without accessing the source code?
With cross-site scripting, it's possible to infect the HTML document produced without causing the web server itself to be infected. An XSS attack uses the server as a vector to present malicious content back to a client, either instantly from the request (a reflected attack), or delayed through storage and retrieval (a stored attack).
An XSS attack exploits a weakness in the server's production of a page that allows request data to show up in raw form in the response. The page is only reflecting back what was submitted in a request... but the content of that request might hold characters that break out of ordinary text content and introduce HTML or JavaScript content that the developer did not intend.
Here's a quick example. Let's say you have some sort of templating language made to produce an HTML page (like PHP, ASP, CGI, or a Velocity or Freemarker script). It takes the following page and replaces "<?=$name?>" with the unescaped value of the "name" query parameter.
<html>
<head><title>Example</title></head>
<body>Hi, <?=$name?></body>
</html>
Someone calling that page with the following URL:
http://example.com/unsafepage?name=Rumplestiltskin
Should expect to see this message:
Hi, Rumplestiltskin
Calling the same page with something more malicious can be used to alter the page or user experience substantially.
http://example.com/unsafepage?name=Rumplestiltskin<script>alert('Boo!')</script>
Instead of just saying, "Hi, Rumplestiltskin", this URL would also cause the page to pop up an alert message that says, "Boo!". That is, of course, a simplistic example. One could provide a sophisticated script that captures keystrokes or asks for a name and password to be verified, or clears the screen and entirely rewrites the page with shock content. It would still look like it came from example.com, because the page itself did, but the content is being provided somewhere in the request and just reflected back as part of the page.
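For completeness, the usual fix for a template like the one above is to HTML-encode the value before it reaches the page; in PHP that might look something like this:
<html>
<head><title>Example</title></head>
<body>Hi, <?= htmlspecialchars($name, ENT_QUOTES, 'UTF-8') ?></body>
</html>
With the parameter encoded, an injected <script> tag is rendered as inert text instead of being executed.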
So, if the page is just spitting back content provided by the person requesting it, and you're requesting that page, then how does a hacker infect your request? Usually, by providing a link, either on a web page or sent to you by e-mail, often hidden behind a URL shortener so it's difficult to see the mess in the URL.
<a href="http://example.com?name=<script>alert('Malicious content')</script>">
Click Me!
</a>
A server with an exploitable XSS vulnerability does not run any malicious code itself (its programming remains unaltered), but it can be made to serve malicious content to clients.
The attacker doesn't need access to the source code.
A simple example would be a URL parameter that is written to the page. You could change the URL parameter to contain script tags.
Another example is a comment system. If the website doesn't properly sanitize the input/output, an attacker could add script to a comment, which would then be displayed and executed on the computers of anyone who viewed the comment.
These are simple examples. There's a lot more to it and a lot of different types of XSS attacks.
It's better to think of the script as being injected into the middle of the conversation between the badly coded web page and the client's web browser. It's not actually injected into the web page's code; but rather into the stream of data going to the client's web browser.
There are two types of XSS attacks:
Non-persistent: This would be a specially crafted URL that embeds a script as one of the parameters to the target page. The nasty URL can be sent out in an email with the intent of tricking the recipient into clicking it. The target page mishandles the parameter and unintentionally sends code to the client's machine that was passed in originally through the URL string.
Persistent: This attack uses a page on a site that saves form data to the database without handling the input data properly. A malicious user can embed a nasty script as part of a typical data field (like Last Name) that is run on the client's web browser unknowingly. Normally the nasty script would be stored to the database and re-run on every client's visit to the infected page.
See the following for a trivial example: What Is Cross-Site Scripting (XSS)?

How do I protect sensitive information from cross site access?

My web application displays some sensitive information to a logged-in user. The user visits another site without explicitly logging out of my site first. How do I ensure that the other site cannot access the sensitive information without consent from me or the user?
If, for example, my sensitive data is in JavaScript format, the other site can include it in a script tag and read the side effects. I could keep building a blacklist, but I do not want to enumerate what is unsafe. I want to know what is safe, but I cannot find any documentation of this.
UPDATE: In my example JavaScript from the victim site was executed on the attacker's site, not the other way around, which would have been Cross Site Scripting.
Another example is images: any other site can read an image's width and height and can display it, but I don't think it can read the pixel content.
A third example is that everything without an X-Frame-Options header can be loaded into an iframe, and from there it is possible to steal the data by tricking the user into doing drag-and-drop or copy-and-paste.
The key point of preventing a cross-site attack is to ensure that any user-supplied input you are going to display is legal and contains no scripts. You can stop it at the source.
If for example my sensitive data is in JavaScript format, the other site can include it in a script tag
Yep! So don't put it in JavaScript/JSONP format.
The usual fix for passing back JSON or JS code is to put something unexecutable at the front to cause a syntax error or a hang (for(;;); is popular). So including the resource as a <script> doesn't get the attacker anywhere. When you access it from your own site you can fetch it with an XMLHttpRequest and chop off the prefix before evaluating it.
(A workaround that doesn't work is checking window.location in the returned script: when you're being included in an attacker's page they have control of the JavaScript environment and could sabotage the built-in objects to do unexpected things.)
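As a rough server-side illustration of that prefix approach in PHP (the endpoint name and data are made up):
<?php
// secret-data.php -- private data endpoint for logged-in users.
// The "for(;;);" prefix makes the response useless as a <script> src:
// a page that includes it cross-site just spins in an infinite loop
// before any data is exposed. The body is intentionally not valid JSON
// until your own code strips the prefix.
session_start();
if (!isset($_SESSION['user_id'])) {
    http_response_code(401);
    exit;
}
header('Content-Type: text/plain; charset=utf-8');
echo 'for(;;);' . json_encode([
    'account' => '12345678',   // placeholder sensitive data
    'balance' => 42.00,
]);
Your own pages fetch this with XMLHttpRequest, remove the known for(;;); prefix from responseText, and only then parse the remainder.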
Since I did not get the answer I was looking for here, I asked in another forum and got the answer. It is here:
https://groups.google.com/forum/?fromgroups=#!topic/mozilla.dev.security/9U6HTOh-p4g
I also found this page which answers my question:
http://code.google.com/p/browsersec/wiki/Part2#Life_outside_same-origin_rules
First of all, as superpdm states, design your app from the ground up to ensure that either the sensitive information is not stored on the client side in the first place, or that it is unintelligible to a malicious user.
Additionally, for items of data you don't have much control over, you can take advantage of built-in HTTP controls like HttpOnly, which tries to ensure that client-side scripts cannot access cookies such as your session token. Setting HttpOnly on your cookies will go a long way toward ensuring that malicious VBScript, JavaScript, etc. cannot read or modify your client-side tokens.
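In PHP, for example, those flags can be set roughly like this (cookie names and lifetimes are placeholders):
<?php
// Mark the session cookie HttpOnly (hidden from client-side script)
// and Secure (only sent over HTTPS) before the session starts.
session_set_cookie_params(0, '/', '', true, true); // lifetime, path, domain, secure, httponly
session_start();

// The same flags on an ordinary cookie:
$tokenValue = bin2hex(random_bytes(16));            // placeholder value
setcookie('remember_me', $tokenValue, time() + 86400, '/', '', true, true);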
I think there is still some confusion here. You are afraid of cross-site request forgery, yet you are describing, and looking for a solution to, cross-site scripting.
Cross-site scripting is a vulnerability that allows a malicious person to inject unwanted content into your site. It may be plain text, but it may also be JavaScript, VBScript, or a Java applet (I mention applets because they can be used to circumvent the protection provided by the HttpOnly flag). So if an unsuspecting user clicks a malicious link, his data may be stolen; how much depends on the amount of sensitive data presented to the user. Clicking a link is not the only attack vector for XSS, either: if you present users with unfiltered content provided by other users, someone can inject evil code that way and do damage without needing to steal anyone's cookie. And none of this has anything to do with visiting another site while still being logged in to your app. I recommend: XSS
Cross-site request forgery is a vulnerability that allows someone to construct a specially crafted form and present it to a logged-in user; by submitting that form, the user may execute an operation in your app that he didn't intend, such as a transfer, a password change, or adding a user. This is the threat you are worried about: if a user holds a session with your app and visits a site containing such a form, which gets auto-submitted with JavaScript, the request is authenticated and the operation executed. HttpOnly will not protect against this, because the attacker does not need access to the session ID stored in the cookies. I recommend: CSRF
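A bare-bones sketch of that CSRF protection in PHP (the endpoint and field names are invented for illustration):
<?php
// transfer.php -- a state-changing action protected with an anti-CSRF token.
// A forged form on another site can auto-submit a POST here, but it cannot
// read the victim's token (same-origin policy), so the comparison fails.
session_start();
if ($_SERVER['REQUEST_METHOD'] !== 'POST'
    || !isset($_POST['csrf_token'], $_SESSION['csrf_token'])
    || !hash_equals($_SESSION['csrf_token'], $_POST['csrf_token'])) {
    http_response_code(403);
    exit('Cross-site request rejected.');
}
// ...perform the transfer / password change / user add...
The token itself is generated once per session and embedded as a hidden field in every legitimate form, much like the registration token shown earlier.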

sharepoint reauthentication for sensitive data

I have a page that contains sensitive information that I would like to require reauthentication in order to load. I am using Classic authentication mode, not forms.
The first method I looked at was the PrincipalContext.ValidateCredentials method, but that would require sending login details in plain text (I think).
I have thought about using JavaScript to turn off cookies so they would have to log back in, but I haven't come up with a good way of doing that.
Has anyone done this before with SharePoint?
What I ended up with:
A web part on the page with the sensitive material, which forces an HTTP 401 and then redirects to another page.
That other page has a second web part, which redirects back to the original page after setting a session variable.
You could use something along the lines of this if you're using IE6-8, but other browsers may have issues with it (look into HTTP keep-alives).
<script type='text/javascript'>
document.execCommand("ClearAuthenticationCache");
</script>
That said, forcibly clearing someone's authentication doesn't seem like the friendliest UI option. A better approach probably depends on the audience and whether they are on a trusted domain or coming from an external source. If they are on the trusted domain and don't normally log in anyway, this approach likely won't please them much.

Does it make a difference in security whether a form POSTs to its own file or a different one?

Assuming I do the same field validation in either case, is there any difference in terms of security whether you POST a form back to its own file or to another?
Note that I'm not referring to sensitive information or passwords within the form data, but to whether either method is better at avoiding various types of attacks.
It does not make a difference. The page accepting the form input has no idea where the data came from (the HTTP referrer is trivial to spoof), and any security effort would depend on things completely unrelated to the page the form data came from.
It does make a difference, actually, mainly because posting back to the same page doesn't create a new history entry, while posting to a different page does create one in the browser. This is mostly of interest on public terminals and in browsers that remember the contents of forms:
fill out a form
submit
leave computer
nefarious individual hits the back button and reads the contents of the form.
I also think that to fully prevent that sort of attack, you'd need to involve a redirect after the POST (the Post/Redirect/Get pattern, usually a 302 or 303 rather than a 301). That is, you post to the URL, and the URL responds with a redirect sending you back to the original page.
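A rough Post/Redirect/Get sketch in PHP (the paths are placeholders):
<?php
// submit.php -- handle the POST, then redirect so the browser's history
// holds a plain GET; the back button can no longer resubmit or re-display
// the posted form data.
session_start();
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    // ...validate and process the form fields...
    header('Location: /form.php?saved=1', true, 303); // 303 "See Other"
    exit;
}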
If they're both files on your server, under your control, then it doesn't make a difference.
No, it does not matter at all. All you are doing is sending an HTTP request to a URL. Your server handles the request and sends the response back to the user. If the response happens to be the same page as the one that sent the request, that does not make the application any more secure or vulnerable to any kind of attack over HTTP.

Cross-site scripting from an Image

I have a rich-text editor on my site that I'm trying to protect against XSS attacks. I think I have pretty much everything handled, but I'm still unsure about what to do with images. Right now I'm using the following regex to validate image URLs, which I'm assuming will block inline javascript XSS attacks:
"https?://[-A-Za-z0-9+&##/%?=~_|!:,.;]+"
What I'm not sure of is how open this leaves me to XSS attacks from the remote image. Is linking to an external image a serious security threat?
The only thing I can think of is that the URL entered references a resource that returns "text/javascript" as its MIME type instead of some sort of image, and that javascript is then executed.
Is that possible? Is there any other security threat I should consider?
Another thing to worry about is that you can often embed PHP code inside an image and upload it. The only thing an attacker would then have to do is find a way to get the image included; only the PHP code will get executed, the rest is just echoed. Checking the MIME type won't help here, because the attacker can simply upload a file with the correct first few bytes of an image, followed by arbitrary PHP code. (The same is somewhat true for HTML and JavaScript code.)
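One common mitigation, sketched here in PHP with GD, is to re-encode uploads instead of trusting their bytes (the function and path names are placeholders):
<?php
// Re-encoding discards anything that isn't real image data, including
// PHP payloads appended after a valid image header.
function store_reencoded_image(string $tmpPath, string $destPath): bool {
    $img = @imagecreatefromstring(file_get_contents($tmpPath));
    if ($img === false) {
        return false;                      // not a decodable image: reject it
    }
    $ok = imagejpeg($img, $destPath, 90);  // write a fresh JPEG
    imagedestroy($img);
    return $ok;
}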
If the end viewer is in a password-protected area and your app contains URLs that trigger actions based on GET requests, an embedded image can make such a request on the user's behalf.
Examples:
src="http://yoursite.com/deleteuser.xxx?userid=1234"
src="http://yoursite.com/user/delete/1234"
src="http://yoursite.com/dosomethingdangerous"
In that case, look at the context around it: do users only supply a URL? If so, it's fine to just validate the URL's semantics and the MIME type (a sketch follows below). If users also get to input tags of some sort, you'll have to make sure those cannot be manipulated to do anything other than display images.
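As a loose sketch of that URL-plus-MIME-type validation in PHP (the helper name is made up; the pattern is the one from the question):
<?php
// Accept an image URL only if it matches the whitelist pattern and the
// remote server reports an image/* content type.
function is_probably_safe_image_url(string $url): bool {
    if (!preg_match('{^https?://[-A-Za-z0-9+&@#/%?=~_|!:,.;]+$}', $url)) {
        return false;
    }
    $headers = @get_headers($url, 1);      // requests the remote headers
    if ($headers === false || !isset($headers['Content-Type'])) {
        return false;
    }
    $type = $headers['Content-Type'];
    if (is_array($type)) {                 // redirects return several values
        $type = end($type);
    }
    return stripos($type, 'image/') === 0;
}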

Resources