I have a directory "AJAX" that holds all of my AJAX content; it is unformatted and ugly if you hit the pages directly. How do I stop someone from hitting http://www.site.com/AJAX/page1.php directly, using the .htaccess file?
You don't really want to use .htaccess to block access to the content since you still need to be able to access this content from your application.
If you feel it's that important to keep people from directly loading this content, you can route all traffic through a single PHP page, for example:
http://www.site.com/ajax.php?content=page1
and then in ajax.php restrict access to the content to HTTP POST requests only, and possibly by other means such as unique tokens. For example, in the referring page generate a unique token (say 123) and then use the URL:
http://www.site.com/ajax.php?content=page1&token=123
and only serve the file if the token is still cached in memory.
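A minimal sketch of that ajax.php gatekeeper, assuming the token is kept in the PHP session rather than a separate in-memory cache, and that the content name and token are sent via POST as suggested above (the file layout and names are placeholders):

<?php
// ajax.php -- rough sketch of the single dispatcher described above.
// Assumes the referring page stored a one-time token in the session, e.g.
// $_SESSION['ajax_token'] = bin2hex(random_bytes(16));
session_start();

// Only accept HTTP POST.
if ($_SERVER['REQUEST_METHOD'] !== 'POST') {
    http_response_code(405);
    exit;
}

// Check the one-time token supplied by the caller, then invalidate it.
$token = $_POST['token'] ?? '';
if ($token === '' || !hash_equals($_SESSION['ajax_token'] ?? '', $token)) {
    http_response_code(403);
    exit;
}
unset($_SESSION['ajax_token']);

// Whitelist the allowed content names so this can't be abused for file inclusion.
$allowed = ['page1', 'page2'];
$content = $_POST['content'] ?? '';
if (!in_array($content, $allowed, true)) {
    http_response_code(404);
    exit;
}

require __DIR__ . '/AJAX/' . $content . '.php';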
While this will work, I don't see the point. If someone wants to load that page, who cares? You can't prevent them from accessing the content since you need it to run your application; they can always get it from the browser cache or by using a local HTTP proxy.
I have recently launched a website on GoDaddy hosting. I keep some images and JavaScript files used in the website in separate folders. I want to prevent users from browsing those images and files by simply appending the folder and file name to the website URL. For example:
www.example.com/images/logo.png
If I understand correctly, you want an HTML page with images that shouldn't be accessible on their own? If so, it cannot be done. You can check for the correct HTTP Referer header, but it can easily be faked, and doing so also makes the content inaccessible to browsers that don't send the referrer or that have it disabled for "privacy" reasons.
If you want the files to be accessible only by server-side scripts, FTP/SCP, etc., then you can try to use .htaccess (if GoDaddy runs on Apache) with the correct configuration: https://httpd.apache.org/docs/2.2/howto/access.html
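If that is the goal, a minimal .htaccess in the folder to be locked down could look like this (assuming Apache; the first form matches the 2.2 docs linked above, the commented form is the 2.4 equivalent):

# Apache 2.2 syntax: refuse all web requests to this folder
Order allow,deny
Deny from all

# Apache 2.4 equivalent:
# Require all denied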
Another way would be to hide the files and create a one-shot token, like this:
<img src="<?php echo gen_token('file.jpg'); /* pseudocode */ ?>" /> with another script serving these hidden files only for a valid generated token, which is then deleted from the DB. Nevertheless, this will not stop anybody from downloading or accessing these files if they really want to...
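A rough sketch of that one-shot token idea in PHP; the gen_token() helper, the tokens table, and the file paths are all made up for illustration, and the page template would embed the token in the image URL, e.g. <img src="/serve.php?t=TOKEN" />:

<?php
// gen_token() stores a fresh random token for a given hidden file.
function gen_token(string $file): string {
    $pdo = new PDO('sqlite:tokens.db');          // placeholder storage
    $token = bin2hex(random_bytes(16));
    $pdo->prepare('INSERT INTO tokens (token, file) VALUES (?, ?)')
        ->execute([$token, $file]);
    return $token;
}

// serve.php -- serves the hidden file once, then deletes the token from the DB.
$pdo   = new PDO('sqlite:tokens.db');            // placeholder storage
$token = $_GET['t'] ?? '';
$stmt  = $pdo->prepare('SELECT file FROM tokens WHERE token = ?');
$stmt->execute([$token]);
$file = $stmt->fetchColumn();

if ($file === false) {
    http_response_code(404);
    exit;
}
$pdo->prepare('DELETE FROM tokens WHERE token = ?')->execute([$token]); // one-shot
header('Content-Type: image/jpeg');              // adjust per file type
readfile('/path/outside/webroot/' . basename($file));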
But anyway, try to clarify your question a bit more...
If you are keeping images/files in a folder that is open to the public, I guess you put them in that folder on purpose: you want the public to access those images and files.
How would the public know the image file names? Stop directory listing for your web site.
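On Apache hosting, for instance, a single directive in the web root's .htaccess is usually enough to turn directory listings off:

Options -Indexes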
I am not aware which language you are using on the web server, but in ASP.NET you can write a module / middleware that intercepts incoming requests and, based on your logic (e.g. authentication and authorization), restricts access. All modern languages support this kind of functionality.
My question pertains specifically to the two pages below, but also relates more generally to methods for using clean URLs without an .htaccess file.
http://www.decitectural.com/
and
http://www.decitectural.com/about/
The pages above are hosted on Amazon S3, which does not allow the use of .htaccess files. As a result, I have found no easy way to create a clean URL rewrite scheme that sends all requests to an index file which, in turn, interprets the URL using JavaScript and loads the correct page (with AJAX, or, as is the case with decitectural, with simple div visibility toggling).
To circumvent this problem, I usually edit the Amazon S3 bucket properties and set both the index document and the error document to the index.html file. That way, index.html is served even when an invalid path (such as /about/) is requested. This has, for the most part, been a functioning solution... that is, until I realized that index.html was also being returned with a 404 status code, which would stop Google from indexing it.
This has led me to seek out an alternative solution to this problem. Currently, as a temporary fix, I am actually creating the /about/ directory on the server with a duplicate of the index.html file in it. This works, but obviously is not a real solution to the problem.
I would appreciate any advice on how to set up a clean URL routing scheme on S3 or in any instance where an .htaccess file can't be used.
Here are a few solutions: Pretty URLs without mod_rewrite, without .htaccess
Also, I guess you can run a script to create the files dynamically from an array or database so it generates all your URLs:
/index.html
/about/index.html
/contact/index.html
...
Then hook the script into every edit, run it from cron, or run it manually. Not the best in terms of performance, but hey, it should work.
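A quick sketch of such a generator in PHP; the page list, template folder, and paths are placeholders, and it could be run from cron or a deploy hook:

<?php
// generate.php -- writes an index.html into a folder for every "clean" URL.
$pages = [
    ''        => 'home.html',
    'about'   => 'about.html',
    'contact' => 'contact.html',
]; // or read the list from a database

foreach ($pages as $path => $template) {
    $dir = rtrim(__DIR__ . '/' . $path, '/');
    if (!is_dir($dir)) {
        mkdir($dir, 0755, true);                 // creates /about/, /contact/, ...
    }
    copy(__DIR__ . '/templates/' . $template, $dir . '/index.html');
}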
I think you are going about it the wrong way. S3 gives you complete control of the page structure of your site. If you want your link to be "/about", just upload a file called "about", and you're done. (Set the headers so that the browser knows it's HTML.)
Yes, it will break if someone links to "/about/" or "/about.html". But pretty much any site will break if you mess with their links in odd ways. You will have to be vigilant when linking to your own site, because you won't have any rewrite rules to clean up for you. But you should have automation doing that.
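With the AWS SDK for PHP, for instance, uploading an extensionless "about" object with the right Content-Type might look roughly like this (bucket name, region, and file names are placeholders):

<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1',                 // placeholder region
]);

// Upload "about" (no extension) and tell browsers it is HTML.
$s3->putObject([
    'Bucket'      => 'www.example.com',       // placeholder bucket
    'Key'         => 'about',
    'SourceFile'  => 'about.html',
    'ContentType' => 'text/html',
    'ACL'         => 'public-read',
]);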
I am stuck in a situation where I want a URL, which points into a folder containing some files (HTML, SWF, etc.), to be accessible only after I validate the user.
For example.
The url to access is:
A - http://mysite.com/files/version/1/file.swf
And this above url is accessible from the link,
B - http://mysite.com/view/1
I have implemented a way to hide URL A from a normal user, but a semi-technical user can still find the SWF file's location with Firebug or other tools. So, what should I do to make access to the file secure?
If a user somehow knows the first URL (A) and enters it in the browser, I have to check whether the user is logged in; only if that validation passes should URL A be allowed to load.
Since, in CI, controllers cannot be named the same as folders in the root directory, in this case I cannot have a controller called "files". So, the only option left to make this secure access work seems to be an .htaccess rule/condition. If that is the only option, how can it be achieved with .htaccess, and if not, what other options do I have?
Will CodeIgniter's URI routes work? When I tried this:
$route['files/version/1/(:any)'] = "view/$1";
it doesn't work, maybe because there is no controller/function/param corresponding to files/version/1 ...
looking for quick help. Thanks
There isn't a sure-fire way to do it without, for example, using .htpasswd.
One thing you could implement is a sort of "security by obscurity". In that case you could redirect all requests for a file to the URL http://mysite.com/view/file-id and then, instead of loading the requested file directly, load a .php template with the appropriate headers - be it an image, a flash file or anything else.
But it really depends on how the files are going to be managed, since every file will need an entry in the database and you would have to output different headers for different types of files. And if someone still manages to guess the path to the file, it will be directly accessible.
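A rough outline of such a gatekeeper in plain PHP; the session check, database table, and paths are placeholders (in CodeIgniter the same logic would live in a controller behind a route like /view/1):

<?php
// serve_file.php -- the file itself lives outside the web root (or in a denied
// folder), so it can only be reached through this script after the login check.
session_start();

if (empty($_SESSION['user_id'])) {                // placeholder auth check
    http_response_code(403);
    exit('Not authorised');
}

// Look up the real path and MIME type by id (placeholder table).
$pdo  = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$stmt = $pdo->prepare('SELECT path, mime FROM files WHERE id = ?');
$stmt->execute([(int) ($_GET['id'] ?? 0)]);
$file = $stmt->fetch(PDO::FETCH_ASSOC);

if (!$file || !is_file($file['path'])) {
    http_response_code(404);
    exit;
}

header('Content-Type: ' . $file['mime']);         // e.g. application/x-shockwave-flash
header('Content-Length: ' . filesize($file['path']));
readfile($file['path']);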
On my website users can post stuff anonymously.
When they have posted something they will be redirected to their post, let's say:
http://example.com/post/2/title-of-the-anonymous-post
The user who submitted the post and the admins are the only ones with access to that post (until it is made public). Once it is made public the post would still be anonymous (i.e. people cannot see who submitted the post).
However, on that page there are also some external links. If the user decides to click an external link, the target website can log the HTTP referer (which would contain the link to the hidden page). This means it would be possible to find out who posted it once it is made public.
Is there a way to change the HTTP referer (/ referrer) when a users clicks on a link to another website?
For example, by first redirecting the user to another URL and letting that page redirect to the external website:
user clicks on: http://example.com/referer-hider?url={urlencoded(url)}
and let the referer-hider redirect the user to the external page so that the referer will contain: http://example.com/referer-hider?url={urlencoded(url)}
Will this work? Or is there another solution for this (which doesn't require client side modifications)?
Since the referrer is provided by the browser to a web server, I only see a few ways to ensure that external sites don't get a view of this "hidden" URL.
First way would be (as you said) to remove the external links from your hidden page by running them through a redirector which uses header("Location: ..."). Yes, that will work. You might just want to use this in general, so that you can track the exits from your site.
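A minimal version of that exit redirector; the parameter name is taken from the question, and the URL check is only a placeholder (without some validation this becomes an open redirect):

<?php
// referer-hider.php?url={urlencoded(url)} -- the redirector described above.
$url = $_GET['url'] ?? '';

// Placeholder sanity check: only allow absolute http(s) URLs; ideally also check
// the host against a whitelist of sites you actually link to.
if (!preg_match('#^https?://#i', $url)) {
    http_response_code(400);
    exit('Bad URL');
}

header('Location: ' . $url);
exit;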
Second way would be to stop hiding this URL. It won't stay hidden forever, after all. A Google/Alexa/whatever toolbar hits it, and bam, it's indexed. So instead, build this hidden functionality into something session based. Make a script that changes its output depending on session variables, and only allow the hidden content to show up if people have logged in or previewed their post or whatever.
The third (and probably best) way would be to implement proper access control, so that anonymous users CANNOT visit the page with the restricted content. If you want an anonymous original poster to be able to visit THEIR OWN post, you can send them a cookie, then validate the cookie upon the visit to the unapproved post.
For example, upon submission for approval:
setcookie('postkey', mysql_insert_id());
Then:
$pieces = explode('/', $_SERVER['PHP_SELF']); // e.g. /post/2/title -> ['', 'post', '2', 'title']
$postid = $pieces[2]; // or whatever
if (!isset($_COOKIE['postkey'])) {
    header("Location: http://example.org/");
    exit;
} else if ($_COOKIE['postkey'] != $postid) {
    header("Location: http://example.org/");
    exit;
}
etc. You probably want better protection than this, but it should give you some ideas.
The HTTP referer is not transmitted by the browser when a link goes from HTTPS to HTTP. So a simple solution is to have an HTTPS redirect page: https://yoursite/redirect?url=... . However, this page is then vulnerable to OWASP A10 (Unvalidated Redirects and Forwards), though that might not matter to you. Another solution that doesn't expose you to OWASP A10 is to use a free redirect service.
The Meta referrer proposal from Adam Barth would help with your case; in short you could tell browsers via a <meta> tag that the Referer header should be stripped on all outgoing links.
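For reference, the tag from that proposal looks like this (the value "never" comes from the original draft; the later standardized equivalent is "no-referrer"):

<meta name="referrer" content="never">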
This isn't a complete answer since it's only implemented in WebKit thus far, but it's something to keep an eye on.
I want to implement HTTPS on only a selection of my web pages. I have purchased my SSL certificates etc. and got them working. Despite this, due to speed demands I cannot afford to use them on every single page.
Instead I want my server to serve up HTTP or HTTPS depending on the page being viewed. An example where this has been done is 99designs.
The problem in slightly more detail:
When my visitors first visit my site they only have access to non-sensitive information, and therefore I want them to be presented with plain HTTP.
Then, once they log in, they are granted access to more sensitive information, e.g. profile information, which should be delivered over HTTPS.
Despite being logged in, if the user goes back to a non-sensitive page such as the homepage, then I want it delivered over HTTP.
One common solution seems to be using the .htaccess file. The problem is that my site is relatively large, meaning that this would require me to write a rule for every page (several hundred) to determine whether it should be served up using HTTP or HTTPS.
And then there is the problem of handling user-generated content pages.
Please help,
Many thanks,
David
You've not mentioned anything about the architecture you are using. Assuming that the SSL termination is on the web server, you should set up separate virtual hosts with completely separate and non-overlapping document trees, and for preference use a path schema which does not overlap (to avoid little accidents).
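Sketched as Apache configuration, with placeholder host names, paths, and certificate files, that separation might look something like this:

# Plain-HTTP vhost for the non-sensitive pages
<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www/public
</VirtualHost>

# HTTPS vhost with its own, non-overlapping document tree for the sensitive pages
<VirtualHost *:443>
    ServerName secure.example.com
    DocumentRoot /var/www/secure
    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/example.crt
    SSLCertificateKeyFile /etc/ssl/private/example.key
</VirtualHost>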