I want a certain directory, /home/secret, to be accessible only by iframes from my website. How can I do this without allowing normal browsers and such direct access to it? I have tried setting its permissions to 750, but that doesn't work either.
I'd suggest adding a GET variable to the end of the URL and only allowing access that way, or checking the Referer header and, if it isn't present, simply killing the script.
If you would like to limit it to a particular user, I'd suggest building a login system, passing the cookie to the file, and having the file verify that the cookie belongs to that user before allowing access.
The only problem with trying to do this is that all of these values can be spoofed with cURL or some other browser-mimicking software.
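For instance, a minimal .htaccess sketch of the Referer check, assuming mod_rewrite is available (example.com stands in for your own domain):

RewriteEngine On
# Forbid the request if the Referer header is absent or not from our site.
# As noted above, the Referer header is trivially spoofed.
RewriteCond %{HTTP_REFERER} !^https?://(www\.)?example\.com/ [NC]
RewriteRule ^ - [F]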
Hope this helped!
I have a problem with an .htaccess file.
To prevent downloading or printing of PDF documents, I am using PDF.js to display their contents.
Now I want to disable direct HTTP access to those files.
Inside the pdf.js folder, I put a directory called "doc" that contains all the items, plus this .htaccess:
Order allow,deny
Deny from all
<Files ~ "viewer\.html$">
Allow from all
</Files>
Where viewer.html is the page that contains the document reader.
So, when I try to access the following from my browser:
localhost:8080/test/pdfjs/web/viewer.html?file=doc/mondia.pdf
I get:
Unexpected server response (403) while retrieving PDF "../test/pdfjs/web/mondia.pdf"
Where am I going wrong?
If PDF.js is running inside the user's web browser, then the user needs to be able to download the PDF document. Apache can't (reliably) tell the difference between "PDF.js on the user's computer" and "Google Chrome on the user's computer" - both are HTTP requests from the user's computer for the resource.
If you really wanted to, you might be able to detect some header set by PDF.js when it requests the PDF, and refuse requests without that header. That would stop casual users directly accessing the file, but anyone who presses F12 in their browser could see the PDF being downloaded by PDF.js and save the contents from there.
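As an illustration only: PDF.js's getDocument call accepts an httpHeaders option, so you could have the viewer send a custom header and check for it in .htaccess. The header name below is made up, and anyone watching the network tab can replay it:

RewriteEngine On
# Refuse requests for PDFs that lack the viewer's custom header.
# X-Pdf-Viewer is a hypothetical header you would configure PDF.js to send.
RewriteCond %{HTTP:X-Pdf-Viewer} ^$
RewriteRule \.pdf$ - [NC,F]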
Even if you served it in some form other than PDF, the user could copy and paste the resulting HTML, or take a screenshot of how it renders to the screen.
Stopping a user doing something with their own computer is fundamentally hard; if they can read something on their screen, you have sent it to them in some form. To really block them, you need a trusted "DRM" encryption system that renders directly to screen without ever making decrypted data accessible to the user. In the vast majority of cases, that would be completely overkill, and just annoy your users (for instance, blind users probably won't be able to access the content, as their screen reader software will not be trusted).
You can try this plugin:
https://it.wordpress.org/plugins/editionguard-for-woocommerce-ebook-sales-with-drm/#description
or something similar.
DRM is the best solution for a WordPress site.
Or try setting this header in pdf.js:
How to set range header from client with pdf.js?
Please edit the .htaccess file present in Vtiger_root_location/storage
and add the 'pdf' option as follows:
In order to verify that I own a website, Google asked me to do the following:
Download this HTML verification file. [googleXXX.html]
Upload the file to http://www.example.com/
Confirm successful upload by visiting http://www.example.com/googleXXX.html in your browser.
Click Verify below.
To stay verified, don't remove the HTML file, even after verification succeeds.
The file provided by Google contains a single line:
google-site-verification: googleXXX.html
How does this work? How is that supposed to tell them that I actually own that domain?
It doesn't tell them that you own it, it tells them that you have write permission to it. That's considered enough.
It demonstrates that you have sufficient control of the web server at the domain to be able to add pages to the website. The assumption is that this level of control would only be available to the owner of the domain, or a delegated administrator.
I am not asking how. I am asking if. Is it possible to bypass a 403 error on the web?
Let me explain in a bit more detail. On a web server, IIS has been set up with a directory for a project we are working on, such that it is not accessible from the outside. So if you type the path to that directory into a web browser, the browser will say it is not accessible and throw a 403 error.
Now, here is the problem. Some files with sensitive information are placed there. A programmer on our team has made a big deal about this and about the fact that the files are placed on a server that is accessible to the outside world. On the other hand, I think this is not such a big deal, since if an outside user tried to go to that directory, their web browser would throw the 403 error. But other people on the team say that a hacker can still somehow access it.
So that leads me here and to my question. Is it possible to bypass a 403 error on the web? I say no. Some network guys at work say maybe. I am not asking how to do it. I am only asking if it is really possible.
I gather from your information that there is a web server with a directory set up on the web like so:
http://www.example.com/directory
Now, if you navigate to this URL you get a 403 Forbidden error? However, if you know the name of a file you can go to http://www.example.com/directory/MyImportantDocument.docx and it is possible to view the document at this location?
Unless there is a runnable script on your server that does this, it is not possible to view the directory contents via the web. However, URLs are not considered secure as they are logged in browser history, proxy and server logs and can also be leaked by browsers' referer header. I assume the files are stored here so they can be accessed by a remote application?
File names can be easily brute forced by an attacker. Tools such as dirbuster and dirb do this automatically. Therefore, if the files do not need to be readable remotely, they should be moved to an internal server, not accessible from the internet or DMZ.
If access is needed you should implement some sort of authentication. At the very least, activate Basic auth on IIS. This will prompt a web browser user for a username and password in order to view files, or the files can be accessed programmatically by setting the appropriate Authorization header, which contains a Base64-encoded username and password.
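Your question concerns IIS, which has its own configuration for this, but as an Apache-style sketch of the same idea (the paths here are placeholders):

# Require a valid username/password for everything in this directory.
AuthType Basic
AuthName "Restricted files"
AuthUserFile /etc/apache2/.htpasswd
Require valid-user

A client would then send an Authorization: Basic header whose value is the Base64 encoding of username:password.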
Better would be something with comprehensive session management, like an application pre-built for this purpose. E.g. a CMS which is kept up-to-date and securely configured.
Also you should make sure that the IIS website is only configured to be accessed via HTTPS which will protect against traffic snooping of the credentials, URL path, headers and file contents.
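Again, IIS has its own binding and redirect settings, but an Apache-style .htaccess sketch for forcing HTTPS looks like this:

RewriteEngine On
# Redirect any plain-HTTP request to its HTTPS equivalent.
RewriteCond %{HTTPS} off
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]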
In some cases (e.g. back-end or web server misconfiguration) it is possible to bypass a 403. To understand those methods, have a look at this script:
https://github.com/lobuhi/byp4xx
This script contains well-known methods collected from various bug bounty communities.
So if your back-end server is not vulnerable to anything in this script, it is probably safe.
So basically it is NOT possible if the server software itself doesn't have any bugs. But if other parts of your website are public and use a dynamic scripting language, that may raise your risk, since someone might be able to find a hole along the lines of "read an arbitrary file from the filesystem".
In general I would recommend NOT storing any security-relevant files on a public server unless they need to be there!
If you can avoid it, that is always the better way.
There is a simple exploit to bypass .htaccess restrictions... Try googling "bypass error 403" and you will find the method. As an auditor I can confirm that it is not good practice (and if I see it I will always raise it as an issue) to store credentials (or any other sensitive information) in plain text on a web server.
Hey. I need to prevent direct access to http://www.site.com/wp-content/uploads/folder/something.pdf through the browser.
However the Download Monitor plugin I am using, which allows logged in users to download the file, needs to be able to work.
Trying
Order Allow,Deny
Deny from all
<Files "http://www.site.com/wp-content/plugins/download-monitor/download.php">
Allow from all
</Files>
but the download links now do not work... even though (I think) they are links produced by the script, e.g.
http://www.site.com/wp-content/plugins/download-monitor/download.php?id=something.pdf
Enter that in the address bar and you correctly get a WordPress message, 'You must be logged in to download this file.'
However, if someone knows the URL where the file was uploaded
http://www.site.com/wp-content/uploads/folder/something.pdf
they can still access it directly.
I don't know how (guesswork?) they would find the direct URL anyway, but the client wants it stopped!
Thanks for any help.
You cannot just set Deny in .htaccess, because WordPress and a direct file request run as the same server user - www-data, apache, http, or similar.
You can, for example, set the folder's permissions to 700, which will allow access for the script but not for a direct file call.
And please accept the answers to your recent questions.
I have a site that is password-protected using a .htaccess and .htpasswd file. I'd like for users to bypass the login prompt ONLY if they come from a certain domain. Can this be done by embedding the .htaccess credentials as parameters in the link somehow?
I do manage the domain I'd like to whitelist, so how can I pass GET parameters in the link that the .htaccess file will process?
You should rethink this, as it is trivial to spoof the referring domain (or any other information coming from the client).
Your users can easily choose to save their username/password if they wish to.
That would be highly insecure; the HTTP Referer can be easily manipulated and your login bypassed.
If you own the other sites, you can add an HTTP header or a GET variable. If you don't, start thinking about another solution for what you want to do.
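For the header variant, one Apache 2.2-style sketch is to let a shared token satisfy the auth requirement instead of the password prompt. The header name and token below are invented placeholders, and anything a client sends can be copied, so treat this as convenience rather than security:

# Mark requests that carry the shared token header (both names are made up).
SetEnvIf X-Site-Token "^s3cret-token$" partner_request
AuthType Basic
AuthName "Restricted"
AuthUserFile /path/to/.htpasswd
Require valid-user
Order Deny,Allow
Deny from all
# Let either a valid login OR the token header through.
Allow from env=partner_request
Satisfy Any

Note that an ordinary link cannot set a custom header, so the linking site would have to fetch the resource via script; a GET-variable variant would need mod_rewrite and shares the same weakness.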