I have a web application.
Some of my clients are saving its pages to their hard drives instead of creating shortcuts.
When they want to use the system, they open the HTML from their local hard drive.
This causes a lot of bugs, because the JS files and the data they are using are not up to date, and much of the time they are not logged in.
We have had enough of customers calling about bugs in our system while they are using these locally saved HTML files.
I can't force them to re-download the files, so I want to block requests to my server whose referrer uses the 'file://' protocol.
The users will then get an HTML page with no data, CSS or images, and we hope this will make them go to the website...
And here are my questions:
How do you block requests that originate from a page served over the "file://" protocol?
Do you have a better solution?
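To be concrete, this is the kind of check I have in mind for the first question: a minimal sketch, assuming the data endpoints are PHP (file name and message are placeholders). Note that many browsers omit the Referer header entirely for pages opened from the local disk, so this check alone may not catch every case.

    <?php
    // check_referer.php - rough sketch of the block described above (names are placeholders).
    // Note: browsers often send no Referer at all for pages opened from the local disk,
    // so a "file://" referrer may simply never arrive.
    $referer = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';

    if (stripos($referer, 'file://') === 0) {
        http_response_code(403);
        exit('Please open the live website instead of a locally saved copy.');
    }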
I'm new to web development and I want to ask why some websites have the "/".
For example https://www.roblox.com/home, notice the "/home": what is that called?
I have tried searching on Google and I can't find the answer.
And some websites have something like "/login.php" or "/index.html"; can it also be HTML?
These are URLs (https://en.wikipedia.org/wiki/URL) and they identify the resource you are trying to reach. I would suggest reading more about how the web works to get a better general overview of things (e.g.: https://developer.mozilla.org/en-US/docs/Learn/Getting_started_with_the_web/How_the_Web_works).
How these resources are actually interpreted depends on the server-side implementation:
.php files are usually processed by a PHP web server
Other static files such as images (*.png, *.jpg, etc.), HTML files, SVGs, CSS, JS, etc. are usually located on the server by the web server (httpd, Tomcat, IIS, Node.js, and many many others) and the files are transmitted to the client 'as-is'
When using online tools to build websites, these complexities are usually abstracted away, and in the end a URL is just a resource identifier.
[domain]/[section]/[page(.html|.php)|resource(.js|.css)]
domain: the address of the website
section: a way to navigate inside the website itself
page: the user interface, which might be rendered server side or client side, and holds the controls shown to the user
resource: files that change how the content in the pages looks and behaves
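To make the breakdown concrete, here is the URL from the question split into its parts with PHP's parse_url() (any language has an equivalent):

    <?php
    // Splitting the example URL from the question into its components.
    $parts = parse_url('https://www.roblox.com/home');

    echo $parts['scheme'];  // "https"          - the protocol
    echo $parts['host'];    // "www.roblox.com" - the domain
    echo $parts['path'];    // "/home"          - the section/page being requested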
I have recently launched a website on GoDaddy hosting. I have kept some images and JavaScript files used in the website in separate folders. I want to prevent users from browsing those images and files by simply appending the folder and file name to the website URL. For example:
www.example.com/images/logo.png
If I understand correctly, you want to have an HTML file with images, where the images shouldn't be accessible on their own? If so, then it cannot be done. You can check for the correct HTTP Referer header, but it can easily be faked, and it also makes the files inaccessible for browsers that don't send a referrer or have sending it disabled for "privacy" reasons.
If you want the files to be accessible only by server-side scripts or ftp/scp, then you can try to use .htaccess (if GoDaddy runs on Apache) with the correct configuration: https://httpd.apache.org/docs/2.2/howto/access.html
Another way could be hiding those files and creating a one-shot token like this:
<img src=<?pseudocode GEN_TOKEN("file.jpg") ?> /> with another script serving those hidden files only for a generated token, then deleting the token from the DB. Nevertheless, this will not stop anybody from downloading or accessing these files if they want to...
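As a rough sketch of that one-shot token idea in PHP (the table name, file locations and SQLite database are made up, and there is no error handling):

    <?php
    // One-shot token sketch: gen_token() is called while rendering the page,
    // serve_once() lives in the script that the <img> tag points at.
    $db = new PDO('sqlite:' . __DIR__ . '/tokens.sqlite');
    $db->exec('CREATE TABLE IF NOT EXISTS image_tokens (token TEXT PRIMARY KEY, file TEXT)');

    function gen_token(PDO $db, string $file): string {
        $token = bin2hex(random_bytes(16));
        $db->prepare('INSERT INTO image_tokens (token, file) VALUES (?, ?)')
           ->execute([$token, $file]);
        return $token;
    }
    // In the template, the <img> src would be "serve.php?t=" . gen_token($db, 'file.jpg')

    function serve_once(PDO $db, string $token): void {
        $stmt = $db->prepare('SELECT file FROM image_tokens WHERE token = ?');
        $stmt->execute([$token]);
        $file = $stmt->fetchColumn();

        if ($file === false) {
            http_response_code(403);   // unknown or already used token
            exit;
        }

        $db->prepare('DELETE FROM image_tokens WHERE token = ?')->execute([$token]);
        header('Content-Type: image/jpeg');
        readfile(__DIR__ . '/hidden/' . basename($file));
    }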
But anyway, try to make your question a bit clearer...
If you are keeping the images/files in a folder that is open to the public, I guess you kept them in that folder on purpose: you want the public to access those images and files.
How does the public know the image file names? Disable directory listing for your website.
I am not aware which language you are using on the web server, but in ASP.NET you can write a module/middleware that intercepts incoming requests and, based on your logic (e.g. authentication and authorization), restricts access. All modern languages support this kind of functionality.
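Just to illustrate the same idea in PHP rather than ASP.NET (the session key, folder and parameter names are invented), a small gatekeeper script that the web server routes protected requests through could look roughly like this:

    <?php
    // get_image.php - only serves files from the protected folder to logged-in users.
    // Assumes the login code sets $_SESSION['user_id'] and that requests for the
    // protected folder are rewritten to this script; both are assumptions.
    session_start();

    if (empty($_SESSION['user_id'])) {
        http_response_code(403);
        exit('Not allowed');
    }

    $name = basename($_GET['file'] ?? '');      // drop any path components
    $path = __DIR__ . '/protected/' . $name;    // ideally a folder outside the web root

    if ($name === '' || !is_file($path)) {
        http_response_code(404);
        exit;
    }

    header('Content-Type: ' . (mime_content_type($path) ?: 'application/octet-stream'));
    readfile($path);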
I am not asking how. I am asking if. Is it possible to bypass a 403 error on the web?
Let me explain in a bit more detail. On a web server, IIS has been set up so that a directory for one of our projects is not accessible from the outside. So if you type the path to that directory into a web browser, the web browser will say that it is not accessible and will show a 403 error.
Now, here is the problem. Some files with sensitive information are placed there. A programmer on our team has made a big deal about this and about the fact that the files are placed on a server that is accessible to the outside world. On the other hand, I think it is not such a big deal, since if an outside user tried to go to that directory, their web browser would show the 403 error. But other people on the team say that a hacker can still somehow access it.
So that leads me here and to my question. Is it possible to bypass a 403 error on the web? I say no. Some network guys at work say maybe. I am not asking how to do it. I am only asking if it is really possible.
I gather from your information that there is a web server with a directory set up on the web like so:
http://www.example.com/directory
Now, if you navigate to this URL you get a 403 Forbidden error? However, if you know the name of a file you can go to http://www.example.com/directory/MyImportantDocument.docx and it is possible to view the document at this location?
Unless there is a runnable script on your server that does this, it is not possible to view the directory contents via the web. However, URLs are not considered secure as they are logged in browser history, proxy and server logs and can also be leaked by browsers' referer header. I assume the files are stored here so they can be accessed by a remote application?
File names can be easily brute forced by an attacker. Tools such as dirbuster and dirb do this automatically. Therefore, if the files do not need to be readable remotely, they should be moved to an internal server, not accessible from the internet or DMZ.
If access is needed you should implement some sort of authentication. At the very least activate basic auth on IIS. This will prompt a web browser user for a username and password in order to view files, or the files can be accessed programmatically by setting the appropriate Authorization header, which contains the username and password in an encoded form.
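For example, a client script could send that header like this (a minimal PHP sketch; the URL and credentials are placeholders):

    <?php
    // Fetching a protected file programmatically once Basic auth is enabled.
    // The Authorization header carries "username:password" Base64-encoded.
    $context = stream_context_create([
        'http' => [
            'header' => 'Authorization: Basic ' . base64_encode('username:password'),
        ],
    ]);

    $doc = file_get_contents('https://www.example.com/directory/MyImportantDocument.docx', false, $context);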
Better would be something with comprehensive session management, like an application pre-built for this purpose. E.g. a CMS which is kept up-to-date and securely configured.
Also, you should make sure the IIS website is configured to be accessed only via HTTPS, which will protect the credentials, URL path, headers and file contents against traffic snooping.
In some cases (e.g. a back-end or web server misconfiguration) it is possible to bypass a 403. To understand those methods, have a look at this script:
https://github.com/lobuhi/byp4xx
This script contains well-known methods collected from various bug bounty communities.
So if your back-end server is not vulnerable to this script, it is probably safe.
So basically it is NOT possible if the server software itself doesn't have any bugs. But if you have other parts of your website that are public and use a dynamic scripting language, that may raise your risk if someone is able to find a hole that allows something like "read a file from the filesystem".
In general I would recommend NOT storing any security-relevant files on a public server if they don't need to be there!
If you can avoid it, that is always the better way.
There is a simple exploit to bypass .htaccess restrictions... Try to Google "bypass error 403" and you will find the method. As an auditor I can confirm that it is not good practice (and if I see it I will always raise it as an issue) to store credentials (or any other sensitive information) in plain text on a web server.
I need to know how to prevent repetitive file downloads using .htaccess, or, if not via .htaccess, then by some other method.
A site I maintain had over 9,000 hits on a single PDF file, accounting for over 80% of the site's total bandwidth usage, and I believe that most of the hits were from the same IP address. I've banned the IP, but that's obviously not an effective solution because there are always proxies and besides, I can only do that after the fact.
So what I want to do is cap the number of times a single IP can attempt to download a designated file or file type over a given period of time. Can this be done with .htaccess? If not, what other options do I have?
EDIT: The suggestion that I redirect requests to a server-side script that would track requests by IP via database sounds like a good option. Can anyone recommend an existing script or library?
If you have some server-side code stream out your files, you have the opportunity to control what is being sent. I'm not aware of a .htaccess solution (which doesn't mean there isn't one).
My background is in Microsoft products, so I'd write a bit of ASP.NET code that would accept the filename as a parameter and stream back the result. Since it would be my code, I could easily access a database to track which IP I was serving, how often the file was sent, etc.
This could easily be done using any server-side tech: PHP, etc.
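A rough PHP sketch of that approach (file names, limits and the SQLite database are placeholders, not a recommendation of a specific library):

    <?php
    // download.php - streams the PDF, but caps how often a single IP may fetch it.
    $limit  = 5;        // downloads allowed per IP...
    $window = 3600;     // ...within this many seconds
    $ip     = $_SERVER['REMOTE_ADDR'];

    $db = new PDO('sqlite:' . __DIR__ . '/downloads.sqlite');
    $db->exec('CREATE TABLE IF NOT EXISTS hits (ip TEXT, ts INTEGER)');

    // Count this IP's recent requests.
    $stmt = $db->prepare('SELECT COUNT(*) FROM hits WHERE ip = ? AND ts > ?');
    $stmt->execute([$ip, time() - $window]);

    if ((int) $stmt->fetchColumn() >= $limit) {
        http_response_code(429);    // Too Many Requests
        exit('Download limit reached, please try again later.');
    }

    $db->prepare('INSERT INTO hits (ip, ts) VALUES (?, ?)')->execute([$ip, time()]);

    header('Content-Type: application/pdf');
    header('Content-Disposition: attachment; filename="report.pdf"');
    readfile(__DIR__ . '/private/report.pdf');   // keep the real file outside the public folder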
In Firefox or Chrome I'd like to prevent a private web page from making outgoing connections. That is, if the URL in a browser tab starts with http://myprivatewebpage/ or https://myprivatewebpage/, then that tab must be restricted so that it is allowed to load images, CSS, fonts, JavaScript, XmlHttpRequests, Java applets, flash animations and all other resources only from http://myprivatewebpage/ or https://myprivatewebpage/. For example, an <img src="http://www.google.com/images/logos/ps_logo.png"> (or the corresponding <script>new Image(...)</script>) must not be able to load that image, because it is not on myprivatewebpage. I need a 100% foolproof solution: not even a single resource outside myprivatewebpage may be accessible, not even with low probability. There must be no resource-loading restrictions on web pages other than myprivatewebpage, e.g. http://otherwebpage/ must still be able to load images from google.com.
Please note that I assume that the users of myprivatewebpage are willing to cooperate to keep the web page private unless it's too much work for them. For example, they would be happy to install a Chrome or Firefox extension once, and they wouldn't be offended if they see an error message stating that access is denied to myprivatewebpage until they install the extension in a supported browser.
The reason why I need this restriction is to keep myprivatewebpage really private, without exposing any information about its use to the webmasters of other web pages. If http://www.google.com/images/logos/ps_logo.png were allowed, then the use of myprivatewebpage would be logged in the access.log for Google's ps_logo.png, so Google's webmasters would have some information about how myprivatewebpage is used, and I don't want that. (In this question I'm not interested in whether the restriction is reasonable; I'm only interested in the technical solutions and their strengths and weaknesses.)
My ideas for how to implement the restriction:
Don't impose any restrictions, just rely on the same origin policy. (This doesn't provide the necessary protection, the same origin policy lets all images pass through.)
Change the web application on the server so it generates HTML, JavaScript, Java applets, flash animations etc. which never attempt to load anything outside myprivatewebpage. (This is almost impossible to make foolproof everywhere in a complicated web application, especially with user-generated content.)
Over-sanitize the web page using an HTML output filter on the server, i.e. remove all <script>, <embed> and <object> tags, restrict the target of <img src=, <link rel=, <form action= etc. and also restrict the links in the CSS files. (This can prevent all unwanted resources if I can remember all HTML tags properly, e.g. I mustn't forget about <video>. But this is too restrictive: it removes all dynamic web page functionality like JavaScript, Java applets and flash animations; without these most web applications are useless.)
Sanitize the web page, i.e. add an HTML output filter to the web server which removes all offending URLs from the generated HTML. (This is not foolproof, because there can be tricky JavaScript which generates a disallowed URL. It also doesn't protect against URLs loaded by Java applets and flash animations. A rough sketch of such a filter is included after this list.)
Install an HTTP proxy which blocks requests based on the URL and the HTTP Referer, and force all browser traffic (including myprivatewebpage, otherwebpage, google.com) through that HTTP proxy. (This would slow down traffic to sites other than myprivatewebpage, and maybe it doesn't protect properly if XmlHttpRequests, Java applets or flash animations can forge the HTTP Referer.)
Find or write a Firefox or Chrome extension which intercepts all outgoing connections, and blocks them based on the URL of the tab and the target URL of the connection. I've found https://developer.mozilla.org/en/Setting_HTTP_request_headers and thinkahead.js in https://addons.mozilla.org/en-US/firefox/addon/thinkahead/ and http://thinkahead.mozdev.org/ . Am I correct that it's possible to write a Firefox extension using that? Is there such a Firefox extension already?
Some links I've found for the Chrome extension:
http://www.chromium.org/developers/design-documents/extensions/notifications-of-web-request-and-navigation
https://groups.google.com/a/chromium.org/group/chromium-extensions/browse_thread/thread/90645ce11e1b3d86?pli=1
http://code.google.com/chrome/extensions/trunk/experimental.webRequest.html
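For reference, here is a rough sketch of the sanitizing output filter idea mentioned above, in PHP with DOMDocument (the attribute list and host name are simplifications); as noted, it is not foolproof against script-generated URLs, applets or flash:

    <?php
    // Output filter sketch: drop src/href/action attributes that point to a host
    // other than myprivatewebpage. Script-generated URLs, applets and flash are
    // not covered, which is exactly the weakness described above.
    function strip_external_urls(string $html, string $allowedHost): string {
        $doc = new DOMDocument();
        @$doc->loadHTML($html);                      // tolerate imperfect markup

        foreach ($doc->getElementsByTagName('*') as $el) {
            foreach (['src', 'href', 'action'] as $attr) {
                if (!$el->hasAttribute($attr)) {
                    continue;
                }
                $host = parse_url($el->getAttribute($attr), PHP_URL_HOST);
                if ($host !== null && $host !== $allowedHost) {
                    $el->removeAttribute($attr);     // external reference: strip it
                }
            }
        }
        return $doc->saveHTML();
    }

    // Usage: echo strip_external_urls($generatedHtml, 'myprivatewebpage');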
As far as I can see, only the Firefox or Chrome extension is feasible from the list above. Do you have any other suggestions? Do you have some pointers how to write or where to find such an extension?
I've found https://developer.mozilla.org/en/Setting_HTTP_request_headers and thinkahead.js in https://addons.mozilla.org/en-US/firefox/addon/thinkahead/ and http://thinkahead.mozdev.org/ . Am I correct that it's possible to write a Firefox extension using that? Is there such a Firefox extension already?
I am the author of the latter extension, though I have yet to update it to support newer versions of Firefox. My initial guess is that, yes, it will do what you want:
User visits your web page without the plugin. The web page contains a ThinkAhead block that would send a simple version header to the server, but this is ignored as the plugin is not installed.
Since the server does not see that header, it redirects the client to a page to install the plugin.
User installs plugin.
User visits the web page with the plugin. The page sends the version header to the server, so the server allows access.
The ThinkAhead block matches all pages that are not myprivatewebpage, and does something like setting the HTTP status to 403 Forbidden. Thus:
When the user visits any webpage that is in myprivatewebpage, there is normal behaviour.
When the user visits any webpage outside of myprivatewebpage, access is denied.
If you want to catch bad requests earlier, instead of modifying incoming headers, you could modify outgoing headers, perhaps screwing up "If-Match" or "Accept" so that the request is never honoured.
This solution is extremely lightweight, but might not be strong enough for your concerns. This depends on what you want to protect: given the above, the client would not be able to see blocked content, but external "blocked" hosts might still notice that a request has been sent, and might be able to gather information from the request URL.