How to safely allow web-server to write to its own docroot? - security

As a follow-up to an earlier question that attracted exactly zero answers, I'm wondering about the possibilities of allowing a web server (Apache, on Linux) to write to its own document root, in order to dynamically create meta-redirect files.
Of course, this sounds incredibly dangerous, and I'm wary of going the whole hog and granting the web-server user full write-access to its own docroot. Is there a more appropriate way of achieving this?

Use mod_rewrite, mapping to a program that you write which does the rewrites based on database records or some other mechanism.
Instructions here:
http://httpd.apache.org/docs/1.3/mod/mod_rewrite.html
Look for: "External Rewriting Program" on the page
Edit (from Vinko in the comments, 2.2 docs)
http://httpd.apache.org/docs/2.2/mod/mod_rewrite.html#rewritemap
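To make that concrete, here is a minimal sketch of what an external rewriting program setup could look like. The map name, script path, and URL pattern are assumptions for illustration, and the hard-coded lookup table stands in for whatever database query you would really use.

```apache
# Hypothetical vhost snippet: look up /go/<key> through an external map program.
# RewriteMap must be declared in server or virtual-host context, not in .htaccess.
RewriteEngine On
RewriteMap legacyredirects prg:/usr/local/bin/redirectmap.php
RewriteCond ${legacyredirects:$1|NONE} !=NONE
RewriteRule ^/go/(.+)$ ${legacyredirects:$1} [R=302,L]
```

```php
#!/usr/bin/env php
<?php
// Hypothetical external rewriting program for the "prg:" map above.
// Apache writes one lookup key per line to stdin and expects exactly one
// line of output per key: the redirect target, or the literal string NULL.
$stdin = fopen('php://stdin', 'r');
while (($key = fgets($stdin)) !== false) {
    $key = rtrim($key, "\r\n");

    // In real life this would be a database lookup; hard-coded for the sketch.
    $targets = array(
        'old-page'    => '/new/location.html',
        'spring-sale' => '/campaigns/spring.html',
    );

    echo (isset($targets[$key]) ? $targets[$key] : 'NULL') . "\n";
    flush(); // the program must not buffer output, or Apache will block waiting for it
}
```

Because the targets live behind the map program (or the database behind it), nothing ever needs to be written into the docroot.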

What's usually done is to allow writes only to specific subdirectories, ideally located on a partition mounted noexec.
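As a concrete sketch of that layout (device, paths, and filesystem type are assumptions), the single writable directory could sit on its own partition mounted noexec:

```
# Hypothetical /etc/fstab entry: a dedicated, non-executable partition holding
# the only directory the web server user may write to
/dev/sdb1  /var/www/example.com/writable  ext4  defaults,noexec,nosuid,nodev  0  2
```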
That said, it seems to me that you should just create a set of RewriteMap directives to do your dynamic redirection; there's no need to write files into the document root to accomplish that.
I answered similarly in the other question, just for completeness.

This is incredibly dangerous if you are trying to achieve what your previous question was getting at.
If you are going to go this route, you'll want a ton of testing to prevent people from forcing web server instructions into .htaccess files.

Related

htaccess-owner is www-data, is this secure?

I write my .htaccess file via PHP, and now I have read that this is a security issue, because the .htaccess file's owner is www-data when the file is created via PHP.
I also create a config.php which contains the MySQL credentials; this file is then owned by www-data as well.
My question is: is this really a security issue? How could it be exploited?
If .htaccess is writable by PHP, as it deliberately is in your case, that means anyone able to leverage any security problem in your PHP code may also be able to write to the .htaccess file, which might give them even more leverage, up to arbitrary code execution.
For instance, some vulnerable file-uploading PHP code is tricked into writing an .htaccess file which configures Apache to execute .jpg files as PHP; and then another uploaded JPG file which actually contains PHP code is saved into the webroot folder where it can now be executed as PHP code. Et voilà, arbitrary PHP code execution.
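To illustrate, the planted .htaccess can be as short as a single handler mapping; this is a sketch, and the exact handler name depends on how PHP is hooked into Apache on the target system:

```apache
# Hypothetical attacker-written .htaccess: ask Apache to run uploaded .jpg files as PHP
AddHandler application/x-httpd-php .jpg
```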
Another nice scenario would be a rewrite rule turning your server into a reverse proxy forwarding requests to some other server and lending a hand in some DDoS attack against a 3rd party.
The point is that your web server wields a lot of power with its configuration, and .htaccess files allow you to change that configuration, and allowing PHP to change .htaccess files moves that power and the responsibility to use that power correctly into PHP. Which means you now need to be 100% certain that there are no exploitable bugs in your PHP code which could lead to somebody abusing that power.
It's always better to segregate powers and give individual pieces as little power as possible. There are probably much better approaches for whatever you're trying to do there that do not require dynamic reconfiguration of your web server by programmatically generating .htaccess files.

Webdav authentication performance / alternatives

I am trying to find out how suitable WebDAV is for a product at the company I am working for.
Our needs seem to exceed what WebDAV has to offer, and I'm trying to find out if my theory is correct and, if so, how we could work around it.
I am using the WebDAV package which you can install through the "Add/Remove Windows Features" dialog.
The problem is that we want to be able to set permissions for each file, and since we can access and change authoring rules from code, this is more or less possible.
Authoring rules seem to apply to folders and not to individual files, but this could be worked around by giving each file its own folder (although it's a bit ugly).
To me this solution seems very inefficient, mainly because the authoring rules are all placed in a single list, which means that for every file request the server has to loop through the entire list, and that list gets larger for every file added to the server.
My thought is that we could build some kind of "proxy" that checks permissions in a more efficient way and, if the user has permission to access the file, just forwards the request to the WebDAV server.
This might also be inefficient, though, since we would need an application managing the connection between the user and the WebDAV server, but at least the overhead wouldn't grow with the number of files on the server.
I guess this leads to the following questions:
Is WebDAV at all suitable for more complex permissions?
Is there some part of WebDAV that I have missed which solves this problem?
If so, would it be better to go with the internal solution, or should we build an external one?
If not WebDAV, is there a better solution? (We want all the nice file-locking, version-control and Office-integration stuff.)
Use an HttpModule to apply your authorization rules.
system.webServer/modules has an attribute runManagedModulesForWebDavRequests
(note: this is not the same as runAllManagedModulesForAllRequests).
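A sketch of how that could look in web.config; the module name and type are made up for illustration:

```xml
<!-- Hypothetical web.config fragment: let one managed authorization module run
     for WebDAV requests, without switching on all managed modules globally -->
<system.webServer>
  <modules runManagedModulesForWebDavRequests="true">
    <add name="PerFileAuthzModule" type="MyCompany.Web.PerFileAuthzModule" />
  </modules>
</system.webServer>
```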
Forget about IIS.
Forget about pure WebDAV.
Build or get Apache + mod_dav_svn.
Use path-based authorization in SVN, which can enforce rules on a per-file basis if needed.
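For reference, those path-based rules live in the authz file that mod_authz_svn points at (AuthzSVNAccessFile), and they can target a single file. The repository name, paths, and users below are assumptions:

```
# Hypothetical AuthzSVNAccessFile: everyone may read /reports, but one file is restricted
[docs:/reports]
* = r
alice = rw

[docs:/reports/salaries.xlsx]
* =
bob = rw
```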

Security in TYPO3 by accessing File System

I've read a book (in German), the TYPO3 and TypoScript cookbook: http://www.amazon.de/TYPO3-TypoScript-Kochbuch-TYPO3-Programmierung/dp/3446410465
In this book the author suggests, with regard to security, that the typo3_src directory should be moved out of the web server's root directory, but he doesn't say why we should do that.
Can someone explain the reason for this suggestion to me? What vulnerability would exist if we did not move it?
Many thanks
You should not make public what doesn't need to be.
Not making the directory publicly accessible reduces one possible attack vector.
It might be possible that a file in that directory could be made to do bad things when called directly.
It is important to do that if you want to secure your system as much as possible.
The main reason is that you do not need to access typo3_src via the web server, so do not put things in public that do not need to be there. If there were a vulnerability exploitable via direct access to the source, you would not be affected by it.
It is just a small step, though. IMHO it is not important and you can ignore it.

Best way to stop "xss"-hackers and similar

Yesterday I noticed, by looking into my Apache error log, that someone had tried to get access to the website by requesting a lot of URLs like:
mywebsite.com/phpmyadmin
mywebsite.com/dbadmin
mywebsite.com/mysqladmin
mywebsite.com/foo.php#some-javascript
...
This caused a lot of 404 errors. What's the best way to stop them doing so?
I thought about creating a fake phpmyadmin directory with some PHP code that bans their IP address from my website for about 12 to 24 hours when this directory is accessed.
Is there a better way to deal with these guys?
You should take a look at Fail2ban; it's pretty easy to set up for Apache.
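For example, a jail.local fragment along these lines uses the stock apache-noscript filter to watch the Apache error log and ban IPs that keep requesting scripts that don't exist. The log path and the 12-hour ban time (matching the idea in the question) are assumptions:

```ini
# Hypothetical /etc/fail2ban/jail.local fragment
[apache-noscript]
enabled  = true
port     = http,https
filter   = apache-noscript
logpath  = /var/log/apache2/error.log
maxretry = 6
# 43200 seconds = 12 hours
bantime  = 43200
```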
You can't really prevent people from trying these sorts of attacks. The best you can do is log all these sorts of attempts like you're currently doing and maybe implement some sort of temporary blacklisting.
The security of your site shouldn't depend on people not trying to do these sorts of attacks, since you will never be able to fully prevent them.
If none of those exist, they're not going to be able to do anything. You just have to worry about them being able to access parts that do exist and that you don't want them to access, or exploiting your poorly written scripts with XSS holes in them.
You could make it harder on them by checking whether they're trying to access a commonly probed path (like phpMyAdmin's default path) and serving an alternate 404 page that has malicious JavaScript on it, or something along those lines.

How to find all the URLs/pages on mysite.com

I have a website that I now support and need to list all live pages/URLs.
Is there a crawler I can point at my homepage that will list all the pages/URLs it finds?
Then I could delete any that don't make their way into this listing, as they would be orphan pages/URLs that were never cleaned up.
I am using DNN and want to kill unneeded pages.
Since you're using a database-driven CMS, you should be able to do this either via the DNN admin interface or by looking directly in the database. Far more reliable than a crawler.
Back in the old days I used wget for this exact purpose, using its recursive retrieval functionality. It might not be the most efficient way, but it was definitely effective. YMMV, of course, since some sites will return a lot more content than others.
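Something along these lines still works; mysite.com stands in for the real host, and the log output may need a little post-processing to extract the bare URLs:

```sh
# Hypothetical crawl: follow links recursively, keep nothing on disk,
# and log every URL visited to crawl.log for later review
wget --spider --recursive --no-verbose --output-file=crawl.log http://mysite.com/
```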
