I've read a German cookbook on TYPO3 and TypoScript: http://www.amazon.de/TYPO3-TypoScript-Kochbuch-TYPO3-Programmierung/dp/3446410465
In this book the author suggests, for security reasons, moving the typo3_src directory out of the web server's document root, but he doesn't say why we should do that.
Can someone explain the reason for this suggestion? What vulnerability would exist if we do not move it?
Many thanks
You should not make public what doesn't need to be.
Not making the directory publicly accessible reduces one possible attack vector.
It might be possible that a file in that directory can be made to do bad things when called directly.
It is important to do that if you want to secure your system as much as possible.
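For illustration, the layout usually recommended looks roughly like this (paths and version numbers are made up for the example, not taken from the book): the sources live next to the document root, and the site reaches them through symlinks.

    /var/www/typo3_src-4.2/              # the TYPO3 sources, outside the docroot
    /var/www/example.org/htdocs/         # the document root served by Apache
        typo3_src -> ../../typo3_src-4.2 # symlink to the sources
        typo3     -> typo3_src/typo3     # symlink into the sources
        index.php -> typo3_src/index.php # symlink into the sources
        typo3conf/                       # site-specific configuration
        fileadmin/                       # editor uploads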
The main reason is that you do not need to access typo3_src through the web server at all, so do not expose things publicly that don't need to be public. If there were a vulnerability exploitable via direct access to the sources, you would not be affected.
It is just a small step, though. IMHO it is not important and you can ignore it.
I am trying to find out how suitable WebDAV is for a product at the company I work for.
Our needs seem to exceed what WebDAV has to offer, and I'm trying to find out whether my theory is correct and, if so, how we could work around it.
I am using the WebDAV package that you can install through the "Add/Remove Windows Features" dialog.
The problem is that we want to be able to set permissions for each file, and since we can access and change authoring rules from code, this is more or less possible.
Authoring rules seem to apply to folders rather than individual files, but this could be worked around by giving each file its own folder (although that's a bit ugly).
To me this solution seems very inefficient, mainly because the authoring rules are all kept in one list, which means that for every file request the server has to loop through the entire list, and that list grows with every file added to the server.
My thought is that we could build some kind of "proxy" that checks permissions in a more efficient way and, if the user has permission to access the file, simply forwards the request to the WebDAV server.
This might also be inefficient, though, since we would need an application managing the connection between the user and the WebDAV server, but at least the overhead wouldn't grow with every file added.
I guess this leads to the questions:
Is WebDAV at all suitable for more complex permissions?
Is there some part of WebDAV that I have missed which solves this problem?
If so, would it be better to use that built-in mechanism, or should we go with an external solution?
If not WebDAV, is there a better solution? (We want all the nice file locking, version control and Office integration.)
Use an HttpModule to apply your authorization rules.
system.webServer/modules has an attribute runManagedModulesForWebDavRequests
(note: not the same as runAllManagedModulesForAllRequests)
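A rough sketch of the wiring in web.config (the module name and type are placeholders for whatever HttpModule you write; double-check the attribute against the IIS/WebDAV documentation):

    <system.webServer>
      <modules runAllManagedModulesForAllRequests="false"
               runManagedModulesForWebDavRequests="true">
        <!-- hypothetical HttpModule that enforces your per-file rules
             before the request reaches the WebDAV handler -->
        <add name="FilePermissionModule"
             type="MyCompany.Web.FilePermissionModule" />
      </modules>
    </system.webServer>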
Forget about IIS
Forget about pure WebDAV
Build or get Apache + mod_dav_svn
Use path-based authorization in SVN, which can enforce rules on a per-file basis if needed (see the sketch below)
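A minimal sketch of such an authz file (repository name, paths and users are invented for the example); individual files are just paths like any other:

    [groups]
    editors = alice, bob

    # per-file rule for one document
    [docs:/reports/q3-report.doc]
    alice = rw
    bob = r
    * =

    # default for the rest of the repository
    [docs:/]
    @editors = rw
    * = r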
I am doing homework and having a hard time finding the information I need; I am just looking for some guidance. I need to identify some administrative IT tasks that use scripting, where the script used causes some type of security issue. What would the issue be, and how would it be solved? A summary, keywords, links, anything would be great. Thanks
This is an example of something I could imagine an ignorant IT guy doing...
Write a PHP script that takes, as a parameter, the path where a database backup should be written. An adversary could then pass a path inside the HTML document root and download the entire database to their computer.
It might not be the best example, but it happens.
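A hypothetical PHP sketch of that pattern (script and parameter names invented), first the dangerous version, then one way to rein it in:

    <?php
    // DANGEROUS: the caller controls where the dump is written.
    // backup.php?path=/var/www/html/dump.sql drops the whole database
    // inside the document root, where anyone can download it.
    $path = $_GET['path'];
    system('mysqldump mydb > ' . $path);   // the path is also shell-injectable

    // Safer sketch: fixed directory outside the docroot, the caller only
    // chooses a file name, which is reduced to its base name.
    $dir  = '/var/backups/mydb/';
    $file = basename(isset($_GET['name']) ? $_GET['name'] : 'dump.sql');
    system('mysqldump mydb > ' . escapeshellarg($dir . $file));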
I noticed yesterday, while looking through my Apache error log, that someone tried to get access to the website by requesting a lot of URLs like:
mywebsite.com/phpmyadmin
mywebsite.com/dbadmin
mywebsite.com/mysqladmin
mywebsite.com/foo.php#some-javascript
...
This caused a lot of 404 errors. What's the best way to stop them doing so?
I thought about creating a fake phpmyadmin directory with some PHP code that bans their IP address from my website for about 12 to 24 hours when they access it.
Is there a better way to deal with this kind of visitor?
You should take a look at Fail2ban; it's pretty easy to set up with Apache.
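A rough sketch of what that could look like (filter name, regex and paths are illustrative and need adjusting to your Apache log format): a filter that matches the "File does not exist" lines in the error log, plus a jail that bans the offending IP for 12 hours.

    # /etc/fail2ban/filter.d/apache-probe.conf (illustrative)
    [Definition]
    failregex = \[client <HOST>\] File does not exist: .*/(phpmyadmin|dbadmin|mysqladmin)
    ignoreregex =

    # added to /etc/fail2ban/jail.local (illustrative)
    [apache-probe]
    enabled  = true
    filter   = apache-probe
    port     = http,https
    logpath  = /var/log/apache2/error.log
    maxretry = 3
    # 12 hours, in seconds
    bantime  = 43200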
You can't really prevent people from trying these sorts of attacks. The best you can do is log all these sorts of attempts like you're currently doing and maybe implement some sort of temporary blacklisting.
The security of your site shouldn't depend on people not trying to do these sorts of attacks, since you will never be able to fully prevent them.
If none of those paths exist, they won't be able to do anything. You just have to worry about them accessing parts that do exist and that you don't want them to access, or exploiting poorly written scripts with XSS holes in them.
You could make it harder on them by checking whether they're requesting a commonly attacked path (like phpMyAdmin's default path) and serving an alternate 404 page that has malicious JavaScript on it or something.
For the sake of simplicity I want to use admin links like this for a site:
http://sitename.com/somegibberish.php?othergibberish=...
So the actual URL and the parameter would be some completely random string which only I would know.
I know security through obscurity is generally a bad idea, but is it a realistic threat that someone could find out the URL? Don't take the employees of the hosting company or eavesdroppers on the line into account, because this is a toy site, not something important, and the hosting company doesn't give me secure FTP anyway, so I'm only concerned about normal visitors.
Is there a way someone could find this URL? It wouldn't be linked anywhere on the web, so Google won't know about it either. I hope, at least. :)
Any other holes in my scheme that I don't see?
Well, if you could guarantee only you would ever know it, it would work. Unfortunately, even ignoring malicious men in the middle, there are many ways it can leak out...
It will appear in the access logs of your provider, which might end up on Google (and are certainly read by the hosting admins)
It's in your browsing history. Plugins, extensions etc. have access to this, and often upload it elsewhere (e.g. StumbleUpon).
Any proxy servers along the way see it in the clear.
It could turn up as a Referer on another site.
"some completely random string ... which only I would know"
Sounds like a password to me. :-)
If you're going to have to remember a secret string, I would suggest doing usernames and passwords "properly", since HTTP servers have been written not to leak password information; the same is not true of URLs.
This may only be a toy site, but why not practice setting up security properly, when it won't matter if you get it wrong? Then, if you do have a site you need to secure in the future, you'll already have made all your mistakes.
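For a single admin script, "properly" can be as simple as HTTP Basic auth handled by Apache itself. A minimal sketch (file names are placeholders; the password file is created with htpasswd and kept outside the docroot):

    # .htaccess in the directory containing the admin script
    <Files "admin.php">
        AuthType Basic
        AuthName "Admin area"
        AuthUserFile /home/you/private/.htpasswd
        Require valid-user
    </Files>

Unlike a secret URL, the credential never shows up in access logs, browser history or Referer headers.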
"I know security through obscurity is generally a very bad idea."
Fixed it for you.
The danger here is that you might get into the habit of "oh, it worked for toy site such-and-such, so I won't bother implementing real security on this other site."
You would do a disservice to yourself (and any clients/users of your system) if you ignore Kerckhoffs's principle.
That being said, rolling your own security system is a bad idea. Smarter people have already created security libraries in the other major languages, and even smarter people have reviewed and tweaked those libraries. Use them.
It could appear on the web via a "Referer leak". Say your page links to my page at http://entrian.com/, and I publish my web server referer logs on the web. There'll be an entry saying that http://entrian.com/ was accessed from http://sitename.com/somegibberish.php?othergibberish=...
As long as the "login URL" is never posted anywhere, there shouldn't be any way for search engines to find it. And if it's just a small, personal toy site with no personal or really important content, I see this as a quick and decently working security solution compared to implementing a proper login/authorization system.
If the site gets a large number of users and lots of content, or simply becomes more than a "toy site", I'd advise you to do it the proper way.
I don't know what your toy admin page would display, but keep in mind that when it loads external images or links to somewhere else, the Referer header is going to publicize your URL.
If you change http to https, then at least the URL will not be visible to anyone sniffing the network.
(The caveat here is that a very obscure login system can still leave interesting traces: in network captures (MITM), somewhere on the site/target that enables privilege escalation, or on the machine you log in from if that machine is no longer secure. Some people prefer an admin login that looks no different from a standard user login to avoid exactly that.)
You could require that some action be taken a certain number of times, with some number of seconds of delay between the repetitions. Once this action-delay-action-delay-action pattern was noticed, the admin interface would become available for login. The URLs used in the interface could be randomized each time, with a single-use URL generated after that pattern. Further, you could expose this interface only through some tunnel, and only for a minute, on a port encoded by the delays.
If you could do all that in a manner that didn't stand out in the logs, it would be "clever", but you could also open up new holes by writing all that code, and it goes against "keep it simple, stupid".
As a follow-up to an earlier question that attracted a whole zero answers, I'm wondering about the possibility of allowing a web server (Apache) to write to its own document root (Linux), in order to dynamically create meta-redirect files.
Of course, this sounds incredibly dangerous, and I'm wary of going the whole hog and granting the web server user full write access to its own docroot. Is there a more appropriate way of achieving this?
Use mod_rewrite, mapping to a program that you write which performs the rewrites based on database records or some other mechanism.
Instructions here:
http://httpd.apache.org/docs/1.3/mod/mod_rewrite.html
Look for: "External Rewriting Program" on the page
Edit (from Vinko in the comments, 2.2 docs)
http://httpd.apache.org/docs/2.2/mod/mod_rewrite.html#rewritemap
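A rough sketch of that approach (map name, script path and URL pattern are invented for the example); RewriteMap must be defined in the server or virtual-host config, and the external program simply reads one key per line from stdin and answers with the target URL, or NULL, on stdout:

    # in the VirtualHost (RewriteMap is not allowed in .htaccess)
    RewriteEngine On
    RewriteMap redirects "prg:/usr/local/bin/redirect-lookup"
    # ask the program for the first path segment; fall back to a 404 page
    RewriteRule ^/go/(.+)$ ${redirects:$1|/notfound.html} [R=302,L]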
What's usually done is to allow writes only to subdirectories, ideally located on a partition mounted noexec.
That said, it seems to me that you should just create a set of RewriteMap directives to do your dynamic redirection; there's no need to write files to the document root to accomplish that (see the example below).
I answered similarly in the other question, just for completeness.
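For example, a plain-text map kept outside the docroot is enough when the redirect targets only change occasionally (paths are illustrative); mod_rewrite rereads the file when its modification time changes, so nothing under the document root has to be writable:

    # in the VirtualHost
    RewriteEngine On
    RewriteMap redirects "txt:/etc/apache2/redirects.map"
    RewriteRule ^/go/(.+)$ ${redirects:$1|/notfound.html} [R=302,L]

    # /etc/apache2/redirects.map: key, whitespace, target URL
    oldpage    http://example.org/new-page
    promo      http://example.org/current-offer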
This is incredibly dangerous if you are trying to achieve what your previous question was getting at.
If you are going to go this route, you'll want a ton of testing to prevent people from forcing web server instructions into .htaccess files.