Why do web sites run under a less privileged user account (IUSR_ComputerName)? - security

Usually we configure IIS web sites that allow anonymous authentication to run under the IUSR_ComputerName account, which has very limited privileges. For example, we may decide it cannot access the file system. How does that make our web site any more secure? The user cannot run code on it anyway - only our website code runs, and we make sure it does not cause any harm.
Edit: I understand why it is good to be on the safe side (e.g. an IIS exploit). My question is whether there is any direct reason. For example, I would never give a guest account full privileges on a SQL server, as that would immediately give him full control over my server. This does not seem to be the case with IIS.

we make sure it does not cause any harm
You can never be sure it won't cause any harm. One day the site might be exploited, and the less privileged user is probably what would save your data. No offense, but no one writes perfect code, so no code is free of vulnerabilities.

If you have any network service, you should assume that some random person on the internet has a command prompt on your machine running as that service's owner.
Now ask: what damage could that user do?

Typically, you may need to run your web site in a way that is a little less hardened from a security standpoint than, say, a domain controller or an Exchange server. For example, you may need to permit FTP access. Obviously, Internet web sites need to be accessed from the Internet, so you cannot simply block all access with your firewall.
Because of the higher vulnerability, it is prudent to run your service with an account that has limited permissions. In the case where a malicious user does succeed in copying their own programs to be run on your server, those programs will have severe limitations as to what they can do.

An attacker can still get code to run on the server - for example, delete files in a directory if the permissions are not set properly.

Related

Is it (in)secure to give the ApplicationPoolIdentity write access to a folder inside your web application folder?

Recently we implemented a feature that dynamically generates a LESS file in our App_Themes folder. This is done on application start.
This requires us to give the ApplicationPoolIdentity write access to the App_Themes folder.
Our system administrator, however, does not want us to give the ApplicationPoolIdentity that write access, for security reasons.
Is it insecure to do that? What are the security risks?
If there are any remote code execution vulnerabilities in your application, or within ASP.NET or IIS itself, anyone compromising your system through your application or web server will likely get a command shell and be logged in as, e.g., DefaultAppPool on your server.
If there is write access to a folder, then the attacker could write to this folder themselves.
For example, they could host their own content on your site at example.com/App_Themes/index.html, or they could upload an exploit that allowed privilege escalation to administrator. In the latter case they would probably need execute permissions too, unless they could in some way make the web server execute it, for example by requesting the URL of the dropped exploit.
Of course, the vulnerability has to be there in the first place for this to happen. Preventing write access as well can be viewed as "defence in depth"; however, if your application genuinely needs it, then it may be an acceptable risk. An alternative is to find another way to implement your desired functionality.

In node.js how would I follow the Principle of Least Privilege?

Imagine a web application that performs two main functions:
Serves data from a file that requires higher privileges to read from
Serves data from a file that requires lower privileges to read from
My Assumption: To allow both files to be read from, I would need to run node using an account that could read both files.
If node is running under an account that can access both files, then a user who should not be able to read any file that requires higher privileges could potentially read those files due to a security flaw in the web application's code. This would lead to disastrous consequences in my imaginary web application world.
Ideally the node process could run using a minimal set of rights and then temporarily escalate those rights before accessing a system resource.
Questions: Can node temporarily escalate privileges? Or is there a better way?
If not, I'm considering running two different servers (one with higher privileges and one with lower) and then putting them both behind a proxy server that authenticates/authorizes before forwarding the request.
Thanks.
This is a tricky case indeed. In the end, file permissions are a sort of metadata. Instead of having the web application access the files directly, my recommendation would be to put a layer between it and the files - a database table, or anything that can map the type of user to the files it may read - and have that layer stream the file to the user if access is allowed.
That way the web application can't circumvent the file system permissions as easily. You could even set it up so that the files are not readable by the web server's account at all and are only readable by the in-between layer; all the web application can do is make a call and ask whether a user with the given permissions may access the file. This also lets you share the layer between multiple web applications, should you choose, and because the in-between layer does something very specific, you can enforce a very restricted set of calls (a sketch of such a layer follows below).
Now, if a lower-privileged user somehow gains access to a higher-privileged user's account, they'll be able to see the file, and there's no real way around that short of locking the user's account. However, that's part of the development process.
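For what it's worth, here is a minimal sketch of that in-between layer in Node/TypeScript. The in-memory map stands in for the database table, the x-role header stands in for real authentication, and the paths and file names are purely illustrative assumptions.

    // file-broker.ts - minimal sketch of the "in-between layer" described above.
    // The in-memory map stands in for the database table that maps a type of
    // user to the files it may read; all names and paths here are illustrative.
    import * as http from "node:http";
    import * as fs from "node:fs";
    import * as path from "node:path";

    const FILE_ROOT = "/srv/protected"; // readable only by this broker's account
    const allowedFiles: Record<string, Set<string>> = {
      admin: new Set(["secret-report.txt", "public-info.txt"]),
      guest: new Set(["public-info.txt"]),
    };

    http.createServer((req, res) => {
      // Stand-in for real authentication: the caller's role arrives in a header.
      const role = String(req.headers["x-role"] ?? "guest");
      const name = path.basename(decodeURIComponent(req.url ?? "")); // strip any path tricks

      if (!allowedFiles[role]?.has(name)) {
        res.writeHead(403).end("forbidden\n");
        return;
      }
      // Stream the file to the caller instead of giving it filesystem access.
      fs.createReadStream(path.join(FILE_ROOT, name))
        .on("error", () => res.writeHead(404).end("not found\n"))
        .pipe(res);
    }).listen(8081);

The web application never touches the files itself; it only asks this layer, which exposes exactly one, very restricted, operation.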
No, I doubt node.js, out of the box, could guarantee least privilege.
It is conceivable that, should node.js be run as root, it could adjust its operating-system privileges via calls such as process.setuid()/process.setgid() to permit or limit access to certain resources, but then again running as root would defeat the original goal.
One possible solution might be running three instances of node: a proxy (with no special permissions) that directs calls to one or the other of two servers run at different privilege levels. (Heh, as you already mention. I really need to read to the end of posts before leaping into the fray!)
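As a rough illustration of that proxy arrangement, here is a minimal Node/TypeScript sketch: an unprivileged front end decides which of the two backends (each started under its own account) may serve a request, and forwards it. The ports and the API-key check are placeholder assumptions, not a real authentication scheme.

    // auth-proxy.ts - sketch of the proxy-in-front-of-two-backends idea above.
    import * as http from "node:http";

    const HIGH_BACKEND = { host: "127.0.0.1", port: 9001 }; // runs as the higher-privileged account
    const LOW_BACKEND  = { host: "127.0.0.1", port: 9002 }; // runs as the lower-privileged account

    http.createServer((clientReq, clientRes) => {
      // Decide which backend may serve this request *before* any file is touched.
      const key = process.env.ADMIN_KEY;
      const isPrivileged = !!key && clientReq.headers["x-api-key"] === key;
      const target = isPrivileged ? HIGH_BACKEND : LOW_BACKEND;

      const upstream = http.request(
        { ...target, path: clientReq.url, method: clientReq.method, headers: clientReq.headers },
        (upstreamRes) => {
          clientRes.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
          upstreamRes.pipe(clientRes);
        }
      );
      upstream.on("error", () => clientRes.writeHead(502).end("backend unavailable\n"));
      clientReq.pipe(upstream);
    }).listen(8080);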

when should I use "apache:apache" or "nobody:nobody" on my web server files?

Background: I remember at my old place of employment how the web server admin would always make me change the httpd-accessible file upload directories so that they were owned by apache:apache or nobody:nobody.
He said this was for security reasons.
Question: Can you tell me what specifically were the security implications of this? Also is there a way to get apache to run as nobody:nobody, and are there security implications for that as well?
TIA
There is a valid reason. Suppose the httpd (Apache) process ran as root, in the root group, and a vulnerability was found in the code itself; for example, a malicious user requests a URL longer than expected and httpd seg-faults. That exploit has now exposed root access, which means control over the system, and the malicious user could ultimately seize control and create havoc on the box.
That is why the httpd daemon runs as nobody:nobody or apache:apache. It is effectively a preventative measure to ensure that an exploit or vulnerability will not expose root access. Imagine the security implications if that were to happen.
Fortunately, these days, depending on the Linux distribution, the BSD variants (OpenBSD/FreeBSD/NetBSD) or the commercial Unix variants, the httpd daemon runs under a user and group with the least privileges. Furthermore, it is safe to say that a lot of the Apache code has been well tested and is stable. About 49% of servers across all domains run Apache, and Microsoft's IIS runs on 29% of domains, according to the Netcraft survey site here.
More generally, it shows that running a program with the least privileges is deemed 'safe' and mitigates the impact of possible exploits and vulnerabilities.
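The same least-privilege pattern can be shown in a few lines of Node/TypeScript for comparison: bind the privileged port while still root, then immediately drop to an unprivileged account. The account name is an illustrative assumption (your system may use www-data or nogroup instead of nobody), and the setuid/setgid calls are POSIX-only.

    // drop-privileges.ts - sketch of bind-then-drop, assuming the process is
    // started as root so it can bind port 80. POSIX only.
    import * as http from "node:http";

    const server = http.createServer((req, res) => {
      res.end("hello from an unprivileged worker\n");
    });

    server.listen(80, () => {
      if (process.getuid && process.getuid() === 0) {
        // Drop the group first, then the user; once the user is dropped the
        // process no longer has the right to change its group.
        process.setgid?.("nobody"); // illustrative account name
        process.setuid?.("nobody");
      }
      console.log("listening on :80 as uid", process.getuid?.());
    });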
This is the wrong site for this question. Ordinarily you would not want the source code to be owned by the same user as Apache. Should a security flaw in Apache or your server-side scripts arise, an attacker could maliciously modify your web site's files without privilege escalation.
The one exception would be file upload directories, as you said. In this case, you want Apache to make changes to that directory.

initiating a program that has more user privileges as a restricted user

I have users with limited access granted to one of my hard drives. Those users are not given permission to delete files on that drive, but I have an application that should allow those users to delete files on the above-mentioned drive.
1) How can I do this?
2) When a low-privileged user logs in to my application, can I write a hidden thread/program that grants high-privileged user authority (only for this application), as in impersonating another user, so that he will be able to delete files on the restricted hard disk via this application?
Thanks
Depending on your OS you can do various things.
In a UNIX-like environment you can write a program and use setuid or setgid so that it runs with the privileges of another (more privileged) user.
Alternatively, in Windows or UNIX you can run a service as the more privileged user and let it take requests from other users/processes to carry out the operation on their behalf. You'd have to look into ways to communicate with the service.
Hope that helps.
Probably the easiest way is to write a service which exposes a named pipe, and create a client application which talks to the pipe and issues instructions to your service. The service runs under LocalSystem or a nominated higher-privilege account and carries out instructions from the app running under the lower-privilege user account. You'd need some sort of handshake to establish bona fides when you connect to the pipe, but it's not hard to do. You could use WCF instead of pipes, but I don't think you get much advantage from that in this scenario.
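As a sketch of that broker pattern (using Node's net module purely for illustration rather than a .NET/WCF service), a privileged process listens on a named pipe and low-privileged clients ask it to delete files on their behalf. The pipe name, the target directory and the one-path-per-message protocol are assumptions; a real service would also authenticate the caller as part of the handshake mentioned above.

    // delete-broker.ts - sketch of a higher-privileged broker reachable over a
    // named pipe (Windows) or a Unix socket. Names and paths are illustrative.
    import * as net from "node:net";
    import * as fs from "node:fs";
    import * as path from "node:path";

    const PIPE = process.platform === "win32"
      ? "\\\\.\\pipe\\file-delete-broker"
      : "/tmp/file-delete-broker.sock";
    const ALLOWED_DIR = path.resolve("/srv/restricted-drive"); // the only place deletions are allowed

    // This process runs under the higher-privileged account; ordinary users run
    // the client and simply ask it to delete a file on their behalf.
    net.createServer((client) => {
      client.on("data", (chunk) => {
        const requested = chunk.toString().trim();
        const resolved = path.resolve(ALLOWED_DIR, requested);
        // Refuse anything that escapes the permitted directory.
        if (!resolved.startsWith(ALLOWED_DIR + path.sep)) {
          client.write("DENIED\n");
          return;
        }
        fs.unlink(resolved, (err) => client.write(err ? `ERROR ${err.code}\n` : "OK\n"));
      });
    }).listen(PIPE);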

how do you manage servers' root passwords

In our administration team everyone has root passwords for all client servers.
But what should we do if one of the team members is no longer working with us?
He still has our passwords, and we have to change them all every time someone leaves us.
Now we are using ssh keys instead of passwords, but this is not helpful if we have to use something other than ssh.
The systems I run have a sudo-only policy, i.e., the root password is * (disabled), and people have to use sudo to get root access. You can then edit your sudoers file to grant/revoke people's access. It's very granular, with lots of configurability, but it has sensible defaults, so it won't take you long to set up.
I would normally suggest the following:
Use a blank root password.
Disable telnet
Set ssh for no-root-login (or root login by public key only)
Disable su to root by adding this to the top of /etc/suauth: 'root:ALL:DENY'
Enable secure tty for root login on console only (tty1-tty8)
Use sudo for normal root access
Now then, with this setting, all users must use sudo for remote admin, but when the system is seriously messed up, there is no hunting for the root password to unlock the console.
EDIT: other system administration tools that provide their own logins will also need adjusting.
While it is a good idea to use a sudo-only policy like Chris suggested, depending on the size of your system an LDAP approach may also be helpful. We complement that with a file that contains all the root passwords, but those passwords are really long and unmemorable. While that may be considered a security flaw, it allows us to still log in if the LDAP server is down.
Aside from the sudo policy, which is probably better, there is no reason why each admin couldn't have their own account with UID 0, but named differently, with a different password and even different home directory. Just remove their account when they're gone.
We just made it really easy to change the root passwords on every machine we administer, so when people left we just ran the script. I know it's not very savvy, but it worked. Before my time, everyone in the company had root access on every server; luckily we moved away from that.
Generally speaking, if someone leaves our team, we don't bother changing root passwords. Either they left the company (and have no way to access the machines anymore as their VPN has been revoked, as has their badge access to the building, and their wireless access to the network), or they're in another department inside the company and have the professionalism to not screw with our environment.
Is it a security hole? Maybe. But, really, if they wanted to screw with our environment, they would have done so prior to moving on.
So far, anyone leaving the team who wants to gain access to our machines again has always asked permission, even though they could get on without the permission. I don't see any reason to impede our ability to get work done, i.e., no reason to believe anyone else moving onwards and upwards would do differently.
Reasonably strong root password. Different on each box. No remote root logins, and no passwords for logins, only keys.
If you have ssh access via your certificates, can't you log in via ssh and change the root password via passwd or sudo passwd when you need to do something else that requires the password?
We use the sudo only policy where I work, but root passwords are still kept. The root passwords are only available to a select few employees. We have a program called Password Manager Pro that stores all of our passwords, and can provide password audits as well. This allows us to go back and see what passwords have been accessed by which users. Thus, we're able to only change the passwords that actually need to be changed.
SSH keys have no real alternative.
To manage many authorized_keys files on many servers you have to implement your own solution, if you do not want the same file on every server - either with a tool of your own, or with a configuration management solution like Puppet, Ansible, or something similar.
Otherwise a for loop in bash or some clush action will suffice.
Anything besides SSH logins:
For services you run that are login-based, use some sort of authentication against a central backend. Keep in mind that no one will do any work if this backend is unavailable!
Run the service clustered.
Don't do hacks with a super-duper backdoor service account just so you always have access in case something breaks (like admin access being broken due to a misconfiguration). No matter how much you monitor access or configuration changes affecting this account, this is 'just bad' (TM).
Instead of trying to get such a backdoor right, you might as well just cluster the application, or at the very least have a spare system periodically mirroring the setup at hand, which can then be activated easily through routing changes in the network if the main box dies. If this sounds too complicated, either your business is small enough that you can live with half a day to two days of downtime, or you really hate clusters due to lacking knowledge and are saving on the wrong things.
In general: if you use software that cannot integrate with some sort of Active Directory or LDAP, you have to bite the bullet and change its passwords manually.
A dedicated password management database that can only be accessed directly by a very select few, and is read-only to everyone else, is also very nice. Don't bother with Excel files; these lack proper rights management. Working with version control on .csv files doesn't really cut it either after a certain threshold.
