Best security practices in Linux

What security best-practices would you strongly recommend in maintaining a Linux server? (i.e. bring up a firewall, disable unnecessary services, beware of suid executables, and so on.)
Also: is there a definitive reference on Selinux?
EDIT: Yes, I'm planning to put the machine on the Internet, with at least openvpn, ssh and apache (at the moment, without dynamic content), and to provide shell access to some people.

For SELinux I've found SELinux by Example to be really useful. It goes quite in-depth into keeping a server as secure as possible and is pretty well written for such a wide topic.
In general though:
Disable anything you don't need. The wider the attack surface, the more likely you'll have a breach (see the sketch after this list).
Use an intrusion detection system (IDS) layer in front of any meaningful servers.
Keep servers in a different security zone from your internal network.
Deploy updates as fast as possible.
Keep up to date on 0-day attacks for your remotely-accessible apps.
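A hedged sketch of the first point on a systemd-based distro (cups is just an example service name):

systemctl list-unit-files --type=service --state=enabled   # what starts at boot
sudo systemctl disable --now cups.service                   # stop and disable one you don't need
sudo ss -tulpn                                              # cross-check what is actually listening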

The short answer is, it depends. It depends on what you're using it for, which in turn influences how much effort you should put into securing the thing.
There are some handy hints in the answers to this question:
Securing a linux webserver for public access
If you're not throwing the box up onto the Internet, some of those answers won't be relevant. If you're throwing it up onto the Internet and hosting something even vaguely interesting on it, those answers are far too laissez-faire for you.

There's an NSA document "NSA Security Guide for RHEL5" available at:
http://www.nsa.gov/ia/_files/os/redhat/rhel5-guide-i731.pdf
which is pretty helpful and at least systematic.

Install only the software you really use
Limit the rights of the users, through sudo, ACLs, kernel capabilities and SELinux/AppArmor/PaX policies
Enforce the use of strong passwords (no dictionary words, no birth dates, etc.); a chage sketch follows at the end of this answer
Set up LXC containers, chroot or vserver jails for the "dangerous" applications
Install some IDS, e.g. Snort for the network traffic and OSSEC for the log analysis
Monitor the server
Encrypt your sensitive data (TrueCrypt is a gift of the gods)
Patch your kernel with grsecurity: it adds a really nice level of paranoia
That's more or less what I would do.
Edit: I added some ideas that I previously forgot to mention.
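For the strong-passwords item, password aging can be enforced with chage; a small sketch, where alice is a placeholder account:

sudo chage --maxdays 90 --warndays 14 alice   # force a password change every 90 days, warn 14 days ahead
sudo chage -l alice                           # review the resulting policy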

1.) Enable only necessary and relevant ports.
2.) Regularly scan the network data in and out.
3.) Regularly scan the IP addresses accessing the server, and check the logs/traces for any unusual activity associated with those addresses (see the sketch below).
4.) If critical and confidential data or code needs to be present on the server, consider encrypting it.
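A hedged illustration of points 1 and 3, assuming a Debian-style /var/log/auth.log:

sudo ss -tlnp   # verify which TCP ports are actually listening
# IP addresses with the most failed SSH logins, most frequent first
grep 'Failed password' /var/log/auth.log | awk '{print $(NF-3)}' | sort | uniq -c | sort -rn | head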
-AD

Goals:
The hardest part is always defining your security goals. Everything else is relatively easy at that point.
Probing/research:
Consider the same approach that attackers would take, i.e. network reconnaissance (nmap is pretty helpful for that).
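For example (203.0.113.10 is a placeholder documentation address):

nmap -sV 203.0.113.10   # scan common ports and detect service versions
nmap -p- 203.0.113.10   # slower full scan of all 65535 TCP ports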
More information:
SELinux by Example is a helpful book; finding a good centralized source for SELinux information is still hard.
I have a small list of links that I find useful from time to time: http://delicious.com/reverand_13/selinux
Helpful solution/tools:
As most people will say, less is more. For an out-of-the-box stripped-down box with SELinux I would suggest CLIP (http://oss.tresys.com/projects/clip). A friend of mine used it in an academic simulation in which the box was under direct attack from other participants. I recall the story concluded very favorably for said box.
You will want to become familiar with writing SELinux policy. Modular policy can also be helpful, as can tools such as SLIDE and seedit (which I have not tried).
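Before writing custom modules, it is often enough to flip one of the distribution's built-in booleans; a small sketch:

getsebool -a | grep httpd                        # list the booleans affecting httpd, for example
sudo setsebool -P httpd_can_network_connect on   # persistently allow httpd outbound connections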

Don't use a DNS server unless you have to. BIND has been a hotspot of security issues and exploits.

Hardening a Linux server is a vast topic, and it primarily depends on your needs.
In general, you need to consider the following groups of concern (I'll give example of best practices in each group):
Boot and Disk
Ex1: Disable booting from external devices.
Ex2: Set a password for the GRUB bootloader.
File system partitioning
Ex1: Separate user partitions (/home, /tmp, /var) from OS partitions.
Ex2: Set nosuid on user partitions, to prevent privilege escalation via the setuid bit.
Kernel
Ex1: Update security patches.
Networking
Ex1: Close unused ports.
Ex2: Disable IP forwarding.
Ex3: Disable sending packet redirects (a sysctl sketch follows the reading list below).
Users / Accounts
Ex1: Enforce strong passwords (hashed with SHA-512).
Ex2: Set up password aging and expiration.
Ex3: Restrict users from reusing old passwords.
Auditing and logging
Ex1: Configure auditd.
Ex2: Configure logging with journald.
Services
Ex1: Remove unused services like FTP, DNS, LDAP, SMB, DHCP, NFS, SNMP, etc.
Ex2: If you're using a web server like Apache or nginx, don't run it as root.
Ex3: Secure SSH.
Software
Make sure you remove unused packages.
Read more:
https://www.computerworld.com/article/3144985/linux-hardening-a-15-step-checklist-for-a-secure-linux-server.html
https://www.tecmint.com/linux-server-hardening-security-tips/
https://cisofy.com/checklist/linux-security/
https://www.ucd.ie/t4cms/UCD%20Linux%20Security%20Checklist.pdf
https://www.cyberciti.biz/faq/linux-kernel-etcsysctl-conf-security-hardening/
https://securecompliance.co/linux-server-hardening-checklist/
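The sysctl sketch promised above, for the networking examples (Ex2/Ex3); runtime changes first, then persistence:

sudo sysctl -w net.ipv4.ip_forward=0                 # disable IP forwarding
sudo sysctl -w net.ipv4.conf.all.send_redirects=0    # stop sending packet redirects
sudo sysctl -w net.ipv4.conf.all.accept_redirects=0  # stop accepting them
# persist across reboots (the file name is illustrative)
echo 'net.ipv4.ip_forward = 0' | sudo tee /etc/sysctl.d/99-hardening.conf
sudo sysctl --system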
Now specifically for SELinux:
First of all, make sure that SELinux is enabled on your machine (a quick check is sketched after the links below).
Continue with the following guides:
https://www.digitalocean.com/community/tutorials/an-introduction-to-selinux-on-centos-7-part-1-basic-concepts
https://linuxtechlab.com/beginners-guide-to-selinux/
https://www.computernetworkingnotes.com/rhce-study-guide/selinux-explained-with-examples-in-easy-language.html
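And the quick state check mentioned above:

getenforce                            # Disabled, Permissive, or Enforcing
sestatus                              # fuller status report
grep ^SELINUX= /etc/selinux/config    # the persistent setting on Red Hat-style systems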

Related

Client Server Security Architecture

I would like to get my head around how best to set up a client-server architecture where security is of utmost importance.
So far I have the following, which I hope someone can tell me is good enough, or whether there are other things I need to think about. Or whether I have the wrong end of the stick and need to rethink things.
Use SSL certificate on the server to ensure the traffic is secure.
Have a firewall set up between the server and client.
Have a separate sql db server.
Have a separate db for my security model data.
Store my passwords in the database using a secure hashing function such as PBKDF2.
Passwords generated using a salt which is stored in a different db to the passwords.
Use cloud based infrastructure such as AWS to ensure that the system is easily scalable.
I would really like to know if there are any other steps or layers I need to make this secure. Is storing everything in the cloud wise, or should I have some physical servers as well?
I have tried searching for some diagrams which could help me understand but I cannot find any which seem to be appropriate.
Thanks in advance
Hardening your architecture can be a challenging task, and sharding your services across multiple servers and over-engineering your architecture for a semblance of security could prove to be your largest security weakness.
However, a number of questions arise when you come to design your IT infrastructure which can't be answered in a single SO answer (I will try to find some good white papers and append them).
There are a few things I would advise, which are somewhat opinionated and backed up by my own thinking.
Your Questions
I would really like to know if there are any other steps or layers I need to make this secure. Is storing everything in the cloud wise, or should I have some physical servers as well?
Settle for the cloud. You do not need to store things on physical servers anymore unless you have current business processes running core business functions that are already working on local physical machines.
Running physical servers increases your system administration requirements for things such as HDD encryption and physical security requirements which can be misconfigured or completely ignored.
Use SSL certificate on the server to ensure the traffic is secure.
This is normally a no-brainer and I would go with a straight "Yes"; however, you must take the context into consideration. If you are running something such as a blog or a documentation-related website that never transfers sensitive information over HTTP, then why use HTTPS? HTTPS has its own overhead; it's minimal, but it's still there. That said, if in doubt, enable HTTPS.
Have a firewall set up between the server and client.
That is suggested, you may also want to opt for a service such as CloudFlare WAF, I haven't personally used it though.
Have a separate sql db server.
Yes, however not necessarily for security purposes. Database servers and Web Application servers have different hardware requirements and optimizing both simultaneously is not very feasible. Additionally, having them on separate boxes increases your scalability quite a bit which will be beneficial in the long run.
From a security perspective, it's mostly another illusion of, "If I have two boxes and the attacker compromises one [the Web Application server], he won't have access to the Database server".
At first sight this might seem to be the case, but it rarely is. Compromising the Web Application server is still almost a guaranteed game over. I will not go into much detail on this (unless you specifically ask me to), however it's still a good idea to keep both services separate from each other in their own boxes.
Have a separate db for my security model data.
I'm not sure I understood this; what security model are you referring to exactly? Care to share a diagram or two (maybe an ERD) so we can get a better understanding?
Store my passwords in the database using a secure hashing function such as PBKDF2.
Obvious yes; what I am about to say however is controversial and may be flagged by some people (it's a bit of a hot debate): I recommend using BCrypt instead of PBKDF2, due to BCrypt being slower to compute (and therefore slower to crack).
See - https://security.stackexchange.com/questions/4781/do-any-security-experts-recommend-bcrypt-for-password-storage
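If you just want to see what a bcrypt hash looks like, Apache's htpasswd can generate one; alice and the cost factor 12 are illustrative:

htpasswd -nbBC 12 alice 'correct horse battery staple'   # -B selects bcrypt, -C sets the cost, -n prints to stdout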
Passwords generated using a salt which is stored in a different db to the passwords.
If you use BCrypt I would not see why this is required (I may be wrong). I go into more detail on username and password hashing in the following StackOverflow answer, which I recommend you read - Back end password encryption vs hashing
Use cloud based infrastructure such as AWS to ensure that the system is easily scalable.
This purely depends on your goals, budget and requirements. I would personally go for AWS, however you should read some more on alternative platforms such as Google Cloud Platform before making your decision.
Last Remarks
All of the things you mentioned are important, and it's good that you are even considering them (most people just ignore such questions or go with the most popular answer), however there are a few additional things I want to point out:
Internal Services - Make sure that no unneeded services and processes are running on the server, especially in production. These services will normally be running old versions of their software (since you won't be administering them) that could be used as an entry point to compromise your server.
Code Securely - This may seem like another no-brainer yet it is still overlooked or not done properly. Investigate what frameworks you are using, how they handle security and whether they are actually secure. As a developer (and not a pen-tester) you should at least use an automated web application scanner (such as Acunetix) to run security tests after each build that is pushed to make sure you haven't introduced any obvious, critical vulnerabilities.
Limit Exposure - Goes somewhat hand-in-hand with my first point. Make sure that services are only exposed to other services that depend on them and nothing else. As a rule of thumb, keep everything entirely closed and open up gradually when strictly required (see the sketch at the end of this answer).
My last few points may come off as broad. The intention is to keep a certain philosophy when developing your software and infrastructure, rather than a permanent rule to tick on a checkbox.
There are probably a few things I have missed out. I will update the answer accordingly over time if need be. :-)
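For the Limit Exposure point, a default-deny firewall is the usual starting position; a sketch with ufw, where the open ports are examples:

sudo ufw default deny incoming    # everything inbound is closed by default
sudo ufw default allow outgoing
sudo ufw allow 22/tcp             # then open only what is strictly required
sudo ufw allow 443/tcp
sudo ufw enable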

What security risks are posed by using a local server to provide a browser-based gui for a program?

I am building a relatively simple program to gather and sort data input by the user. I would like to use a local server running through a web browser for two reasons:
HTML forms are a simple and effective means for gathering the input I'll need.
I want to be able to run the program off-line and without having to manage the security risks involved with accessing a remote server.
Edit: To clarify, I mean that the application should be accessible only from the local network and not from the Internet.
As I've been seeking out information on the issue, I've encountered one or two remarks suggesting that local servers have their own security risks, but I'm not clear on the nature or severity of those risks.
(In case it is relevant, I will be using SWI-Prolog for handling the data manipulation. I also plan on using the SWI-Prolog HTTP package for the server, but I am willing to reconsider this choice if it turns out to be a bad idea.)
I have two questions:
What security risks does one need to be aware of when using a local server for this purpose? (Note: In my case, the program will likely deal with some very sensitive information, so I don't have room for any laxity on this issue).
How does one go about mitigating these risks? (Or, where I should look to learn how to address this issue?)
I'm very grateful for any and all help!
There are security risks with any solution. You can use tools proven over many years and be hacked one day (this is from my own experience). And you can pay a lot for a security solution and never be hacked. So you always need to weigh the effort against the impact.
Basically, you need to protect four "doors" in your case:
1. Authorization (password interception or, for example, improper usage of cookies)
2. The HTTP protocol
3. Application input
4. Other ways to access your database (not using HTTP: for example, via an SSH port with a weak password, or by someone taking your computer or hard disk, etc. In some cases you need to properly encrypt the volume)
Items 1 and 4 are not specific to Prolog, and item 4 is the only one with anything specific to local servers.
Protecting the HTTP protocol level means not allowing requests which can take control of your SWI-Prolog server. For this purpose I recommend installing a reverse proxy like nginx, which can prevent attacks at this level, including some types of DoS. The browser will then contact nginx, and nginx will forward the request to your server only if it is a correct HTTP request. You can use any other server instead of nginx if it has similar features.
You need to install a proper SSL key and enable SSL (HTTPS) in your reverse proxy server, not in your SWI-Prolog server. HTTPS will encrypt everything on the wire, while the proxy talks to SWI-Prolog over plain HTTP.
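A minimal sketch of such an nginx front end, assuming SWI-Prolog listens on 127.0.0.1:8000 and certificates already exist at the paths shown:

sudo tee /etc/nginx/conf.d/prolog.conf > /dev/null <<'EOF'
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/ssl/certs/example.com.pem;
    ssl_certificate_key /etc/ssl/private/example.com.key;
    location / {
        proxy_pass http://127.0.0.1:8000;   # plain HTTP to the Prolog server
        proxy_set_header Host $host;
    }
}
EOF
sudo nginx -t && sudo systemctl reload nginx   # validate the config, then reload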
Think about authorization. There are methods which can be broken very easily. You need to study this topic; there is a lot of information available. I think it is the most important part.
For the application input problem, the famous example is SQL injection. Study the examples. All good web frameworks have "entry" procedures to sanitize all possible injections. Take some existing code and port it to Prolog.
Also, test all input fields with very long strings, different charsets, etc.
As you can see, security is not so easy, but you can choose an appropriate level of effort by weighing it against the impact of being hacked.
Also, think about the likely attacker. If somebody is particularly interested in getting your information, all the mentioned methods are good. But that is a rare case. Most often, hackers just scan the Internet and try to apply known hacks to every server they find. In that case your best friends are honeypots and Prolog itself, because the probability of a hacker being interested in SWI-Prolog internals is extremely low. (The hacker would need to study the server code well to find a door.)
So I think you will find adequate methods to protect all sensitive data.
But please, never use passwords that are combinations of dictionary words, and never use the same password for more than one purpose; that is the most important rule of security. For the same reason, you shouldn't give your users access to all information; that protection should be part of the application-level design.
The cases specific to a local server are a good firewall, a proper network setup, and encryption of the hard drive partition in case your local server can be physically stolen.
But if you mean the application should be accessible only from your local network and not from the Internet, you need much less effort: mainly, check your router/firewall setup and the fourth "door" in my list.
If you have a very limited number of known users, you can just ask them to use a VPN and not protect your server as heavily as in the case of "global" access.
I'd point out that my post was about a security issue with using port forwarding in Apache to access a Prolog server.
And I do know of a successful Prolog injection DoS attack on a website based on the SWI-Prolog HTTP framework. I don't believe the website's author wants the details made public, but the possibility is certainly real.
Obviously this attack vector is only possible if the site evaluates Turing-complete code (or code which it can't prove will terminate).
A simple security precaution is to check the Request object and reject requests from anything but localhost.
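A firewall rule can approximate the same restriction at the network level; a hedged sketch, assuming the Prolog server listens on port 8000:

sudo iptables -A INPUT -p tcp --dport 8000 ! -s 127.0.0.1 -j DROP   # drop anything not from localhost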
I'd point out that the pldoc server only responds by default on localhost.
- Anne Ogborn
I think the SWI-Prolog HTTP package is an excellent choice. Jan Wielemaker put much effort into making it secure and scalable.
I don't think you need to worry about SQL injection; indeed, it would be strange to rely on SQL when you have Prolog's power at your fingertips...
Of course, you need to properly manage the http access in your server...
Just this morning there was an interesting post on the SWI-Prolog mailing list about this topic: Anne Ogborn shares her experience...

Obfuscating server headers

I have a WSGI application running in PythonPaste. I've noticed that the default 'Server' header leaks a fair amount of information ("Server: PasteWSGIServer/0.5 Python/2.6").
My knee-jerk reaction is to change it... but I'm curious what others think.
Is there any utility in the server header, or benefit in removing it? Should I feel uncomfortable about giving away information on my infrastructure?
Thanks
Well "Security through Obscurity" is never a best practice; your equipment should be able to maintain integrity against an attacker that has extensive knowledge of your setup (barring passwords, console access, etc). Can't really stop a DDOS or something similar, but you shouldn't have to worry about people finding out you OS version, etc.
Still, no need to give away information for free. Fudging the headers may discourage some attackers, and, in cases like this where you're running an application that may have a known exploit crop up, there are significant benefits in not advertising that you're running it.
I say change it. Internally, you shouldn't see much benefit in leaving it alone, and externally you have a chance of seeing benefits if you change it.
Given the requests I find in my log files (like requests for IIS-specific bugs in Apache logs; I'm sure IIS server logs show Apache-specific requests as well), there are many bots out there that don't care about any such header at all. I guess almost everything is brute force nowadays.
(And actually, as for example I've set up quite a few instances of Tomcat sitting behind IIS, I guess I would not take the headers into account either, if I were to try to hack my way into some server.)
And above all: when using free software I kind of find it appropriate to give the makers some credits in statistics.
Masking your version number is a very important security measure. You do not want to give the attacker any information about what software you are running. This feature is available in mod_security, the open-source web application firewall for Apache:
http://www.modsecurity.org/
Add this line to your mod_security configuration file:
SecServerSignature "IIS/6.0"
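Either way, you can verify from outside what your server actually advertises (example.com is a placeholder):

curl -sI https://example.com | grep -i '^server:'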

Which subversion server type is best?

Subversion has multiple server types:
svnserve daemon
svnserve via xinetd
svn over ssh
http-based server
direct access via file:/// URLs
Which one of these is best for a small Linux system (one to two users)?
http:
very flexible and easy for administration
no network problems (Port 80)
3rd party authentication (eg. LDAP, Active Directory)
Unix + Win native support
webdav support for editing without svn client
slow, as each action triggers a new HTTP action; approx. 5-8 times slower than svn://
especially slow on history
no encryption of transferred data
https:
same as http
encryption of transferred data
svn:
fastest transfer
no password encryption in the standard setup: passwords are readable by the admin
firewall problems, as no standard port is used
the daemon service has to be started
no encryption of transferred data
svn+ssh
nearly as fast as svn://
no Windows OS comes with built-in SSH components, so 3rd-party tools are essential
no daemon service needed
encryption of passwords
encryption of transfer
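For a small one-or-two-user box, the svn:// and svn+ssh:// variants are quick to try; a sketch with illustrative paths and hostnames:

svnadmin create /srv/svn/myrepo                                  # create a repository
svnserve -d -r /srv/svn                                          # option 1: svn:// daemon
svn checkout svn://server.example.com/myrepo
svn checkout svn+ssh://user@server.example.com/srv/svn/myrepo    # option 2: no daemon, tunnel over ssh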
One of those options is definitely the worst: file access. Don't use it; use one of the server-based methods instead.
However, whether to use HTTP or svnserve is entirely a matter of preference. In fact, you can use both simultaneously; the write lock on the repo ensures that you won't corrupt anything if you use one and then the other.
My preference is simply to use Apache though: HTTP is more firewall- and Internet-friendly, it is also easier to hook into LDAP or other authentication mechanisms, and you get features like WebDAV too. The performance may be less than svnserve's, but it's not particularly noticeable (the transfer of data across the network makes up the bulk of any performance issues).
If you need security for file transfers, then svnserve+ssh, or apache over https is your choice.
Check out FLOSS Weekly Episode 28. Greg Stein is one of the inventors of the WebDAV protocol for SVN and discusses the tradeoffs. My takeaway is that svn:// is faster, but the HTTP/WebDAV implementation is just fine for almost all purposes.
I've always used xinetd and HTTP.
HTTP also has WebDAV going on, so I could browse the source online if I wanted (or you can require a VPN if you want encryption and a darknet-type thing).
It really depends on what restrictions (if any) you're under.
Is it only going to be on a LAN? Will you need access outside of your LAN?
If so, will you have a VPN?
Do you have a static IP address and are you allowed to forward ports?
If you aren't under any restrictions, I would suggest going with xinetd (if you have xinetd installed, the daemon if you don't), and then using the http-based server if you need remote access (you can also encrypt using HTTPS if you don't want plain-text usernames/passwords sent across).
Most other options are more effort with less benefit.
It's an SVN Repo -- you can always pack your bags and change things if you don't like it.
For ease of administration and security, we use svn+ssh for anything that requires commit access. We have set up HTTP based access for anonymous (read only) access to some open-source code, and it is much faster; the problem with svn+ssh is that it has to start up an ssh connection and a whole new svnserve for each user for every operation, which can get to be pretty slow after a while.
So, I'd recommend:
http for anonymous connections
svn+ssh if you need something secure and relatively quick and easy (assuming your users already have ssh set up and your users have access to the server)
https if you need something faster, secure, and you don't mind the extra overhead of administering it (or if you don't already have ssh set up or don't want to deal with Unix permissions)
I like SlikSVN; it runs as a service on Windows, takes two minutes to set up, and then you can forget about it.
It also comes with the client tools, but download TortoiseSVN as well.
If you are going to be using the server only on the local machine and understand unix permissions, using file:// urls will be fast, simple and secure. Likewise, if you understand unix permissions and ssh and need to access it remotely, ssh will work great. While I see somebody else mentions it as "worst", I'm pretty sure that's simply due to the need to understand unix permissions.
If you do not like or understand unix permissions, you need to go with svnserve or http. I would probably choose to run it in xinetd, personally.
Finally, if you have firewall or proxy issues, you may need to consider using http. It's much more complicated, and I don't think you're going to see the benefits, so I'd put it last on your list.
I would recommend the http option, since I'm currently using svn+ssh and it appears to be the red-headed stepchild of the available protocols: 3rd-party tool support is consistently worse for svn+ssh than it is for http.
I've been responsible for administering both svnserve and Apache+SVN for my development teams, and I prefer the http-based solution for its flexibility. I know next to little about system administration, I'm a software guy after all, and I liked being able to hand authentication and authorization over to Apache.
Both times the teams were about 10-15 people and both methods worked equally well. If you're planning for any expansion in the future, you might consider the http-based solution. It's just as easy to configure as svnserve, and if you're not going to expose the server to the Internet then you don't have to worry too much about securing and administering Apache either.
As a user of SVN, I prefer the http-based integration with Apache. I like being able to browse the repository with my web browser.
I am curious: why NOT FSFS? Important information: I am managing Windows systems.
I have done many projects with SVN and almost all of them were running from FSFS. The biggest repository was around 70 GB (extreme); the biggest number of repositories was around 700.
We never had any issues, even though we hosted it on Windows, NetApp and many other storage systems. Most of the time, when I asked why NOT use FSFS, the only problem was that people simply didn't trust it.
Advantages:
No backend required (or dedicated server)
Fast and reliable
Hook scripts are supported
NTFS permissions are used
Easy to understand, easy to support, easy to manage
Disadvantages:
Not so easy access from outside your network (VPN)
Permissions only on repository-level (Read, Read/Write)
Hook scripts are running under current user credentials (which is sometimes advantage, sometimes disadvantage)
Martin

Do you disable SELinux? [closed]

I want to know if people here typically disable SELinux on installations where it is on by default? If so can you explain why, what kind of system it was, etc?
I'd like to get as many opinions on this as possible.
I did, three or four years ago, when the predefined policies had many pitfalls, creating policies was too hard, and I had 'no time' to learn. This was on non-critical machines, of course.
Nowadays, with all the work done to ship distros with sensible policies, and the tools and tutorials that exist to help you create, fix and define policies, there's no excuse to disable it.
I worked for a company last year where we set it to enforcing with the 'targeted' policy enabled on CentOS 5.x systems. It did not interfere with any of the web application code our developers worked on, because Apache was covered by the default policy. It did cause some challenges for software installed from non-Red Hat (or CentOS) packages, but we managed to get around that with our configuration management tool, Puppet.
We used Puppet's template feature to generate our policies. See SELinux Enhancements for Puppet, heading "Future stuff", item "Policy Generation".
Here are some basic steps from the way we implemented this. Note that, other than the audit2allow, this was all automated.
Generate an SELinux template file for some service named ${name}.
sudo audit2allow -m "${name}" -i /var/log/audit/audit.log > ${name}.te
Create a script, /etc/selinux/local/${name}-setup.sh
#!/bin/sh
# ${name} was interpolated by the Puppet template; set it from $1 for a standalone run
name="$1"
SOURCE=/etc/selinux/local
BUILD=/etc/selinux/local
/usr/bin/checkmodule -M -m -o ${BUILD}/${name}.mod ${SOURCE}/${name}.te   # compile the .te file
/usr/bin/semodule_package -o ${BUILD}/${name}.pp -m ${BUILD}/${name}.mod  # package the module
/usr/sbin/semodule -i ${BUILD}/${name}.pp                                 # install it
/bin/rm ${BUILD}/${name}.mod ${BUILD}/${name}.pp                          # clean up
That said, most people are better off just disabling SELinux and hardening their systems through other commonly accepted, consensus-based best practices, such as the Center for Internet Security's benchmarks (note they recommend SELinux :-)).
My company makes a CMS/integration platform product. Many of our clients have legacy 3rd-party systems which still hold important operational data, and most want to go on using these systems because they just work. So we hook our system in to pull data out for publishing or reporting, etc., through diverse means. Having a ton of client-specific stuff running on each server makes configuring SELinux properly a hard, and consequently expensive, task.
Many clients initially want the best in security, but when they hear the cost estimate for our integration solution, the words 'SELinux disabled' tend to appear in the project plan pretty fast.
It's a shame, as defense in depth is a good idea. SELinux is never required for security, though, and this seems to be its downfall. When the client asks 'So can you make it secure without SELinux?', what are we supposed to answer? 'Umm... we're not sure'?
We can and we will, but when all hell breaks loose, and some new vulnerability is found, and the updates just aren't there in time, and your system is unlucky enough to be ground zero... SELinux just might save your ass.
But that's a tough sell.
I used to work for a major computer manufacturer in 3rd-level support for Red Hat Linux (as well as two other flavors) running on that company's servers. In the vast majority of cases, we had SELinux turned off. My feeling is that if you REALLY NEED SELinux, you KNOW that you need it and can state specifically why you need it. When you don't need it, or can't clearly articulate why, and it is enabled by default, you realize pretty quickly that it is a pain in the rear end. Go with your gut instinct.
SELinux requires user attention and manual permission granting whenever (oh well) you don't have permission for something. Many people find that it gets in the way and turn it off.
In recent versions, SELinux is more user-friendly, and there is even talk of removing the option to turn it off, or hiding it so that only knowledgeable users would know how to do it - the assumption being that such users are precisely those who understand the consequences.
With SELinux, there's a chicken-and-egg problem: for it to improve, users need to report problems to the developers. But users don't like to use it until it's improved, and it won't get improved if not many users are using it.
So it's left ON by default, in the hope that most people will use it long enough to report at least some problems before they turn it off.
In the end, it's your call: do you look for a short-term fix, or a long-term improvement of the software that will one day remove the need to ask such a question?
Sadly, I turn SELinux off most of the time too, because a good number of third-party applications, like Oracle, do not work very well with SELinux turned on and/or are not supported on platforms running SELinux.
Note that Red Hat's own Satellite product requires you to turn off SELinux too, which - again, sadly - says a lot about difficulties people are having running complex applications on SELinux enabled platforms.
Usage tips that may or may not be useful to you: SELinux can be turned on and off at runtime using setenforce (use getenforce to check the current status). restorecon can be helpful in situations where chcon is cumbersome, but YMMV.
I hear it's getting better, but I still disable it. For servers, it doesn't really make much sense unless you're an ISP or a large corporation wanting to implement fine-grained access controls across multiple local users.
Using it on a web server, I had a lot of problems with Apache permissions. I'd constantly have to run
chcon -R -h -t httpd_sys_content_t /var/www/html
to update the file contexts when new files were added. I'm sure this has been solved by now, but still, SELinux is a lot of pain for the limited reward you get from enabling it on a standard web-site deployment.
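For reference, the persistent way to do this on newer systems is to record the context mapping with semanage and let restorecon apply it, mirroring the chcon above:

sudo semanage fcontext -a -t httpd_sys_content_t '/var/www/html(/.*)?'
sudo restorecon -R -v /var/www/html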
I don't have a lot to contribute here, but since it's gone unanswered, I figured I would throw my two cents in.
Personally, I disable it on dev boxes and when I'm dealing with unimportant things. When I am dealing with anything production, or that requires better security, I leave it on and/or spend the time tweaking it to handle things how I need.
Whether or not you use it really comes down to your needs, but it was created for a reason, so consider using it rather than always shutting it off.
Yes. It's brain-dead. It can introduce breakage to standard daemons that's nearly impossible to diagnose. It can also close a door but leave a window open. That is, for some reason on fresh CentOS installs it was blocking smbd from starting via "/etc/init.d/smb". But it didn't block it from starting when invoked as "sh /etc/init.d/smb" or "smbd -D", or after moving the init.d/smb file to another directory, from which it would start smbd just fine.
So whatever it thought it was doing to secure our systems - by breaking them - it wasn't even doing consistently. Consulting some serious CentOS gurus, they didn't understand the inconsistencies of its behavior either. It's designed to make you feel secure. But it's a facade of security. It's a substitute for doing the real work of locking down your system's security.
I turn it off on all my cPanel boxes, since cPanel won't run with it on.
I do not disable it, but there are some problems.
Some applications don't work particularly well with it.
For example, I believe I enabled smartd to try and keep track of my RAID disks' S.M.A.R.T. status, but SELinux would get confused about the new /dev/sda* nodes created at boot (I think that's what the problem was).
You have to download the source of the rules to understand things. Just check /var/log/messages for the "avc denied" messages and you can decode what is being denied.
google "selinux faq" and you'll find a fedora selinux faq that will
tell you how to work through these problems.
I never disable SELinux; my contractor HAS to use it. And if/when some daemon (with an OSS license, by the way) doesn't have a security policy, it is mandatory to write a (good) one. This is not because I believe that SELinux is an invulnerable MAC on Linux (no need to give examples), but because it considerably augments the operating system's security anyway. For web apps, the better OSS security solution is mod_security, so I use both. Most of the problems with SELinux come from the scarce or incomprehensible documentation, although the situation has much improved in recent years.
A CentOS box I had as a development machine had it on, and I turned it off. It was stopping some things I was trying to do in testing the web app I was developing. The system was (of course) behind a firewall which completely blocked access from outside our LAN, and it had a lot of other security in place, so I felt reasonably secure even with SELinux off.
If it's on by default I'll leave it on until it breaks something; then off it goes.
Personally I see it as not providing any security and I'm not going to bother with it.
Under Red Hat, you can edit /etc/sysconfig/selinux and set SELINUX=disabled.
I think under all versions of Linux you can add selinux=0 noselinux to the boot line in lilo.conf or grub.conf.
