Do you disable SELinux? [closed] - linux

I want to know whether people here typically disable SELinux on installations where it is on by default. If so, can you explain why, what kind of system it was, etc.?
I'd like to get as many opinions on this as possible.

I did, three or four years ago, when the shipped policies had many pitfalls, creating policies was too hard, and I had 'no time' to learn. This was on non-critical machines, of course.
Nowadays, with all the work done to ship distros with sensible policies, and the tools and tutorials that help you create, fix and refine policies, there's no excuse to disable it.

I worked for a company last year where we were setting it enforcing with the 'targeted' policy enabled on CentOS 5.x systems. It did not interfere with any of the web application code our developers worked on because Apache was in the default policy. It did cause some challenges for software installed from non-Red Hat (or CentOS) packages, but we managed to get around that with the configuration management tool, Puppet.
We used Puppet's template feature to generate our policies. See SELinux Enhancements for Puppet, heading "Future stuff", item "Policy Generation".
Here are some basic steps from the way we implemented this. Note that other than the audit2allow step, this was all automated.
Generate an SELinux template file for some service named ${name}.
sudo audit2allow -m "${name}" -i /var/log/audit/audit.log > ${name}.te
Create a script, /etc/selinux/local/${name}-setup.sh
#!/bin/bash
# Build and load the SELinux policy module for ${name}
# (${name} is filled in by the Puppet template that generates this script).
SOURCE=/etc/selinux/local
BUILD=/etc/selinux/local
# compile the type enforcement file, package it, and install the module
/usr/bin/checkmodule -M -m -o ${BUILD}/${name}.mod ${SOURCE}/${name}.te
/usr/bin/semodule_package -o ${BUILD}/${name}.pp -m ${BUILD}/${name}.mod
/usr/sbin/semodule -i ${BUILD}/${name}.pp
# clean up the intermediate files
/bin/rm ${BUILD}/${name}.mod ${BUILD}/${name}.pp
That said, most people are better off just disabling SELinux and hardening their systems through other commonly accepted, consensus-based best practices such as the Center for Internet Security's Benchmarks (note that they recommend SELinux :-)).

My company makes a CMS/integration platform product. Many of our clients have legacy third-party systems which still hold important operational data, and most want to keep using these systems because they just work. So we hook our system in to pull data out for publishing, reporting, etc. through diverse means. Having a ton of client-specific stuff running on each server makes configuring SELinux properly a hard and, consequently, expensive task.
Many clients initially want the best in security, but when they hear the cost estimate for our integration solution, the words 'SELinux disabled' tend to appear in the project plan pretty fast.
It's a shame, as defense in depth is a good idea. SELinux is never required for security, though, and this seems to be its downfall. When the client asks 'So can you make it secure without SELinux?', what are we supposed to answer? 'Umm... we're not sure'?
We can and we will, but when hell freezes over, and some new vulnerability is found, and the updates just aren't there in time, and your system is unlucky enough to be ground zero... SELinux just might save your ass.
But that's a tough sell.

I used to work for a major computer manufacturer in 3rd-level support for Red Hat Linux (as well as two other flavors) running on that company's servers. In the vast majority of cases, we had SELinux turned off. My feeling is that if you REALLY NEED SELinux, you KNOW that you need it and can state specifically why. When you don't need it, or can't clearly articulate why, and it is enabled by default, you realize pretty quickly that it is a pain in the rear end. Go with your gut instinct.

SELinux requires user attention and manual permission granting whenever (oh, well) you don't have permission for something. Many people find that it gets in the way and turn it off.
In recent versions, SELinux is more user friendly, and there is even talk of removing the ability to turn it off, or hiding it so only knowledgeable users would know how to do it - the assumption being that such users are precisely the ones who understand the consequences.
With SELinux, there's a chicken-and-egg problem: in order for it to be usable all the time, you as a user need to report problems to the developers so they can improve it. But users don't like to use it until it's improved, and it won't get improved if not many users are using it.
So, it's left ON by default in the hope that most people will use it long enough to report at least some problems before they turn it off.
In the end, it's your call: do you want a short-term fix, or a long-term improvement of the software that will one day remove the need to ask such a question?

Sadly, I turn SELinux off most of the time too, because a good number of third-party applications, like Oracle, do not work very well with SELinux turned on and/or are not supported on platforms running SELinux.
Note that Red Hat's own Satellite product requires you to turn off SELinux too, which - again, sadly - says a lot about difficulties people are having running complex applications on SELinux enabled platforms.
Usage tips that may or may not be useful to you: SELinux can be turned on and off at runtime by using setenforce (use getenforce to check current status). restorecon can be helpful in situations where chcon is cumbersome, but ymmv.
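To make those usage tips concrete, here's a minimal sketch of the commands involved (the docroot path is just an example; note that setenforce 0 actually drops you to permissive mode rather than fully off):
# check the current mode, then toggle between enforcing (1) and permissive (0) at runtime
getenforce
sudo setenforce 0
# re-apply the default file contexts recursively instead of hand-crafting chcon invocations
sudo restorecon -Rv /var/www/html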

I hear it's getting better, but I still disable it. For servers, it doesn't really make much sense unless you're an ISP or a large corporation wanting to implement fine-grained access controls across multiple local users.
Using it on a web server, I had a lot of problems with apache permissions. I'd constantly have to run,
chcon -R -h -t httpd_sys_content_t /var/www/html
to fix the SELinux file contexts whenever new files were added. I'm sure this has been solved by now, but still, SELinux is a lot of pain for the limited reward you get from enabling it on a standard web site deployment.
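For what it's worth, the way this is usually handled (assuming the semanage tool from policycoreutils is available) is to record a persistent labeling rule once and then relabel, instead of re-running chcon after every change; a rough sketch:
# register a permanent file-context rule for the docroot, then relabel the tree to match it
sudo semanage fcontext -a -t httpd_sys_content_t "/var/www/html(/.*)?"
sudo restorecon -Rv /var/www/html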

I don't have a lot to contribute here, but since it's gone unanswered, I figured I would throw in my two cents.
Personally, I disable it on dev boxes and when I'm dealing with unimportant things. When I am dealing with anything production, or that requires better security, I leave it on and/or spend the time tweaking it to handle things how I need.
Whether or not you use it really comes down to your needs, but it was created for a reason, so consider using it rather than always shutting it off.

Yes. It's brain dead. It can introduce breakage to standard daemons that's nearly impossible to diagnose. It also can close a door, but leave a window open. That is, for some reason on fresh CentOS installs it was blocking smbd from starting from "/etc/init.d/smb". But it didn't block it from starting when invoked as "sh /etc/init.d/smb" or "smbd -D" or from moving the init.d/smb file to another directory, from which it would start smbd just fine.
So whatever it thought it was doing to secure our systems - by breaking them - it wasn't even doing consistently. Consulting some serious CentOS gurus, they don't understand the inconsistencies of its behavior either. It's designed to make you feel secure. But it's a facade of security. It's a substitute for doing the real work of locking your system security down.

I turn it off on all my cPanel boxes, since cPanel won't run with it on.

I do not disable it, but there are some problems.
Some applications don't work particularly well with it. For example, I believe I enabled smartd to try to keep track of my RAID disks' S.M.A.R.T. status, but SELinux would get confused about the new /dev/sda* nodes created at boot (I think that's what the problem was).
You have to download the source to the rules to understand things. Just check /var/log/messages for the "avc denied" messages and you can decode what is being denied.
Google "selinux faq" and you'll find a Fedora SELinux FAQ that will tell you how to work through these problems.
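For example, a quick way to pull up recent denials (assuming auditd and the audit2why tool are installed; otherwise fall back to syslog) looks something like this:
# show recent AVC denials and translate them into a human-readable explanation
sudo ausearch -m avc -ts recent | audit2why
# on systems without auditd, the denials land in syslog instead
grep "avc:.*denied" /var/log/messages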

I never disabled SELinux; my contractor HAS to use it. And if/when some daemon (with an OSS license, by the way) doesn't have a security policy, it is mandatory to write a (good) one. This is not because I believe that SELinux is an invulnerable MAC on Linux - no need to list examples - but because it considerably improves operating system security anyway. For web apps, the best OSS security addition is mod_security, so I use both. Most of the problems with SELinux come from the scarce or hard-to-follow documentation, although the situation has improved a lot in recent years.

A CentOS box I had as a development machine had it on, and I turned it off. It was stopping some things I was trying to do while testing the web app I was developing. The system was (of course) behind a firewall which completely blocked access from outside our LAN and had a lot of other security in place, so I felt reasonably secure even with SELinux off.

If it's on by default I'll leave it on until it breaks something then off it goes.
Personally I see it as not providing any security and I'm not going to bother with it.

Under Red Hat, you can edit /etc/sysconfig/selinux and set SELINUX=disabled.
I think under all versions of Linux you can add selinux=0 noselinux to the boot line in lilo.conf or grub.conf.
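For reference, a rough sketch of both approaches on a Red Hat-style system (the kernel and root-device names are placeholders; on newer releases /etc/sysconfig/selinux is a symlink to /etc/selinux/config):
# /etc/sysconfig/selinux -- disable SELinux from the next boot onwards
SELINUX=disabled
# or, as a one-off, append the kernel parameter to the boot entry in grub.conf / lilo.conf:
#   kernel /vmlinuz-<version> ro root=<root-device> selinux=0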

Related

Jenkins security as an open-source tool

I work in a corporate development environment that is fairly risk-averse where management is often afraid of change. I've prototyped out how a Jenkins solution for our development team might work, and highlighted some success stories where the pilot implementation has helped, but the time has now come to get it approved to a wider audience and in a more permanent way, and some security concerns have been raised.
Primarily, the concerns so far have focused on the fact that the tool is open-sourced and the plugins are open-sourced and made by community contributors, so management is concerned that somebody could insert malicious code that would go unnoticed by us when we update. My opinion is that if so many other places can make Jenkins work, we probably can too, but that is not necessarily a very compelling argument to our security testing team.
My question is, can anybody tell me how they have secured their own Jenkins implementations, or what specific Jenkins capabilities (sandboxing, etc.) are in place to prevent malicious code from being executed on our systems?
Using 3rd party components either in your software or your infrastructure will always have risks. One very important thing to note is that open source is not less secure than closed source. While probably anybody might contribute code to an open source project, in most cases there is review before it actually makes its way into the project. Of course, a vulnerability may slip through, but how is that different from a software company with lots of developers? A vulnerability may slip through there too, and based on the experience of many of us, it quite often does. :) And in case of closed source, you don't even have the power of a diverse community to spot such security flaws, the best you can rely on are 3rd party penetration tests or code scans, both of which miss many issues.
In case of such a well established project like Jenkins, you can be pretty sure that there is lots of scrutiny on its security, probably more so than any closed source commercial tool you may currently have.
As with any 3rd party component, you should exercise due diligence though. Have a look at online vulnerability databases like NVD regularly to find security issues. Install updates as they come out to mitigate the risk. You should do these for closed source components too.
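As a rough illustration of that kind of periodic check (this assumes the NVD 2.0 REST API and the jq tool; the keyword and JSON path may need adjusting for your setup):
# list CVE IDs mentioning Jenkins from the NVD vulnerability database
curl -s "https://services.nvd.nist.gov/rest/json/cves/2.0?keywordSearch=jenkins" | jq -r '.vulnerabilities[].cve.id' | head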
As for how to secure a Jenkins installation, an answer here is not the right format I think, but there is a whole set of pages on their website dedicated to the topic.
Having said all this and looking at past vulnerabilities in Jenkins, there are quite a few. It's up to you (and your security department) to assess how exactly you would want to deploy Jenkins, and whether those past vulnerabilities are serious enough for you to think the whole tool is not adequate for your environment considering the way you want to deploy it. Again, it's the same process you would follow with a closed source tool too.

Obfuscating server headers

I have a WSGI application running in PythonPaste. I've noticed that the default 'Server' header leaks a fair amount of information ("Server: PasteWSGIServer/0.5 Python/2.6").
My knee jerk reaction is to change it...but I'm curious what others think.
Is there any utility in the server header, or benefit in removing it? Should I feel uncomfortable about giving away information on my infrastructure?
Thanks
Well "Security through Obscurity" is never a best practice; your equipment should be able to maintain integrity against an attacker that has extensive knowledge of your setup (barring passwords, console access, etc). Can't really stop a DDOS or something similar, but you shouldn't have to worry about people finding out you OS version, etc.
Still, no need to give away information for free. Fudging the headers may discourage some attackers, and, in cases like this where you're running an application that may have a known exploit crop up, there are significant benefits in not advertising that you're running it.
I say change it. Internally, you shouldn't see much benefit in leaving it alone, and externally you have a chance of seeing benefits if you change it.
Given the requests I find in my log files (like requests for IIS-specific bugs in Apache logs, and I'm sure IIS server logs show Apache-specific requests as well), there are many bots out there that don't care about any such header at all. I guess almost everything is brute force nowadays.
(And actually, as for example I've set up quite a few instances of Tomcat sitting behind IIS, I guess I would not take the headers into account either, if I were to try to hack my way into some server.)
And above all: when using free software, I kind of find it appropriate to give the makers some credit in the statistics.
Masking your version number is a very important security measure. You do not want to give the attacker any information about what software you are running. This feature is available in mod_security, the open-source web application firewall for Apache:
http://www.modsecurity.org/
Add this line to your mod_security configuration file:
SecServerSignature "IIS/6.0"
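One caveat, if I remember correctly: SecServerSignature can only overwrite the banner when Apache's ServerTokens is set to Full, so the full-length string is there to be replaced. Either way, it's easy to verify what you actually expose (the hostname is a placeholder):
# inspect the Server header exactly as clients see it
curl -sI http://www.example.com/ | grep -i '^server:'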

Designing a Linux-based system for transferability of ownership/admin rights without total trust

Inspired by a much more specific question on ServerFault.
We all have to trust a huge number of people for the security and integrity of the systems we use every day. Here I'm thinking of all the authors of all the code running on your server or PC, and everyone involved in designing and building the hardware. This is mitigated by reputation and, where source is available, peer review.
Someone else you might have to trust, who is mentioned far less often, is the person who previously had root on a system. Your predecessor as system administrator at work. Or for home users, that nice Linux-savvy friend who configured your system for you. The previous owner of your phone (can you really trust the Factory Reset button?)
You have to trust them because there are so many ways to retain root despite the incoming admin's best efforts, and those are only the ones I could think of in a few minutes. Anyone who has ever had root on a system could have left all kinds of crazy backdoors, and your only real recourse under any Linux-based system I've seen is to reinstall your OS and all code that could ever run with any kind of privilege. Say, mount /home with noexec and reinstall everything else. Even that's not sufficient if any user whose data remains may ever gain privilege or influence a privileged user in sufficient detail (think shell aliases and other malicious configuration). Persistence of privilege is not a new problem.
How would you design a Linux-based system on which the highest level of privileged access can provably be revoked without a total reinstall? Alternatively, what system like that already exists? Alternatively, why is the creation of such a system logically impossible?
When I say Linux-based, I mean something that can run as much software that runs on Linux today as possible, with as few modifications to that software as possible. Physical access has traditionally meant game over because of things like keyloggers which can transmit, but suppose the hardware is sufficiently inspectable / tamper-evident to make ongoing access by that route sufficiently difficult, just because I (and the users of SO?) find the software aspects of this problem more interesting. :-) You might also assume the existence of a BIOS that can be provably reflashed known-good, or which can't be flashed at all.
I'm aware of the very basics of SELinux, and I don't think it's much help here, but I've never actually used it: feel free to explain how I'm wrong.
First and foremost, you did say design :) My answer will contain references to stuff that you can use right now, but some of it is not yet stable enough for production. My answer will also contain allusions to stuff that would need to be written.
You cannot accomplish this unless you (as user9876 pointed out) fully and completely trust the individual or company that did the initial installation. If you can't trust them, your problem is infinitely recursive.
Several years ago I was very active in a new file system called ext3cow, a copy-on-write version of ext3. Snapshots were cheap and 100% immutable; the port from Linux 2.4 to 2.6 dropped the ability to modify or delete files in the past.
Pound for pound, it was as efficient as ext3. Sure, that's nothing to write home about, but ext3 was (and to a large extent still is) the production-standard FS.
Using that type of file system, assuming a snapshot was made of the pristine installation after all services had been installed and configured, it would be quite easy to diff an entire volume to see what changed and when.
At this point, after going through the diff, you can decide that nothing is interesting and just change the root password, or you can go inspect things that seem a little odd.
Now, for the stuff that has to be written if something interesting is found:
Something that you can pipe the diff through that investigates each file. What you're going to see is a list of revisions per file, which would then have to be compared recursively: i.e., present against former-present, former-present against past1, past1 against past2, etc., until you reach the original file or the point at which it no longer exists. Doing this by hand would seriously suck. You also need to identify files that were never versioned to begin with.
Something to inspect your currently running kernel. If someone has tainted the VFS, none of this is going to work; CoW file systems use temporal inodes to access files in the past. I know a lot of enterprise customers who modify the kernel quite a bit, up to and including modules, the VMM and the VFS. This may not be such an easy task - comparing against 'pristine' may not be tenable, since the old admin may have made legitimate modifications to the kernel since it was installed.
Databases are a special headache, since they change typically each second or more, including the user table. That's going to need to be checked manually, unless you come up with something that can check to be sure that nothing is strange, such a tool would be very specific to your setup. Classic UNIX 'root' is not your only concern here.
Now, consider the other computers on the network. How many of them are running an OS that is known to be easily exploited and bot-infested? Even if your server is clean, what if this guy joins #foo on IRC and starts an attack on your servers via your own LAN? Most people will click links that a co-worker sends, especially if it's a juicy blog entry about the company... social engineering is very easy if you're doing it from the inside.
In short, what you suggest is tenable; however, I'm dubious that most companies could enforce the best practices needed for it to work when needed. If the end result is that you find a BOFH in your workforce and need to can him, you had better have contained him throughout his employment.
I'll update this answer as I continue to think about it. It's a very interesting topic. What I've posted so far are my own collected thoughts on it.
Edit:
Yes, I know about virtual machines and checkpointing, a solution assuming that brings on a whole new level of recursion. Did the (now departed) admin have direct root access to the privileged domain or storage server? Probably, yes, which is why I'm not considering it for the purposes of this question.
Look at Trusted Computing. The general idea is that the BIOS loads the bootloader, then hashes it and sends that hash to a special chip. The bootloader then hashes the OS kernel, which in turn hashes all the kernel-mode drivers. You can then ask the chip whether all the hashes were as expected.
Assuming you trust the person who originally installed and configured the system, this would enable you to prove that your OS hasn't had a rootkit installed by any of the later sysadmins. You could then manually run a hash over all the files on the system (since there is no rootkit the values will be accurate) and compare these against a list provided by the original installer. Any changed files will have to be checked carefully (e.g. /etc/passwd will have changed due to new users being legitimately added).
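As a sketch of that manual check (the manifest paths are made up; -xdev keeps find on the root filesystem so other mounts can be handled separately):
# hash every file on the root filesystem and compare against the installer's known-good manifest
sudo find / -xdev -type f -exec sha256sum {} + | sort -k 2 > /root/manifest.now
diff /root/manifest.known-good /root/manifest.now | less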
I have no idea how you'd handle patching such a system without breaking the chain of trust.
Also, note that your old sysadmin should be assumed to know any password typed into that system by any user, and to have unencrypted copies of any private key used on that system by any user. So it's time to change all your passwords.

issue/defect tracking software [closed]

Our group is currently reviewing our toolset and looking for new defect/issue tracking software, in addition to source control and project management software.
For issue tracking, we've looked at Bugzilla, FogBugz, BugTracker.NET, SourceGear Fortress, and BugNet.
I'm not satisfied with the list we've come up with, so I'm curious to know what others are using.
We're looking for Active Directory integration for security. Although we'd settle for a Windows app, a web interface may be preferable; Visual Studio integration is also a bonus. We need to prioritize defects, mark the version the defect was found in, mark the version the defect was fixed in, and hopefully be able to maintain a discussion around each issue/defect. We'd also like to categorize items as defect, enhancement request, etc., and document workarounds for defects.
Very similar question:
https://stackoverflow.com/questions/101774/what-is-your-bug-task-tracking-tool
Try Unfuddle. If you use their version control hosting (SVN and Git options) with their issue tracker, you get some good integration stuff going on. For example, you can enter a note in your commit message such as "fixes #384: Too much foo in the bar"*, and you not only get that turned into a hyperlink to the issue, but it also marks the ticket as fixed with a link back to the changeset. All good stuff. This is a web-based solution that is hosted by Unfuddle themselves, in a SaaS-type fashion.
Other than that, +1 for Trac, which I've used in the past and like very much. It's quite an immature project feature-wise, although it's got a very healthy community around it that has developed plug-ins to do a lot of extra stuff (like the AD authentication you wanted). It also has similar integration with a number of source control systems, but it's much less feature-rich than the Unfuddle stuff. That is to say, you get to use an extended wiki syntax in your commit messages, which is parsed by Trac when it's displayed to create links. It doesn't do any of the two-way stuff that Unfuddle does. Trac is available to host in-house; alternatively, if you want it hosted, there's a list of places that will do so on Trac's wiki.
*I can't remember the exact format off the top of my head.
On our current project, we've amazingly used 6 different tracking tools (2 versions of PVCS), mostly commercial. Here's my opinion on the ones that we've used. I've listed them in order of my most favored to least.
Serena Teamtrack - We use a web client. The interface is intuitive. Performance will vary across installations, but comparing with our same data in each tool, this works the fastest. It also works in Firefox.
HP Quality Center - This is also web based, but it is IE only. On the upside, it's well organized, easy to use, and full-featured. It has reasonable performance for us as well.
It has an odd feature where there isn't a save button. It saves automatically for you. To force a save, you have to navigate to another ticket. Also when you first use it, it has to install so many DLLs that it is practically a thick client. That being the case, IE sometimes gets locked up (usually when trying to reinitialize a session after session expiration). Once locked up, you occasionally have to kill IE to regain control.
Bugzilla - I didn't use this as thoroughly as the other tools, so this isn't a fair comparison. We used it briefly for some internal tickets. I suppose the big upside is the (lack of) cost. IMO, I just didn't find the interface as nice and easy to use as the other tools. It's been a while, so I apologize for the lack of specifics on why I'm relegating it below the others.
Siebel - There wasn't much to like about their defect tracking tool apart from the fact that it is better than PVCS. The interface seems hokey. It's as if the Siebel interface has a set of user interface controls and it tries to force all square pegs into its round holes. Another downside is that it uses lengthy generated IDs, so it's hard to reference them or search by them. Along with that, the ticket IDs aren't sequential.
Merant PVCS - We had separate databases and used both the web client and the thick client. It's been a while now, so the details are fading. I recall there were bugs in the tool that weren't getting fixed; for instance, reports couldn't display certain fields. Performance was bad. It took a long time to load and was slow to navigate through tickets.
Issue tracking for support is a different problem from tracking issues during development.
Trac http://trac.edgewall.org/ is a very capable tool which supports a number of large open source projects. You can find Trac hosting at places like http://www.wush.net
If you need more workflow and custom security, you'll want to look at JIRA which is from Atlassian http://www.atlassian.com. Atlassian has a number of products which you might also find useful.
For issue tracking in a support setting, try RT http://bestpractical.com/rt. RT is deceptively simple, but I've seen it used in the largest environments, and it does a good job of making sure you are accountable to everyone you make a support commitment to.
An off-site (www) hosted solution with all the features you mentioned is NetResults Tracker
We use bugzilla, it suits us perfectly. We haven't investigated too many others because honestly it does everything we need and then some.
We don't use Visual Studio so I can't speak for integration compatibility.
Try out HappyFox (http://www.happyfox.com), an issue and bug tracking software. The clean interface and automation features help you track and resolve bugs smoothly. HappyFox is free for a two-member team and priced economically for larger teams.

Best security practices in Linux

What security best-practices would you strongly recommend in maintaining a Linux server? (i.e. bring up a firewall, disable unnecessary services, beware of suid executables, and so on.)
Also: is there a definitive reference on SELinux?
EDIT: Yes, I'm planning to put the machine on the Internet, with at least openvpn, ssh and apache (at the moment, without dynamic content), and to provide shell access to some people.
For SELinux, I've found SELinux by Example to be really useful. It goes quite in-depth into keeping a server as secure as possible and is pretty well written for such a wide topic.
In general though:
Disable anything you don't need. The wider the attack surface, the more likely you'll have a breach. (See the sketch after this list.)
Use an intrusion detection system (IDS) layer in front of any meaningful servers.
Keep servers in a different security zone from your internal network.
Deploy updates as fast as possible.
Keep up to date on 0-day attacks for your remotely-accessible apps.
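A minimal sketch of the first point, assuming a current systemd-based distro (older releases would use chkconfig and service instead; the disabled service is just an example):
# see what starts at boot, then disable and stop anything you don't need
systemctl list-unit-files --type=service --state=enabled
sudo systemctl disable --now cups.service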
The short answer is, it depends. It depends on what you're using it for, which in turn influences how much effort you should put into securing the thing.
There are some handy hints in the answers to this question:
Securing a linux webserver for public access
If you're not throwing the box up onto the internet, some of those answers won't be relevant. If you're throwing it up onto the internet and hosting something even vaguely interesting on it, those answers are far too laissez-faire for you.
There's an NSA document "NSA Security Guide for RHEL5" available at:
http://www.nsa.gov/ia/_files/os/redhat/rhel5-guide-i731.pdf
which is pretty helpful and at least systematic.
Limit the installed software to only what you really use
Limit the rights of users, through sudo, ACLs, kernel capabilities and SELinux/AppArmor/PaX policies (see the sketch after this list)
Enforce the use of strong passwords (no human-readable words, no birth dates, etc.)
Make LXC containers, chroots or vserver jails for the "dangerous" applications
Install some IDS, e.g. Snort for the network traffic and OSSEC for log analysis
Monitor the server
Encrypt your sensitive data (TrueCrypt is a gift of the gods)
Patch your kernel with grsecurity: this adds a really nice extra level of paranoia
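To illustrate the sudo item above, here's a sketch of a drop-in sudoers file (the group name and command list are only examples; edit it with visudo -f to get syntax checking):
# /etc/sudoers.d/webadmins -- let the webadmins group manage only the web service as root
%webadmins ALL=(root) /usr/bin/systemctl restart httpd, /usr/bin/systemctl reload httpd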
That's more or less what I would do.
Edit: I added some ideas that I previously forgot to mention...
1.) Enable only necessary and relevant ports (see the sketch after this list).
2.) Regularly scan the network data in and out.
3.) Regularly scan the IP addresses accessing the server and check the logs/traces for any unusual data activity associated with those addresses.
4.) If critical and confidential data or code needs to be present on the server, consider encrypting it.
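A quick way to audit point 1 (ss is the modern replacement for netstat; netstat -tulpn works on older systems):
# list listening TCP/UDP sockets together with the owning processes
sudo ss -tulpn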
-AD
Goals:
The hardest part is always defining your security goals. Everything else is relatively easy at that point.
Probing/research:
Consider the same approach that attackers would take, i.e. network reconnaissance (nmap is pretty helpful for that).
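For instance, a scan of your own host from the outside might look like the sketch below (only scan machines you're authorized to probe; the hostname is a placeholder):
# SYN-scan all TCP ports and try to identify service versions
sudo nmap -sS -sV -p- server.example.com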
More information:
SELinux by Example is a helpful book; finding a good centralized source of SELinux information is still hard.
I have a small list of helpful links that I find useful from time to time: http://delicious.com/reverand_13/selinux
Helpful solution/tools:
As most people will say, less is more. For an out-of-the-box, stripped-down box with SELinux, I would suggest CLIP (http://oss.tresys.com/projects/clip). A friend of mine used it in an academic simulation in which the box was under direct attack from other participants. I recall the story concluded very favorably for said box.
You will want to become familiar with writing SELinux policy. Modular policy can also be helpful. Tools such as SLIDE and seedit (which I have not tried) may help you.
Don't use a DNS server unless you have to. BIND has been a hotspot of security issues and exploits.
Hardening a Linux server is a vast topic, and it primarily depends on your needs.
In general, you need to consider the following groups of concern (I'll give examples of best practices in each group):
Boot and Disk
Ex1: Disable booting from external devices.
Ex2: Set a password for the GRUB bootloader.
File system partitioning
Ex1: Separate user partitions (/home, /tmp, /var) from the OS partitions.
Ex2: Set up nosuid on partitions in order to prevent privilege escalation via the setuid bit (see the sketch below).
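A rough fstab sketch for the nosuid point (the device name and filesystem type are placeholders for whatever your layout uses):
# /etc/fstab -- mount /tmp with nosuid (and nodev) so setuid binaries there are ignored
/dev/mapper/vg0-tmp   /tmp   ext4   defaults,nosuid,nodev   0 2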
Kernel
Ex1: Apply kernel security patches promptly.
Networking
Ex1: Close unused ports.
Ex2: Disable IP forwarding.
Ex3: Disable sending of ICMP packet redirects (see the sketch below).
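A sketch of those networking items as sysctl settings (the file name is arbitrary; apply them with sysctl --system and adjust to your environment):
# /etc/sysctl.d/99-hardening.conf -- no forwarding, no ICMP redirect sending
net.ipv4.ip_forward = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0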
Users / Accounts
Ex1: Enforce strong passwords (hashed with SHA-512).
Ex2: Set up password aging and expiration (see the sketch below).
Ex3: Restrict users from reusing old passwords.
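For example, password aging for a single account might look like this (the user name and the numbers are illustrative):
# require a password change every 90 days, warn 14 days ahead, lock 30 days after expiry
sudo chage --maxdays 90 --warndays 14 --inactive 30 alice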
Auditing and logging
Ex1: Configure auditd.
Ex2: Configure logging with journald.
Services
Ex1: Remove unused services like FTP, DNS, LDAP, SMB, DHCP, NFS, SNMP, etc.
Ex2: If you're using a web server like Apache or nginx, don't run it as root.
Ex3: Secure SSH.
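A few commonly recommended sshd_config settings, as a sketch (reload sshd after editing, and make sure key-based logins work before disabling passwords):
# /etc/ssh/sshd_config -- typical hardening directives
PermitRootLogin no
PasswordAuthentication no
MaxAuthTries 3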
Software
Make sure you remove unused packages.
Read more:
https://www.computerworld.com/article/3144985/linux-hardening-a-15-step-checklist-for-a-secure-linux-server.html
https://www.tecmint.com/linux-server-hardening-security-tips/
https://cisofy.com/checklist/linux-security/
https://www.ucd.ie/t4cms/UCD%20Linux%20Security%20Checklist.pdf
https://www.cyberciti.biz/faq/linux-kernel-etcsysctl-conf-security-hardening/
https://securecompliance.co/linux-server-hardening-checklist/
Now specifically for SELinux:
First of all, make sure that SELinux is enabled on your machine.
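A quick way to check, and to persist the setting across reboots (the sed edit assumes the stock layout of /etc/selinux/config):
# show the current mode and overall status
getenforce
sestatus
# make enforcing the default on every boot
sudo sed -i 's/^SELINUX=.*/SELINUX=enforcing/' /etc/selinux/config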
Continue with the following guides:
https://www.digitalocean.com/community/tutorials/an-introduction-to-selinux-on-centos-7-part-1-basic-concepts
https://linuxtechlab.com/beginners-guide-to-selinux/
https://www.computernetworkingnotes.com/rhce-study-guide/selinux-explained-with-examples-in-easy-language.html
