I am aware of the Hardened Linux from Scratch project, which provides step-by-step instructions for building your own customized and hardened Linux system entirely from source. I would like to know what the equivalent is in BSD.
As Richard said, OpenBSD is definitely worth a go; it is my #1 choice for anything dedicated to firewalls and gateways. For other services I tend to stick to FreeBSD, although there is no obvious reason for it beyond personal preference.
But I would like to point out that the 'from scratch' concept, if you want more secure hosting of a service, can be done much better using jails. In essence you create a limited FreeBSD environment on top of a full FreeBSD install. Into that limited environment you copy/link only those binaries and files that the service requires to run.
Because the hosted service has no access to any other files/binaries, any potential security flaws in them aren't open to exploit. If by chance your application gets 'rooted', the compromise will not go beyond the boundaries of the jail.
Think of it as a sandbox on steroids, with negligible performance penalties.
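For a flavor of what that looks like, here is a minimal jail definition in the modern /etc/jail.conf style (hostname, path and address are placeholders; this is a sketch, not a complete setup):

www {
    host.hostname = "www.example.org";
    path = "/usr/jails/www";              # tree containing only what the service needs
    ip4.addr = "192.0.2.10";
    exec.start = "/bin/sh /etc/rc";
    exec.stop  = "/bin/sh /etc/rc.shutdown";
}

With jail_enable="YES" in /etc/rc.conf, `service jail start www` brings it up.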
OpenBSD is hardened "by default" from the installation. Only the admin opens it up... component by component.
[UPDATE] While I have not read the Linux hardening document, some of the same things might apply. For example, both use OpenSSH, so the strategies there would be the same; wherever components overlap, the same hardening advice applies.
You don't really do BSD 'from scratch'. All of the major projects come with a complete system in a single source repository, so you're not grabbing a kernel from here, binutils and a compiler from over there, C libraries and standard utilities from somewhere else, and X from yet another place.
They are generally easier to get all the source for, and to rebuild the entire system from, than your average Linux distro, but that's not really customizing anything.
You could try something nuts, like getting the OpenBSD userland to run on a NetBSD kernel with FreeBSD ports, but you'd be on your own and it certainly wouldn't be 'hardened'.
HardenedBSD is a fork of the FreeBSD project with the aim of implementing PIE, RELRO, SafeStack, and CFI hardening. Some of those goals are in place; others are very much works in progress. I wouldn't consider it "ready for production" yet, but it is usable as a desktop (that also depends on your production environment's requirements).
Repo: https://github.com/HardenedBSD
Everything, including "make buildworld/buildkernel", is the same as on FreeBSD, and the Handbook does a good job of explaining it. You'll have a bit of reading to do, though, even coming from Linux-land. Building your own ports is an entire topic in itself.
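For reference, the usual rebuild cycle goes roughly like this (a sketch of the Handbook procedure; consult the Handbook before running it):

cd /usr/src
make buildworld
make buildkernel KERNCONF=GENERIC
make installkernel KERNCONF=GENERIC
# reboot to single-user mode, then:
make installworld
mergemaster            # merge updated configuration files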
Re jails: the statement above is not entirely correct. While they certainly add an important security layer, Unix systems (I don't know about Linux) [quoting here] "lack kernel exploit mitigations. If an attacker gains access to a jail, it's not too much work to pivot to other jails or escalate privileges via a kernel exploit." Don't misunderstand me: I place almost every service in a jail wherever possible.
As to "Hardened by default" comment: It's all in the sysctl settings which can be tweaked on every *BSD flavor, but sec measures are pretty much useless if the sysadmin does not take time to read the docs.
If you are interested, your homework: https://www.freebsd.org/doc/handbook/
I'm looking for a way to output traces to a log file in my code, which runs on Linux.
I don't want to include the printing information in the binary, in every place I deploy it.
On Windows, I simply used WPP to trace without putting the actual trace strings in my binary.
How can this be achieved in Linux?
I'm not very familiar with Linux tools in this area, so maybe there is a better system. However, since nobody else has made any good suggestions, I'll make a suggestion. (Probably not a very good suggestion, but the best I can think of right now.)
In theory, you could continue to use WPP. WPP is simply a template system: it scans the configuration and input files to create data structures, then runs a template, filling in the data values it got from the scan, to produce the .tmh files. You could create a new set of templates that would use Linux APIs instead of Windows APIs, and would record the message strings in a way that works with some other log-decoder system.
I noticed this question only now and would like to add my two cents to the story, just in case. Personally, I truly appreciate Windows WPP tracing and consider it probably the best engineering solution for practical development troubleshooting among similar tools.
It so happened that I extended WPP use to Unix-like platforms twice. We wanted to use the strong sides of the WPP concept in general, yet use it in multi-platform pieces of code. This was not a port but rather a wrapper around the specific WPP usage we had configured on Windows. One time we had a web service perform the actual WPP pre-processing on Windows; it may sound a bit insane, but it worked fine and effectively within the local network. A wrapper script executed before each compilation sent a web request, received the processed file, and post-processed the generated include file to make it suitable for Unix-like platforms. The second time, we implemented a simplified WPP pre-processor of our own (we found an additional use for it: we could generate the tracing statements differently for production and unit testing, for example). This was a harsh solution: you still need some physical tracing framework behind the wrapper on non-Windows platforms (well, the first time we evidently implemented our own lower level).
I do not think the Linux world has a framework comparable to WPP. At one point I even thought it could be a great idea to start an open source project to port WPP. I am not sure there would be much demand for it, though. As I said, it is a great engineering solution, but who wants to do dirty engineering work? The open source community prefers abstract, object-oriented, generic solutions with streaming output and little dependence on supporting tools (WPP requires special management tools and OS support). Ease of code writing is today's choice.
Microsoft's neglect (or unwillingness) may be partly to blame for WPP's lack of popularity, too. They kept it as an internal framework that only surfaced, almost by accident, with the Windows DDK, because they had to offer some logging/tracing solution to driver developers. Hardly anyone noticed that WPP is well suited to user-space code too. And the WPP pre-processor for C#, for example, has never been exposed to the public at all.
Nevertheless, I still think that porting WPP to Unix/Linux could be a challenging, interesting, and maybe even useful undertaking, if someone decides to lead it. :)
This is a general good-to-know query not directly related to programming.
I have been asked to find a Linux host with exactly the same specifications as our current production host.
What exactly does 'same specification' mean?
What are the parameters/factors I should equate (if possible, with commands for how to dig that information up in Linux)?
I may know quite a few of them already, but it would not hurt to consolidate them in one place.
'Exactly the same specifications' means that the Linux host should have the same environment in every respect that is relevant to the system you're hosting in your production environment.
What is relevant depends on the system. For example, if you're testing a security system, then both environments must have the same users configured and the same firewall configuration (and all other variables affecting security). But if you're testing a system that performs heavy graphics manipulation, the firewall is probably not relevant, so it might be allowed to differ, whereas you would need the same processor and memory.
So in essence, it means that you must use your best judgement to decide what 'same specifications' means for your particular use.
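As a starting point, stock commands like these will dig up the usual hardware/software parameters (package-listing commands vary by distro):

uname -a            # kernel version and architecture
cat /proc/cpuinfo   # CPU model, core count, flags
free -m             # installed memory
df -h               # disk sizes and mount points
lspci               # attached hardware
rpm -qa | sort      # installed packages (dpkg -l on Debian-like systems)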
What I am looking for specifically is software that runs on Linux (CentOS) and can do the following:
Show human readable CPU, Memory, Disk, Apache, MySQL utilization/performance.
Provide historic reports on the above metrics (today, week, month, year etc...)
Provide this data in an easy to view web based report or at least exportable to excel/csv.
I have looked at Cacti and I don't think it's really an enterprise solution. I don't care whether this is free or paid-for software (though open source would be nice); I am really just looking for the best solution.
Does anything like this exist for Linux? The problem this company faces is that we have no way of measuring how the changes we make to our code and server configurations impact overall performance. So when I say 'let's do this' and we then do it, I can't show the benefits, or revert because it turned out to be a net negative for performance. I am not a Linux guru, just a developer with some Linux skills, but I am open to all suggestions. Thanks for reading.
There are a lot of open source projects, but the main drawback they suffer from is that they are much harder to configure. I have come across a free tool called SeaLion, which is way easier to install and configure, and it has an awesome timeline-based presentation of outputs. There are also various paid tools, like New Relic, Server Density and SolarWinds, that you can give a look.
Check out the eginnovations monitoring tool
http://www.eginnovations.com
Monitors Linux, Apache, MySQL and other applications, and is web-based, so you don't have to be a Linux expert.
M.
Cacti is a simple one. OpenNMS is more complete.
You are not limited to Linux; using SNMP you can fetch this data from a remote host and use any NMS you like.
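For example, with the net-snmp command-line tools (host and community string are placeholders):

snmpwalk -v2c -c public prodhost UCD-SNMP-MIB::laLoad                # load averages
snmpwalk -v2c -c public prodhost HOST-RESOURCES-MIB::hrStorageTable  # memory and disk usage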
IMHO one of the best "freemium" tools is Zenoss (http://community.zenoss.org/).
The community edition is free. It will do everything you need, and comes with a simple RPM-based installation process. It's a lot easier than Cacti or Nagios to set up and use. I would give it a try.
I use Munin. It's much, much simpler to set up than Cacti. It's better to compile it yourself than to pull it in with apt-get (or another package manager), because that way it comes with more built-in data-gathering scripts.
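Once munin-node is on the box, plugin discovery is about this simple (a sketch; service names and paths vary by distro):

munin-node-configure --suggest      # show which plugins would work on this host
munin-node-configure --shell | sh   # create the suggested plugin symlinks
/etc/init.d/munin-node restart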
Basically, there is no single dashboard where you can get all the report metrics. There is a range of open source software that can serve your need.
For server performance many people recommend Munin; you will have to learn how to read the report data. You can also write custom scripts to pull certain report parameters out of MySQL. Additionally, if your server host provides an API, you can do a lot more with reports in your admin panel.
Have a look at the following URLs, which can give you a better idea of how to choose the best fit for your needs.
https://serverfault.com/questions/44/what-tool-do-you-use-to-monitor-your-servers
http://sixrevisions.com/tools/10-free-server-network-monitoring-tools-that-kick-ass/
Inspired by a much more specific question on ServerFault.
We all have to trust a huge number of people for the security and integrity of the systems we use every day. Here I'm thinking of all the authors of all the code running on your server or PC, and everyone involved in designing and building the hardware. This is mitigated by reputation and, where source is available, peer review.
Someone else you might have to trust, who is mentioned far less often, is the person who previously had root on a system. Your predecessor as system administrator at work. Or for home users, that nice Linux-savvy friend who configured your system for you. The previous owner of your phone (can you really trust the Factory Reset button?)
You have to trust them because there are so many ways to retain root despite the incoming admin's best efforts, and those are only the ones I could think of in a few minutes. Anyone who has ever had root on a system could have left all kinds of crazy backdoors, and your only real recourse under any Linux-based system I've seen is to reinstall your OS and all code that could ever run with any kind of privilege. Say, mount /home with noexec and reinstall everything else. Even that's not sufficient if any user whose data remains may ever gain privilege or influence a privileged user in sufficient detail (think shell aliases and other malicious configuration). Persistence of privilege is not a new problem.
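(For the noexec idea just mentioned, the /etc/fstab entry would look something like this; device and filesystem type are placeholders:)

/dev/sda3  /home  ext4  defaults,nodev,nosuid,noexec  0  2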
How would you design a Linux-based system on which the highest level of privileged access can provably be revoked without a total reinstall? Alternatively, what system like that already exists? Alternatively, why is the creation of such a system logically impossible?
When I say Linux-based, I mean something that can run as much software that runs on Linux today as possible, with as few modifications to that software as possible. Physical access has traditionally meant game over because of things like keyloggers which can transmit, but suppose the hardware is sufficiently inspectable / tamper-evident to make ongoing access by that route sufficiently difficult, just because I (and the users of SO?) find the software aspects of this problem more interesting. :-) You might also assume the existence of a BIOS that can be provably reflashed known-good, or which can't be flashed at all.
I'm aware of the very basics of SELinux, and I don't think it's much help here, but I've never actually used it: feel free to explain how I'm wrong.
First and foremost, you did say design :) My answer will contain references to stuff that you can use right now, but some of it is not yet stable enough for production. My answer will also contain allusions to stuff that would need to be written.
You cannot accomplish this unless you (as user9876 pointed out) fully and completely trust the individual or company that did the initial installation. If you can't trust that, your problem is infinitely recursive.
Several years ago I was very active in a new file system called ext3cow, a copy-on-write version of ext3. Snapshots were cheap and 100% immutable; the port from Linux 2.4 to 2.6 broke, and then abandoned, the ability to modify or delete files in the past.
Pound for pound, it was as efficient as ext3. Sure, that's nothing to write home about, but ext3 was (and to a large extent still is) the production-standard FS.
Using that type of file system, and assuming a snapshot was taken of the pristine installation after all services had been installed and configured, it would be quite easy to diff an entire volume and see what changed, and when.
At this point, after going through the diff, you can decide that nothing is interesting and just change the root password, or you can go inspect things that seem a little odd.
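If I remember the ext3cow interface correctly (it exposed the past through a name@epoch syntax; verify against the ext3cow docs before relying on this), the check could be as simple as:

snapshot /srv               # take a snapshot; it reports the epoch number
diff -r /srv /srv@1057845   # later: diff the live tree against that pristine epoch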
Now, for the stuff that has to be written if something interesting is found:
Something that you can pipe the diff through that investigates each file. What you're going to see is a list of revisions per file, which would then have to be compared recursively: i.e., present against former-present, former-present against past1, past1 against past2, and so on, until you reach the original file or the point where it no longer exists. Doing this by hand would seriously suck. You also need to identify files that were never versioned to begin with.
Something to inspect your currently running kernel. If someone has tainted the VFS, none of this is going to work: CoW file systems use temporal inodes to access files in the past. I know a lot of enterprise customers who modify the kernel quite a bit, up to and including modules, the VMM and the VFS. This may not be an easy task: comparing against 'pristine' may not be tenable, since the old admin may have made legitimate modifications to the kernel after it was installed.
Databases are a special headache, since they typically change every second or more often, including the user table. That will need to be checked manually, unless you come up with something that can verify that nothing is amiss; such a tool would be very specific to your setup. Classic UNIX 'root' is not your only concern here.
Now consider the other computers on the network. How many of them are running an OS that is known to be easily exploited and bot-infested? Even if your server is clean, what if this guy joins #foo on IRC and starts an attack on your servers via your own LAN? Most people will click links that a co-worker sends, especially if it's a juicy blog entry about the company... social engineering is very easy if you're doing it from the inside.
In short, what you suggest is tenable; however, I'm dubious that most companies could enforce the best practices needed for it to work when it matters. If the end result is that you find a BOFH in your workforce and need to can him, you had better have contained him throughout his employment.
I'll update this answer as I continue to think about it. It's a very interesting topic. What I've posted so far are my own collected thoughts on the subject.
Edit:
Yes, I know about virtual machines and checkpointing; a solution that assumes them brings on a whole new level of recursion. Did the (now departed) admin have direct root access to the privileged domain or the storage server? Probably yes, which is why I'm not considering it for the purposes of this question.
Look at Trusted Computing. The general idea is that the BIOS loads the bootloader, then hashes it and sends that hash to a special chip. The bootloader then hashes the OS kernel, which in turn hashes all the kernel-mode drivers. You can then ask the chip whether all the hashes were as expected.
Assuming you trust the person who originally installed and configured the system, this would enable you to prove that your OS hasn't had a rootkit installed by any of the later sysadmins. You could then manually run a hash over all the files on the system (since there is no rootkit the values will be accurate) and compare these against a list provided by the original installer. Any changed files will have to be checked carefully (e.g. /etc/passwd will have changed due to new users being legitimately added).
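A minimal sketch of that manual pass with stock tools, assuming the manifest was generated at install time and kept off the machine:

find / -xdev -type f -exec sha256sum {} + > /media/usb/manifest.sha256   # on the pristine system
sha256sum -c /media/usb/manifest.sha256 | grep -v ': OK$'                # later: print only changed files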
I have no idea how you'd handle patching such a system without breaking the chain of trust.
Also, note that your old sysadmin should be assumed to know any password typed into that system by any user, and to have unencrypted copies of any private key used on that system by any user. So it's time to change all your passwords.
I want to know if people here typically disable SELinux on installations where it is on by default? If so can you explain why, what kind of system it was, etc?
I'd like to get as many opinions on this as possible.
I did, three or four years ago, when the shipped policies had many pitfalls, creating policies was too hard, and I had 'no time' to learn. This was on non-critical machines, of course.
Nowadays, with all the work done to ship distros with sensible policies, and with the tools and tutorials that exist to help you create, fix and define policies, there's no excuse to disable it.
I worked for a company last year where we set SELinux to enforcing with the 'targeted' policy on CentOS 5.x systems. It did not interfere with any of the web application code our developers worked on, because Apache was covered by the default policy. It did cause some challenges for software installed from non-Red Hat (or CentOS) packages, but we managed to get around that with our configuration management tool, Puppet.
We used Puppet's template feature to generate our policies. See SELinux Enhancements for Puppet, heading "Future stuff", item "Policy Generation".
Here are the basic steps from the way we implemented this. Note that apart from the audit2allow run, this was all automated.
Generate an SELinux type-enforcement (.te) file for some service named ${name}:
sudo audit2allow -m "${name}" -i /var/log/audit/audit.log > ${name}.te
Create a script, /etc/selinux/local/${name}-setup.sh:
#!/bin/sh
# ${name} is interpolated by the Puppet template when this script is generated.
SOURCE=/etc/selinux/local
BUILD=/etc/selinux/local
# Compile the type-enforcement file into a policy module
/usr/bin/checkmodule -M -m -o ${BUILD}/${name}.mod ${SOURCE}/${name}.te
# Package the compiled module
/usr/bin/semodule_package -o ${BUILD}/${name}.pp -m ${BUILD}/${name}.mod
# Install (load) the packaged module into the running policy
/usr/sbin/semodule -i ${BUILD}/${name}.pp
# Clean up the intermediate artifacts
/bin/rm ${BUILD}/${name}.mod ${BUILD}/${name}.pp
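After the script runs, you can confirm the module actually loaded:

/usr/sbin/semodule -l | grep ${name}    # lists the installed module and its version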
That said, most people are better off just disabling SELinux and hardening their system through other commonly accepted consensus based best practices such as The Center for Internet Security's Benchmarks (note they recommend SELinux :-)).
My company makes a CMS/integration platform product. Many of our clients have legacy third-party systems which still hold important operational data, and most want to keep using these systems because they just work. So we hook our system up to pull data out for publishing, reporting, etc. through diverse means. Having a ton of client-specific stuff running on each server makes configuring SELinux properly a hard and, consequently, expensive task.
Many clients initially want the best in security, but when they hear the cost estimate for our integration solution, the words 'SELinux disabled' tend to appear in the project plan pretty fast.
It's a shame, as defense in depth is a good idea. SELinux is never required for security, though, and this seems to be its downfall. When the client asks 'So can you make it secure without SELinux?', what are we supposed to answer? 'Umm... we're not sure'?
We can and we will, but when hell freezes over, and some new vulnerability is found, and the updates just aren't there in time, and your system is unlucky enough to be ground zero... SELinux just might save your ass.
But that's a tough sell.
I used to work for a major computer manufacturer in third-level support for Red Hat Linux (as well as two other flavors) running on that company's servers. In the vast majority of cases, we had SELinux turned off. My feeling is that if you REALLY NEED SELinux, you KNOW that you need it and can state specifically why. When you don't need it, or can't clearly articulate why, and it is enabled by default, you realize pretty quickly that it is a pain in the rear end. Go with your gut instinct.
SELinux requires user attention and manual permission-granting whenever (oh, well) you don't have permission for something. Many people find that it gets in the way and turn it off.
In recent versions, SELinux is more user-friendly, and there is even talk of removing the option to turn it off, or hiding it so that only knowledgeable users would know how to do it, the assumption being that such users are precisely the ones who understand the consequences.
With SELinux, there's a chicken-and-egg problem: in order to be able to leave it on all the time, you as a user need to report problems to developers so they can improve it. But users don't like to use it until it's improved, and it won't get improved if not many users are using it.
So it's left ON by default in the hope that most people will use it long enough to report at least some problems before they turn it off.
In the end, it's your call: do you want a short-term fix, or a long-term improvement of the software that will one day remove the need to ask such questions?
Sadly, I turn SELinux off most of the time too, because a good number of third-party applications, like Oracle, do not work very well with SELinux turned on and/or are not supported on platforms running SELinux.
Note that Red Hat's own Satellite product requires you to turn off SELinux too, which, again sadly, says a lot about the difficulty of running complex applications on SELinux-enabled platforms.
Usage tips that may or may not be useful to you: SELinux can be switched between enforcing and permissive modes at runtime using setenforce (use getenforce to check the current status). restorecon can be helpful in situations where chcon is cumbersome, but YMMV.
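Concretely:

getenforce                     # prints Enforcing, Permissive, or Disabled
setenforce 0                   # switch to permissive at runtime (handy for debugging)
setenforce 1                   # back to enforcing
restorecon -Rv /var/www/html   # reset file contexts to the policy defaults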
I hear it's getting better, but I still disable it. For servers, it doesn't really make much sense unless you're an ISP or a large corporation that needs to implement fine-grained access controls across multiple local users.
Using it on a web server, I had a lot of problems with Apache permissions. I'd constantly have to run:
chcon -R -h -t httpd_sys_content_t /var/www/html
to update the security contexts when new files were added. I'm sure this has been solved by now, but still, SELinux is a lot of pain for the limited reward you get from enabling it on a standard web-site deployment.
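(For what it's worth, the persistent fix nowadays is to record the context rule rather than re-running chcon; a sketch:)

semanage fcontext -a -t httpd_sys_content_t "/var/www/html(/.*)?"   # store the rule in policy
restorecon -R /var/www/html                                         # apply it to existing files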
I don't have a lot to contribute here, but since it's gone unanswered, I figured I'd throw in my two cents.
Personally, I disable it on dev boxes and when I'm dealing with unimportant things. When I'm dealing with anything production, or anything that requires better security, I leave it on and/or spend the time tweaking it to handle things the way I need.
Whether or not you use it really comes down to your needs, but it was created for a reason, so consider using it rather than always shutting it off.
Yes. It's brain-dead. It can introduce breakage in standard daemons that's nearly impossible to diagnose. It can also close a door but leave a window open. That is, for some reason on fresh CentOS installs it was blocking smbd from starting via "/etc/init.d/smb", but it didn't block it from starting when invoked as "sh /etc/init.d/smb" or "smbd -D", or after moving the init.d/smb file to another directory, from which it would start smbd just fine.
So whatever it thought it was doing to secure our systems, by breaking them, it wasn't even doing consistently. Consulting some serious CentOS gurus, they didn't understand the inconsistencies of its behavior either. It's designed to make you feel secure, but it's a facade of security: a substitute for doing the real work of locking down your system's security.
I turn it off on all my cPanel boxes, since cPanel won't run with it on.
I do not disable it, but there are some problems.
Some applications don't work particularly well with it.
For example, I believe I enabled smartd to try to keep track of my RAID disks' S.M.A.R.T. status, but SELinux would get confused about the new /dev/sda* nodes created at boot (I think that's what the problem was).
You have to download the source of the rules to understand things. Just check /var/log/messages for the "avc: denied" messages and you can decode what is being denied. Google "selinux faq" and you'll find a Fedora SELinux FAQ that will tell you how to work through these problems.
I never disable SELinux; my contractors HAVE to use it. And if/when some daemon (with an OSS license, by the way) doesn't have a security policy, it is mandatory to write a (good) one. This is not because I believe that SELinux is an invulnerable MAC on Linux (no need for examples), but because it considerably augments operating system security anyway. For web apps, the best OSS security solution is mod_security, so I use both. Most of the problems with SELinux stem from the scarce or incomprehensible documentation, although the situation has much improved in recent years.
A CentOS box I had as a development machine came with it on, and I turned it off. It was stopping some things I was trying to do while testing the web app I was developing. The system was (of course) behind a firewall that completely blocked access from outside our LAN, and had a lot of other security in place, so I felt reasonably secure even with SELinux off.
If it's on by default, I'll leave it on until it breaks something; then off it goes.
Personally I see it as not providing any security and I'm not going to bother with it.
Under Red Hat, you can edit /etc/sysconfig/selinux and set SELINUX=disabled.
I think under all versions of Linux you can add selinux=0 noselinux to the kernel boot line in lilo.conf or grub.conf.
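For example:

# /etc/sysconfig/selinux (a symlink to /etc/selinux/config on recent Red Hat releases)
SELINUX=disabled

# or on the kernel line in grub.conf (kernel path and root device are placeholders):
kernel /vmlinuz-2.6.18 ro root=/dev/sda1 selinux=0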