I don't know if what I am asking now is possible, but if it is, that would be great.
I have a public folder to which some users have access. I want to prevent all of these users from creating subfolders in it. They should be able to create files (e.g. touch note.txt) but not be able to create folders.
I was thinking that disabling the mkdir command locally for that folder would do it, but I don't even know if that's possible.
First, this is not a programming question - so http://superuser.com is the better place to ask.
AFAIK (I'm not a CentOS guru), it is not possible to do this with CentOS. For this type of permission you need an OS that supports extended ACLs - for example Solaris with ZFS, or Mac OS X, and so on. Changing the underlying OS is probably not a solution for you, so here is another possibility - but not an easy one.
You can use FUSE and write a program that acts as a filesystem bridge and simply refuses to create directories. As I said - not a trivial solution, but a possible one. For low-volume usage you can implement the filesystem in Perl through the FUSE kernel/library interface. See Fuse. For a basic tutorial you can check this site.
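To make that concrete, here is a minimal sketch in C of what such a bridge could look like, assuming libfuse 3 and a hypothetical backing directory /srv/public_backing; only a few operations are shown, and mkdir is simply refused:

    /* no_mkdir_fs.c - sketch of a FUSE passthrough that refuses mkdir.
       Build (assuming libfuse 3):
       gcc no_mkdir_fs.c -o no_mkdir_fs $(pkg-config fuse3 --cflags --libs) */
    #define FUSE_USE_VERSION 31
    #include <fuse.h>
    #include <dirent.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <limits.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static const char *backing = "/srv/public_backing"; /* hypothetical real directory */

    static void real_path(char out[PATH_MAX], const char *path)
    {
        snprintf(out, PATH_MAX, "%s%s", backing, path);
    }

    static int fs_getattr(const char *path, struct stat *st, struct fuse_file_info *fi)
    {
        char rp[PATH_MAX];
        (void)fi;
        real_path(rp, path);
        return lstat(rp, st) == 0 ? 0 : -errno;
    }

    static int fs_readdir(const char *path, void *buf, fuse_fill_dir_t fill, off_t off,
                          struct fuse_file_info *fi, enum fuse_readdir_flags flags)
    {
        char rp[PATH_MAX];
        DIR *d;
        struct dirent *e;
        (void)off; (void)fi; (void)flags;
        real_path(rp, path);
        if (!(d = opendir(rp)))
            return -errno;
        while ((e = readdir(d)))
            fill(buf, e->d_name, NULL, 0, 0);
        closedir(d);
        return 0;
    }

    static int fs_create(const char *path, mode_t mode, struct fuse_file_info *fi)
    {
        char rp[PATH_MAX];
        int fd;
        real_path(rp, path);
        fd = open(rp, fi->flags | O_CREAT, mode); /* creating files stays allowed */
        if (fd < 0)
            return -errno;
        fi->fh = fd;
        return 0;
    }

    static int fs_mkdir(const char *path, mode_t mode)
    {
        (void)path; (void)mode;
        return -EPERM; /* the whole point: no new directories */
    }

    static const struct fuse_operations ops = {
        .getattr = fs_getattr,
        .readdir = fs_readdir,
        .create  = fs_create,
        .mkdir   = fs_mkdir,
        /* a real bridge would also pass through open/read/write/unlink/release/etc. */
    };

    int main(int argc, char *argv[])
    {
        return fuse_main(argc, argv, &ops, NULL);
    }

Mounted over the public folder, touch note.txt keeps working while mkdir fails with "Operation not permitted".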
Sounds silly? Yes, it is. Maybe someone knows an easy way of setting up such ACLs on CentOS. At least this is a "programming solution". :)
In principle, SELinux should allow that level of control, but don't ask me how to configure it.
I am trying to allow ssh users to be defined in Radius, but share a home directory, shell, etc. The idea is that all users share the same home directory and default shell (an application). I would like to avoid creating numerous accounts on the local machine (really a docker container) since their activity is constrained by the application. I think that I just need to replace the user database information, but I don't understand how to just override that part of the login activity. Has anyone else done this or should I be solving this a different way?
OK, I am going to answer my own question. If you have better information, please contribute. This question might have been better on Server Fault, but as a programmer I spend more time on Stack Overflow, so I did not think of that.
The PAM library is useful for single sign-on, but it cannot replace the /etc/passwd file and related files. PAM and the other assets it brings in supplement the internal Linux info. So, while you can authenticate with a remote server like Radius, you will still have entries in /etc/passwd. The control flow is a list of rules in pam.conf and the top-level library works its way down the list letting each module (plug-in) do its work. Read 'man pam.conf' and 'man pam_mkhomedir' for good information on how this works.
A module implements six functions, so adding new modules is very approachable. See pam_deny.c for the simplest module.
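To give an idea of the shape, here is a pam_deny-style sketch (the always-deny behavior is just for illustration; a real module would do work in some of these entry points):

    /* pam_always_deny.c - illustrative module modeled on pam_deny.
       Build (assuming the Linux-PAM headers are installed):
       gcc -fPIC -shared -o pam_always_deny.so pam_always_deny.c */
    #define PAM_SM_AUTH
    #define PAM_SM_ACCOUNT
    #define PAM_SM_SESSION
    #define PAM_SM_PASSWORD
    #include <security/pam_modules.h>

    /* The six standard service entry points. */
    int pam_sm_authenticate(pam_handle_t *pamh, int flags, int argc, const char **argv)
    { (void)pamh; (void)flags; (void)argc; (void)argv; return PAM_AUTH_ERR; }

    int pam_sm_setcred(pam_handle_t *pamh, int flags, int argc, const char **argv)
    { (void)pamh; (void)flags; (void)argc; (void)argv; return PAM_CRED_ERR; }

    int pam_sm_acct_mgmt(pam_handle_t *pamh, int flags, int argc, const char **argv)
    { (void)pamh; (void)flags; (void)argc; (void)argv; return PAM_AUTH_ERR; }

    int pam_sm_open_session(pam_handle_t *pamh, int flags, int argc, const char **argv)
    { (void)pamh; (void)flags; (void)argc; (void)argv; return PAM_SESSION_ERR; }

    int pam_sm_close_session(pam_handle_t *pamh, int flags, int argc, const char **argv)
    { (void)pamh; (void)flags; (void)argc; (void)argv; return PAM_SESSION_ERR; }

    int pam_sm_chauthtok(pam_handle_t *pamh, int flags, int argc, const char **argv)
    { (void)pamh; (void)flags; (void)argc; (void)argv; return PAM_AUTHTOK_ERR; }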
Also, getpwnam is a function you may need in whatever it is you are trying to do. You can read about that using 'man getpwnam', but you probably already knew that.
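For example, a minimal lookup (the account name 'appuser' is just a placeholder) looks like this; note that getpwnam consults whatever user databases NSS is configured with, not just /etc/passwd:

    #include <pwd.h>
    #include <stdio.h>

    int main(void)
    {
        struct passwd *pw = getpwnam("appuser"); /* placeholder account name */
        if (pw == NULL) {
            fprintf(stderr, "no such user\n");
            return 1;
        }
        printf("uid=%d home=%s shell=%s\n", (int)pw->pw_uid, pw->pw_dir, pw->pw_shell);
        return 0;
    }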
Is there a Linux function that is equivalent to the InetIsOffline function in Windows (provided by url.dll) that can tell me whether the system is connected to the Internet, or do I have to cook up something myself?
The reason I ask is that I am an early adopter of Lhogho. I found out how to do this in Windows and wanted to develop something to offer the same functionality in Linux.
You can talk to Network Manager over D-Bus to see if anything is connected, but other than that there's no specific way of doing so. And even NM isn't always accurate.
You could also parse some file under /proc/net/, such as /proc/net/if_inet6 or /proc/net/tcp.
But why do you want to do that? If you want to check that some site is accessible, just access it programmatically (e.g. with libcurl).
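A minimal probe with libcurl could look like this (the URL is just an example, and error handling is kept to a bare minimum):

    /* check_online.c - Build: gcc check_online.c -o check_online -lcurl */
    #include <curl/curl.h>
    #include <stdio.h>

    int main(void)
    {
        CURLcode rc;
        CURL *c;
        curl_global_init(CURL_GLOBAL_DEFAULT);
        c = curl_easy_init();
        if (!c)
            return 2;
        curl_easy_setopt(c, CURLOPT_URL, "https://example.com/"); /* the site you actually care about */
        curl_easy_setopt(c, CURLOPT_NOBODY, 1L);  /* HEAD request: headers only */
        curl_easy_setopt(c, CURLOPT_TIMEOUT, 5L); /* don't hang forever */
        rc = curl_easy_perform(c);
        curl_easy_cleanup(c);
        curl_global_cleanup();
        printf("%s\n", rc == CURLE_OK ? "reachable" : curl_easy_strerror(rc));
        return rc == CURLE_OK ? 0 : 1;
    }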
And it does happen that some sites are inaccessible while others still work.
It might mean "do I have a default route?", or at least that would be a reasonable implementation, IMHO. So, just check the routing table (/proc/net/route) for it :).
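As a sketch, that check amounts to looking for an all-zero destination field in /proc/net/route:

    #include <stdio.h>
    #include <string.h>

    /* Returns 1 if /proc/net/route lists a default (all-zero destination) route. */
    int have_default_route(void)
    {
        FILE *f = fopen("/proc/net/route", "r");
        char line[256];
        int found = 0;
        if (!f)
            return 0;
        fgets(line, sizeof line, f); /* skip the header line */
        while (fgets(line, sizeof line, f)) {
            char iface[64], dest[64];
            /* fields: Iface Destination Gateway ... with Destination in hex */
            if (sscanf(line, "%63s %63s", iface, dest) == 2 &&
                strcmp(dest, "00000000") == 0) {
                found = 1;
                break;
            }
        }
        fclose(f);
        return found;
    }

    int main(void)
    {
        printf("default route: %s\n", have_default_route() ? "yes" : "no");
        return 0;
    }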
That of course will not work with IPv6 (you would need to parse ipv6_route), but it's complicated to decide how IPv6 should be treated here. Maybe the Wine source code or the MSDN documentation can shed light on the matter.
If I want to develop a registry-like system for Linux, which Windows Registry design failures should I avoid?
Which features would be absolutely necessary?
What are the main concerns (security, ease-of-configuration, ...)?
I think the Windows Registry was not a bad idea; the implementation just didn't fulfill the promise. A common place for configuration - including, for example, Apache config, database config, or mail server config - wouldn't be a bad idea and might improve maintainability, especially if it had options for (protected) remote access.
I once worked on a kernel-based solution but stopped because others said that registries are useless (because the Windows registry is)... what do you think?
A kernel-based registry? Why? Why? A thousand times, why? Might as well ask for a kernel-based musical postcard or inetd, for all the point there is in putting it in there. If it doesn't need to be in the kernel, it shouldn't be in the kernel. There are many other ways to implement a privileged process that don't require deep hackery like that...
Make sure that applications can change many entries at once in an atomic fashion (a sketch of one way to do this follows the list).
Make sure that there are simple command-line tools to manipulate it.
Make sure that no critical part of the system needs it, so that it's always possible to boot to a point where you can fix things.
Make sure that backup programs back it up correctly!
Don't let chunks of executable data be stored in your registry.
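On the atomicity point, one classic approach (sketched here with hypothetical names, assuming plain files on a POSIX system) is to write the complete new settings to a temporary file and rename(2) it into place, since rename within a single filesystem is atomic:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Write the complete new contents to a temp file, flush it to disk, then
       atomically swap it in. Readers see the old file or the new one, never a mix. */
    int save_settings_atomically(const char *path, const char *contents)
    {
        char tmp[4096];
        snprintf(tmp, sizeof tmp, "%s.tmp.%ld", path, (long)getpid());
        FILE *f = fopen(tmp, "w");
        if (!f)
            return -1;
        if (fputs(contents, f) == EOF || fflush(f) == EOF || fsync(fileno(f)) != 0) {
            fclose(f);
            unlink(tmp);
            return -1;
        }
        fclose(f);
        if (rename(tmp, path) != 0) {
            unlink(tmp);
            return -1;
        }
        return 0;
    }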
If you must have a single repository, at least use a proper database, so you have tools to back it up, restore it, recover it, etc., and so you can interact with it without needing a new set of custom APIs.
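For instance, with SQLite (the schema here is purely illustrative) a group of entries changes together or not at all, and the usual database tooling works on the file:

    /* registry_demo.c - Build: gcc registry_demo.c -o registry_demo -lsqlite3 */
    #include <sqlite3.h>
    #include <stdio.h>

    int main(void)
    {
        sqlite3 *db;
        char *err = NULL;
        if (sqlite3_open("registry.db", &db) != SQLITE_OK)
            return 1;
        sqlite3_exec(db,
            "CREATE TABLE IF NOT EXISTS settings (key TEXT PRIMARY KEY, value TEXT);",
            NULL, NULL, &err);
        /* All three entries change atomically: readers never see a half-update. */
        if (sqlite3_exec(db,
                "BEGIN;"
                "INSERT OR REPLACE INTO settings VALUES ('editor/font', 'monospace');"
                "INSERT OR REPLACE INTO settings VALUES ('editor/size', '12');"
                "INSERT OR REPLACE INTO settings VALUES ('editor/theme', 'dark');"
                "COMMIT;",
                NULL, NULL, &err) != SQLITE_OK) {
            fprintf(stderr, "update failed: %s\n", err);
            sqlite3_free(err);
        }
        sqlite3_close(db);
        return 0;
    }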
The first one that comes to my mind is that you somehow need to avoid orphaned registry entries. At the moment, when you delete a program you also delete the configuration files that live under some directory; once you have a registry system, you need to make sure that when a program is deleted, its configuration in the registry is deleted as well.
IMHO, the main problems with the Windows registry are:
Binary format. This costs you a huge variety of very useful tools. With a binary format, tools like diff, search, version control, etc. have to be specially implemented rather than using the best of breed, which are capable of operating on the common substrate of text. Text also offers the advantage of trivially embedded documentation / comments (also greppable), and easy programmatic creation and parsing by external tools. It's also more flexible - sometimes configuration is better expressed in a full Turing-complete language than shoehorned into a structure of keys and subkeys.
Monolithic. It's a big advantage to have everything for application X contained in one place. Move to a new computer and want to keep your settings for it? Just copy the file. While this is theoretically possible with the registry, so long as everything is under a single key, in practice it's a non-starter. Settings tend to be diffused in various places, and it is generally difficult to find where. This is usually given as a strength of the registry, but "everything in one place" generally devolves to "Everything put somewhere in one huge place".
Too broad. It's easy to think of it as just a place for user settings, but in fact the registry becomes a dumping ground for everything. 90% of what's there is not designed for users to read or modify; it is in fact a database of the serialised form of various structures used by programs that want to persist information. This includes things like the entire COM registration system, installed apps, etc. Now, this is stuff that needs to be stored, but the fact that it's mixed in with user-configurable settings and things you might want to read dramatically lowers its value.
Inspired by a much more specific question on ServerFault.
We all have to trust a huge number of people for the security and integrity of the systems we use every day. Here I'm thinking of all the authors of all the code running on your server or PC, and everyone involved in designing and building the hardware. This is mitigated by reputation and, where source is available, peer review.
Someone else you might have to trust, who is mentioned far less often, is the person who previously had root on a system. Your predecessor as system administrator at work. Or for home users, that nice Linux-savvy friend who configured your system for you. The previous owner of your phone (can you really trust the Factory Reset button?)
You have to trust them because there are so many ways to retain root despite the incoming admin's best efforts, and those are only the ones I could think of in a few minutes. Anyone who has ever had root on a system could have left all kinds of crazy backdoors, and your only real recourse under any Linux-based system I've seen is to reinstall your OS and all code that could ever run with any kind of privilege. Say, mount /home with noexec and reinstall everything else. Even that's not sufficient if any user whose data remains may ever gain privilege or influence a privileged user in sufficient detail (think shell aliases and other malicious configuration). Persistence of privilege is not a new problem.
How would you design a Linux-based system on which the highest level of privileged access can provably be revoked without a total reinstall? Alternatively, what system like that already exists? Alternatively, why is the creation of such a system logically impossible?
When I say Linux-based, I mean something that can run as much software that runs on Linux today as possible, with as few modifications to that software as possible. Physical access has traditionally meant game over because of things like keyloggers which can transmit, but suppose the hardware is sufficiently inspectable / tamper-evident to make ongoing access by that route sufficiently difficult, just because I (and the users of SO?) find the software aspects of this problem more interesting. :-) You might also assume the existence of a BIOS that can be provably reflashed known-good, or which can't be flashed at all.
I'm aware of the very basics of SELinux, and I don't think it's much help here, but I've never actually used it: feel free to explain how I'm wrong.
First and foremost, you did say design :) My answer will contain references to stuff that you can use right now, but some of it is not yet stable enough for production. My answer will also contain allusions to stuff that would need to be written.
You cannot accomplish this unless you (as user9876 pointed out) fully and completely trust the individual or company that did the initial installation. If you can't trust them, your problem is infinitely recursive.
I was very active several years ago in a new file system called ext3cow, a copy-on-write version of ext3. Snapshots were cheap and 100% immutable; the port from Linux 2.4 to 2.6 broke, and then abandoned, the ability to modify or delete files in the past.
Pound for pound, it was as efficient as ext3. Sure, that's nothing to write home about, but ext3 was (and to a large part still is) the production-standard FS.
Using that type of file system, assuming a snapshot was made of the pristine installation after all services had been installed and configured, it would be quite easy to diff an entire volume to see what changed and when.
At this point, after going through the diff, you can decide that nothing is interesting and just change the root password, or you can go inspect things that seem a little odd.
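Without such a file system, a crude first pass could walk the tree and flag anything with a modification time newer than the pristine image - only a sketch, and only suggestive, since mtimes are trivially forged by an attacker (which is exactly why the immutable snapshots matter):

    /* walk_changes.c - Build: gcc walk_changes.c -o walk_changes */
    #define _XOPEN_SOURCE 500
    #include <ftw.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>
    #include <time.h>

    static time_t pristine; /* assumed: timestamp of the pristine installation */

    static int flag_if_newer(const char *path, const struct stat *st,
                             int type, struct FTW *ftw)
    {
        (void)type; (void)ftw;
        if (st->st_mtime > pristine)
            printf("changed since snapshot: %s\n", path);
        return 0; /* keep walking */
    }

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <root> <pristine-epoch>\n", argv[0]);
            return 1;
        }
        pristine = (time_t)strtol(argv[2], NULL, 10);
        /* FTW_PHYS: don't follow symlinks; FTW_MOUNT: stay on one filesystem */
        return nftw(argv[1], flag_if_newer, 32, FTW_PHYS | FTW_MOUNT) == 0 ? 0 : 1;
    }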
Now, for the stuff that has to be written if something interesting is found:
Something that you can pipe the diff through that investigates each file. What you're going to see is a list of revisions per file, which would then have to be compared recursively: i.e., present against former-present, former-present against past1, past1 against past2, etc., until you reach the original file or the point where it no longer exists. Doing this by hand would seriously suck. Also, you need to identify files that were never versioned to begin with.
Something to inspect your currently running kernel. If someone has tainted the VFS, none of this is going to work; CoW file systems use temporal inodes to access files in the past. I know a lot of enterprise customers who modify the kernel quite a bit, up to and including modules, the VMM, and the VFS. This may not be such an easy task - comparing against 'pristine' may not be tenable, since the old admin may have made legitimate modifications to the kernel since it was installed.
Databases are a special headache, since they typically change every second or more often, including the user table. That's going to need to be checked manually, unless you come up with something that can verify that nothing is strange; such a tool would be very specific to your setup. Classic UNIX 'root' is not your only concern here.
Now, consider the other computers on the network. How many of them are running an OS that is known to be easily exploited and bot-infested? Even if your server is clean, what if this guy joins #foo on IRC and starts an attack on your servers via your own LAN? Most people will click links that a co-worker sends, especially if it's a juicy blog entry about the company... social engineering is very easy if you're doing it from the inside.
In short, what you suggest is tenable; however, I'm dubious that most companies could enforce the best practices needed for it to work when needed. If the end result is that you find a BOFH in your workforce and need to can him, you had better have contained him throughout his employment.
I'll update this answer more as I continue to think about it. It's a very interesting topic. What I've posted so far are my own collected thoughts on the matter.
Edit:
Yes, I know about virtual machines and checkpointing, a solution assuming that brings on a whole new level of recursion. Did the (now departed) admin have direct root access to the privileged domain or storage server? Probably, yes, which is why I'm not considering it for the purposes of this question.
Look at Trusted Computing. The general idea is that the BIOS loads the bootloader, then hashes it and sends that hash to a special chip. The bootloader then hashes the OS kernel, which in turn hashes all the kernel-mode drivers. You can then ask the chip whether all the hashes were as expected.
Assuming you trust the person who originally installed and configured the system, this would enable you to prove that your OS hasn't had a rootkit installed by any of the later sysadmins. You could then manually run a hash over all the files on the system (since there is no rootkit the values will be accurate) and compare these against a list provided by the original installer. Any changed files will have to be checked carefully (e.g. /etc/passwd will have changed due to new users being legitimately added).
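The per-file hashing itself is straightforward; here is a sketch using OpenSSL's EVP interface (the manifest format and what you compare against are up to you):

    /* hash_file.c - Build: gcc hash_file.c -o hash_file -lcrypto */
    #include <openssl/evp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        unsigned char md[EVP_MAX_MD_SIZE], buf[8192];
        unsigned int mdlen, i;
        size_t n;
        FILE *f;
        EVP_MD_CTX *ctx;
        if (argc != 2)
            return 1;
        f = fopen(argv[1], "rb");
        ctx = EVP_MD_CTX_new(); /* OpenSSL 1.1+ */
        if (!f || !ctx)
            return 1;
        EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);
        while ((n = fread(buf, 1, sizeof buf, f)) > 0)
            EVP_DigestUpdate(ctx, buf, n); /* stream the file through SHA-256 */
        EVP_DigestFinal_ex(ctx, md, &mdlen);
        for (i = 0; i < mdlen; i++)
            printf("%02x", md[i]);
        printf("  %s\n", argv[1]);
        EVP_MD_CTX_free(ctx);
        fclose(f);
        return 0;
    }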
I have no idea how you'd handle patching such a system without breaking the chain of trust.
Also, note that your old sysadmin should be assumed to know any password typed into that system by any user, and to have unencrypted copies of any private key used on that system by any user. So it's time to change all your passwords.
I am aware of the Hardened Linux from Scratch project, which provides step-by-step instructions for building your own customized and hardened Linux system entirely from source. I would like to know: what is the equivalent in BSD?
As Richard said, OpenBSD is definitely worth a go; it is my #1 choice for everything dedicated to firewalls and gateways. For other services I tend to stick to FreeBSD, although there is no obvious reason for it - just a personal preference.
But I would like to point out that if you want more secure hosting of a service, the 'from scratch' concept can be done much better using jails. In essence you create a limited FreeBSD environment on a full FreeBSD install. Into that limited environment you copy/link only those binaries and files that the service requires to run.
Because the hosted service has no access to any other files/binaries, none of the potential security flaws in those things are open to exploitation. If by chance your application gets 'rooted', the attacker will not get beyond the boundaries of the jail.
Think of it as a sandbox on steroids with negligible performance penalties.
OpenBSD is hardened "by default" from the installation. Only the admin opens it up... component by component.
[UPDATE] While I have not read the document on hardening Linux, some of the same things might apply. For example, both use OpenSSH, so the strategies would be the same; where there is module overlap, the same approach would apply.
You don't really do BSD 'from scratch'. All of the major projects come with a complete system in a single source repository, so you're not grabbing a kernel from here, binutils and a compiler from over there, C libraries and standard utilities from somewhere else, and X from yet another place.
It is generally easier to get all the source for a BSD and rebuild the entire system than it is for your average Linux distro, but that's not really customizing anything.
You could try to do something nuts, like perhaps trying to get the OpenBSD userland to run on a NetBSD kernel with FreeBSD ports, but you'd be on your own and it certainly wouldn't be 'hardened'.
HardenedBSD is a fork of the FreeBSD project with the aim of implementing PIE, RELRO, SAFESTACK, and CFIHARDEN. Some of those goals have been reached; others are very much works in progress. I wouldn't consider it "ready for production" yet, but it is usable as a desktop (it also depends on your production environment's requirements).
Repo: https://github.com/HardenedBSD
Everything, including "make buildworld/buildkernel", is the same as on FreeBSD, and the Handbook does a good job of explaining this. You'll have a bit of reading to do, though, even coming from Linux-land. Building your own ports is an entire topic in itself.
Re jails, that statement is not entirely correct. While jails certainly add an important security layer, Unix systems (I don't know about Linux) [quoting here] "lack kernel exploit mitigations. If an attacker gains access to a jail, it's not too much work to pivot to other jails or escalate privileges via a kernel exploit." Don't misunderstand me: I place almost every service in a jail as much as possible.
As to the "hardened by default" comment: it's all in the sysctl settings, which can be tweaked on every *BSD flavor, but security measures are pretty much useless if the sysadmin does not take the time to read the docs.
If you are interested, your homework: https://www.freebsd.org/doc/handbook/