I am building an LSM module that must work alongside SELinux. It must be registered before SELinux to do the task it needs to do.
Does SELinux fail registration if it is not the first LSM module to register?
EDIT
I realise this is five years too late, and the LSM framework is a moving target. This may not have been correct at the time you posted, and it may become incorrect in the future.
No. SELinux loads last in the module order.
SELinux will stack alongside any other non-legacy-major LSM (that is, anything other than SMACK, SELinux, or AppArmor).
Currently, Yama is compiled into the kernel by default; it restricts the scope of ptrace operations. Yama is one of the first LSMs to load, yet it currently works well alongside SELinux.
The LSM developers are working on a way to remove the exclusivity of certain LSMs and are close to making AppArmor ready for this change.
The end goal is to allow an unlimited number of LSMs to be compiled into the kernel, although some work remains before this can be achieved.
I want to build a small web application in Rust which should be able to read and write files on a user's behalf. The user should authenticate with their UNIX credentials and then be able to read/write only the files they have access to.
My first idea, which would also seem the most secure to me, would be to switch the user-context of an application thread and do all the read/write-stuff there. Is this possible?
If this is possible, what would the performance look like? I would assume spawning an operating system thread every time a request comes in could have a very high overhead. Is there a better way to do this?
I really wouldn't like to run my entire application as root and check the permissions manually.
On GNU/Linux, it is not possible to switch UID and GID just for a single thread of a process. The Linux kernel maintains per-thread credentials, but POSIX requires a single set of credentials per process: POSIX setuid must change the UID of all threads or none. glibc goes to great lengths to emulate the POSIX behavior, although that is quite difficult.
You would have to create a completely new process for each request, not just a new thread. Process creation is quite cheap on Linux, but it could still be a performance problem. You could keep a pool of processes around to avoid the overhead of repeated process creation. On the other hand, many years ago, lots of web sites (including some fairly large ones) used CGI to generate web pages, and you can get relatively far with a simple design.
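For illustration, here is a minimal C sketch of the process-per-request idea. It assumes you have already authenticated the user and resolved their uid/gid (e.g. via getpwnam); handle_request() and the hard-coded IDs are placeholders for your own code.

```c
/* Minimal sketch: serve each request from a short-lived child process that
 * drops to the authenticated user's uid/gid before touching any files. */
#define _GNU_SOURCE
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <grp.h>
#include <stdio.h>

static void handle_request(void) {
    /* read/write files here as the now-unprivileged user */
}

static int serve_as_user(uid_t uid, gid_t gid) {
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        /* child: clear supplementary groups, drop group, then drop user */
        if (setgroups(0, NULL) != 0 || setgid(gid) != 0 || setuid(uid) != 0) {
            perror("drop privileges");
            _exit(1);
        }
        handle_request();
        _exit(0);
    }
    int status;
    waitpid(pid, &status, 0);              /* parent: wait for the worker */
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}

int main(void) {
    /* hypothetical uid/gid; in practice these come from the login lookup */
    return serve_as_user(1000, 1000);
}
```

Note that the parent has to keep running as root (or with CAP_SETUID) for the setgid/setuid calls in the child to succeed.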
I think @Florian got this backwards in his original answer. man 2 setuid says:
C library/kernel differences
At the kernel level, user IDs and group IDs are a per-thread attribute. However, POSIX requires that all threads in a process share the same credentials. The NPTL threading implementation handles the POSIX requirements by providing wrapper functions for the various system calls that change process UIDs and GIDs. These wrapper functions (including the one for setuid()) employ a signal-based technique to ensure that when one thread changes credentials, all of the other threads in the process also change their credentials. For details, see nptl(7).
Since libc does this signal dance to apply the change to the whole process, you will have to make direct system calls to bypass it.
Note that this is Linux-specific. Most other Unix variants seem to follow POSIX at the kernel level instead of emulating it in libc.
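To make the "direct system calls" part concrete, here is a minimal, Linux-specific C sketch. It invokes the raw setresgid/setresuid syscalls via syscall(2), which bypasses the glibc/NPTL wrappers and therefore changes the credentials of the calling thread only; the uid/gid values are placeholders.

```c
/* Minimal sketch: change credentials for the calling thread only by using
 * the raw Linux syscalls instead of the glibc wrappers (Linux-specific).
 * On 32-bit x86 you would need the SYS_setresuid32/SYS_setresgid32 variants. */
#define _GNU_SOURCE
#include <sys/syscall.h>
#include <unistd.h>
#include <stdio.h>

static int set_thread_credentials(uid_t uid, gid_t gid) {
    /* drop the group first, then the user, and check both results */
    if (syscall(SYS_setresgid, gid, gid, gid) != 0)
        return -1;
    if (syscall(SYS_setresuid, uid, uid, uid) != 0)
        return -1;
    return 0;
}

int main(void) {
    if (set_thread_credentials(1000, 1000) != 0) {   /* hypothetical ids */
        perror("set_thread_credentials");
        return 1;
    }
    printf("this thread's euid is now %d\n", (int)geteuid());
    return 0;
}
```

Keep in mind that a thread whose credentials were changed this way no longer matches the rest of the process, which can confuse code (and libraries) that assume POSIX semantics.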
Specifically, if CFEngine is used to install the most recent version of an onboard device's firmware and run some tests to see whether a reboot is required, and the results indicate that the machine needs a restart, is this something that can be done from within CFEngine, or should that practice be avoided? If so, why? My experience with Puppet tells me that stopping a run to reboot could be a Very Bad Thing in certain cases, so I'm wondering whether the same limitations apply to CFEngine as well.
Stopping a CFEngine run is not that bad; it's designed to be convergent and modifications are always atomic. If it stops, the next runs will behave correctly.
However, writing promises that restart a device could lead to bad surprises (for example, a flaw in the logic of the promise that results in never-ending restarts), so I suggest avoiding it if possible; if it is necessary (say, when handling thousands of devices), it should be thoroughly tested.
Like Nicolas said, there is no harm in stopping a CFEngine run. A CFEngine policy will continue converging the next time it runs. If you want to ensure that everything is properly finished before the reboot, you could just set a class that indicates a reboot is needed, and do the actual reboot in a separate bundle that is called near the end of your bundlesequence (I'm assuming CFEngine 3).
And indeed, be VERY mindful and test VERY carefully the conditions under which the reboot will take place!
My question is what the title says: can I run a remote thread without being blocked by some antivirus applications?
ReadProcessMemory is slow, so I need to inject my own code into the process and read its memory from inside.
Whether or not anti-virus software is running should not affect this. You'll need elevated rights, though, but ReadProcessMemory requires that anyway.
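For reference, a minimal sketch of the ReadProcessMemory path under discussion; the target pid and address below are made-up placeholders, and the handle needs at least PROCESS_VM_READ access (PROCESS_QUERY_INFORMATION is commonly requested as well).

```c
/* Minimal sketch: read a block of another process's memory with
 * ReadProcessMemory. The pid and address are hypothetical values. */
#include <windows.h>
#include <stdio.h>

int main(void) {
    DWORD pid = 1234;                                  /* hypothetical pid */
    HANDLE h = OpenProcess(PROCESS_VM_READ | PROCESS_QUERY_INFORMATION,
                           FALSE, pid);
    if (h == NULL) {
        fprintf(stderr, "OpenProcess failed: %lu\n", GetLastError());
        return 1;
    }

    char buf[256];
    SIZE_T bytes_read = 0;
    LPCVOID addr = (LPCVOID)0x400000;                  /* hypothetical address */
    if (!ReadProcessMemory(h, addr, buf, sizeof(buf), &bytes_read))
        fprintf(stderr, "ReadProcessMemory failed: %lu\n", GetLastError());
    else
        printf("read %lu bytes\n", (unsigned long)bytes_read);

    CloseHandle(h);
    return 0;
}
```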
One way is to ask that process somehow to load your code. If you have access to its source code, you can add an IPC interface for that. If that program has a plugin/addon interface, consider writing a plugin which will contain such an interface.
On Windows, you can try the SetWindowsHookEx API. It is a more common operation than injecting a thread, so AVs may ignore it this time.
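The installing side of that approach looks roughly like the sketch below. The hook procedure itself has to live in a DLL so Windows can map it into the target process; "hook.dll", "GetMsgProc", and the thread id here are hypothetical names and values.

```c
/* Minimal sketch: install a WH_GETMESSAGE hook on a specific thread.
 * The hook procedure must be exported from a DLL; names are hypothetical. */
#include <windows.h>
#include <stdio.h>

int main(void) {
    HMODULE dll = LoadLibraryW(L"hook.dll");
    if (dll == NULL) return 1;

    HOOKPROC proc = (HOOKPROC)GetProcAddress(dll, "GetMsgProc");
    if (proc == NULL) return 1;

    DWORD thread_id = 5678;        /* hypothetical thread id in the target */
    HHOOK hook = SetWindowsHookExW(WH_GETMESSAGE, proc, dll, thread_id);
    if (hook == NULL) {
        fprintf(stderr, "SetWindowsHookEx failed: %lu\n", GetLastError());
        return 1;
    }

    getchar();                     /* keep the hook installed until Enter */
    UnhookWindowsHookEx(hook);
    return 0;
}
```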
Or you can ask users to add the program to AV's exclusion list.
Otherwise, there is no way to inject into a foreign process without looking suspicious. You're trying to do what most malware wants to do, while remaining undetected; how do you think any good AV could allow that?
If you update, what kinds of problems can happen before you reboot? This situation comes up especially frequently if you use unattended-upgrade to apply security patches.
Shared objects get replaced and so it is possible for programs to get out of sync with each other.
How long can you go safely before rebooting?
Clarification:
What I meant by "can programs get out of sync with one another" is that one binary has the earlier version of the shared object and a newly launched instance has the newer version of the shared object. It seems to me that if those versions are incompatible that the two binaries may not interoperate properly.
And does this happen in practice very often?
More clarification:
What I'm getting at is more along the lines that installers typically start/stop services that depend on a shared library so that they pick up the new version of an API. If they catch all the dependencies, then you are probably OK. But do people often see installers missing dependencies?
If a service is written to support all previous API versions compatibly, then this will not be an issue. But I suspect that often it is not done.
If there are kernel updates, especially if there are incompatible ABI changes, I don't see how you can get all the dependencies. I was looking for experience with whether and how things "tip over" and whether people have observed this in practice, either for kernel updates or for library/package updates.
Yes, this probably should have been put into ServerFault...
There are two versions of an executable file at any moment in time: the one in memory and the one on disk.
When you update, the one on disk gets replaced; the one in memory is the old one. If it's a shared object, it stays there until every application that uses it quits; if it's the kernel, it stays there until you reboot.
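One way to see this in practice: a process that still maps a replaced shared object shows up in /proc/<pid>/maps with a "(deleted)" suffix (roughly what lsof reports as DEL). A small C sketch that scans for such processes:

```c
/* Minimal sketch: list processes that still map a deleted (i.e. replaced)
 * shared object, by scanning /proc/<pid>/maps for "(deleted)" entries. */
#include <dirent.h>
#include <stdio.h>
#include <string.h>
#include <ctype.h>

int main(void) {
    DIR *proc = opendir("/proc");
    if (proc == NULL) return 1;

    struct dirent *e;
    while ((e = readdir(proc)) != NULL) {
        if (!isdigit((unsigned char)e->d_name[0]))
            continue;                      /* only numeric pid directories */

        char path[300], line[4096];
        snprintf(path, sizeof(path), "/proc/%s/maps", e->d_name);
        FILE *maps = fopen(path, "r");
        if (maps == NULL)
            continue;

        while (fgets(line, sizeof(line), maps) != NULL) {
            if (strstr(line, ".so") && strstr(line, "(deleted)")) {
                printf("pid %s still maps a deleted library: %s",
                       e->d_name, line);
                break;                     /* one report per process */
            }
        }
        fclose(maps);
    }
    closedir(proc);
    return 0;
}
```

Restarting the processes this reports (or rebooting) is what actually brings the new copy into use.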
Bluntly put, if it's a security vulnerability you're updating for, the vulnerability stays until you load the (hopefully) patched version. So if it's a kernel, you aren't safe until you reboot. If it's a shared object, a reboot guarantees safety.
Basically, I'd say it depends on the category of the vulnerability. If it's security, restart whatever is affected. Otherwise, well, unless the bug is adversely affecting you, I wouldn't worry. If it's the kernel, I always reboot.
We have a number of embedded systems requiring r/w access to the filesystem which resides on flash storage with block device emulation. Our oldest platform runs on compact flash and these systems have been in use for over 3 years without a single fsck being run during bootup and so far we have no failures attributed to the filesystem or CF.
On our newest platform we used USB-flash for the initial production and are now migrating to Disk-on-Module for r/w storage. A while back we had some issues with the filesystem on a lot of the devices running on USB-storage so I enabled e2fsck in order to see if that would help. As it turned out we had received a shipment of bad flash memories so once those were replaced the problem went away. I have since disabled e2fsck since we had no indication that it made the system any more reliable and historically we have been fine without it.
Now that we have started putting in Disk-on-Module units I've started seeing filesystem errors again. Suddenly the system is unable to read/write certain files and if I try to access the file from the emergency console I just get "Input/output error". I enabled e2fsck again and all the files were corrected.
O'Reilly's "Building Embedded Linux Systems" recommends running e2fsck on ext2 filesystems but does not mention it in relation to ext3 so I'm a bit confused to whether I should enable it or not.
What are your takes on running fsck on an embedded system? We are considering putting binaries on a r/o partition and only the files which have to be modified on a r/w partition on the same flash device, so that fsck can never accidentally delete important system binaries. Does anyone have any experience with that kind of setup (good/bad)?
I think the answer to your question relates more to what types of coherency requirements your application has relative to its data. That is, what has to be guaranteed if power is lost without a formal shutdown of the system? In general, none of the desktop-operating-system-style filesystems handle this all that well; the application has to explicitly close/sync files and flush the disk caches at key transaction points to ensure that what you need to maintain is in fact committed to the media.
Running fsck fixes the filesystem, but without the above care there is no guarantee that the changes you made will actually be kept; i.e., it is not exactly deterministic what you'll lose as a result of a power failure.
I agree that putting your binaries or other important read-only data on a separate read-only partition does help ensure that they can't erroneously get tossed due to an fsck correction to filesystem structures. As a minimum, putting them in a different sub-directory off the root than where the R/W data is held will help. But in both cases, if you support software updates, you still need a scheme to deal with writing the "read-only" areas anyway.
In our application, we actually maintain a pair of directories for things like binaries and the system is setup to boot from either one of the two areas. During software updates, we update the first directory, sync everything to the media and verify the MD5 checksums on disk before moving onto the second copy's update. During boot, they are only used if the MD5 checksum is good. This ensures that you are booting a coherent image always.
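As a rough illustration of that kind of boot-time check (not our exact code), something along the following lines picks whichever image verifies cleanly. The directory names and the manifest.md5 file written at update time are hypothetical, and it simply shells out to md5sum -c for the verification.

```c
/* Minimal sketch: choose the first of two image directories whose contents
 * match the checksum manifest recorded at update time (hypothetical paths). */
#include <stdio.h>
#include <stdlib.h>

static int image_is_good(const char *dir) {
    char cmd[256];
    snprintf(cmd, sizeof(cmd),
             "cd %s && md5sum --quiet -c manifest.md5 >/dev/null 2>&1", dir);
    return system(cmd) == 0;      /* non-zero exit means a checksum mismatch */
}

int main(void) {
    const char *boot_dir = image_is_good("/opt/app/image0") ? "/opt/app/image0"
                         : image_is_good("/opt/app/image1") ? "/opt/app/image1"
                         : NULL;
    if (boot_dir == NULL) {
        fprintf(stderr, "no coherent image found\n");
        return 1;
    }
    printf("booting from %s\n", boot_dir);
    return 0;
}
```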
Dave,
I always recommend running the fsck after a number of reboots, but not every time.
The reason is that ext3 is journaled. So unless you enable writeback mode (which journals only metadata), most of the time your metadata/filesystem table should be in sync with your data (files).
But like Jeff mentioned, it doesn't guarantee anything about the layer above the filesystem. That means you can still get "corrupted" files, because some of the records probably didn't get written to the filesystem.
I'm not sure what embedded device you're running on, but how often does it get rebooted?
If it's controlled reboot, you can always do "sync;sync;sync" before restart.
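If the restart is triggered from your own code rather than a shell script, the equivalent of syncing before the reboot is roughly this minimal sketch (Linux-specific, requires root/CAP_SYS_BOOT; a real system would stop its services first):

```c
/* Minimal sketch: flush filesystem buffers, then ask the kernel to reboot. */
#include <unistd.h>
#include <sys/reboot.h>
#include <stdio.h>

int main(void) {
    sync();                            /* commit dirty buffers to the flash */

    if (reboot(RB_AUTOBOOT) != 0) {    /* only returns here on failure */
        perror("reboot");
        return 1;
    }
    return 0;
}
```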
I've been using CF myself for years, and only on very rare occasions have I gotten filesystem errors.
fsck does help in that case.
As for separating your partitions, I doubt the advantage of it. For every data file on the filesystem, there is metadata associated with it. Most of the time, if you don't change the files (e.g. binary/system files), that metadata shouldn't change. Unless you have faulty hardware, like cross-talking writes and reads, those read-only files should be safe.
Most problems arise when you have something writable, and regardless of where you put it, it can cause problems if the application doesn't handle it well.
Hope that helps.