I am working on an embedded Linux platform. On our platform there is only the root user. Now we want to bring in security options like
1. Low Privileged user.
2. Allowing only executables from a particular location (read permission only) to be run.
3. Use Linux Containers
We have managed to add a low-privileged user using the /etc/passwd file, but I have no idea how to do the rest. Are there any better options to implement security on a Linux system? Any documentation or links are much appreciated.
Option two is achieved with the noexec mount flag. The slight challenge is figuring out exactly what to mount where; you'd want to mount / as noexec to get safety by default, but you need /sbin/mount to be executable. You can probably make / read-only and mount all the writable filesystems as noexec instead.
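As an illustration only, here is a minimal /etc/fstab sketch; the device names, mount points and sizes are assumptions for the example, not taken from your platform:

# root filesystem read-only: nothing on it can be modified at runtime
/dev/mmcblk0p2  /      ext4   ro,defaults                      0 1
# writable data partition, but nothing on it can be executed
/dev/mmcblk0p3  /data  ext4   rw,noexec,nosuid,nodev           0 2
# scratch space in RAM, also non-executable
tmpfs           /tmp   tmpfs  rw,noexec,nosuid,nodev,size=32M  0 0

Keep in mind that noexec only blocks direct execution of binaries on that filesystem; an attacker can still feed a script to an interpreter (sh /data/script.sh), so it raises the bar rather than closing the door completely.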
I've searched through AppArmor's wiki as well as tried Internet searches for "apparmor mount namespace" (or similar). However, I always draw a blank as to how AppArmor deals with them, which is especially odd considering that OCI containers could not exist without mount namespaces. Does AppArmor take mount namespaces into account at all, or does it simply check the filename passed to some syscall?
If a process inside a container switches mount namespaces, does AppArmor take notice at all, or is it simply mount-namespace-agnostic in that it doesn't care? For instance, if a container process switches into the initial mount namespace, can I write AppArmor MAC rules to prevent such a process from accessing sensitive host files, while the same files inside its own container are allowed for access?
can I write AppArmor MAC rules to prevent such a process from accessing sensitive host files.
Just don't give the container access to sensitive parts of the host filesystem; that is, don't mount them into the container. If you do, it is out of AppArmor's scope to take care of that.
I would say that AppArmor is partially aware of Linux kernel mount namespaces.
I think the attach_disconnected flag in AppArmor is an indication that AppArmor knows whether you are in the main OS mount namespace or a separate mount namespace.
The attach_disconnected flag is briefly described at this link (despite the warning at the top of the page that it is still a draft):
https://gitlab.com/apparmor/apparmor/-/wikis/AppArmor_Core_Policy_Reference
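As a purely illustrative sketch (the profile name and paths below are made up, not taken from any real policy), the flag appears in the profile header, alongside ordinary path-based rules:

# hypothetical profile, not a complete or tested policy
profile sandboxed-service flags=(attach_disconnected) {
  #include <abstractions/base>

  # the service's own data inside its mount namespace
  /var/lib/sandboxed-service/** rw,

  # path-based denials: AppArmor mediates the resolved pathname,
  # it does not reason about "host vs. container" by itself
  deny /etc/shadow rwklx,
  deny /proc/kcore rwklx,
}

As I understand it, attach_disconnected only governs how objects whose paths cannot be resolved within the task's current namespace ("disconnected" objects) get attached to the profile's root for mediation, which is the closest the documentation comes to addressing mount namespaces directly.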
The following reference, from an Ubuntu AppArmor mailing-list discussion, provides useful and related information, although it does not directly answer your question.
https://lists.ubuntu.com/archives/apparmor/2018-July/011722.html
The following references, from a USENIX presentation, describe a proposal to add security namespaces to the Linux kernel for use by frameworks such as AppArmor. This does not directly show how or whether AppArmor currently uses kernel mount namespaces for decision making, but it is related enough to be of interest.
https://www.usenix.org/sites/default/files/conference/protected-files/security18_slides_sun.pdf
https://www.usenix.org/conference/usenixsecurity18/presentation/sun
I don't know if my response here is complete enough to be considered a full answer to your questions; however, I don't have enough reputation points to put this into a comment. I also found it difficult to tell whether the AppArmor documentation meant "AppArmor policy namespace" or "Linux kernel mount namespace" when the word "namespace" appeared on its own.
I'm developing a generic honeypot for TCP services as part of my BA thesis.
I'm currently using chroot, Linux namespaces, seccomp (secure computing) and capabilities to provide some sort of sandbox.
My question is: Are there any points I have to be aware of? Since I have to mount /proc in the sandbox, I'm curious if it will affect the overall security of the host system.
(User namespaces are not an option btw.)
/* EDIT */
To be more clear: I'm using capabilities(7) and libseccomp to restrict access to features such as syscalls for both root and non-root users.
But what about files in /proc e.g. /proc/sys/* ? Should I blacklist files/directories with an empty bind-mount like firejail does?
As commented by Yann Droneaud, reading the source of systemd-nspawn helped a lot.
I found the following /proc entries, which should be bind-mounted read-only or made inaccessible (a shell sketch of applying this by hand follows the list):
/* Make these files inaccessible to container payloads: they potentially leak information about kernel
* internals or the host's execution environment to the container */
PROC_INACCESSIBLE("/proc/kallsyms"),
PROC_INACCESSIBLE("/proc/kcore"),
PROC_INACCESSIBLE("/proc/keys"),
PROC_INACCESSIBLE("/proc/sysrq-trigger"),
PROC_INACCESSIBLE("/proc/timer_list"),
/* Make these directories read-only to container payloads: they show hardware information, and in some
* cases contain tunables the container really shouldn't have access to. */
PROC_READ_ONLY("/proc/acpi"),
PROC_READ_ONLY("/proc/apm"),
PROC_READ_ONLY("/proc/asound"),
PROC_READ_ONLY("/proc/bus"),
PROC_READ_ONLY("/proc/fs"),
PROC_READ_ONLY("/proc/irq"),
PROC_READ_ONLY("/proc/scsi"),
We are a hardware vendor and want to provide support for Linux.
This means we want to provide a (user-space) shared library that can be used by our customers' applications without struggling with the low-level protocol.
Our hardware is accessed via USB/HID, and thus our library needs access to /dev/hidrawX.
But to get access to this device (or other kinds of hardware devices) it seems that we need to modify the system by adding permissions via udev (see Get access to USB device on Linux (libusb-1.0)?).
Is this really best practice? If so, where should I do this? In the .deb/.rpm/... installer of the customer's application? What about Flatpak or similar concepts?
Udev is the standard and best practice. Every Linux distribution has udev rules files; on Ubuntu, for example, they are found in the /etc/udev/rules.d folder. You need to create a udev rule file and add a MODE="0666" rule to it (or, better, a more restrictive mode combined with a group). Take a look at the example below for how such a rule can be written.
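A minimal sketch of such a rule file; the file name, vendor/product IDs and group are placeholders, not your actual values:

# /etc/udev/rules.d/99-myvendor-hidraw.rules
# match the hidraw node of the device and make it accessible to the plugdev group
SUBSYSTEM=="hidraw", ATTRS{idVendor}=="1234", ATTRS{idProduct}=="5678", MODE="0660", GROUP="plugdev"

Reload the rules with udevadm control --reload-rules and re-plug the device (or run udevadm trigger). A MODE/GROUP combination like this is usually preferable to a world-writable 0666 device node.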
First off, my intention is to create a portable, bootable USB drive containing a GNU/Linux distribution. Specifically, I want to use Arch Linux with a squashfs read-only root filesystem.
The squashfs image is based on a snapshot of a working VM. The base system with its services like ssh works out of the box as expected. But when trying to launch GNOME via systemd (systemctl start gdm), all I see is a black screen (supposedly the X server started but gdm fails to load). I already tried to figure out what's happening, but failed to identify the exact problem.
Home directories are writeable
/tmp is writeable
/var/log is writeable
/var/run & /run are writeable anyway
/var/log/gdm gets created but stays empty.
Which components may require write access to any other files? Is there any documentation on this? What would make sense to strace, or which similar tools should I try?
My desire is to know the root of the problem and fix it, instead of using workarounds like unionfs. Thanks for any help or hints!
Although it's not relevant, for those who might wonder why I want to do this, here are some points to consider:
Stability - as you cannot modify system files, you cannot mess up the system (unless you write garbage directly to the drive, of course)
Storage - as files are compressed, more data fits on the drive
Performance - as I/O on most USB drives is slow, compression gives you higher effective I/O speed
Portability - no special treatment for read-only storage; you could copy it to a CD or any other read-only medium and it would still work the same way as it does on a writable disk
Update
I figured out that the problem was actually at /var/lib/gdm. GDM tried to access files in there and (silently) failed to do so, giving me a black screen.
journalctl was the debugging command I was missing in the first place.
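For anyone hitting the same symptom, a minimal sketch of one way to make /var/lib/gdm writable on an otherwise read-only root (tmpfs-backed, so GDM's state is lost on reboot; the size, mode and ownership values are assumptions, check what your distribution uses):

# /etc/fstab
tmpfs  /var/lib/gdm  tmpfs  rw,nosuid,nodev,mode=0711,size=16M  0 0

# /etc/tmpfiles.d/gdm.conf - re-create the directory with the right owner at boot
d /var/lib/gdm 0711 gdm gdm -

journalctl -b -u gdm shows the unit's messages for the current boot and is a good place to look for such permission errors.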
I need to remove a Linux file; this file may be a virus or part of a rootkit.
I understand Linux in general.
I already tried rm -Rf and some other general Linux commands, but I get 'Operation not permitted'.
I cannot delete the file either when I actually log in to the OS or when I use a live Ubuntu and mount the partition containing the /etc/ folder.
Ubuntu auto-mounts the partition and I CAN edit any other file, except this one.
Googling the file's Linux permissions turned up nothing.
Please help.
That most likely has nothing to do with the file itself, but with the permissions you mounted the file system with. Typically, live systems mount external file systems read-only; you have to re-mount them read-write manually. The path suggests that this file is part of a partition used as the root partition (/) of another system, which is most likely the system you want to clean.
Consult the man pages for details about mounting.
BTW: such file permissions may well exist in a "normal" system setup; that depends on the security level chosen. I do not know the file mentioned here. I assume you know what it is for? At least you should be able to ask your software management system what package it belongs to. If it does not belong to any registered package, then indeed you should be concerned about it.
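A quick sketch of that check (using the same placeholder path as in the commands further down):

# Debian/Ubuntu
dpkg -S /<path>/sfewfesfs
# RPM-based systems
rpm -qf /<path>/sfewfesfs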
If that file really shall be deleted and you have mounted the file system with the correct (write) permissions, then there is always a "last resort" for such cases:
sudo chattr -i /<path>/sfewfesfs*
sudo rm -rf /<path>/sfewfesfs*
That should do the trick.
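The chattr step is needed when the file carries the immutable attribute, which makes even root's rm fail with 'Operation not permitted'. You can check for it beforehand (same placeholder path):

lsattr /<path>/sfewfesfs*

An 'i' in the attribute column means the file is immutable until the flag is cleared.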
However, a general warning: if you really have a file in that file system that does not belong there according to your software management system, then deleting a single file may well not be sufficient to remove a potential threat. If you come to the conclusion that this system has indeed been hacked or targeted by a rootkit, then you cannot trust it any more, since the attacker obviously had full administrative rights over it. You then have to wipe the system and set it up again from scratch; there is no alternative once you have reached that conclusion.