I need to remove a Linux file - maybe this file is a virus or part of a rootkit.
I understand Linux in general.
I already tried rm -Rf and some other general Linux commands, but I get 'Operation not permitted'.
I cannot delete the file either when I actually log in to the OS or when I boot a live Ubuntu and mount the partition containing /etc/.
Ubuntu auto-mounts the partition and I CAN edit any other file - except this one.
Googling the file's permissions turned up nothing.
Please help.
That most likely has nothing to do with the file itself, but with the permissions you mounted the file system with. Live systems typically mount external file systems read-only; you have to re-mount them manually with write access. The path suggests that this file is part of a partition used as the root partition (/) of another system, which is most likely the system you want to clean.
Consult the man pages for details about mounting.
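As a minimal sketch, assuming the live system auto-mounted the partition at /mnt (substitute your actual mount point):

sudo mount -o remount,rw /mnt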
BTW: such file permissions may well exist in a "normal" system setup; that depends on the security level chosen. I do not know the file mentioned here. I assume you know what it is for? At least you should be able to ask your software management system what package it belongs to. If it does not belong to any registered package, then indeed you should be concerned about it.
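For example, on a Debian/Ubuntu system you would ask dpkg which package owns the file; the path here is a placeholder for the actual file:

dpkg -S /etc/sfewfesfs

On an RPM-based system the equivalent query would be rpm -qf /etc/sfewfesfs. If the query reports no owning package, the file was not installed by the package manager.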
If that file really shall be deleted and you do have the file system mounted with correct (write) permissions, then there is always a "last resort" for such cases:
sudo chattr -i /<path>/sfewfesfs*
sudo rm -rf /<path>/sfewfesfs*
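For context: the -i option of chattr clears the immutable attribute, which prevents even root from deleting or modifying a file and is a common cause of 'Operation not permitted' errors on rm. You can check whether it is actually set first (same placeholder path as above):

lsattr /<path>/sfewfesfs*

An 'i' among the listed attribute flags confirms the file is immutable.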
That should do the trick... However, a general warning:
If you really have a file in that file system that does not belong there according to your software management, then deleting a single file may well not be sufficient to remove a potential threat. If you come to the conclusion that this system has indeed been hacked or targeted by a rootkit, then you cannot trust it any more, since the attacker obviously had full administrative rights over the system. You then have to wipe it and set the system up again completely from scratch. There is no alternative once you have come to that conclusion.
I have an NTFS partition that contains my data, shared between two operating systems (I am dual-booting Linux and Windows). I have a repository that I have been working on from Linux for some time and all was good, until I tried opening the repository on Windows. I noticed that I had unstaged changes even though I hadn't changed anything. If I commit them and open the repository from Linux I have another set of unstaged changes, and the cycle goes on. When committing, the changes that appear are a list of mode change [some #] => [some other #] [file name] for all tracked files.
I have seen some people say that it is not a good idea to share a repository between different operating systems, but without saying why. Can someone explain why this happens, and whether it can be solved (without using a different repository, if possible)?
PS: Yes, I am using GitHub to host the repo, if that makes any difference.
Git stores information about the files in the working tree in the index. Part of the data stored in the index is information about the device and inode of each file. This information differs based on the operating system, since different operating systems number their devices differently. Consequently, sharing a working tree across operating systems will, at the very least, result in all files needing to be re-read when running git status or certain other commands after having switched operating systems.
In addition, Linux keeps executable permissions in the file system and Windows does not. Because NTFS is a Windows file system, it does not maintain executable permissions, so Linux can only assume that every file is executable. Your commits therefore mark many files as executable that could never usefully be executed. That's why the permissions seem to change.
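If you only want Git to stop recording these executable-bit changes, one common workaround (a standard per-repository Git setting, not specific to this setup) is to tell Git to ignore file modes entirely:

git config core.fileMode false

Note that this is applied per clone, so you would set it in the repository under both operating systems, and it hides genuine permission changes as well.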
In general, NTFS is not a good file system for Linux. You are better off using a UDF file system, which works on both Linux and Windows but can also keep and use POSIX permissions.
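As a sketch, creating such a file system with mkudffs from the udftools package could look like this, where /dev/sdX1 is a placeholder for your data partition (this destroys existing data on it, so double-check the device name):

sudo mkudffs --media-type=hd --label=shared /dev/sdX1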
As previously mentioned, you are going to have problems sharing a working tree across operating systems. UDF may make it workable and avoid the current permission churn, but it is still not a recommended setup and you should avoid it if you can.
I'm in the process of building a small Linux distro based on Debian for automated network testing. I am running into a pretty annoying problem, though. A number of applications like paris-traceroute, ping, dublin-traceroute and so forth are not working correctly. They return an error about being unable to open a raw ICMP socket. I have tried using 'setcap cap_net_raw+ep ./application' and it's not working, even though getcap indicates that the bits have been set.
I'm also running into the same problem if I make them setuid root. They only work under sudo. So I'm wondering whether I screwed up permissions on some intervening library or whether there is some other issue.
Anyone run into something like this or have a solution?
Thanks!
In case anyone comes across this, I'll explain why it is failing.
What I didn't mention is that the applications (like ping, etc.) are actually installed in /opt. In this distro, /opt is actually an encfs file system that is only mounted after the live CD has been authorized against a licensing-type server (there are valid reasons for this - the distro automatically tests network connections and sends the results to a network engineer, and we only want it to run within a specific time frame associated with the user's trouble ticket). So /opt isn't a real filesystem - it's an encrypted file mounted via FUSE to look like a file system. As such, setcap and setuid don't actually work there and likely cannot work.
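The underlying reason: FUSE file systems are mounted nosuid (and nodev) by default, and with nosuid in effect the kernel ignores both setuid bits and file capabilities on that mount. You can confirm this by inspecting the mount options, for example:

findmnt -T /opt -o TARGET,FSTYPE,OPTIONS

If nosuid appears in the options column, no binary under /opt can gain privileges that way, no matter what setcap or chmod u+s report.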
On my Linux box, I am able to access one mount path which is not present in /etc/fstab or /etc/mtab.
I want to disable that mount point. Please help me with the command to show the hidden mount.
Below is the hidden mount on some xxx machine.
/net/bnrdev/bld-views/build
The above path is present on the bnrdev machine as:
/bld-views/build
These are not "hidden" per se - they are NFS mounts created on demand by your system's automounter (the /net/<host>/<path> layout is the automounter's standard -hosts map).
You can get rid of this functionality by disabling NFS client services, or just the automount daemon.
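Assuming a systemd-based distro where the automounter runs as the autofs service (the unit name can differ), listing the relevant mounts and disabling the daemon would look like:

findmnt -t nfs,nfs4,autofs
sudo systemctl disable --now autofs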
WARNING - this will likely break automounted home directories, which could cause issues for other system users.
Please! For the LUV of all things cute & cuddly, make a copy of the files you modify. Justin Case could have an issue with your changes, right as you're falling asleep.
First off, my intention is to create a portable, bootable USB drive containing a GNU/Linux distribution. Specifically, I want to use Arch Linux with a squashfs read-only root filesystem.
The squashfs image is based on a snapshot of a working VM. The base system with its services like ssh works out of the box as expected. But when trying to launch GNOME via systemd (systemctl start gdm), all I see is a black screen (supposedly the X server started but gdm fails to load). I have already tried to figure out what's happening, but failed to identify the exact problem.
Home directories are writeable
/tmp is writeable
/var/log is writeable
/var/run & /run are writeable anyway
/var/log/gdm gets created but stays empty.
Which modules may require write access to any other files? Is there any documentation? What would make sense to strace or similar?
My desire is to know the root of the problem and fix it, instead of using workarounds like unionfs. Thanks for any help or hints!
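One concrete way to apply the strace idea, assuming strace is available on the image: attach to the running gdm process, trace file-related syscalls, and search the output for failures like EACCES or EROFS:

sudo strace -f -e trace=file -p $(pidof gdm) -o /tmp/gdm.trace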
Although it's not relevant, for those who might wonder why I want to do this, here are some points to consider:
Stability - as you cannot modify system files, you cannot mess up the system (unless you write bogus data directly to the drive, of course)
Storage - as files are compressed, more data fits on the drive
Performance - as I/O on most USB drives is slow, compression gives you higher I/O speed
Portability - no special treatment for read-only storage, you might copy it on a CD or any other read-only technology and it will still work the same way as it would on a writeable disk
Update
I figured out that the problem was actually at /var/lib/gdm. GDM tried to access files in there and (silently) failed doing so, giving me a black screen.
journalctl was the debugging command I was missing in the first place.
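For reference, inspecting GDM's log for the current boot looks like this:

journalctl -b -u gdm

And a minimal sketch of the fix while keeping the root read-only is to put a writable tmpfs over the offending directory before starting GDM (the mode and ownership here are assumptions; check what GDM expects on your system):

sudo mount -t tmpfs -o mode=0700,uid=$(id -u gdm),gid=$(id -g gdm) tmpfs /var/lib/gdm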
I am working on an embedded Linux platform. On our platform there is only the root user. Now we want to bring in security options like:
1. A low-privileged user.
2. Allowing only executables from a particular location to run (with read permission only).
3. Using Linux containers.
We have managed to add a low-privileged user using the /etc/passwd file, but I have no idea how to do the rest. Are there better options to implement security in the Linux system? Any documentation or links are much appreciated.
Option two is achieved by the noexec flag on mounting. The slight challenge is figuring out exactly what to mount where; you'd want to mount / as noexec to get safety by default, but you need /sbin/mount to be executable. But you can probably make / read-only and mount all the writeable filesystems as noexec.
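A minimal /etc/fstab sketch of that layout (device names and the exact set of writable mounts are assumptions; adapt them to your partition scheme):

/dev/mmcblk0p2  /     ext4   ro,defaults              0 1
/dev/mmcblk0p3  /var  ext4   rw,noexec,nosuid,nodev   0 2
tmpfs           /tmp  tmpfs  rw,noexec,nosuid,nodev   0 0

With this arrangement binaries can only be executed from the read-only root, and every writable location refuses execution.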