What would happen if the Linux kernel deleted itself? [closed]

What would happen if the Linux kernel deleted itself? Would there come a moment when it could no longer delete files because rm, or whatever program is used for deletion, had been deleted too?
Regards.

The question is (apart from being off-topic) somewhat wrong in itself, as rm is not part of the kernel but either a shell built-in or a separate user-level program. Admittedly, rm uses a syscall provided by the kernel to do its work, but that does not make it part of the kernel.
The kernel itself is loaded from a compressed image and locked in RAM. Deleting the compressed image does not matter until you reboot, at which point the boot loader will fail with a message like "vmlinuz not found". You have no way of removing the kernel from RAM (well, other than rebooting...).
Also, for the most part, it does not even matter under Linux whether you delete a file, including a running program's executable (if we may be so bold as to call the kernel a "program" for a moment), because deleting a file merely removes a link to it, not the file itself. The idea that deleting a file does evil, destructive things is a Windows-typical assumption.
Under Unix-like systems, it is perfectly possible to delete (or replace) a program while it is running, and it will not cause any problems at all. You will remove the name in the filesystem, that's all. Any open descriptors will remain valid until the last one is closed, the original file will stay intact as-is for any observer who obtained a handle earlier, and it will be "gone" for everyone trying to get at it later.
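As a minimal sketch of that last point (the file name demo.txt is just a made-up example), a C program can unlink its own data file and keep using it through the descriptor it already holds:

```c
/* Minimal sketch: data stays reachable through an open descriptor
 * after the name is unlinked. The path "demo.txt" is just an example. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[32] = {0};
    int fd = open("demo.txt", O_RDWR | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    write(fd, "still here\n", 11);
    unlink("demo.txt");            /* removes the name, not the data */

    /* The name is gone; a second open("demo.txt", O_RDONLY) would now
     * fail with ENOENT, but the existing descriptor still works: */
    lseek(fd, 0, SEEK_SET);
    read(fd, buf, sizeof buf - 1);
    printf("read back: %s", buf);  /* prints "still here" */

    close(fd);                     /* last reference gone -> space reclaimed */
    return 0;
}
```

The same mechanism is what lets package managers replace running binaries and shared libraries in place: the running process keeps its old, now-nameless file until it exits.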

Related

Linux: let a process fail if it opens a file for writing [closed]

I would like a command-line-tool to fail if it opens a particular file for writing.
Is there a way I can modify the environment (maybe via cgroups) of the command-line tool, so that the command/process gets (for example) "permission denied"?
chmod a-w file does not work. The process seems to unlink() and then re-create the file.
I know that I can watch the syscalls of a process with strace. But is there a way to alter some calls, so that the process gets a different result?
Background: unit testing
strace has an option, -e inject (or simply --inject), which can be used to alter system calls of the tracee (see the strace manual page).
In particular, it can be combined with the -P option so that only syscalls accessing a specified path are affected.
Since the dynamic linker resolves symbols in the order the shared libraries are loaded, you can use LD_PRELOAD to load a custom library before the system libraries and hijack the functions they would otherwise provide; the preloaded library can then fail or alter selected calls. This technique is used by many network card accelerators, such as OpenOnload from Solarflare/Xilinx.
https://sumit-ghosh.com/articles/hijacking-library-functions-code-injection-ld-preload/
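To make the LD_PRELOAD idea concrete, here is a rough sketch of such a shim, not a drop-in solution: the library name libdenywrite.so and the hard-coded path /tmp/forbidden are made-up examples, and the sketch only intercepts open(), while a real tool might go through openat(), creat(), or fopen() and would need the same treatment (or the strace -P ... -e inject=... route instead).

```c
/* deny_write.c -- sketch of an LD_PRELOAD shim that makes open() for
 * writing fail with EACCES for one hard-coded example path.
 * Build:  gcc -shared -fPIC -o libdenywrite.so deny_write.c -ldl
 * Use:    LD_PRELOAD=./libdenywrite.so some-tool ...
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <errno.h>
#include <fcntl.h>
#include <stdarg.h>
#include <string.h>

int open(const char *path, int flags, ...)
{
    /* Refuse to open the forbidden path for writing or creation. */
    if (strcmp(path, "/tmp/forbidden") == 0 &&
        (flags & (O_WRONLY | O_RDWR | O_CREAT))) {
        errno = EACCES;
        return -1;
    }

    /* Otherwise forward to the real open() from libc. */
    static int (*real_open)(const char *, int, ...);
    if (!real_open)
        real_open = (int (*)(const char *, int, ...))dlsym(RTLD_NEXT, "open");

    if (flags & O_CREAT) {
        va_list ap;
        va_start(ap, flags);
        mode_t mode = va_arg(ap, mode_t);
        va_end(ap);
        return real_open(path, flags, mode);
    }
    return real_open(path, flags);
}
```

This only works for dynamically linked programs that call the libc wrapper; statically linked tools or direct syscalls are not affected, which is where the strace injection approach is the better fit.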

Can a file's name contain executable code in Linux? [closed]

This showed up in my download folder recently.
The file is empty but the filename was:
''$'\001\331\006''#f2#8'$'\f''#'$'\037\036\006\004''###'$'\240\002\240\002\b\003\004\340\002\340\002\340\002\034\034\001\001\004\250\210\002\250\210\002\020\001\005\220\002\220\002\220\002''e'$'\222'
This bothered me right away because it looks like a string of escaped byte values, many of them control codes:
\001: SOH (start of heading)
\331: no idea (not ASCII)
\006: ACK (acknowledge)
\004: EOT (end of transmission)
At any rate, how does such a file show up on your computer?
Linux file names can contain any character except the null character (\0) and the slash character / (the directory separator) [1]. So yes, a file name can contain executable code or any kind of data. That does not mean it can be executed, though. The only operations the operating system offers for a name are file operations: opening the file, listing the directory, and so on. To be executable, code must be inside the file, not in its name.
[1] https://en.wikipedia.org/wiki/Filename#Comparison_of_filename_limitations
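As a small illustration of that rule (the byte values below are arbitrary examples), a C program can create a file whose name is nothing but control characters and other raw bytes; the kernel stores the name as an opaque byte string and never interprets it:

```c
/* Sketch: a file name is just a byte string to the kernel; only '\0'
 * and '/' are off limits. The name below is an arbitrary example. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const char *name = "\001\331\006#f2#8\014#";   /* control bytes are legal */
    int fd = open(name, O_WRONLY | O_CREAT | O_EXCL, 0644);
    if (fd < 0) { perror("open"); return 1; }
    close(fd);

    /* The name is stored and listed as-is; nothing in it is executed. */
    printf("created a file whose name starts with byte 0x%02x\n",
           (unsigned char)name[0]);
    return 0;
}
```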
This showed up in download folder recently.
You (not your computer) are responsible for your downloads!
At any rate, how does such a file show up on your computer?
It smells like your computer (or the computer from which you carelessly downloaded) has been compromised by some vulnerability or cyberattack, or that a very buggy program (one with a buffer overflow, say) has been run carelessly.
Another possibility is a severe hardware problem, e.g. a dying hard disk or SSD, faulty RAM, cosmic rays, or an untimely power outage that corrupted the file system.
Consider using strace(1), gdb(1), fsck(8), and dmesg(1) to investigate further.
Carefully back up your important data first.

What is not a file in Linux? [closed]

When I first learned Linux, I was told that almost everything in Linux is a file. This morning, when I repeated this to my girlfriend, she asked: what isn't? I tried to find an example for half a day.
So my question is: what is not a file in Linux?
Almost. Almost everything in POSIX is handled through a file descriptor. This means that the same functions used for file operations also apply to pipes, sockets, and hardware devices. It also means that if you use select (or one of its better alternatives), you can have one point in your program where you wait for all possible inputs.
With that said, some things in POSIX, and in Linux in particular, are definitely not files.
The most obvious ones are signals. They are handled asynchronously to the program's execution and therefore cannot take on a file interface. For that purpose, pselect and its better alternatives were invented.
More subtly, thread synchronization constructs (mutexes, semaphores, etc.) are not files either. Some attempts have been made to make such things available as file descriptors as well (see signalfd and eventfd), but they hardly caught on. I believe this is due, in large part, to their having vastly different performance profiles than the usual way of handling them.
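As a concrete illustration of the signalfd approach mentioned above, here is a minimal sketch (error handling abbreviated) that turns SIGINT into something readable from a file descriptor, which could then be multiplexed with other descriptors in select/poll/epoll:

```c
/* Sketch: receiving SIGINT through a file descriptor via signalfd(2),
 * so a signal can be multiplexed with other fds in select/poll/epoll. */
#include <signal.h>
#include <stdio.h>
#include <sys/signalfd.h>
#include <unistd.h>

int main(void)
{
    sigset_t mask;
    sigemptyset(&mask);
    sigaddset(&mask, SIGINT);

    /* Block normal delivery so the signal is only reported via the fd. */
    sigprocmask(SIG_BLOCK, &mask, NULL);

    int sfd = signalfd(-1, &mask, 0);
    if (sfd < 0) { perror("signalfd"); return 1; }

    printf("press Ctrl-C...\n");

    struct signalfd_siginfo si;
    read(sfd, &si, sizeof si);        /* blocks until SIGINT arrives */
    printf("got signal %u via a file descriptor\n", si.ssi_signo);

    close(sfd);
    return 0;
}
```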
For example, computer hardware (CPU, RAM, etc.) is not actually a file, but it is represented as a file in Linux.
More details here

On Linux, could any system event prevent the copy command from working? [closed]

Trying to determine if there is a scenario in which the copy command may fail.
I don't mean something like $PATH not being set or the file missing, but more along the lines of: the file is being edited, it is a binary being accessed by a system process, or it is a database file that is being accessed.
Some basic testing seems to indicate the cp command works fine even if the file is being edited, but I am not sure whether there are any OS commands or scenarios in which a cp would fail. For example, what if it is a database file being updated/saved at the exact time the cp takes place? Something like this would be hard to test yet may occur.
Would there be a list of scenarios in which the system prevents a cp command from executing?
There are plenty of ways cp might not do what you want.
A particular example that comes to mind: if another process can read the destination of cp at any given time, there is no way to guarantee that the reader won't start reading before cp is done copying and end up with a partially written file. On small-ish files this race condition may always work out in your favor, but it is still there.
The only way to have a file that is always updated "atomically" from the perspective of readers, such that they always get either the old version or the new version and never a partial new version, is via the rename system call, which is what mv should use for files on the same volume/partition.
Implementing cp takes, at the very least, 5 system calls: 2x open, 1x sendfile, 2x close.
So just be aware that even if cp succeeds, there can still be race conditions and unpredictable behavior.
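To make the rename point concrete, here is a rough sketch of an "atomic replace" copy in C: the data is written to a temporary name in the same directory and then rename()d over the destination, so a reader only ever sees the complete old file or the complete new file. The file names are placeholders and error handling is abbreviated.

```c
/* Sketch: copy src to dst so that readers of dst never observe a
 * partially written file. Names are placeholders; error handling is
 * abbreviated for brevity. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *src = "source.dat";      /* example names */
    const char *dst = "dest.dat";
    const char *tmp = "dest.dat.tmp";    /* must be on the same filesystem as dst */

    int in = open(src, O_RDONLY);
    struct stat st;
    if (in < 0 || fstat(in, &st) < 0) { perror("open/fstat"); return 1; }

    int out = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, st.st_mode & 0777);
    if (out < 0) { perror("open tmp"); return 1; }

    /* Copy the data (sendfile between regular files works on Linux >= 2.6.33). */
    off_t off = 0;
    while (off < st.st_size)
        if (sendfile(out, in, &off, st.st_size - off) < 0) { perror("sendfile"); return 1; }

    fsync(out);                          /* make sure the data reached the disk */
    close(out);
    close(in);

    /* The atomic step: readers see either the old dst or the new one. */
    if (rename(tmp, dst) < 0) { perror("rename"); return 1; }
    return 0;
}
```

Plain cp, by contrast, writes directly into the destination file, which is exactly why the race described above exists.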

How to store data permanently in the /tmp directory in Linux [closed]

Is there any way to store data permanently in the Linux /tmp directory? As far as I know, Linux clears its /tmp directory when the system is rebooted, but I want to store data permanently.
As @bereal said, this defeats the purpose of the /tmp directory. Let me quote the Linux Filesystem Hierarchy Standard:
The /tmp directory must be made available for programs that require temporary files.
Programs must not assume that any files or directories in /tmp are preserved between invocations of the program.
You'll find a better place to store permanent data.
Since it's Linux, you are free to do what you want (as root). When /tmp is cleared depends on your system and can be changed; there is no particular magic involved. A good summary seems to be here: https://serverfault.com/questions/377348/when-does-tmp-get-cleared.
Of course, if you are root you can set up an entirely different global directory, say "/not-quite-tmp" or such. But I assume that some programs not under your control write to /tmp and you want to inspect, or in any case persist, those files.
While you are trying to do the wrong thing, it is still possible.
The /tmp directory is cleared according to the TMPTIME setting. The default is apparently 0, which means "clear on every startup".
The value can be changed in /etc/default/rcS (the value is set in days).
