Disable write to ext4 after memory modification - linux

I'm trying to modify user-space application code at run time from a Linux kernel driver.
Given the following code snippet:
writeCR3(process_cr3);
writeCR0(cr0 & ~X86_CR0_WP); // to allow writing to RO pages
*(char*)someUserAddress = 0x90; // just an example, nop
writeCR0(cr0 | X86_CR0_WP); // restore write protection
It successfully modifies the user application code at run time, but for some reason the file itself changes as well: if I run objdump or readelf on the executable after the write, I can see that it has been modified. It seems the change is written back to the ext4 file system, too.
I do not want that; I want the code to be modified only in memory.
How do I achieve that? And why does the write end up modifying the file itself as well?

Related

LD_PRELOAD with file functions

I have a rather peculiar file format to work with:
Every line begins with the checksum of its content and is terminated by a newline character.
It looks like this:
[CHECKSUM OF LINE_1][LINE_1]\n
[CHECKSUM OF LINE_2][LINE_2]\n
[CHECKSUM OF LINE_3][LINE_3]\n
...
My goal: to allow any application to work with these files as it would with any other text file, unaware of the additional checksum at the beginning of each line.
Since I work on a Linux machine with Debian Wheezy (kernel 3.18.26), I want to use the LD_PRELOAD mechanism to override the relevant file functions.
I have seen something like this done with zlibc (https://zlibc.linux.lu/index.html), with an explanation of how it works (https://zlibc.linux.lu/zlibc.html#SEC8).
But I don't get it. They only replace the file-opening functions: no read, no write, no fseek, nothing. So how does it work?
Or: which functions would I have to intercept to handle every read or write operation on such a file accordingly?
I haven't checked exactly how it works, but the reason seems to be quite simple.
Possible implementation:
zlibc open:
uncompress the file you wanted to open into some temporary file
open this temporary file instead of yours
zlibc close:
compress the temporary file
overwrite the original file
In this case you don't need to override read/write/etc. because you can use the original ones.
In your case you have two possible solutions:
1. open that makes a copy of your file with stripped checksums, and close that recalculates the checksums and overwrites the original file
2. read and write that skip/calculate the checksums on the fly
Regarding option 2, from "What is the difference between read() and fread()?":
fread() is part of the C library, and provides buffered reads. It is usually implemented by calling read() in order to fill its buffer.
In this case I believe that overriding open and close (option 1) will be less error prone, because you can safely reuse the original read, write, fread, fseek, etc.; a sketch of that approach follows below.
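As a sketch of that approach, here is a minimal LD_PRELOAD interposer in C that handles the open side of option 1. It is only an illustration built on assumptions that are not in the question: the checksum is taken to be a fixed 8-character prefix, intercepted files are recognised by a made-up ".cks" suffix, and only read-only opens are handled; a real version would also have to interpose fopen and close (to recalculate checksums on write-back), as described above. Compile with gcc -shared -fPIC -o preload.so preload.c -ldl and run the target program with LD_PRELOAD=./preload.so.

#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define CHECKSUM_LEN 8   /* assumed fixed-width checksum prefix */

static int (*real_open)(const char *, int, ...);

static void resolve_real_open(void)
{
    if (!real_open)
        real_open = (int (*)(const char *, int, ...)) dlsym(RTLD_NEXT, "open");
}

/* Copy 'path' into a temporary file, dropping the first CHECKSUM_LEN
 * characters of every line; returns an fd rewound to offset 0. */
static int strip_checksums(const char *path, char *tmp_template)
{
    resolve_real_open();
    int in_fd = real_open(path, O_RDONLY);
    if (in_fd < 0)
        return -1;
    int tmp_fd = mkstemp(tmp_template);
    if (tmp_fd < 0) {
        close(in_fd);
        return -1;
    }
    FILE *in = fdopen(in_fd, "r");
    FILE *out = fdopen(dup(tmp_fd), "w");   /* dup so fclose() keeps tmp_fd open */
    char *line = NULL;
    size_t cap = 0;
    ssize_t len;
    while ((len = getline(&line, &cap, in)) != -1)
        fputs(len > CHECKSUM_LEN ? line + CHECKSUM_LEN : line, out);
    free(line);
    fclose(out);
    fclose(in);
    lseek(tmp_fd, 0, SEEK_SET);
    return tmp_fd;
}

int open(const char *path, int flags, ...)
{
    resolve_real_open();

    mode_t mode = 0;
    if (flags & O_CREAT) {
        va_list ap;
        va_start(ap, flags);
        mode = (mode_t) va_arg(ap, int);
        va_end(ap);
    }

    /* Only intercept read-only opens of files with the (assumed) ".cks" suffix. */
    size_t n = strlen(path);
    if (n > 4 && strcmp(path + n - 4, ".cks") == 0 &&
        (flags & O_ACCMODE) == O_RDONLY) {
        char tmpl[] = "/tmp/nochecksum_XXXXXX";
        int fd = strip_checksums(path, tmpl);
        if (fd >= 0) {
            unlink(tmpl);   /* the stripped copy vanishes once the fd is closed */
            return fd;
        }
    }
    return real_open(path, flags, mode);
}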

Open file by inode

Is it possible to open a file knowing its inode?
ls -i /tmp/test/test.txt
529965 /tmp/test/test.txt
I can provide the path and the inode (529965 above), and I am looking to get a file descriptor in return.
This is not possible because it would open a loophole in the access control rules. Whether you can open a file depends not only on its own access permission bits, but on the permission bits of every containing directory. (For instance, in your example, if test.txt were mode 644 but the containing directory test were mode 700, then only root and the owner of test could open test.txt.) Inode numbers only identify the file, not the containing directories (it's possible for a file to be in more than one directory; read up on "hard links") so the kernel cannot perform a complete set of access control checks with only an inode number.
(Some Unix implementations have offered nonstandard root-only APIs to open a file by inode number, bypassing some of the access-control rules, but if current Linux has such an API, I don't know about it.)
Not exactly what you are asking, but (as hinted by zwol) both Linux and NetBSD/FreeBSD provide the ability to open files using previously created “handles”: These are inode-like persistent names that identify a file on a file system.
On *BSD (getfh and fhopen) using this is as simple as:
#include <sys/param.h>
#include <sys/mount.h>
#include <fcntl.h> // for the O_* flags

fhandle_t file_handle;
getfh("<file_path>", &file_handle); // Or `getfhat` for the *at-style API

// … possibly save handle as bytes somewhere and recreate it some time later …

int fd = fhopen(&file_handle, O_RDWR);
The last call requires the caller to be root, however.
The Linux name_to_handle_at and open_by_handle_at system calls are similar, but a lot more explicit, and they require the caller to keep track of the relevant file system mount IDs/UUIDs itself, so I'll humbly link to the detailed example in the man page instead. Beware that the example is not complete if you are looking to persist the handles across reboots; one has to convert the received mount ID to a persistent filesystem identifier, such as a filesystem UUID, and convert that back to a mount ID later on. In essence they do the same thing, however. And, just like on *BSD, using the latter system call requires elevated privileges (CAP_DAC_READ_SEARCH, to be exact).
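For the Linux side, here is a minimal sketch of the two steps, under assumptions that are not in the answer: the file is the one from the question, the mount point is simply taken to be "/", and the handle is kept in memory instead of being persisted and the mount ID mapped back to a mount point later.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *path = "/tmp/test/test.txt";   /* example path from the question */
    const char *mount_point = "/";             /* assumption: file lives on the root fs */

    /* Step 1: turn a path into a persistent handle (no privileges needed). */
    struct file_handle *fh = malloc(sizeof(*fh) + MAX_HANDLE_SZ);
    fh->handle_bytes = MAX_HANDLE_SZ;
    int mount_id;
    if (name_to_handle_at(AT_FDCWD, path, fh, &mount_id, 0) == -1) {
        perror("name_to_handle_at");
        return 1;
    }

    /* … the handle bytes (fh->handle_bytes, fh->handle_type, fh->f_handle)
       could be stored somewhere and reloaded much later … */

    /* Step 2: reopen via the handle; the first argument is any fd on the
       same mount, here the mount point itself. */
    int mount_fd = open(mount_point, O_RDONLY | O_DIRECTORY);
    int fd = open_by_handle_at(mount_fd, fh, O_RDWR);
    if (fd == -1) {
        perror("open_by_handle_at");
        return 1;
    }
    printf("reopened %s as fd %d\n", path, fd);
    return 0;
}

Run it as root (or with CAP_DAC_READ_SEARCH) so that the open_by_handle_at step succeeds.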

ARM Linux file empty after reboot

I'm trying to open a file for rewriting. I then close the file and reopen it for reading to validate that it was written OK; it is indeed as it should be. But after I unplug the unit (ARM) and plug it back in, I find that the file is empty. I also tried copying the file manually (with cp), and the same thing happens.
Here is some code:
string fileName = "/home/root/LogiTrackV2/InitialSetup.xml";
ofstream theFile(fileName.c_str());
if (theFile.is_open())
{
    theFile.close();
}
theFile.open(fileName.c_str(), ios::out | ios::trunc);
theFile << xmlOUT.c_str();
theFile.close();
As I mentioned, after this the file exists and is updated as it should be. The problem only appears when I unplug the unit...
The problem is more complex than I thought in C++: there is no way in the standard library to force a POSIX fsync call on an ofstream. You can, however, use Boost.Iostreams with a file_descriptor_sink (http://www.boost.org/doc/libs/1_55_0/libs/iostreams/doc/classes/file_descriptor.html) and call fsync on the underlying fd to force Linux to write the file to disk.
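For reference, the underlying POSIX sequence that the Boost route gives you access to looks like this in plain C; the path is the one from the question, the XML content is a placeholder, and error handling is minimal:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/home/root/LogiTrackV2/InitialSetup.xml";
    const char *xml = "<InitialSetup/>\n";   /* placeholder content */

    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    if (write(fd, xml, strlen(xml)) < 0) { perror("write"); return 1; }

    /* Without this, the data may sit only in the page cache and be lost on a power cut. */
    if (fsync(fd) < 0) { perror("fsync"); return 1; }

    close(fd);
    /* To also make a newly created directory entry durable, fsync the
       containing directory ("/home/root/LogiTrackV2") as well. */
    return 0;
}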

About the /proc file system

I am using the following command on the /proc file system:
echo 0 > /proc/sys/net/ipv4/ip_forward
Note: I don't want to know the basics of the command written above; I want to know what happens inside the kernel, because I want to implement a /proc file of my own.
If I want to trace the code right from the moment the 0 is echoed into the file system, how do I go about it?
I want to see where in the kernel code this 0 is accepted and in which variable it gets stored in order to make the change. Can somebody explain in detail what happens when this command is run? I don't want a description of the command itself.
Any related article on how it changes the kernel parameters is also fine.
I have read this, but it is not explained there: http://www.linuxjournal.com/article/8381
Thanks
Search through the Linux tree (especially the network stack) for the create_proc_entry function. Figure out which file creates ip_forward (it must be in the IPv4 code) from the name passed to create_proc_entry.
When you find that file, look at where the proc_dir_entry structure is created and which functions are assigned to its read_proc and write_proc members.
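For ip_forward in particular, the files under /proc/sys are registered through sysctl tables (look for struct ctl_table entries, e.g. in net/ipv4/sysctl_net_ipv4.c); the echoed 0 is parsed by the entry's proc handler and stored in the IPv4 forwarding flag. For a /proc file of your own, on an older kernel that still has create_proc_entry (it was removed around Linux 3.10 in favour of proc_create), a minimal sketch looks like this; the entry name my_flag and the stored variable are made up for illustration:

#include <linux/module.h>
#include <linux/proc_fs.h>
#include <linux/uaccess.h>

static int my_flag;                       /* the value the echoed "0"/"1" ends up in */
static struct proc_dir_entry *entry;

static int my_read(char *page, char **start, off_t off,
                   int count, int *eof, void *data)
{
    *eof = 1;
    return sprintf(page, "%d\n", my_flag);
}

static int my_write(struct file *file, const char __user *buffer,
                    unsigned long count, void *data)
{
    char kbuf[16];

    if (count >= sizeof(kbuf))
        return -EINVAL;
    if (copy_from_user(kbuf, buffer, count))
        return -EFAULT;
    kbuf[count] = '\0';

    /* This is the point where the echoed "0" is accepted and stored. */
    my_flag = simple_strtol(kbuf, NULL, 10);
    return count;
}

static int __init my_init(void)
{
    entry = create_proc_entry("my_flag", 0644, NULL);
    if (!entry)
        return -ENOMEM;
    entry->read_proc  = my_read;
    entry->write_proc = my_write;
    return 0;
}

static void __exit my_exit(void)
{
    remove_proc_entry("my_flag", NULL);
}

module_init(my_init);
module_exit(my_exit);
MODULE_LICENSE("GPL");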

Capturing code generated by Qemu in a file

In QEMU, guest instructions are translated into machine code for the host architecture. I would like to write this generated code to a file. I think the generated code is obtained in cpu-exec.c (it is returned for execution). How can I copy it to a file?
In qemu-0.14.0, find cpu_gen_code() (translate-all.c, around line 57). It leads to this call around line 104:
log_disas(tb->tc_ptr, *gen_code_size_ptr);
Try to hack around that point.
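A minimal sketch of such a hack, placed inside cpu_gen_code() right after the host code has been generated: it appends the raw translated code of each block to a dump file. The dump path is an assumption; tb->tc_ptr and *gen_code_size_ptr are the same buffer and length already passed to log_disas() above.

FILE *dump = fopen("/tmp/qemu-tb-dump.bin", "ab");        /* dump path is an assumption */
if (dump) {
    fwrite(tb->tc_ptr, 1, *gen_code_size_ptr, dump);      /* raw host code of this TB */
    fclose(dump);
}

Opening and closing the file for every translation block is slow but keeps the sketch self-contained; a real patch would keep the FILE pointer open across calls.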

Resources