macOS kernel-userspace communication using a file - linux

I want to create a file from the kernel, and this file must be accessible from user space. Other means of communication (for example ioctl) are not suitable, because the user-space application works only with files, and I don't have its source code.
I need to do this on a Mac. If I were using Linux, I would use sysfs for it, but macOS doesn't have sysfs, so I settled on devfs.
I created a sample solution and everything works great, but the problem is that the device file (devfs node) does not have a size. The user-space code checks the file size and skips this file. I know how big the size will be, but I don't know how to set it on a devfs file.
I don't want to create the file on a real filesystem, because it can be quite big. All I want is to redirect reads and writes to my internal functions.
FUSE (http://en.wikipedia.org/wiki/Filesystem_in_Userspace) would be ideal for me, but it involves a user-space daemon.
Any suggestions?
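For reference, the devfs approach described above usually looks something like the following in a kext: register a character-device switch whose read/write entries call your internal functions, then publish a node with devfs_make_node. This is only a minimal sketch of the standard BSD-layer API (the names my_read, my_write and "mydata" are placeholders), and it does not by itself solve the size problem, since a character device node reports a size of 0 to stat.

    #include <sys/param.h>          /* UID_ROOT, GID_WHEEL */
    #include <sys/conf.h>           /* struct cdevsw, cdevsw_add, eno_* stubs */
    #include <sys/proc.h>
    #include <sys/uio.h>            /* uiomove, uio_resid */
    #include <miscfs/devfs/devfs.h> /* devfs_make_node */

    static int my_major = -1;
    static void *my_node = NULL;

    static int my_open(dev_t dev, int flags, int devtype, struct proc *p)  { return 0; }
    static int my_close(dev_t dev, int flags, int devtype, struct proc *p) { return 0; }

    /* Redirect reads to internal data: uiomove copies into the caller's buffer. */
    static int my_read(dev_t dev, struct uio *uio, int ioflag)
    {
        static char data[] = "generated in the kernel\n";
        return uiomove(data, (int)sizeof(data) - 1, uio);
    }

    /* Redirect writes to an internal sink: here they are simply consumed. */
    static int my_write(dev_t dev, struct uio *uio, int ioflag)
    {
        char scratch[128];
        while (uio_resid(uio) > 0) {
            int err = uiomove(scratch, (int)sizeof(scratch), uio);
            if (err)
                return err;
        }
        return 0;
    }

    static struct cdevsw my_cdevsw = {
        .d_open  = my_open,    .d_close    = my_close,
        .d_read  = my_read,    .d_write    = my_write,
        .d_ioctl = eno_ioctl,  .d_stop     = eno_stop,
        .d_reset = eno_reset,  .d_select   = eno_select,
        .d_mmap  = eno_mmap,   .d_strategy = eno_strat,
        .d_getc  = eno_getc,   .d_putc     = eno_putc,
    };

    /* Call this from the kext's start routine; returns 0 on success. */
    static int publish_node(void)
    {
        my_major = cdevsw_add(-1, &my_cdevsw);     /* -1: pick any free major */
        if (my_major < 0)
            return -1;
        my_node = devfs_make_node(makedev(my_major, 0), DEVFS_CHAR,
                                  UID_ROOT, GID_WHEEL, 0666, "mydata");
        return my_node != NULL ? 0 : -1;
    }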

Related

Linux device without a file system

Today I realized that on my Ubuntu Linux machine I can mount and store files on my newly purchased hard drive as a raw device without a file system (as long as I partitioned the disk correctly).
So, I am not sure if my below statement is correct, looking for expert to answer:
Looks like it's not required to create a file system on a disk in order to use it in Linux? Is it correct?
I have some very basic understanding of how a file system works. In Linux, is the concept of "inode" a file system feature or a Linux feature?
I understand that inode-based file systems, unlike NTFS or FAT32, try to spread data across the disk so that Linux/Unix doesn't need a Windows-style "defragmentation" program to keep data in consecutive chunks. My question is: if I am storing my data on a raw device without a file system, and if the "inode" is a file system feature and not a Linux feature, what will the actual data layout on the raw device look like?
Thanks in advance
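To make the "raw device" part of the question concrete: without a file system, a block device is just a flat array of bytes, and the layout is whatever the application imposes by reading and writing at offsets it tracks itself. A minimal sketch (the device path /dev/sdb1 and the offset are made up for illustration, and it needs root):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* Open the partition directly; no file system is involved. */
        int fd = open("/dev/sdb1", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        /* "Store" data by writing it at a byte offset the application chooses.
         * Nothing records where it is; the application must remember offset 4096. */
        const char msg[] = "raw bytes, no inode, no directory entry";
        if (pwrite(fd, msg, sizeof(msg), 4096) < 0) perror("pwrite");

        /* Read it back from the same offset. */
        char buf[sizeof(msg)] = {0};
        if (pread(fd, buf, sizeof(buf), 4096) < 0) perror("pread");
        printf("%s\n", buf);

        close(fd);
        return 0;
    }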

Linux kernel: logging to a specific file

I am trying to modify the Linux kernel. I want some information to be written out to a file as part of the debugging process. I have read about the printk function, but I would like to write text to a particular file (a file other than the default files that keep the debug logs).
To cut it short: I would like to be able to specify the "destination" in the printk function (or at least find some workaround for it).
How can I achieve this? Would using fopen/fwrite work (and if yes, would it work without much more overhead than printk, since they are implemented differently)?
What other options do I have?
Using fopen and fwrite will certainly not work. Working with files in kernel space is generally a bad idea.
It all really depends on what you are doing in the kernel though. In some configurations, there may not even be a hard disk for you to write to. If, however, you are working at a stage where you can make certain assumptions about the running kernel, you probably actually want to write a kernel module rather than edit the kernel itself. For all practical purposes, a kernel module is just as good as any other part of the kernel, but it is inserted when the kernel is already up and running.
You may also be doing this for debugging, or to get output from a kernel-level application (e.g. an application that you are forced to run at kernel level because of real-time constraints, etc.). In that case, kio may be of interest to you, but if you want to use it, make sure you understand why.
kio is a library I wrote just for those "kernel-level applications"; it makes a kernel module a user of a /proc file (rather than its provider). To make it work, you should have a user-space application also open that virtual file and redirect it to wherever you want to write your log: something along the lines of opening the file with kopen in write mode in the kernel, and in user space running cat /proc/your_file > ~/log_file.
Note: I still recommend printk unless you really know what you are doing. Since you are thinking of fopen in kernel space, I don't think you really know what you are doing.
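As a point of comparison (this is not kio's API, just a hedged sketch of the more conventional route a module can take on a 3.x-era kernel): expose the debug data through a /proc entry with proc_create and a seq_file read handler, and let user space do the real file IO by redirecting it. The names mylog, mylog_show, etc. are placeholders.

    #include <linux/module.h>
    #include <linux/proc_fs.h>
    #include <linux/seq_file.h>

    /* Emit whatever the module wants to log; a real module would format a buffer. */
    static int mylog_show(struct seq_file *m, void *v)
    {
        seq_printf(m, "debug value: %d\n", 42);
        return 0;
    }

    static int mylog_open(struct inode *inode, struct file *file)
    {
        return single_open(file, mylog_show, NULL);
    }

    static const struct file_operations mylog_fops = {
        .owner   = THIS_MODULE,
        .open    = mylog_open,
        .read    = seq_read,
        .llseek  = seq_lseek,
        .release = single_release,
    };

    static int __init mylog_init(void)
    {
        proc_create("mylog", 0444, NULL, &mylog_fops);
        return 0;
    }

    static void __exit mylog_exit(void)
    {
        remove_proc_entry("mylog", NULL);
    }

    module_init(mylog_init);
    module_exit(mylog_exit);
    MODULE_LICENSE("GPL");

User space then does the actual writing to disk, e.g. cat /proc/mylog > ~/log_file.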

Identifying that a file is being copied outside the computer in LKM

Assume that I have a loadable kernel module (LKM) inserted into the Linux kernel and have hooked the read, write, open and close functions. I can now stop access to any file, but I want to stop files from being copied off the machine, e.g. to a USB device, card, disk, etc. What I want to know is: sitting in the LKM with those calls hooked, how can I identify that a file is being written to an external device?
I also want to know which system calls are used during a copy operation. My understanding is that a program opens the file, reads from it (read system call) and then writes to a second file (write system call), but I observed strange behavior when I was trying to block write access to a file: a process that opens a file never calls the write operation on that file when saving it (checked with a PDF viewer).
If anybody has an idea about this strange behavior, or an idea of how to stop a file from being written, please share it as well.
They could mmap it to do read/write. Or they could read the entire original file into memory, close it, then open the destination.
Or they could encrypt the file, then write it out to a new file on the USB.
Or they could do minor edits to the contents, then save it out.
Or they could use gvfs to access the network/USB device.
Or the user could reboot and copy the file in a different OS.
All that really highlights is how difficult the problem is - a determined user will always find a way to extract data from a system they have access to.
Your best bet is just to prevent accidental leakage - scan files on the removable media after close, and check that they don't contain anything you don't want leaked. Overwrite and delete them if they do.
Or else block the devices from being mounted in the first place, and disable gvfs as well.
As to why your hook isn't intercepting the write(), either:
Your hook isn't actually intercepting the operation.
The application isn't using write() to put the content in a file (for example it may be using mmap, as sketched below).
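To illustrate the mmap point: a program can move a file's contents onto another device without ever issuing a write() (or even a read()) on the file descriptors, which is one reason a read/write hook alone misses such cases. A minimal sketch (paths are made up; error handling kept short):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int src = open("/home/user/secret.pdf", O_RDONLY);
        int dst = open("/media/usb/copy.pdf", O_RDWR | O_CREAT | O_TRUNC, 0644);
        if (src < 0 || dst < 0) { perror("open"); return 1; }

        struct stat st;
        fstat(src, &st);
        ftruncate(dst, st.st_size);          /* size the destination file */

        /* Map both files; the copy happens through page faults, not read()/write(). */
        void *in  = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, src, 0);
        void *out = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE, MAP_SHARED, dst, 0);
        if (in == MAP_FAILED || out == MAP_FAILED) { perror("mmap"); return 1; }

        memcpy(out, in, st.st_size);
        msync(out, st.st_size, MS_SYNC);

        munmap(in, st.st_size);
        munmap(out, st.st_size);
        close(src);
        close(dst);
        return 0;
    }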

Artificially modify server load in Ubuntu

I am curious whether it is possible to artificially modify the server load in Ubuntu, or more generally in Linux. I am working on an application that reacts to the server load, and in order to test it, it would be nice if I could change the server load easily.
I am currently running an over-active program that literally generates load, but I'd prefer not to keep overheating my laptop (it's getting hot!).
One of the most important things to know about Linux (or Unix) systems is that everything is just a file. Since you are just reading from /proc/loadavg, the easiest way to accomplish what you are after is simply to make a text file that contains the line of text you would see when running cat /proc/loadavg. Then have your program read from that file instead of /proc/loadavg and it will be none the wiser. If you want to test different "artificial" situations, just change the text in this file and save it. When your testing is done, simply change your program back to reading from /proc/loadavg and you can be sure it will work as expected.
Note, you can make this text file anywhere you want: in your home directory, in the program directory, wherever. However, you shouldn't create it in /proc; that directory is reserved for system objects.
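A minimal sketch of what that looks like on the application side (the LOADAVG_PATH environment variable is just one possible way to make the path switchable; it is an assumption, not something from the question):

    #include <stdio.h>
    #include <stdlib.h>

    /* Read the 1/5/15-minute load averages from a loadavg-formatted file.
     * Point path at /proc/loadavg in production, or at a hand-written
     * test file such as /tmp/loadavg while testing. */
    static int read_loadavg(const char *path, double avg[3])
    {
        FILE *f = fopen(path, "r");
        if (!f)
            return -1;
        int n = fscanf(f, "%lf %lf %lf", &avg[0], &avg[1], &avg[2]);
        fclose(f);
        return n == 3 ? 0 : -1;
    }

    int main(void)
    {
        const char *path = getenv("LOADAVG_PATH");
        if (!path)
            path = "/proc/loadavg";

        double avg[3];
        if (read_loadavg(path, avg) == 0)
            printf("load: %.2f %.2f %.2f\n", avg[0], avg[1], avg[2]);
        return 0;
    }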
You can use the stress command, see http://weather.ou.edu/~apw/projects/stress/
A tool to impose load on and stress test a computer system
sudo apt-get install stress
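A typical invocation would be something like stress --cpu 4 --timeout 60s, which keeps four CPU workers busy for a minute; check man stress for the exact flags your version supports, as the numbers here are only an illustration.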
To avoid heating up the CPU, you can run the load inside a virtual machine with a small CPU allocation. VirtualBox and qemu-kvm are free.
Use chroot to run the various pieces of software you're testing with a specified directory as the root directory. Set up a manufactured/modified /proc/loadavg relative to that new root directory, too.
chroot will let you create a dummy file that appears to have /proc/loadavg as its path, so the software will observe your manufactured values even if you can't change your code to look for load data in a different location.
Since you don't want to actually/literally stress the machine, something like stress is not what you are after.
As stated, /proc/loadavg would be the place to set system load averages (faux loads).
But if that's also not the meat of what you're after, I would absolutely suggest:
getloadavg
watchdog
and even possibly Munin plugins
There are two methods.
1. Hack /proc/loadavg
The machine is not overstressed.
Your program reads load values from a file.
To do: hack Linux to report a fake load value.
2. Modify your program
The machine is not overstressed.
Your program reads load values from a file.
To do: change four characters in your program: replace /proc/loadavg with /tmp/loadavg.
You can decide now. Calculate the costs ;)

reading and writing from a file in linux kernel

I'm writing a patch for the VFS FAT implementation in kernel 3.0.
I want to add POSIX attributes to FAT files that are created in Linux.
To achieve that, I must save a file that contains all the relevant information on the mounted drive.
I know that reading and writing files from kernel space is something that normally shouldn't be done, and I'm looking for another way to read/write the data.
I saw articles on the net that suggested using /proc, or creating a user-space daemon that would do the IO for me. I wanted to know if anyone has seen, or knows where I can look at, an implementation of something like that, because I didn't find any examples on the net.
I'm not looking for a read/write-to-proc example; I want to see an entire solution to this issue.
Have a look at the quota implementation; this is a mechanism (ok, presumably not available on vfat) which reads/writes files from the kernel.
Additionally, the "loop" block device is another example of a kernel facility which does file IO.
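For orientation, the low-level pattern those in-kernel facilities build on (filp_open plus vfs_read/vfs_write under set_fs(KERNEL_DS)) looks roughly like this on a 3.0-era kernel. This is only a sketch of the mechanism, not an endorsement; the path and buffer handling are placeholders, and the usual warnings about in-kernel file IO still apply:

    #include <linux/fs.h>
    #include <linux/fcntl.h>
    #include <linux/err.h>
    #include <linux/uaccess.h>   /* get_fs, set_fs, KERNEL_DS */

    /* Read up to len bytes from path into buf; returns bytes read or -errno. */
    static ssize_t read_attr_file(const char *path, char *buf, size_t len)
    {
        struct file *filp;
        mm_segment_t old_fs;
        loff_t pos = 0;
        ssize_t ret;

        filp = filp_open(path, O_RDONLY, 0);
        if (IS_ERR(filp))
            return PTR_ERR(filp);

        /* vfs_read expects a user-space pointer; temporarily widen the
         * address limit so a kernel buffer is accepted (3.0-era idiom). */
        old_fs = get_fs();
        set_fs(KERNEL_DS);
        ret = vfs_read(filp, (char __user *)buf, len, &pos);
        set_fs(old_fs);

        filp_close(filp, NULL);
        return ret;
    }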
