How can I find file system concurrency issues on Linux?

I have an application running on Linux, and I find myself wanting Windows (!).
The problem is that every 1000 times or so I run into concurrency problems that are consistent with concurrent reading/writing of files. I am fairly sure that this behavior would be prohibited by file locking under Windows, but I don't have a sufficiently fast Windows box to check.
There is simply too much file access (too much data) to expect strace to work reliably - the sheer volume of output is likely to change the problem too. It also happens on different files every time. Ideally I would like to change/reconfigure the Linux file system to be more restrictive (as in fail-fast) with respect to concurrent access.
Are there any tools/settings I can use to achieve this?

Hmmm. Concurrent access to files is perfectly legitimate on POSIX-like systems, so there is no kind of "failure" mode associated with it. Is there a reason you can't use file locking on Linux? It's difficult to tell from your description what the actual problem is (1000 times of what?), but it sounds like the traditional flock() or lockf() system calls might be what you're looking for.
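For example, cooperating processes could take an advisory lock before touching a shared file. A minimal sketch (the path is just a placeholder):

#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>
#include <unistd.h>

int main(void)
{
    /* Placeholder path; every cooperating process must lock the same file. */
    int fd = open("/tmp/shared.dat", O_RDWR | O_CREAT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (flock(fd, LOCK_EX) == 0) {   /* blocks until the exclusive lock is granted */
        /* ... read/modify/write the file safely here ... */
        flock(fd, LOCK_UN);          /* release the lock */
    }
    close(fd);
    return 0;
}

Note that flock() is advisory: every process that touches the file has to take the lock. Passing LOCK_EX | LOCK_NB makes the call fail immediately with EWOULDBLOCK instead of waiting, which is about as close to the fail-fast behaviour you ask for as POSIX advisory locking gets.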

For some reason I thought you were using C++. The following applies if you are.
If you are using multi-threading with fstream IO and custom streambufs, or you have disabled sync_with_stdio, then yes, the C++ iostreams will act differently on Linux than they do on Windows.
I ran into this with one of my own projects.
Windows defines a mutex in its iostream sentry. Linux does not. Linux does seem to have locking in its C stdio functions, so usually that works out anyway.
However, I defined a custom debug streambuf that didn't go through stdio and got all sorts of corruption in Linux.
I got around it by using a mutex that is preprocessed out if the OS is Windows.
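Roughly, the workaround looks like this (the original fix lived in a custom C++ streambuf; this plain-C sketch with a made-up dbg_write() only illustrates the "mutex that disappears on Windows" idea):

#include <stdio.h>

#ifndef _WIN32
#include <pthread.h>
/* On Linux the extra lock is needed; on Windows it is preprocessed out. */
static pthread_mutex_t dbg_lock = PTHREAD_MUTEX_INITIALIZER;
#define DBG_LOCK()   pthread_mutex_lock(&dbg_lock)
#define DBG_UNLOCK() pthread_mutex_unlock(&dbg_lock)
#else
#define DBG_LOCK()   ((void)0)
#define DBG_UNLOCK() ((void)0)
#endif

/* Made-up debug sink; the real project wrote through a custom streambuf. */
void dbg_write(const char *msg)
{
    DBG_LOCK();
    fputs(msg, stderr);
    DBG_UNLOCK();
}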

Quick questions about Linux kernel modules

I'm very familiar with Linux (I've been using it for 2 years, no Windows for 1 1/2 years), and I'm finally digging deeper into kernel programming and working on a project. So my questions are:
Will a kernel module run faster than a traditional C program?
How can I communicate with a module (is that even possible), for example call a function in it?
1. Will a kernel module run faster than a traditional C program?
It Depends™
Running as a kernel module means you get to play by different rules, you potentially get to avoid some context switches depending on what you are doing. You get access to some powerful tools that can be leveraged to optimize your code, but don't expect your code to run magically faster just by throwing everything in kernelspace.
2. How can I communicate with a module (is that even possible), for example call a function in it?
There are various ways:
You can use the various file system interfaces: procfs, sysfs, debugfs, sysctl, ...
You could register a char device
You can make use of the Netlink interface
You could create new syscalls, although that's heavily discouraged
And you can always come up with your own scheme, or use some less common APIs
Will a kernel module run faster than a traditional C program?
The kernel is already a C program, most likely compiled with the same compiler you use, so generic algorithms or processor-intensive computations will run at almost the same speed.
But most userspace programs (like bash) have to ask the kernel to perform operations on system resources, e.g. printing the prompt on the monitor. That requires entering the kernel with a system call, sending data over the tty interfaces, and passing it to a video driver, which may introduce some latency. If you implemented bash in the kernel, you could call the video driver directly, which would definitely be faster.
That approach, however, has drawbacks. First of all, bash should also be able to print its prompt to an SSH session or a serial console, and that complicates the logic. Also, if your in-kernel bash hangs, you cannot just kill it; you have to reboot the system.
How can I communicate with a module (is that even possible), for example call a function in it?
In addition to the excellent list provided by #tux3, I would suggest starting with char devices.
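If you do start with a char device, a misc device keeps the boilerplate small. This is only a sketch (the device name and message are made up), and it assumes a reasonably recent kernel, since module_misc_device() appeared around v4.9:

/* Minimal read-only char device registered as a misc device. */
#include <linux/module.h>
#include <linux/miscdevice.h>
#include <linux/fs.h>

static const char msg[] = "hello from the kernel\n";

static ssize_t hello_read(struct file *f, char __user *buf,
                          size_t len, loff_t *off)
{
    /* Copies the message to userspace and advances the file offset. */
    return simple_read_from_buffer(buf, len, off, msg, sizeof(msg) - 1);
}

static const struct file_operations hello_fops = {
    .owner = THIS_MODULE,
    .read  = hello_read,
};

static struct miscdevice hello_dev = {
    .minor = MISC_DYNAMIC_MINOR,
    .name  = "hello_chr",              /* shows up as /dev/hello_chr */
    .fops  = &hello_fops,
};

module_misc_device(hello_dev);         /* registers on load, unregisters on unload */
MODULE_LICENSE("GPL");

After insmod, a plain cat /dev/hello_chr from userspace reads the message, which is about the simplest round trip between a process and a module.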

Need advice for disk access program

I'm envisioning a program I will need to write and need some advice on the language. I will need to do raw disk access so I can display hex data, scroll or jump around on the disk, and do calculations from the data. I have been using Java the most, and its portability between OSes has certainly been a benefit for my other projects, but raw disk access either isn't possible, would require JNI, or may be possible on *nix where you can access disks as "files". I keep reading different things. I can handle this type of work using Files in Java, by the way, but in this project I need to access the disk directly so that imaging the disk to files beforehand isn't needed.
It would be nice to make it as portable as I could, since there is a real benefit to using different OSes, but it may not be worth it and I should just stick with Windows and a natively compiled language. Is there any existing JNI code that could help? I have experience in other languages, but I haven't used C++ in a long time. Should I forget about Java and try out C#? Someone told me that Python has libraries available for this type of thing despite it being an interpreted language, so what about Python? What would be best for the project? What would be good for me to learn?
Searching around for raw disk access with Java or Python does not seem to give any useful results. Thanks for any help!
EDIT
It seems like this will be quite involved, learning what I need to know, and then learning that. It's too bad I couldn't use disk images instead because then I'd be able to start working on it immediately in Java, which I'm comfortable with and I know I could make a good product. I've gotten great throughput in other raw data processing projects with Java so that doesn't worry me. Plus it would be truly portable. Hmm might have to consider it more. I'd probably need a big azz storage system to hold all the images though :)
UPDATE
Just a note for anyone who finds this question... I have figured out that this works just by specifying the disk for the File using the PhysicalDrive notation (in Windows), like the answer below by hunsricker. However, there are some issues. First, if you do an exists check with File.exists(), it says the file does not exist. Also, the file size is zero, and getting a "java.io.IOException: The drive cannot find the sector requested" is how I know I'm at the end of the file.

And the worst part: I was getting some odd runtime errors doing this when I was reading some bytes and skipping some (64) bytes in a loop. I altered my program a bit to read different amounts, and that changed where the error occurred. I was using BufferedInputStream instead of RandomAccessFile like hunsricker below, by the way; not sure if it makes a difference. My only explanation is that since I'm doing physical disk access, it doesn't like that I am not reading in even 512-byte sectors or 1K blocks or such. Indeed, when I read even 1K, 2K, 512 bytes, etc., and don't skip anything, it works fine and runs to the end. The errors I saw were java.io.IOException "incorrect function" and java.io.IOException "the parameter is incorrect", and there was no rhyme or reason to them. Then I made image files of the same data and ran my program on those, and it would do any combination of reading and skipping bytes with no problem. Physical disk access is more picky, I guess.
I was looking myself for a way to access the raw data of a physical drive, and now that I got it to work, I just want to tell you how. You can access raw disk data directly from within Java... just run the following code with administrator privileges:
import java.io.File;
import java.io.RandomAccessFile;

// "\\\\.\\PhysicalDrive0" is the Win32 device path of the first physical drive
File diskRoot = new File("\\\\.\\PhysicalDrive0");
RandomAccessFile diskAccess = new RandomAccessFile(diskRoot, "r");
byte[] content = new byte[1024];
diskAccess.readFully(content);   // reads the drive's first 1024 bytes
So you will get the first KB of your first physical drive on the system. To access logical drives - as mentioned above - just replace 'PhysicalDrive0' with the drive letter, e.g. 'D:'.
Oh yes... I tried this with Java 1.7 on a Win 7 system...
RageD's link brought me to the solution... thank you :-)
Disk access will depend on the disk's particular drivers. And since this is such a low-level task, I doubt Java/Python would have such support (these languages are generally used for fast, high-level software package development). Since you will probably not be aware of the disks' particular hardware implementations, you will probably have to end up using an operating system API (which is OS-dependent of course). I would recommend looking into C and/or the particular assembly language for the architecture you plan to do this work on. Then, I would recommend continuing your search to find the appropriate API for your target OS.
EDIT
For Windows, a good place to start is here. More specifically, MSDN's CreateFile() is probably a function you would be interested in.
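A rough C sketch of that route, reading the first sector of the first physical drive (run it elevated; error handling is kept minimal and the drive path is just the usual \\.\PhysicalDrive0 example):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE h = CreateFileA("\\\\.\\PhysicalDrive0", GENERIC_READ,
                           FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                           OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    BYTE sector[512];            /* raw reads should be in sector-sized chunks */
    DWORD got = 0;
    if (ReadFile(h, sector, sizeof(sector), &got, NULL))
        printf("read %lu bytes, first byte 0x%02X\n", got, sector[0]);

    CloseHandle(h);
    return 0;
}

The sector-sized read also lines up with the alignment behaviour described in the question's update above.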

Can regular file reading benefit from non-blocking IO?

It seems to me that it cannot, and I found a link that supports my opinion. What do you think?
The content of the link you posted is correct. A regular file, opened in non-blocking mode, will always be "ready" for reading; when you actually try to read it, blocking (or more accurately, as your source points out, sleeping) will occur until the operation can succeed.
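You can see the "always ready" part directly: poll() reports a regular file readable straight away, even though the read() that follows may still sleep on disk I/O. A small sketch (the file path is just an example):

#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/etc/hostname", O_RDONLY | O_NONBLOCK);  /* any regular file */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct pollfd p = { .fd = fd, .events = POLLIN };
    int n = poll(&p, 1, 0);                 /* zero timeout: do not wait at all */

    /* Regular files always report ready; the later read() can still sleep. */
    printf("%s\n", (n == 1 && (p.revents & POLLIN)) ? "ready" : "not ready");

    close(fd);
    return 0;
}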
In any case, I think your source needs some sedatives. One angry person, that is.
I've been digging into this quite heavily for the past few hours and can attest that the author of the link you cited is correct. However, there appears to be "better" (using that term very loosely) support for non-blocking IO against regular files in the native Linux kernel for v2.6+. The "libaio" package contains a library that exposes the functionality offered by the kernel, but it has some caveats about which file systems are supported, and it's not portable to anything outside of Linux 2.6+.
And here's another good article on the subject.
You're correct that nonblocking mode has no benefit for regular files, and is not allowed to. It would be nice if there were a secondary flag that could be set, along with O_NONBLOCK, to change this, but due to the way cache and virtual memory work, it's actually not an easy task to define what correct "non-blocking" behavior for ordinary files would mean. Certainly there would be race conditions unless you allowed programs to lock memory associated with the file. (In fact, one way to implement a sort of non-sleeping IO for ordinary files would be to mmap the file and mlock the map. After that, on any reasonable implementation, read and write would never sleep as long as the file offset and buffer size remained within the bounds of the mapped region.)
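A sketch of that mmap/mlock idea (file path, sizes and error handling simplified; mlock may need privileges or a raised RLIMIT_MEMLOCK):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/etc/hostname", O_RDONLY);          /* example file */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0 || st.st_size == 0) { close(fd); return 1; }

    void *map = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (map == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    if (mlock(map, st.st_size) < 0)                    /* pin the pages in RAM */
        perror("mlock");

    char buf[64];
    size_t n = (size_t)st.st_size < sizeof(buf) - 1 ? (size_t)st.st_size
                                                    : sizeof(buf) - 1;
    memcpy(buf, map, n);                               /* no sleeping read() here */
    buf[n] = '\0';
    printf("%s", buf);

    munlock(map, st.st_size);
    munmap(map, st.st_size);
    close(fd);
    return 0;
}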

How to "hibernate" a process in Linux by storing its memory to disk and restoring it later?

Is it possible to 'hibernate' a process in Linux?
Just like 'hibernate' on a laptop, I would like to write all the memory used by a process to disk, freeing up the RAM. Then later on, I can 'resume the process', i.e., read all the data back from disk into RAM, and continue with my process.
I used to maintain CryoPID, which is a program that does exactly what you are talking about. It writes the contents of a program's address space, VDSO, file descriptor references and states to a file that can later be reconstructed. CryoPID started when there were no usable hooks in Linux itself and worked entirely from userspace (actually, it still does work, depending on your distro / kernel / security settings).
Problems were (indeed) sockets, pending RT signals, numerous X11 issues, the glibc caching getpid() implementation, amongst many others. Randomization (especially of the VDSO) turned out to be insurmountable for the few of us working on it after Bernard walked away from it. However, it was fun and became the topic of several master's theses.
If you are just contemplating a program that can save its running state and re-start directly into that state, it's far, far easier to just save that information from within the program itself, perhaps when servicing a signal.
I'd like to put a status update here, as of 2014.
The accepted answer suggests CryoPID as a tool to perform checkpoint/restore, but I found the project to be unmaintained and impossible to compile with recent kernels.
Now, I have found two actively maintained projects providing the application checkpointing feature.
The first, the one I suggest because I have had better luck running it, is CRIU. It performs checkpoint/restore mainly in userspace, and requires the kernel option CONFIG_CHECKPOINT_RESTORE to be enabled to work.
Checkpoint/Restore In Userspace, or CRIU (pronounced kree-oo, IPA: /krɪʊ/, Russian: криу), is a software tool for Linux operating system. Using this tool, you can freeze a running application (or part of it) and checkpoint it to a hard drive as a collection of files. You can then use the files to restore and run the application from the point it was frozen at. The distinctive feature of the CRIU project is that it is mainly implemented in user space.
The second is DMTCP; quoting from their main page:
DMTCP (Distributed MultiThreaded Checkpointing) is a tool to transparently checkpoint the state of multiple simultaneous applications, including multi-threaded and distributed applications. It operates directly on the user binary executable, without any Linux kernel modules or other kernel modifications.
There is also a nice Wikipedia page on the subject: Application_checkpointing
The answers mentioning ctrl-z are really talking about stopping the process with a signal, in this case SIGTSTP. You can issue a stop signal with kill:
kill -STOP <pid>
That will suspend execution of the process. It won't immediately free the memory used by it, but as memory is required for other processes the memory used by the stopped process will be gradually swapped out.
When you want to wake it up again, use
kill -CONT <pid>
The more complicated solutions, like CryoPID, are really only needed if you want the stopped process to be able to survive a system shutdown/restart - it doesn't sound like you need that.
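The same thing can be done programmatically with kill(2) if a shell isn't handy; a small sketch (the PID comes from the command line):

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    pid_t pid = (pid_t)atoi(argv[1]);

    if (kill(pid, SIGSTOP) < 0) { perror("SIGSTOP"); return 1; }   /* suspend */
    printf("process %d stopped; press Enter to resume\n", (int)pid);
    getchar();

    if (kill(pid, SIGCONT) < 0) { perror("SIGCONT"); return 1; }   /* resume */
    return 0;
}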
The Linux kernel has now partially implemented checkpoint/restart features: https://ckpt.wiki.kernel.org/; the status is here.
Some useful information is on LWN (Linux Weekly News):
http://lwn.net/Articles/375855/ http://lwn.net/Articles/412749/ ...
So the answer is "YES".
The issue is restoring the streams - files and sockets - that the program has open.
When your whole OS hibernates, the local files and such can obviously be restored. Network connections don't survive, but code that accesses the network typically has more error checking and survives error conditions (or ought to).
If you did per-program hibernation (without application support), how would you handle open files? What if another process accesses those files in the interim? etc?
Maintaining state when the program is not loaded is going to be difficult.
Wouldn't simply suspending the threads and letting the process get swapped to disk have much the same effect?
Or run the program in a virtual machine and let the VM handle suspension.
Short answer is "yes, but not always reliably". Check out CryoPID:
http://cryopid.berlios.de/
Open files will indeed be the most common problem. CryoPID states explicitly:
Open files and offsets are restored. Temporary files that have been unlinked and are not accessible on the filesystem are always saved in the image. Other files that do not exist on resume are not yet restored. Support for saving file contents for such situations is planned.
The same issues will also affect TCP connections, though CryoPID supports tcpcp for connection resuming.
I extended CryoPID, producing a package called Cryopid2, available from SourceForge. It can migrate a process as well as hibernate it (along with any open files and sockets - data in sockets/pipes is sucked into the process on hibernation and spat back into them when the process is restarted).
The reason I have not been active with this project is that I am not a kernel developer - both this (and/or the original CryoPID) need to get someone on board who can get them running with the latest kernels (e.g. Linux 3.x).
The CryoPID method does work, and it is probably the best solution to general-purpose process hibernation/migration in Linux that I have come across.
The short answer is "yes." You might start by looking at this for some ideas: ELF executable reconstruction from a core image (http://vx.netlux.org/lib/vsc03.html)
As others have noted, it's difficult for the OS to provide this functionality, because the application needs to have some error checking builtin to handle broken streams.
However, on a side note, some programming languages and tools that use virtual machines explicitly support this functionality, such as the Self programming language.
This is sort of the ultimate goal of a clustered operating system. Matthew Dillon has put a lot of effort into implementing something like this in his DragonFly BSD project.
Adding another workaround: you can use VirtualBox. Run your applications in a regular virtual machine and simply "save the machine state" whenever you want.
I know this is not an answer, but I thought it could be useful when there are no real options.
If for any reason you don't like VirtualBox, VMware and QEMU are just as good.
Ctrl-Z increases the chances that the process's pages will be swapped out, but it doesn't free the process's resources completely. The problem with freeing a process's resources completely is that things like file handles and sockets are kernel resources the process gets to use but doesn't know how to persist on its own. So Ctrl-Z is about as good as it gets.
There was some research on checkpoint/restore for Linux back in the 2.2 and 2.4 days, but it never made it past the prototype stage. It is possible (with the caveats described in the other answers) for certain values of possible - if you can write a kernel module to do it, it is possible. But for the common value of possible (can I do it from the shell on a commercial Linux distribution), it is not yet possible.
There's Ctrl+Z in Linux, but I'm not sure it offers the features you specified. I suspect you asked this question because it doesn't.

Logging frameworks for embedded Linux?

I need a small, portable framework for logging on embedded linux. Ideally it would output to a file or a socket, and having some sort of log rotation/compression would also be nice.
So far, I've found a lot of frameworks, but almost all of them have daunting build procedures or require the use of application frameworks (e.g. log4cxx requires the Apache Portable Runtime, which I'd rather not bother with...).
Just looking for something simple and robust, but everything I seem to find is complicated or requires lots of secondary junk just to run.
Suggestions? (And if the answer is "roll my own", that's fine, but... it'd be great to avoid that.)
Use syslog(3) and syslogd from BusyBox. BusyBox can be very compact when stripped down and doesn't depend on anything other than libc. You can strip out everything you don't want so it is perfectly possible to use it only for logging.
We use BusyBox on a number of embedded systems, both Linux and uClinux, and find its logging facilities highly reliable.
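Using syslog(3) is about as small as logging gets; a minimal sketch (the ident string and priorities are just examples - messages land wherever your syslogd is configured to put them):

#include <syslog.h>

int main(void)
{
    /* "mydaemon" is an arbitrary ident shown in every log line. */
    openlog("mydaemon", LOG_PID | LOG_CONS, LOG_DAEMON);

    syslog(LOG_INFO, "starting up");
    syslog(LOG_WARNING, "battery at %d%%", 12);

    closelog();
    return 0;
}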
I have no experience with the log4cxx module, but I am using APR on an embedded target running Linux (it is based on the Atmel AT91SAM926x processor family). It was really simple to configure and compile (more or less ./configure --host=arm-none-linux-gnueabi), so I would not be too afraid of going down the log4cxx path.
Maybe you should consider spending some time on a good logging framework, since this is what you are going to use on your embedded Linux... and printf...
I cooked up something where I can enable/disable various logging levels per module at runtime.
Did you ever try debugging multithreaded apps on Linux?
Good luck!
Implementing a very robust logging mechanism in C took about 1000 lines of code (in our code base). 90% of this is defines for the different sections. It includes various macros such as DBG_E, DBG_W, DBG_TRACE, etc., splitting output into sections, and runtime changing of the debug level and debug modules (it does not include compression, just a simple print abstraction that can be implemented in different ways: file/socket/serial, etc.).
I estimate it takes about a few days to implement. The downside is that you will spend a few days; the upside is that you will get something that works for your needs and nothing more. I understand that you are working on an embedded platform, where footprint and memory usage are important, and the best, most optimized solution will be one you write yourself. We invested those few days once, use the result across different products/projects, and adjust/improve it over time according to real needs. The main problem with a generic solution is that it usually does sort of what you need plus a lot more, and that "more" is usually just a waste of resources.
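A stripped-down sketch of that kind of macro layer (the names DBG_E/DBG_W/DBG_TRACE come from the answer; everything else here is made up, and a real version would add per-module masks and a pluggable file/socket/serial backend):

#include <stdio.h>

/* Runtime-adjustable level, e.g. set from a config file or a debug command. */
enum { DBG_LVL_ERR, DBG_LVL_WARN, DBG_LVL_TRACE };
static int g_dbg_level = DBG_LVL_WARN;

/* Single print backend so the output can later be redirected without
 * touching the call sites. Uses the GNU ##__VA_ARGS__ extension. */
#define DBG_PRINT(lvl, tag, fmt, ...)                                   \
    do {                                                                \
        if ((lvl) <= g_dbg_level)                                       \
            fprintf(stderr, "[%s] %s:%d: " fmt "\n",                    \
                    tag, __FILE__, __LINE__, ##__VA_ARGS__);            \
    } while (0)

#define DBG_E(fmt, ...)     DBG_PRINT(DBG_LVL_ERR,   "E", fmt, ##__VA_ARGS__)
#define DBG_W(fmt, ...)     DBG_PRINT(DBG_LVL_WARN,  "W", fmt, ##__VA_ARGS__)
#define DBG_TRACE(fmt, ...) DBG_PRINT(DBG_LVL_TRACE, "T", fmt, ##__VA_ARGS__)

int main(void)
{
    DBG_E("sensor init failed: %d", -5);   /* printed at the default level */
    DBG_TRACE("raw value = %d", 42);       /* filtered out below TRACE level */
    return 0;
}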
I can't imagine that your platform is too small to include log4cxx and APR; neither is a large library, and even the tiniest platform is likely to have space for them.
You could just use syslog, which is provided by the C library - a syslog daemon is provided by BusyBox (which, no doubt, you already use if you're on a really tiny platform). I don't know if BusyBox's syslogd can log to the network, but it has some level of flexibility. You can do log rotation using shell scripts pretty trivially.
Use klogd: it reads kernel log messages (from the /proc/kmsg interface) and redirects them to the appropriate destination. You can use a user-configurable syslogd daemon along with klogd, which will redirect kernel messages into the appropriate files in the /var/log/ directory.
For instance, logs related to the mail service will be stored in /var/log/mail.log, and logs related to the kernel boot process will be stored in /var/log/boot.log. Log parsing can be configured via the syslogd configuration file.
But using syslogd may degrade system performance, because for every log message the syslog daemon performs a disk operation to store that log in the appropriate file.
Log sequence:
messages from the kernel ---> klogd (reads messages from the kernel ring buffer) ---> syslogd ---> /var/log/*
