What is the equivalent of the Solaris coreadm command in Linux?
coreadm allows you to configure various aspects of core file generation (naming patterns, logging, ...) on Solaris systems. Linux isn't as feature-rich here and mainly lets you customize the naming of core files. This is done by writing a format string into
/proc/sys/kernel/core_pattern.
See man 5 core for the details.
Using the method discussed here (see man -s5 core, under "Piping core dumps to a program"), you could do something along these lines (with root permissions, of course):
~ cat /proc/sys/kernel/core_pattern
|/path/to/a/script some arguments
... and put together a script that reads the coredump on stdin and writes it out to a file whose path is dictated in some other fashion.
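For illustration, here is a minimal sketch of such a helper written in C instead of a shell script (the /tmp/cores directory, the helper path and the %p/%e arguments are assumptions; the kernel passes whatever specifiers you put after the | as argv, and feeds the dump on stdin):

/* core_helper.c - minimal sketch of a core_pattern pipe handler in C.
 * Assumes core_pattern is set to something like
 *   |/usr/local/bin/core_helper %p %e
 * so argv[1] is the PID and argv[2] the executable name; the kernel
 * feeds the core image on stdin. */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

int main(int argc, char **argv)
{
    char path[256];
    char buf[65536];
    ssize_t n;

    /* Build the output path from the specifiers the kernel passed as arguments. */
    snprintf(path, sizeof(path), "/tmp/cores/core.%s.%s",
             argc > 2 ? argv[2] : "unknown",
             argc > 1 ? argv[1] : "0");

    int fd = open(path, O_WRONLY | O_CREAT | O_EXCL, 0600);
    if (fd < 0)
        return 1;

    /* Copy the core image from stdin to the output file. */
    while ((n = read(STDIN_FILENO, buf, sizeof(buf))) > 0) {
        if (write(fd, buf, (size_t)n) != n) {
            close(fd);
            return 1;
        }
    }
    close(fd);
    return 0;
}

Compile it, install it at an absolute path and reference it from core_pattern. Note that the kernel runs the handler as root, which leads directly to the next point.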
From a security standpoint this seems like a very scary thing to do, though. There are a lot of potential gotchas.
As mentioned on the sigquit blog, changes to /proc/sys/kernel/core_pattern do not persist across reboots; to make them permanent you need to change /etc/sysctl.conf, either by editing it directly or by using sysctl.
The thing is, I want to track whether a user tries to open a file on a shared account. I'm looking for any record or technique that helps me know, at run time, whether the file in question has been opened.
I want to create a script that monitors whether the file is open, and if it is, sends an alert to a particular email address. The file I have in mind is a regular file.
I tried using lsof | grep filename for checking if a file is open in gedit, but the command doesn't return anything.
Actually, I'm trying this for a pet project, hence the question.
The command lsof -t filename shows the IDs of all processes that have the particular file opened. lsof -t filename | wc -w gives you the number of processes currently accessing the file.
The fact that a file has been read into an editor like gedit does not mean that the file is still open. The editor most likely opens the file, reads its contents and then closes the file. After you have edited the file you have the choice to overwrite the existing file or save as another file.
You could (in addition to the other answers) use the Linux-specific inotify(7) facilities.
My understanding is that you want to track one (or a few) particular given file, with a fixed file path (actually a given i-node). E.g. you would want to track when /var/run/foobar is accessed or modified, and do something when that happens.
In particular, you might want to install and use incrond(8) and configure it through incrontab(5).
If you want to run a script when some given file (on a native local file system, e.g. ext4 or Btrfs, but not NFS) is accessed or modified, inotify is the right mechanism, and incrond is designed exactly for that purpose.
PS. AFAIK, inotify doesn't work well for remote network files, e.g. NFS filesystems (in particular when another NFS client machine is modifying a file).
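As a minimal sketch of the inotify(7) approach (the watched path is the /var/run/foobar example from above; a real monitor would run your alert script, e.g. send an email, instead of printing):

/* watch_file.c - minimal inotify(7) sketch: report when a given file is
 * opened or modified. The path and the reactions are illustrative. */
#include <stdio.h>
#include <unistd.h>
#include <sys/inotify.h>

int main(void)
{
    const char *path = "/var/run/foobar";   /* example path from the text above */
    char buf[4096] __attribute__((aligned(__alignof__(struct inotify_event))));

    int fd = inotify_init1(0);
    if (fd < 0) { perror("inotify_init1"); return 1; }

    if (inotify_add_watch(fd, path, IN_OPEN | IN_MODIFY | IN_CLOSE_WRITE) < 0) {
        perror("inotify_add_watch");
        return 1;
    }

    for (;;) {
        ssize_t len = read(fd, buf, sizeof(buf));   /* blocks until events arrive */
        if (len <= 0)
            break;
        for (char *p = buf; p < buf + len; ) {
            const struct inotify_event *ev = (const struct inotify_event *)p;
            if (ev->mask & IN_OPEN)
                printf("%s was opened\n", path);
            if (ev->mask & IN_MODIFY)
                printf("%s was modified\n", path);
            if (ev->mask & IN_CLOSE_WRITE)
                printf("%s was closed after writing\n", path);
            p += sizeof(struct inotify_event) + ev->len;
        }
    }
    close(fd);
    return 0;
}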
If the files you care about are source files of some kind, you might be interested in revision control systems (like git) or build systems (like GNU make); in a certain way these tools are related to file modification.
You could also have the particular file sit in some FUSE filesystem, and write your own FUSE daemon.
If you can restrict and modify the programs accessing the file, you might want to use advisory locking, e.g. flock(2), lockf(3).
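A minimal sketch of flock(2)-based advisory locking, assuming every program touching the file cooperates (the file name is illustrative):

/* lock_demo.c - minimal sketch of advisory locking with flock(2).
 * Every program accessing the shared file must cooperate by taking the
 * same lock; the file name below is just an example. */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/file.h>

int main(void)
{
    int fd = open("/tmp/shared.data", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    if (flock(fd, LOCK_EX) < 0) {   /* blocks until the exclusive lock is granted */
        perror("flock");
        return 1;
    }

    /* ... read and/or modify the file here, knowing that no other
     * cooperating process holds the lock ... */

    flock(fd, LOCK_UN);             /* release (also released when fd is closed) */
    close(fd);
    return 0;
}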
Perhaps the data sitting in the file should be in some database (e.g. SQLite, or a real DBMS like PostgreSQL or MongoDB). ACID properties are important...
Notice that the filesystem and the mount options may matter a lot.
You might want to use the stat(1) command.
It is difficult to help more without understanding the real use case and the motivation; beware of the XY problem.
Probably the workflow is wrong (having a shared file that several users are able to write), and you should approach the overall issue in some other way. For a pet project I would at least recommend using an advisory lock, and accessing and modifying the information only through your own programs (perhaps setuid) using flock (this excludes ordinary editors like gedit or commands like cat ...). However, your implicit use case seems well suited to a DBMS approach (a database does not have to contain a lot of data, it might be tiny), or to an indexed, locked file such as those handled by the GDBM library.
Remember that on POSIX systems and Linux, several processes can access (and even modify) the same file simultaneously (unless you use some locking or synchronization).
Reading the Advanced Linux Programming book (freely available) would give you a broader picture (but it does not mention inotify, which appeared after the book was written).
You can use ls -lrt: it lists files sorted by modification time, with the most recently modified files last, so you can see when the file was last written to. That only hints at recent write activity rather than proving the file is currently open. Make sure that you are in the right directory.
I am working on a Linux machine (running openSUSE 13.1 w/ KDE, specifically) and I would like to determine what commands are actually being issued in the background when I do something with an application's GUI.
My question is very similar to the following one which has received no answer:
https://stackoverflow.com/questions/20930239/how-can-i-see-the-commands-being-passed-in-backend-of-a-gui-application
If it helps at all, the specific task I am trying to accomplish is figuring out what the command-line equivalent is for sending a file to the Trash in KDE's Dolphin utility. I would like to make an alias for this functionality in my .bashrc so that I have a "gentler" alternative to rm. But I would rather know the answer to my more general question so that I can do similar things in the future.
My naive guess was that a log file might exist somewhere. Then I could do a task with a GUI and just tail that log file afterward to see what the underlying commands were for what I just did in the GUI. As far as I can tell, however, no such log exists.
To move a file foo to your trash bin, try
mv foo $HOME/Trash/
so you could make that a shell function in your .bashrc
function movetotrash() {
    # quote "$@" so filenames with spaces work; adjust the destination if your
    # desktop uses the XDG trash location ~/.local/share/Trash/files instead
    mv -- "$@" "$HOME/Trash/"
}
AFAIK, most GUI applications don't have log files. They are generally free software (using free software libraries), so you could study their source code and improve it. Try to interact with their communities (and use strace, as I mentioned in a comment).
BTW, not every GUI application runs external commands. Some do (e.g. IDEs really do fork commands like gcc), but others issue system calls directly (a file manager probably won't fork an mv; it will copy the contents itself or call the rename(2) syscall).
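For instance, here is a sketch of how a program could "trash" a file with a direct rename(2) call instead of forking mv (the paths are made up; a real file manager follows the XDG trash spec, also writes metadata, and rename(2) only works within a single filesystem):

/* trash_rename.c - illustrative only: move a file into a trash directory
 * with a direct rename(2) call, no mv process involved. */
#include <stdio.h>

int main(void)
{
    if (rename("/home/user/foo",
               "/home/user/.local/share/Trash/files/foo") != 0) {
        perror("rename");
        return 1;
    }
    return 0;
}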
I'm trying to get a core dump of a proprietary application running on an embedded linux system, for which I wrote some plugins.
What I did was:
ulimit -c unlimited
echo "/tmp/cores/core.%e.%p.%h.%t" > /proc/sys/kernel/core_pattern
kill -3 <PID>
However, no core dump is created. '/tmp/cores' exists and is writable for everyone, and the disk has enough space available. When I try the same thing with sleep 100 & as an example process and then kill it, the core dump is created.
I tried the example for the pipe syntax from the core manpage, which writes some parameters and the size of the core dump into a file called core.info. This file IS created, and the size is greater than 0. So if the core dump is created, why isn't it written to /tmp/cores? To be sure, I also searched for core* on the file system - it's not there. dmesg doesn't show any errors (but it does if I pipe the core dump to an invalid program).
Some more info: the system is probably based on Debian, but I'm not quite sure. GDB is not available, nor are many other tools; there is only BusyBox for basic stuff.
The process I'm trying to debug is automatically restarted soon after being killed.
So, I guess one solution would be to modify the example program in order to write the dump to a file instead of just counting bytes. But why doesn't it work just normally if there obviously is some data?
If your proprietary application calls setrlimit(2) with RLIMIT_CORE set to 0, or if it is setuid, no core dump happens. See core(5). Perhaps use strace(1) to find out. And you could install gdb (perhaps by [cross-] compiling it). See also gcore(1).
Also, check (and maybe set) the limit in the invoking shell. With bash(1), use the ulimit builtin. Otherwise, cat /proc/self/limits should display the limits. If you don't have bash you could code a small wrapper in C calling setrlimit then execve ...
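A minimal sketch of such a wrapper (the program name and usage are placeholders):

/* corewrap.c - sketch of a small C wrapper that raises RLIMIT_CORE and
 * then execs the real program, for systems without a full shell.
 * Usage: corewrap /path/to/program [args...] */
#include <stdio.h>
#include <unistd.h>
#include <sys/resource.h>

int main(int argc, char **argv)
{
    struct rlimit rl;

    if (argc < 2) {
        fprintf(stderr, "usage: %s program [args...]\n", argv[0]);
        return 1;
    }

    if (getrlimit(RLIMIT_CORE, &rl) == 0) {
        rl.rlim_cur = rl.rlim_max;   /* raise the soft limit as far as allowed */
        if (setrlimit(RLIMIT_CORE, &rl) != 0)
            perror("setrlimit");
    }

    execv(argv[1], &argv[1]);        /* replace ourselves with the real program */
    perror("execv");
    return 1;
}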
The Wikipedia page on Core dump says
In Unix-like systems, core dumps generally use the standard executable image format:
a.out in older versions of Unix,
ELF in modern Linux, System V, Solaris, and BSD systems,
Mach-O in OS X, etc.
Does this mean a core dump is executable by itself? If not, why not?
Edit: Since #WumpusQ.Wumbley mentions a coredump_filter in a comment, perhaps the above question should be: can a core dump be produced such that it is executable by itself?
In older Unix variants the default was to include the text as well as the data in the core dump, but the dump was written in the a.out format, not ELF. Today's default behavior (in Linux for sure, not 100% sure about BSD variants, Solaris, etc.) is to have the core dump in ELF format without the text sections, but that behavior can be changed.
However, a core dump cannot be executed directly in any case without some help. The reason for that is that there are two things missing from a simple core file. One is the entry point, the other is code to restore the CPU state to the state at or just before the dump occurred (by default also the text sections are missing).
In AIX there used to be a utility called undump, but I have no idea what happened to it. It doesn't exist in any standard Linux distribution I know of. As mentioned in the comments (by #WumpusQ), there is also an attempt at a similar project for Linux; however, that project is not complete and doesn't restore the CPU state to the original state. It is, however, still good enough in some specific debugging cases.
It is also worth mentioning that there exist other ELF-formatted files that cannot be executed either and which are not core files, such as object files (compiler output) and .so (shared object) files. Those require a linking stage before being run to resolve external addresses.
I emailed this question to the creator of the undump utility for his expertise, and got the following reply:
As mentioned in some of the answers there, it is possible to include
the code sections by setting the coredump_filter, but it's not the
default for Linux (and I'm not entirely sure about BSD variants and
Solaris). If the various code sections are saved in the original
core-dump, there is really nothing missing in order to create the new
executable. It does, however, require some changes in the original
core file (such as including an entry point and pointing that entry
point to code that will restore CPU registers). If the core file is
modified in this way it will become an executable and you'll be able
to run it. Unfortunately, though, some of the states are not going to
be saved so the new executable will not be able to run directly. Open
files, sockets, pipes, etc. are not going to be open and may even point
to other FDs (which could cause all sorts of weird things). However,
it will most probably be enough for most debugging tasks, such as running
small functions from gdb (so that you don't get a "not running an
executable" error).
As others have said, I don't think you can execute a core dump file without the original binary.
In case you're interested in debugging the binary (and it has debugging symbols included, in other words it is not stripped), then you can run gdb binary core.
Inside gdb you can use the bt command (backtrace) to get the stack trace from when the application crashed.
I need a small, portable framework for logging on embedded linux. Ideally it would output to a file or a socket, and having some sort of log rotation/compression would also be nice.
So far, I've found a lot of frameworks, but almost all of them have daunting build procedures or require the use of application frameworks (e.g. log4cxx requires the Apache Portable Runtime, which I'd rather not bother with...).
Just looking for something simple and robust, but everything I seem to find is complicated or requires lots of secondary junk just to run.
Suggestions? (And if the answer is "roll my own", that's fine, but... it'd be great to avoid that.)
Use syslog(3) and syslogd from BusyBox. BusyBox can be very compact when stripped down and doesn't depend on anything other than libc. You can strip out everything you don't want so it is perfectly possible to use it only for logging.
We use BusyBox on a number of embedded systems, both Linux and uClinux, and find its logging facilities highly reliable.
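Calling syslog(3) from C is only a few lines; here is a minimal sketch (the tag and messages are placeholders, and the output ends up wherever your syslogd, e.g. BusyBox's, is configured to put it):

/* syslog_demo.c - minimal syslog(3) usage sketch. */
#include <syslog.h>

int main(void)
{
    openlog("myapp", LOG_PID | LOG_CONS, LOG_DAEMON);   /* "myapp" is a placeholder tag */

    syslog(LOG_INFO, "application started");
    syslog(LOG_WARNING, "something looks odd: code=%d", 42);
    syslog(LOG_ERR, "something went wrong");

    closelog();
    return 0;
}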
I have no experience with the log4cxx module, but I am using APR on an embedded target running Linux (it is based on the Atmel AT91SAM926x processor family). It was really simple to configure and compile (more or less ./configure --host=arm-none-linux-gnueabi), so I would not be too afraid of going down the log4cxx path.
Maybe you should consider spending some time on a good logging framework, since this is what you are going to use on your embedded Linux. ... and printf ...
I cooked up something where I can enable/disable various logging levels per module at runtime.
Did you ever try debugging multithreaded apps on Linux?
Good luck!
Implementing a very robust logging mechanism in C takes about 1000 lines of code (in our code base). About 90% of that is defines for the different sections. It includes macros like DBG_E, DBG_W, DBG_TRACE, etc., splitting output by section, and run-time changing of the debug level and debug modules (it does not include compression, just a simple print abstraction that can be implemented in different ways: file/socket/serial, etc.).
I would estimate it takes a few days to implement. The downside is that you will spend those few days; the upside is that you will get something that works for your needs and nothing more. I understand that you are working on an embedded platform where footprint and memory usage are important, so the best, most optimized solution will be the one you write yourself. We invested those few days once, have used the result across different products/projects, and adjust/improve it over time according to real needs. The main problem with a generic solution is that it usually does roughly what you need plus a lot more, and that extra is usually just a waste of resources.
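For a flavor of what such a print abstraction can look like, here is a tiny sketch (all names and levels are invented for illustration; the real thing would add per-module tables, run-time level control, and pluggable back ends):

/* dbg_demo.c - tiny sketch of a leveled logging abstraction in C.
 * ##__VA_ARGS__ is a GNU extension (fine with gcc/clang). */
#include <stdio.h>

enum dbg_level { DBG_LVL_ERROR = 0, DBG_LVL_WARN, DBG_LVL_TRACE };

/* Current level; in a real system this could be changed at run time
 * (e.g. via a signal, a socket command, or a /proc-style file). */
static enum dbg_level dbg_current_level = DBG_LVL_WARN;

/* The back end is fprintf to stderr here, but could just as well write
 * to a file, a socket or a serial port. */
#define DBG_PRINT(lvl, tag, fmt, ...) \
    do { \
        if ((lvl) <= dbg_current_level) \
            fprintf(stderr, "[%s] %s:%d: " fmt "\n", \
                    (tag), __FILE__, __LINE__, ##__VA_ARGS__); \
    } while (0)

#define DBG_E(fmt, ...)     DBG_PRINT(DBG_LVL_ERROR, "E", fmt, ##__VA_ARGS__)
#define DBG_W(fmt, ...)     DBG_PRINT(DBG_LVL_WARN,  "W", fmt, ##__VA_ARGS__)
#define DBG_TRACE(fmt, ...) DBG_PRINT(DBG_LVL_TRACE, "T", fmt, ##__VA_ARGS__)

int main(void)
{
    DBG_E("fatal-ish error, code %d", -1);  /* printed: ERROR <= WARN */
    DBG_W("something looks odd");           /* printed: WARN  <= WARN */
    DBG_TRACE("verbose detail");            /* suppressed at this level */
    return 0;
}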
I can't imagine that your platform is too small to include log4cxx and APR, neither is a large library, and even the tiniest platform is likely to have space for them.
You could just use syslog, which is provided by the C library - a syslog daemon is provided by busybox (which no doubt, you already use if you're on a really tiny platform). I don't know if busybox's syslogd can log to the network, but it has some level of flexibility. You can do log rotation using shell scripts pretty trivially.
Use klogd: it reads kernel log messages (from the /proc/kmsg interface) and redirects them to the appropriate destination. You can use a user-configurable syslogd daemon along with klogd, which will redirect kernel messages into the appropriate files in the /var/log/ directory.
For instance, logs related to the mail service will be stored in /var/log/mail.log, and logs related to the kernel boot process will be stored in /var/log/boot.log. The log routing can be configured in the syslogd configuration file.
But using syslogd may degrade system performance, because for every log message the syslog daemon performs a disk operation to store it in the appropriate file.
Log sequence: messages from the kernel ---> klogd (reads messages from the kernel ring buffer) ---> syslogd ---> /var/log/*