RHEL 8 how to leave core files in working directory - linux

I have a product with many executables, and in its older versions (RHEL 5 and previous) the default behavior when a crash occurred was to deposit the core file in the executable's current working directory, with the name core.pid. In RHEL 7 and 8, the new behavior is that all core files go to /var/lib/systemd/coredump/. In order to avoid modifying about a dozen watchdog programs that expect the old behavior, I'd prefer to just revert to the old core locations (in process's cwd). How do I change that behavior?

Looks like I found my own answer. I added this line to /etc/sysctl.d/99-sysctl.conf:
kernel.core_pattern = core.%p
and that's all that was necessary. This takes effect after reboot, but you can also make it immediate by running
echo "core.%p" > /proc/sys/kernel/core_pattern
Note that my servers are NOT running abrtd, and this likely has an effect on whether this solution works.
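For completeness, a minimal sketch of both steps, assuming you are root (the file name 99-sysctl.conf is the one used above; any file under /etc/sysctl.d/ works):
echo "kernel.core_pattern = core.%p" >> /etc/sysctl.d/99-sysctl.conf   # persist across reboots
sysctl --system                                                        # reload sysctl settings now, no reboot needed
cat /proc/sys/kernel/core_pattern                                      # verify; should print core.%p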

Related

git forces refresh index after switching between Windows and Linux

I have a disk partition (format: NTFS) shared by Windows and Linux. It contains a git repository (about 6.7 GB).
If I only use Windows or only use Linux to manipulate the git repository everything is okay.
But every time I switch systems, the git status command refreshes the index, which takes about 1 minute. If I run git status again on the same system, it takes less than 1 second. Here is the result:
# Just after switch from windows
[#5#wangx#manjaro:duishang_design] git status # this command takes more than 60s
Refresh index: 100% (2751/2751), done.
On branch master
nothing to commit, working tree clean
[#10#wangx#manjaro:duishang_design] git status # this time the command takes less than 1s
On branch master
nothing to commit, working tree clean
[#11#wangx#manjaro:duishang_design] git status # this time the command takes less than 1s
On branch master
nothing to commit, working tree clean
I guess there is some problem with the git cache. For example: Windows and Linux both use the .git/index file as the cache file, but Git on Linux can't use the .git/index written by Windows, so it has to refresh the index and rewrite .git/index. That makes the next git status on Linux super fast, and the next git status on Windows very slow again (because Windows will refresh the index file once more).
Is my guess correct? If so, how can I use a separate index file for each system? How can I solve the problem?
You are completely correct here:
The thing you're using here, which Git variously calls the index, the staging area, or the cache, does in fact contain cache data.
The cache data that it contains is the result of system calls.
The system call data returned by a Linux system is different from the system call data returned by a Windows system.
Hence, an OS switch completely invalidates all the cache data.
... how can I use a separate index file for each system?
Your best bet here is not to do this at all. Make two different work-trees, or perhaps even two different repositories. But if two separate copies are more painful than sharing one, try out these ideas:
The actual index file that Git uses merely defaults to .git/index. You can specify a different file by setting GIT_INDEX_FILE to some other (relative or absolute) path. So you could have .git/index-linux and .git/index-windows, and set GIT_INDEX_FILE based on whichever OS you're using.
Some Git commands use a temporary index. They do this by setting GIT_INDEX_FILE themselves. If they un-set it afterward, they may accidentally use .git/index at this point. So another option is to rename .git/index out of the way when switching OSes. Keep a .git/index-windows and .git/index-linux as before, but rename whichever one is in use to .git/index while it's in use, then rename it back to its per-OS name before switching to the other system.
Again, I don't recommend attempting either of these methods, but they are likely to work, more or less.
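If you do want to try the first idea anyway, here is a minimal sketch of selecting a per-OS index file through GIT_INDEX_FILE (the repository paths and file names are just examples; the first git status on each side will rebuild that side's index once):
# Linux side, e.g. in ~/.bashrc (adjust the repository path to yours)
export GIT_INDEX_FILE=/mnt/ntfs/duishang_design/.git/index-linux
# Windows side (Git Bash), e.g. in its ~/.bashrc
export GIT_INDEX_FILE=/d/duishang_design/.git/index-windows
Note that exporting the variable globally affects every repository you touch in that shell, so a per-repository wrapper script may be preferable.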
As torek mentioned, you probably don't want to do this. It's not generally a good idea to share a repo between operating systems.
However, it is possible, much like it's possible to share a repo between Windows and Windows Subsystem for Linux. You may want to try setting core.checkStat to minimal, and if that isn't sufficient, core.trustctime to false. That leads to the minimal amount of information being stored in the index, which means that the data is going to be as portable as possible.
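A minimal sketch of applying those two settings inside the shared repository:
git config core.checkStat minimal   # keep only the minimal stat data in the index
git config core.trustctime false    # ignore ctime when deciding whether a file changed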
Note, however, that if your repository has symlinks, it's likely that nothing you do is going to prevent refreshes. Linux typically reports the size of a symlink as its length in bytes, while Windows reports it as taking one or more disk blocks, so there will be a mismatch in size between the operating systems. This isn't avoidable, since size is one of the attributes used in the index that can't be disabled.
This might not apply to the original poster, but if Linux is being used under the Windows Subsystem for Linux (WSL), then a quick fix is to use git.exe even on the Linux side. Use an alias or something to make it seamless. For example:
alias git=git.exe
The auto line-ending setting solved my issue, as in this discussion. I am referring to Windows, WSL2, a portable Linux OS, and Linux as well, which I have set up and working as my work requires. I will update this answer if I face any issue with this approach while updating the code base from different filesystems (NTFS or a Linux filesystem).
git config --global core.autocrlf true

Cassandra and Java 9 - ThreadPriorityPolicy=42 is outside the allowed range

Very recently I installed JDK 9 and Apache Cassandra from the official site. But now when I start Cassandra in the foreground, I get this message:
apache-cassandra-3.11.1/bin$ ./cassandra -f
[0.000s][warning][gc] -Xloggc is deprecated. Will use -Xlog:gc:/home/mmatak/monero/apache-cassandra-3.11.1/logs/gc.log instead.
intx ThreadPriorityPolicy=42 is outside the allowed range [ 0 ... 1 ]
Improperly specified VM option 'ThreadPriorityPolicy=42'
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
So far I haven't found any solution for this. Is it possible that Java 9 and Cassandra are not yet compatible? The problem is also mentioned here: CASSANDRA-13107
But I am not sure how to just "remove the flag". Where is it possible to override or remove this flag?
I had exactly the same issue:
Can't start Cassandra (Single-Node Cluster on CentOS7)
If it is an option for you, using Java 8, instead of 9, is the simplest way to solve the issue.
Setting the following env variables solved the problem on macOS:
export JAVA8_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_144.jdk/Contents/Home
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_144.jdk/Contents/Home
@Martin Matak Just comment out that line in the conf/jvm.options file:
########################
# GENERAL JVM SETTINGS #
########################
# allows lowering thread priority without being root on linux - probably
# not necessary on Windows but doesn't harm anything.
# see http://tech.stolsvik.com/2010/01/linux-java-thread-priorities-workaround.html
#-XX:ThreadPriorityPolicy=42
Some background on -XX:ThreadPriorityPolicy.
These were the values, as documented in the source code.
0 : Normal.
VM chooses priorities that are appropriate for normal
applications. On Solaris NORM_PRIORITY and above are mapped
to normal native priority. Java priorities below
NORM_PRIORITY map to lower native priority values. On
Windows applications are allowed to use higher native
priorities. However, with ThreadPriorityPolicy=0, VM will
not use the highest possible native priority,
THREAD_PRIORITY_TIME_CRITICAL, as it may interfere with
system threads. On Linux thread priorities are ignored
because the OS does not support static priority in
SCHED_OTHER scheduling class which is the only choice for
non-root, non-realtime applications.
1 : Aggressive.
Java thread priorities map over to the entire range of
native thread priorities. Higher Java thread priorities map
to higher native thread priorities. This policy should be
used with care, as sometimes it can cause performance
degradation in the application and/or the entire system. On
Linux this policy requires root privilege.
In other words: The default Normal setting causes thread priorities to be ignored on Linux.
Now someone found a bug in the code, which disabled the "is root?" check for values other than 1, but would still try to set the thread priority for every value other than 0.
Unless running as root, it would only be possible to lower the thread priority. So although not perfect, this was quite an improvement, compared to not being able to control the priorities at all.
Starting with Java 9, command line arguments like this one started to get checked, and this hack stopped working.
Fwiw, on Java 11/Linux, I can set the parameter to 1 without being root, and setting thread priorities does have an effect. So something has changed in the meantime; at least with recent JVMs, this hack does not seem necessary any more.
Solution to your question
Reason for this exception: multiple JDK versions running (probably JDK 9 or JDK 10) are causing this exception.
1. Set the PATH to point to the JDK 8 version only.
2. Currently Cassandra 3.11 is intended to run on JDK 8 only, not on higher versions.
3. Make the change in the Cassandra conf file (/opt/apache-cassandra-3.11.2/conf/cassandra-env.sh).
4. If you want to use a higher JDK version, update the system path variables based on your OS.
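For example, a minimal sketch of pointing Cassandra at a JDK 8 install before starting it (the JDK path is an assumption; adjust it for your system):
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk   # assumed JDK 8 location
export PATH="$JAVA_HOME/bin:$PATH"
java -version        # should report version 1.8.x
./bin/cassandra -f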
There's a jvm.options file in your conf directory which sets it:
https://github.com/apache/cassandra/blob/12d4e2f189fb228250edc876963d0c74b5ab0d4f/conf/jvm.options#L96
Following on from Jay's answer: if you're on macOS and installed via Homebrew, the file is located at local/etc/cassandra/jvm.options.

Protect file from system modifications

I am working on a linux computer which is locked down and used in kiosk mode to run only one application. This computer cannot be updated or modified by the user. When the computer crashes or freezes the OS rebuilds or modifies the ld-2.5.so file. This file needs to be locked down without allowing even the slightest change to it (there is an application resident which requires ld-2.5.so to remain unchanged and that is out of my control). Below are the methods I can think of to protect ld-2.5.so but wanted to run it by the experts to see if I am missing anything.
I modified the fstab to mount the EXT3 filesystem as EXT2 to disable journaling. Also set the DUMP and FSCK values to "0" to disable those processes.
Performed a "chattr +i ld-2.5.so" on the file but there are still system processes that can overwrite this protection.
I could attempt to trap the name of the processes which are hitting ld-2.5.so and prevent this.
Any ideas or hints would be greatly appreciated.
-Matt (CentOS 5.0.6)
chattr +i should be fine in most circumstances.
The ld-*.so files are under /usr/lib/ and /usr/lib64/. If /usr/ is a separate partition, you also might want to mount that partition read only on a kiosk system.
Do you have, by any chance, some automated updating/patching of said PC configured? ld-*.so is part of glibc and basically should only change if the glibc package is updated.
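A minimal sketch of both suggestions (the path to ld-2.5.so is a placeholder; use the actual location on your system, and the remount only applies if /usr is a separate partition):
chattr +i /path/to/ld-2.5.so    # immutable, even for root, until the flag is removed again
lsattr /path/to/ld-2.5.so       # verify that the 'i' attribute is set
mount -o remount,ro /usr        # read-only remount of a separate /usr partition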

Is a core dump executable by itself?

The Wikipedia page on Core dump says
In Unix-like systems, core dumps generally use the standard executable
image-format:
a.out in older versions of Unix,
ELF in modern Linux, System V, Solaris, and BSD systems,
Mach-O in OS X, etc.
Does this mean a core dump is executable by itself? If not, why not?
Edit: Since @WumpusQ.Wumbley mentions coredump_filter in a comment, perhaps the above question should be: can a core dump be produced such that it is executable by itself?
In older Unix variants the default was to include the text as well as the data in the core dump, but the dump was written in the a.out format, not ELF. Today's default behavior (on Linux for sure, not 100% sure about BSD variants, Solaris, etc.) is to write the core dump in ELF format without the text sections, but that behavior can be changed.
However, a core dump cannot be executed directly in any case without some help. The reason for that is that there are two things missing from a simple core file. One is the entry point, the other is code to restore the CPU state to the state at or just before the dump occurred (by default also the text sections are missing).
In AIX there used to be a utility called undump but I have no idea what happened to it. It doesn't exist in any standard Linux distribution I know of. As mentioned above (@WumpusQ.Wumbley), there's also an attempt at a similar project for Linux in the comments; however, this project is not complete and doesn't restore the CPU state to the original state. It is, however, still good enough for some specific debugging cases.
It is also worth mentioning that there are other ELF-formatted files that cannot be executed either and which are not core files, such as object files (compiler output) and .so (shared object) files. Those require a linking stage before being run, to resolve external addresses.
I emailed this question to the creator of the undump utility for his expertise, and got the following reply:
As mentioned in some of the answers there, it is possible to include
the code sections by setting the coredump_filter, but it's not the
default for Linux (and I'm not entirely sure about BSD variants and
Solaris). If the various code sections are saved in the original
core-dump, there is really nothing missing in order to create the new
executable. It does, however, require some changes in the original
core file (such as including an entry point and pointing that entry
point to code that will restore CPU registers). If the core file is
modified in this way it will become an executable and you'll be able
to run it. Unfortunately, though, some of the states are not going to
be saved so the new executable will not be able to run directly. Open
files, sockets, pipes, etc. are not going to be open and may even point
to other FDs (which could cause all sorts of weird things). However,
it will most probably be enough for most debugging tasks, such as running
small functions from gdb (so that you don't get the "not running an
executable" complaint).
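The coredump_filter mentioned in that reply is a per-process bitmask exposed under /proc; a minimal sketch of adding file-backed private mappings (where the text sections live) for the current shell and anything it starts, with the bit values taken from the core(5) man page:
cat /proc/self/coredump_filter           # the default is typically 00000033
echo 0x37 > /proc/self/coredump_filter   # bit 2 adds file-backed private mappings to future dumps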
As others have said, I don't think you can execute a core dump file without the original binary.
In case you want to debug the binary (and it has debugging symbols included, in other words it is not stripped), you can run gdb binary core.
Inside gdb you can use the bt command (backtrace) to get the stack trace from when the application crashed.
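For example (the binary and core file names are placeholders):
gdb ./myprog core.12345   # load the program together with its core dump
(gdb) bt                  # print the backtrace of the crashed thread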

Artificially modify server load in Ubuntu

I am curious whether it is possible to artificially modify the server load in Ubuntu, or more generally in Linux. I am working on an application that reacts to the server load, and in order to test it, it would be nice if I could change the server load easily.
I am currently running an over-active program that literally generates load, but I'd prefer not to keep overheating my laptop (it's getting hot!).
One of the most important things to know about Linux (or Unix) systems is that everything is just a file. Since you are just reading from /proc/loadavg, the easiest way to accomplish what you are after is simply to make a text file that contains the line of text you would see when running cat /proc/loadavg. Then have your program read from that file instead of /proc/loadavg and it will be none the wiser. If you want to test under different "artificial" situations, just change the text in this file and save. When your testing is done, simply switch your program back to reading from /proc/loadavg and you can be sure it will work as expected.
Note, you can make this text file anywhere you want...in your home directory, in the program directory, wherever. However, you shouldn't make it in /proc. That directory is reserved for system objects.
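A minimal sketch of such a file (the fields mimic /proc/loadavg: 1-, 5- and 15-minute load averages, runnable/total tasks, and the most recent pid; the path is just an example):
echo "8.50 6.20 3.75 3/180 12345" > /tmp/fake_loadavg
cat /tmp/fake_loadavg    # your program reads this path instead of /proc/loadavg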
You can use the stress command, see http://weather.ou.edu/~apw/projects/stress/
A tool to impose load on and stress test a computer system
sudo apt-get install stress
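For example, a short run that only spins up CPU workers for a fixed time (the flags are from the stress man page):
stress --cpu 4 --timeout 60s    # four busy-loop workers for 60 seconds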
To avoid warming up the CPU, you can run the load inside a virtual machine with a small CPU allocation. VirtualBox and qemu-kvm are free.
Use chroot to run the various pieces of software you're testing with a specified directory as the root directory. Set up a manufactured/modified /proc/loadavg relative to that new root directory, too.
chroot will let you create a dummy file that appears to have /proc/loadavg as its path, so the software will observe your manufactured values even if you can't change your code to look for load data in a different location.
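A related trick, if building a full chroot is too heavy: bind-mount your fake file directly over /proc/loadavg. Note that this is a plain bind mount rather than the chroot approach described above; it requires root and changes what every process on the machine sees until you unmount it.
mount --bind /tmp/fake_loadavg /proc/loadavg   # hide the real loadavg behind your file
umount /proc/loadavg                           # restore the real one when done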
Since you don't want to actually/literally stress the machine, something like stress is not what you are after.
As stated, /proc/loadavg would be the place to set system load averages (faux loads).
But if that's also not the meat of what you're after, I would absolutely suggest
getloadavg
watchdog
and possibly even Munin plugins
There are two methods.
Hacking /proc/loadavg:
The machine is not overstressed.
Your program reads load values from a file.
To do: hack Linux to report a fake load value.
Modifying your program:
The machine is not overstressed.
Your program reads load values from a file.
To do: change four characters in your program: replace /proc/loadavg with /tmp/loadavg.
You can decide now. Calculate the costs ;)
