Cassandra and Java 9 - ThreadPriorityPolicy=42 is outside the allowed range

Very recently I installed JDK 9 and Apache Cassandra from the official site. But now when I start Cassandra in the foreground, I get this message:
apache-cassandra-3.11.1/bin$ ./cassandra -f
[0.000s][warning][gc] -Xloggc is deprecated. Will use -Xlog:gc:/home/mmatak/monero/apache-cassandra-3.11.1/logs/gc.log instead.
intx ThreadPriorityPolicy=42 is outside the allowed range [ 0 ... 1 ]
Improperly specified VM option 'ThreadPriorityPolicy=42'
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
So far I haven't found any solution for this. Is it possible that Java 9 and Cassandra are not yet compatible? The problem is also mentioned here: CASSANDRA-13107
But I am not sure how to just "remove the flag". Where is it possible to override or remove it?

I had exactly the same issue:
Can't start Cassandra (Single-Node Cluster on CentOS7)
If it is an option for you, using Java 8 instead of 9 is the simplest way to solve the issue.

Setting the following environment variables solved the problem on macOS:
export JAVA8_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_144.jdk/Contents/Home
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_144.jdk/Contents/Home
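If you have multiple JDKs installed, macOS's java_home helper can resolve the path for you instead of hard-coding the exact version directory:
# Resolve the JDK 8 home dynamically
export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)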

@Martin Matak Just comment out that line in the conf/jvm.options file:
########################
# GENERAL JVM SETTINGS #
########################
# allows lowering thread priority without being root on linux - probably
# not necessary on Windows but doesn't harm anything.
# see http://tech.stolsvik.com/2010/01/linux-java-thread-priorities-workaround.html
#-XX:ThreadPriorityPolicy=42
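If you prefer to do the same edit from the command line, a one-liner along these lines works (a sketch; the install path is taken from the question and may differ on your system):
sed -i 's/^-XX:ThreadPriorityPolicy=42/#-XX:ThreadPriorityPolicy=42/' /home/mmatak/monero/apache-cassandra-3.11.1/conf/jvm.options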

Some background on -XX:ThreadPriorityPolicy.
These were the values, as documented in the source code.
0 : Normal.
VM chooses priorities that are appropriate for normal
applications. On Solaris NORM_PRIORITY and above are mapped
to normal native priority. Java priorities below
NORM_PRIORITY map to lower native priority values. On
Windows applications are allowed to use higher native
priorities. However, with ThreadPriorityPolicy=0, VM will
not use the highest possible native priority,
THREAD_PRIORITY_TIME_CRITICAL, as it may interfere with
system threads. On Linux thread priorities are ignored
because the OS does not support static priority in
SCHED_OTHER scheduling class which is the only choice for
non-root, non-realtime applications.
1 : Aggressive.
Java thread priorities map over to the entire range of
native thread priorities. Higher Java thread priorities map
to higher native thread priorities. This policy should be
used with care, as sometimes it can cause performance
degradation in the application and/or the entire system. On
Linux this policy requires root privilege.
In other words: The default Normal setting causes thread priorities to be ignored on Linux.
Now someone found a bug in the code: the "is root?" check was only applied for the value 1, yet the thread priority was still set for every value other than 0. That is why the bogus value 42 works: it skips the root check while still enabling priority changes.
Unless running as root, it would only be possible to lower the thread priority. So although not perfect, this was quite an improvement compared to not being able to control the priorities at all.
Starting with Java 9, command line arguments like this one started to get checked, and this hack stopped working.
FWIW, on Java 11/Linux, I can set the parameter to 1 without being root, and setting thread priorities does have an effect. So something has changed in the meantime, and at least with recent JVMs this hack no longer seems necessary.
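A quick way to observe this yourself (a sketch, not from the original answer; your-app.jar is a placeholder for any JVM workload) is to start a JVM with the aggressive policy and inspect the native nice values of its threads:
# Start any JVM workload with the aggressive policy
java -XX:ThreadPriorityPolicy=1 -jar your-app.jar &
# With policy 1, lower Java thread priorities should show up as
# higher nice values in the NI column
ps -eLo pid,tid,ni,comm | grep java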

Solution to your question
Reason for this exception: multiple JDK versions are installed (probably JDK 9 or JDK 10), which causes it.
1. Set the PATH to point to the JDK 8 version only (see the sketch below).
2. Cassandra 3.11 is currently meant to run on JDK 8 only, not on newer JDKs.
3. Make the change in the Cassandra conf file (/opt/apache-cassandra-3.11.2/conf/cassandra-env.sh).
4. If you want to use a higher JDK version, update the system path variables according to your OS.
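For example, on Linux (a sketch; the JDK 8 install path is an assumption, adjust it to your system):
# Point JAVA_HOME and PATH at a JDK 8 install (path assumed)
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export PATH="$JAVA_HOME/bin:$PATH"
java -version   # should now report 1.8.x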

There's a jvm.options file in your conf directory which sets it:
https://github.com/apache/cassandra/blob/12d4e2f189fb228250edc876963d0c74b5ab0d4f/conf/jvm.options#L96

Following on from Jay's answer: if you're on macOS and installed via Homebrew, the file is located at /usr/local/etc/cassandra/jvm.options.

Related

How to get the operating system's "Currently Executing Thread" handle (NOT the "Managed Thread ID") in .NET 6?

In .NET 6 I want to retrieve the native handle (NOT the "Managed Thread ID") of the OS thread on which the handle-retrieving function is currently running, as a UInt32 (possibly via a cast).
I found a solution for Windows (using the kernel's "GetCurrentWin32ThreadId"), but I also want solutions for Linux, macOS and Android, assuming the respective OS object models also contain "thread handles".
To avoid time-consuming attempts to lead me down other paths: my question is very precise, so please don't ask "why"! And please avoid "you could try" answers, because I don't have access to Linux computers, Macs or smartphones, and I don't want to bother others with intermediate tests or "tries". I need concrete, definitive code-snippet answers.
I need it 1. for debugging purposes, 2. for monitoring the .NET managed thread pool (whether it always works correctly), 3. for cross-checking with the Visual Studio output (about finished threads), and 4. for some other (also platform-specifically handled, native) functions (e.g. native thread coordination, cross-process).
My goal:
I want to ship my program(s) [at the moment especially the "OpenSimulator" software, including the server (Windows, Linux) as well as the user's viewer (Windows, Linux, macOS, iOS)] as a target-platform-independent .NET 6 ".exe", plus a target-platform-specific .NET 6 .dll implementing certain interfaces for each OS, to bridge the current compatibility gaps, somewhat like MAUI tries to do, but generalized more completely on the logical (.NET 6) layer.

RHEL 8 how to leave core files in working directory

I have a product with many executables, and in its older versions (RHEL 5 and earlier) the default behavior when a crash occurred was to deposit the core file in the executable's current working directory, with the name core.pid. In RHEL 7 and 8, the new behavior is that all core files go to /var/lib/systemd/coredump/. To avoid modifying about a dozen watchdog programs that expect the old behavior, I'd prefer to just revert to the old core location (the process's cwd). How do I change that behavior?
Looks like I found my own answer. I added this line to /etc/sysctl.d/99-sysctl.conf:
kernel.core_pattern = core.%p
and that's all that was necessary. This takes effect after reboot, but you can also make it immediate by running
echo "core.%p" > /proc/sys/kernel/core_pattern
Note that my servers are NOT running abrtd, and this likely has an effect on whether this solution works.
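Equivalently, sysctl can apply and re-read the setting without a reboot (a sketch, using the file path from the answer above):
# Apply immediately (same effect as the echo above)
sysctl -w kernel.core_pattern=core.%p
# Re-read the persistent config without rebooting
sysctl -p /etc/sysctl.d/99-sysctl.conf
# Verify
cat /proc/sys/kernel/core_pattern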

ncurses disable kernel messages on console screen?

I'm looking for a way to get rid of the (kernel?) messages that appear in my ncurses app. I wrote the app myself, so I would prefer an API that redirects these messages to /dev/null. I mean messages like the one shown when a USB stick is inserted.
I tried adding this, but unfortunately it doesn't work:
freopen("/dev/null", "w", stderr);
I'm not running X, just ncurses directly on the console.
Thanks!
UPDATE 1:
Someone voted to close this question as not related to programming. But it is: I wrote the ncurses app myself, and I'm looking for a way to disable the kernel messages. I updated the question.
UPDATE 2:
Let me explain what I'm doing, and what the problem is, in more detail:
I'm using Tiny Core Linux, which after boot starts a (self-written) ncurses program. Now when you, for example, connect a USB drive, a message (I suspect from the kernel) is shown over my program. I guess the message is written straight to the framebuffer. I'm using TC 5.x since I need 32-bit; I'm running as root and have full access to the OS.
You should be able to use openvt to have your program run on a new Virtual Terminal.
I'll also note that it should be possible to implement control of the VTs yourself if you prefer to avoid the external dependency, but note that the structures used may not be stable between kernel versions and may require recompilation.
See the KBD project's sources, specifically openvt.c to see how it works.
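For example (a sketch; openvt ships with the kbd package, and the program path is a placeholder):
# -s switches to the new VT immediately, -w waits for the program to exit
openvt -s -w -- /path/to/your-ncurses-app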
Try configuring the kernel through boot parameters with the option:
loglevel=3 (or a lower value)
0 (KERN_EMERG) system is unusable
1 (KERN_ALERT) action must be taken immediately
2 (KERN_CRIT) critical conditions
3 (KERN_ERR) error conditions
4 (KERN_WARNING) warning conditions
5 (KERN_NOTICE) normal but significant condition
6 (KERN_INFO) informational
7 (KERN_DEBUG) debug-level messages
source: https://www.kernel.org/doc/Documentation/kernel-parameters.txt
See also: Change default console loglevel during boot up
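The same can also be changed at runtime, without rebooting (a sketch; both commands are standard on Linux):
# Set the console loglevel to 3: only messages more severe than
# KERN_ERR (levels 0-2) will still reach the console
dmesg -n 3
# Equivalent: the first field of printk is the console loglevel
echo 3 > /proc/sys/kernel/printk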
It might be impossible to stop another process with sufficient access from writing to /dev/console, but you may be able to redefine the console as some other device at boot time by setting console=ttyS0 (first serial port); see:
https://unix.stackexchange.com/questions/60641/linux-difference-between-dev-console-dev-tty-and-dev-tty0
Also, if we know exactly which software is sending the message, it may be possible to reconfigure it (possibly dynamically), but it would help to know the version and edition of Tiny Core Linux you are using.
E.g. the download site lists "Core", "TinyCore" and "CorePlus" variants, in versions 1.x up to 7:
http://tinycorelinux.net/downloads.html
This would help in reproducing the exact behavior and testing potential solutions.

Rails 3.2.x + Glassfish + How to multithread?

I have a JRuby 1.6.7/Rails 3.2.11 web application deployed on GlassFish (with no web server in front of it). I would like to make my application multi-threaded.
A best-practices article suggests that I need to set the max and min runtimes to 1, and then go to config/environment.rb and add the line
config.threadsafe!
However, a note from Oracle (along with this note on GitHub) says that I only have to set the minimum and maximum number of runtimes in the web.xml configuration file or on the command line, and it says nothing about config.threadsafe!. (My feeling with this method is that it will take up a lot of memory, because each runtime loads a full instance of Rails.)
Which method is right? Are they both right? Which is the better way to go multi-threaded?
One must do the following:
1. Set the min and max runtimes to 1.
2. Go into config/environments/production.rb and uncomment the #config.threadsafe! line; you must also do this for any other environments in which you want threadsafe mode to work.
By doing these things Rails will run using one runtime and multiple threads, saving you lots of memory. Additional information on threadsafe JRuby on Rails apps can be found here: http://nowhereman.github.com/how-to/rails_thread_safe/
If you are using Warbler, you can skip step 1; if you only follow step 2, the min and max runtimes will be set by default. Look at the web.xml within the war file and you will see that they have been set (see the sketch below). Likewise, if threadsafe has not been set, you will see the absence of the min and max settings.
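For reference, the runtime pool settings in web.xml look roughly like this (a sketch; the jruby.min.runtimes/jruby.max.runtimes parameter names come from JRuby-Rack and are an assumption here, not quoted from the answer):
<!-- Pin the JRuby-Rack runtime pool to a single runtime -->
<context-param>
  <param-name>jruby.min.runtimes</param-name>
  <param-value>1</param-value>
</context-param>
<context-param>
  <param-name>jruby.max.runtimes</param-name>
  <param-value>1</param-value>
</context-param>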
That being said, Rails 4 will have threadsafe mode enabled by default. Here's the pull request: https://github.com/rails/rails/pull/6685
Also, here's a post about the hows and whys: http://tenderlovemaking.com/2012/06/18/removing-config-threadsafe.html

Linux open-file limit

We are facing a situation where a process gets stuck after running out of its open-file limit. The global setting file-max was set extremely high (in sysctl.conf), and the per-user value was also set to a high value in /etc/security/limits.conf. Even ulimit -n reflects the per-user value when run as the headless user that owns the process. So the question is: does this change require a system reboot (my understanding is that it doesn't)? Has anyone faced a similar problem? I am running Ubuntu Lucid and the application is a Java process. The ephemeral port range is also high enough, and when checked during the issue, the process had opened 1024 files (note: the default limit), as reported by lsof.
One problem you might run into is that the fd_set used by select is limited to FD_SETSIZE, which is fixed at compile time (in this case, that of the JRE) and is limited to 1024.
#define FD_SETSIZE __FD_SETSIZE
/usr/include/bits/typesizes.h:#define __FD_SETSIZE 1024
Luckily both the C library and the kernel can handle arbitrarily sized fd_sets, so for a compiled C program it is possible to raise that limit.
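To check whether a process is bumping into that compile-time ceiling (a sketch; 1234 is a placeholder PID):
# Count the open fds of a process; a select()-based program that stalls
# right around 1024 open files points at FD_SETSIZE
ls /proc/1234/fd | wc -l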
Assuming you have edited the file-max value in sysctl.conf and /etc/security/limits.conf correctly, then:
edit /etc/pam.d/login, adding the line:
session required /lib/security/pam_limits.so
and then, as root, run:
# ulimit -n unlimited
Note that you may need to log out and back in again before the changes take effect.
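To verify what a new session actually picked up (a sketch; 1234 is a placeholder PID):
ulimit -n                    # soft limit for the current shell
ulimit -Hn                   # hard limit
cat /proc/sys/fs/file-max    # system-wide limit set via sysctl
grep 'open files' /proc/1234/limits   # limits of a running process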
