We are using Apache Cassandra 3.11.7 running on a native Kubernetes cluster. Is it vulnerable to the Log4j security vulnerability (CVE-2021-44228)?
Here you can find a list of affected software; Cassandra is not on the list: https://github.com/cisagov/log4j-affected-db. Apache Cassandra uses Logback (via SLF4J) rather than Log4j 2 for its logging, so it is not affected by CVE-2021-44228.
Which versions of Kafka are impacted by CVE-2021-44228?
Nothing has been published yet on the Apache Kafka Security Vulnerabilities page about this vulnerability.
Update 2021-12-15
The Apache Kafka Security Vulnerabilities page has confirmed:
CVE-2021-45046
Users should NOT be impacted by this vulnerability
CVE-2021-44228
Users should NOT be impacted by this vulnerability
CVE-2021-4104
Version 1.x of Log4j can be configured to use JMSAppender, which publishes log events to a JMS Topic. Log4j 1.x is vulnerable if the deployed application is configured to use JMSAppender.
So please check the site for details.
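For context, the vulnerable Log4j 1.x pattern is an explicit opt-in in the logging configuration, along these lines (a sketch; the broker URL and JNDI binding names are illustrative placeholders):

# log4j.properties -- an application is only exposed to CVE-2021-4104 if it
# opts in to JMSAppender like this AND an attacker can tamper with the
# configuration or the JNDI bindings it points at.
log4j.rootLogger=INFO, jms
log4j.appender.jms=org.apache.log4j.net.JMSAppender
log4j.appender.jms.InitialContextFactoryName=org.apache.activemq.jndi.ActiveMQInitialContextFactory
log4j.appender.jms.ProviderURL=tcp://broker-host:61616
log4j.appender.jms.TopicConnectionFactoryBindingName=ConnectionFactory
log4j.appender.jms.TopicBindingName=logTopic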
Update 2021-12-13
As suggested by bovine, Log4j 1.x may also be affected by this vulnerability:
Strictly speaking, applications using Log4j 1.x may be impacted if their configuration uses JNDI. However, the risk is much lower.
Please refer to this link for the latest status.
Evidence that Kafka does not use Log4j 2
Checking Kafka's dependencies.gradle for versions 1.0.0 and 3.0.0 shows that both use Log4j 1.2.17. As the issue affects Log4j versions 2.0-beta9 through 2.14.1, Kafka is not affected by this security vulnerability.
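For reference, the entry to look for in Kafka's gradle/dependencies.gradle looks roughly like this (paraphrased; the exact layout varies between releases):

// Excerpt of the kind of version pin to check in gradle/dependencies.gradle
versions += [
    // ...
    log4j: "1.2.17"  // Log4j 1.x -- outside the 2.0-beta9..2.14.1 range hit by CVE-2021-44228
    // ...
]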
I have been doing research on configuring the Spark JobServer backend (SharedDb) with Cassandra.
I saw in the SJS documentation that Cassandra is cited as one of the shared DBs that can be used.
Here is the documentation part:
Spark Jobserver offers a variety of options for backend storage such as:
H2/PostgreSQL or other SQL Databases
Cassandra
Combination of SQL DB or Zookeeper with HDFS
But I didn't find any configuration example for this.
Would anyone have an example? Or can anyone help me configure it?
Edit:
I want to use Cassandra to store Spark JobServer's metadata and jobs, so that I can hit any of the servers sitting behind a proxy.
Cassandra was supported in previous versions of Jobserver. You just needed to have Cassandra running, add the correct settings to your Jobserver configuration file (https://github.com/spark-jobserver/spark-jobserver/blob/0.8.0/job-server/src/main/resources/application.conf#L60), and specify spark.jobserver.io.JobCassandraDAO as the DAO.
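A rough sketch of what those settings looked like, in HOCON form (the authoritative key names are in the application.conf linked above; treat the keys under cassandra, the host list, and the credentials here as placeholders to check against your Jobserver version):

# Sketch of the old (pre-removal) Cassandra DAO settings
spark {
  jobserver {
    # Use the Cassandra-backed DAO instead of the default SQL one
    jobdao = spark.jobserver.io.JobCassandraDAO

    cassandra {
      hosts = ["cassandra-1:9042", "cassandra-2:9042"]  # contact points (placeholders)
      user = "jobserver"                                # credentials (placeholders)
      password = "secret"
    }
  }
}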
But the Cassandra DAO was recently deprecated and removed from the project, because it was not really used or maintained by the community.
For Scylla monitoring we need to configure Grafana, but is it possible to integrate Cassandra OpsCenter with Scylla?
TL;DR: No.
OpsCenter is a closed-source product which has not been tested with Scylla. The parts of it that use Apache Cassandra CQL and JMX will probably work, while others might not.
In addition to the open-source Scylla Monitoring Stack (based on Prometheus and Grafana), ScyllaDB has its own closed-source product for cluster management named Scylla Manager.
Tzach (Scylla Product Manager)
We are using embedded Cassandra in our Groovy test cases, and we are migrating from Logback to Log4j 2. Whenever I run a Groovy test that uses Cassandra, it throws a NoClassDefFoundError for ch/qos/logback/classic/Logger. I have excluded the Logback dependency from every existing Cassandra dependency, but it still looks for Logback. How can I make Cassandra log using Log4j 2?
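For reference, the exclusions in our Gradle build look roughly like this (artifact coordinates are illustrative for our setup):

// Stripping Logback from the Cassandra test dependency so that only
// Log4j 2 is left on the test classpath
testImplementation('org.apache.cassandra:cassandra-all:3.11.7') {
    exclude group: 'ch.qos.logback', module: 'logback-classic'
    exclude group: 'ch.qos.logback', module: 'logback-core'
}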
Cassandra isn't set up or designed to run embedded, so while there might be some hacks that can get you by, it will be difficult to keep them working across versions.
I would recommend using ccm for your tests so that Cassandra runs outside your JVM; it will also give you more control for interesting configurations. The Java driver has a useful bridge for Java applications in its tests here: CCMBridge.java
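If you go the ccm route, a minimal session looks something like this (cluster name, version, and node count are placeholders):

# Create and start a single-node Cassandra 3.11.7 cluster named "test"
ccm create test -v 3.11.7 -n 1 -s
# ... run the Groovy tests against 127.0.0.1:9042 ...
# Tear the cluster down when the suite finishes
ccm remove test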
Long term you might be able to use something like CASSANDRA-14821, as it will expose native connections and give you a lot more control over the results of queries and such.
I'm experimenting with DataStax Enterprise and I'm trying to have a cluster that mixes Enterprise nodes and standard Cassandra community nodes. I would only need a few nodes with advanced features like Solr, and it would be nice to have all the nodes in the same cluster.
I tried to bootstrap a community node into a test Enterprise cluster, and it couldn't join the ring properly, throwing exceptions like this:
Unable to find compaction strategy class
'com.datastax.bdp.hadoop.cfs.compaction.CFSCompactionStrategy'
I assume that the Enterprise node tries to replicate column families (CFs) that use DSE-specific features, which are not recognized by the community node.
Is there a way to prevent that from happening? Am I trying to do something that's not possible/supported/allowed by DSE?
That is an unsupported configuration. The full cluster needs to be installed with DataStax Enterprise binaries on all nodes. You can choose which nodes run as vanilla Cassandra, Hadoop, or Solr via startup options on each node. DSE has a custom compaction strategy and snitch, so that error is expected.
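For illustration, those per-node startup options look something like this with the dse launcher (flags vary by DSE version, so check the docs for yours):

dse cassandra        # start this node as a plain Cassandra node
dse cassandra -t     # start this node with Hadoop (analytics) enabled
dse cassandra -s     # start this node with Solr (search) enabled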