My security tool is detecting a reactor netty package and flagging it with netty CVEs.
Details:
My server has reactor netty v1.0.23 installed (v1.0.23 was released Sep 30, 2022)
My security tool identifies CVE-2019-20445
CVE-2019-20445 was written in 2019 against netty v4.1.44 and earlier (v4.1.44 was released Oct 24, 2019)
I suspect my security tool is misidentifying reactor-netty-http-1.0.23 as a version of netty earlier than 4.1.44
But I'm also aware of cases where a MySQL CVE is applicable to MariaDB because they share the same code base
Do CVEs against netty apply to reactor netty?
Is there a way to prove netty CVEs don't apply or are only applicable in certain cases?
If reactor-netty v1.0.23 is based on the "old" netty 4.1.44 then the CVE should be flagged.
If reactor-netty v1.0.23 is based on the "new" netty 4.1.82 then the CVE should NOT be flagged.
I'd appreciate any clarification/correction before I flag this as a false positive.
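One way to sanity-check the finding is to compare the netty version that reactor-netty actually resolves to (visible via mvn dependency:tree or gradle dependencies) against the CVE's affected range. A minimal sketch of that comparison, assuming the "v4.1.44 and earlier" range stated above and assuming reactor-netty 1.0.23 resolves to netty 4.1.82 (verify both against your own dependency tree; the ".Final" suffix is stripped here for simplicity):

```java
// Hypothetical helper: compares dotted version strings numerically so we can
// check whether the bundled netty version falls inside a CVE's affected range.
public class CveRangeCheck {
    // Compare two dotted versions, e.g. "4.1.82" vs "4.1.44": negative if a < b.
    static int compare(String a, String b) {
        String[] pa = a.split("\\."), pb = b.split("\\.");
        int n = Math.max(pa.length, pb.length);
        for (int i = 0; i < n; i++) {
            int x = i < pa.length ? Integer.parseInt(pa[i]) : 0;
            int y = i < pb.length ? Integer.parseInt(pb[i]) : 0;
            if (x != y) return Integer.compare(x, y);
        }
        return 0;
    }

    // Per the question, CVE-2019-20445 targets netty v4.1.44 and earlier.
    static boolean affectedByCve201920445(String nettyVersion) {
        return compare(nettyVersion, "4.1.44") <= 0;
    }

    public static void main(String[] args) {
        // netty 4.1.82 is outside the affected range -> likely false positive
        System.out.println(affectedByCve201920445("4.1.82")); // prints false
        System.out.println(affectedByCve201920445("4.1.43")); // prints true
    }
}
```

If the resolved netty artifact in your tree is 4.1.82, the range check says the flag is a false positive; if the tool is matching on the reactor-netty artifact name rather than the resolved netty version, that would explain the misidentification.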
Currently we are using the Java Datastax driver version 3.7.2 to connect to open-source Apache Cassandra version 3.11.9.
We are planning to upgrade to open-source Apache Cassandra version 4. Can someone please let me know the recommended Java Datastax driver version to connect to Cassandra version 4? I see in this article that Datastax mentions driver version 3.11 is partially compatible with Cassandra version 4.x, but there is not much information on what "partially compatible" means.
https://docs.datastax.com/en/driver-matrix/docs/java-drivers.html
First, Apache Cassandra® 4.1 was already released last December, so you may want to look at upgrading to that as opposed to 4.0.x.
Next, "partially compatible" is also explained in the docs section as:
^4^ Limited to the Cassandra 3.x and 2.2.x API.
Also, I'm taking excerpts from the mailing list discussions here.
Neither the 4.x nor the 3.x Java drivers are in maintenance mode at the moment. It is very much true that any new Java driver features will be developed on the 4.x branch and in general will not be ported to 3.x. 3.x will continue to receive CVE and other critical bug fixes, but as mentioned there are no plans for this branch to receive any new features. It's not completely impossible that a specific feature or two might make its way to 3.x on a case-by-case basis, but if you're planning for the future with 3.x you should do so with the expectation that it will receive no new features.
&
Having said that, I would strongly recommend and encourage you to upgrade to the 3.11.3 version of the Java driver (released on Sep 20, 2022), which is directly binary compatible with the version you're using today, 3.7.2 (released on Jul 10, 2019), to leverage features and fixes (including many CVE patches). In addition, I would also suggest you sketch out a plan to upgrade your apps to the 4.x driver, or look into modernizing to interact with your Apache Cassandra®/DSE®/Astra DB® cluster via the Stargate® APIs.
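As a side note on what the 4.x upgrade involves: much of the configuration moves from builder code into a configuration file. A minimal application.conf sketch (option names taken from the 4.x driver's reference configuration; the contact point and datacenter name below are placeholders for your own cluster):

```hocon
datastax-java-driver {
  basic {
    contact-points = [ "127.0.0.1:9042" ]
    # required in 4.x when contact points are set explicitly
    load-balancing-policy.local-datacenter = "datacenter1"
  }
}
```

This file-based configuration is one of the behavioral differences to plan for when moving apps from the 3.x to the 4.x driver.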
Which versions of Kafka are impacted by CVE-2021-44228?
Nothing has yet been posted about this vulnerability on the Apache Kafka Security Vulnerabilities page.
Update 2021-12-15
The Apache Kafka Security Vulnerabilities page has confirmed:
CVE-2021-45046
Users should NOT be impacted by this vulnerability
CVE-2021-44228
Users should NOT be impacted by this vulnerability
CVE-2021-4104
Version 1.x of Log4J can be configured to use JMS Appender, which publishes log events to a JMS Topic. Log4j 1.x is vulnerable if the deployed application is configured to use JMSAppender.
So please check the site for details.
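For context, a Log4j 1.x deployment is only exposed to CVE-2021-4104 if its configuration explicitly wires in the JMSAppender, roughly like the sketch below (the topic and connection-factory binding names are illustrative):

```properties
log4j.rootLogger=INFO, jms
log4j.appender.jms=org.apache.log4j.net.JMSAppender
log4j.appender.jms.TopicBindingName=logTopic
log4j.appender.jms.TopicConnectionFactoryBindingName=ConnectionFactory
```

If nothing like this appears in your log4j.properties or log4j.xml, that particular CVE does not apply to your deployment.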
Update 2021-12-13
As suggested by bovine, Log4j 1.x may also be affected by this vulnerability:
Strictly speaking, applications using Log4j 1.x may be impacted if their configuration uses JNDI. However, the risk is much lower.
Please refer to this link for the latest status.
Evidence for not using Log4j 2
By checking Kafka's dependencies.gradle:
both 1.0.0 and 3.0.0 use log4j 1.2.17.
Since the issue affects Log4j versions 2.0-beta9 through 2.14.1, Kafka is not affected by this vulnerability.
Good day. Does anyone have information about the Log4j vulnerability as it relates to Hazelcast 3.11?
The official website doesn't show anything about the Log4j vulnerability.
The vulnerability is addressed in Log4j 2.15.0. The Hazelcast team is currently working to release fixes for the versions listed above. UPDATE on December 15, 2021: IMDG 4.0.4, 4.1.7, and 4.2.3 have been released. The remaining releases, Hazelcast 5.0.1 and Hazelcast Jet 4.5.2, are being worked on.
Users that explicitly use a vulnerable Log4j 2 library are advised to upgrade to Log4j 2.15.0 as soon as possible.
For more: Security Advisory for "Log4Shell" CVE-2021-44228 and CVE-2021-45046
Spark 1.6 can be configured to use Akka or Netty for RPC. If Netty is configured, does that mean the Spark runtime does not employ the actor model for messaging (e.g. between workers and driver block managers), or is a custom, simplified actor model used on top of Netty even in that case?
I think Akka itself relies on Netty, and Spark uses only a subset of Akka. Still, is configuring Akka better for scalability (in terms of number of workers) compared to Netty? Any suggestions on this particular Spark configuration?
Adding to user6910411's pointer, which nicely explained the design decision:
As explained in the link, flexibility and removing the dependency on Akka were the reasons behind the design decision.
Question:
I think Akka itself relies on Netty, and Spark uses only a subset of Akka. Still, is configuring Akka better for scalability (in terms of number of workers) compared to Netty? Any suggestions on this particular Spark configuration?
Yes, Spark 1.6 can be configured to use Akka or Netty for RPC.
It can be configured through spark.rpc, i.e. val rpcEnvName = conf.get("spark.rpc", "netty"), which means the default is netty.
Please see the 1.6 code base.
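Concretely, on Spark 1.6 the backend can be switched in spark-defaults.conf (or via --conf on spark-submit); this key only exists while the Akka backend does:

```properties
# spark-defaults.conf (Spark 1.6.x; the key is gone in 2.0+)
spark.rpc    akka
# omit the line entirely to keep the default, netty
```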
Here are more insights on when to go for which:
Akka and Netty both deal with asynchronous processing and message handling, but they work at different levels with respect to scalability.
Akka is a higher-level framework for building event-driven, scalable, fault-tolerant applications. It centers on the Actor class for message processing. Actors are arranged hierarchically; parent actors are responsible for supervising their child actors.
Netty also works around messages, but it is a little lower level and deals more with networking. It has NIO at its core. Netty has many features for using various protocols like HTTP, FTP, SSL, etc. You also get finer-grained control over the threading model.
Netty is actually used within Akka for distributed actors.
So even though they are both asynchronous and message-oriented, with Akka you are thinking more abstractly in your problem domain, and with Netty you are more focused on the networking implementation.
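To make the "custom simplified actor model" idea concrete: at its core, an actor is just a mailbox drained by a single thread, which is exactly the kind of thing you can layer on top of a transport like Netty. A stdlib-only sketch (illustrative; this is not Akka's API):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal "actor": messages go into a FIFO mailbox and are processed
// sequentially by one worker thread, so the behavior itself needs no locks.
public class TinyActor {
    private static final String POISON = "POISON"; // shutdown sentinel
    private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();
    private final StringBuilder log = new StringBuilder();
    private final Thread worker;

    public TinyActor() {
        worker = new Thread(() -> {
            try {
                while (true) {
                    String msg = mailbox.take();
                    if (msg.equals(POISON)) return;
                    log.append(msg).append(';'); // the actor's "behavior"
                }
            } catch (InterruptedException ignored) { }
        });
        worker.start();
    }

    // Fire-and-forget send, like Akka's tell.
    public void tell(String msg) { mailbox.add(msg); }

    // Stop the actor and return everything it processed, in order.
    public String stopAndDump() {
        mailbox.add(POISON);
        try { worker.join(); } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return log.toString();
    }

    public static void main(String[] args) {
        TinyActor a = new TinyActor();
        a.tell("hello");
        a.tell("world");
        System.out.println(a.stopAndDump()); // prints hello;world;
    }
}
```

Akka adds supervision, routing, and distribution on top of this core, while Netty would sit underneath as the transport moving messages between such mailboxes across machines.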
Conclusion: Netty and Akka are both equally scalable. Please also note that from Spark 2 onwards the default is Netty; the spark.rpc flag is no longer there, i.e. val rpcEnvName = conf.get("spark.rpc", "netty") is not available. See RpcEnv.scala in the Spark 2.0 code base.
We are exploring Apache Cassandra and are going to use it in production soon.
We are going to use mostly the Datastax Community edition of Apache Cassandra.
But after reading :
http://www.planetcassandra.org/blog/cassandra-2-2-3-0-and-beyond/
https://www.pythian.com/blog/cassandra-version-production/
Given this sentence from the above blog, “If you don’t mind facing serious bugs and contribute to the development pick 3.x”,
I am confused about which version to opt for in our production deployment.
I just need to know whether 3.5.0 and 3.0.6 are production ready.
Datastax community : 3.5.0 from http://www.planetcassandra.org/cassandra/
Datastax community : 3.0.6 from
http://www.planetcassandra.org/archived-versions-of-datastaxs-distribution-of-apache-cassandra/
or
Datastax community : 2.2.6 from
http://www.planetcassandra.org/archived-versions-of-datastaxs-distribution-of-apache-cassandra/
The version provided by Datastax is supposed to be stable and production ready. You get an application to monitor your cluster, which is nice if you don't have any ops who know Cassandra in the first place, and you can pay to get support.
However, you don't get the latest version of Cassandra, and you may miss interesting features.
As for Cassandra 3.x, as said above, you get more features (for example, JSON support) and better performance, but if you find a critical bug and can't fix it, you can only file a ticket and hope they will take care of it quickly. Still, it is production ready and could work well for you.
In conclusion, go for the latest version only if you need a special feature, or if you have developers on your team to back your choice. Go for Datastax if you want something that works with less effort.