Does the log4j vulnerability CVE-2021-44228 affect logstash-logback-encoder?

I would like to find out whether the log4j security vulnerability CVE-2021-44228 (https://nvd.nist.gov/vuln/detail/CVE-2021-44228) affects logstash-logback-encoder.

Warning:
Alster's answer is technically correct, but it may be misleading to some people!
As far as I can tell, logstash, logback, and slf4j all depend on Log4J 1.x rather than on log4j-core 2.x, which means they are not vulnerable to CVE-2021-44228, CVE-2021-45046, or CVE-2021-45105. See Apache's Log4J security bulletin.
HOWEVER, logback uses Log4J 1.x, and Log4J 1.2 IS VULNERABLE to CVE-2019-17571 and CVE-2021-4104 (keep reading for more info on these).
On the SLF4J website that Alster linked, the creators say that logback is safe from CVE-2021-44228, CVE-2021-45046, and CVE-2021-45105 because it "does NOT offer a lookup mechanism at the message level". In other words, logback does not directly use the vulnerable JndiLookup.class file within Log4J.
HOWEVER (again), they do mention that JNDI lookup calls are possible from the logback config file. This is documented in CVE-2021-42550, with a severity score of CVSS 6.6. The severity is lower than the others because the exploit is harder for an attacker to achieve, which reduces the exposure... but the end result, if an attacker were successful, is the same: arbitrary remote code execution.
Additionally, the SLF4J website fails to mention the CVEs that are independently associated with their Log4J 1.x dependency (CVE-2019-17571 and CVE-2021-4104, linked above). Those CVEs are not related to the JndiLookup.class file, so the statement "does NOT offer a lookup mechanism at the message level" is not a mitigation for them. They do discuss some of the details of CVE-2021-4104, but without referencing the actual CVE documentation, and CVE-2019-17571 is not mentioned at all.
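If you want a quick sanity check of whether those Log4J 1.x classes are actually reachable from your application, a throwaway probe along these lines can help. This is just a rough check I'm suggesting (the class name "Log4j1Probe" is made up); the probed classes are the ones the CVE descriptions point at, and finding them on the classpath does not by itself prove exploitability, nor does absence replace a proper dependency audit:

    // Rough, hypothetical classpath probe -- not a fix or an official tool.
    public class Log4j1Probe {

        private static void probe(String className) {
            try {
                Class.forName(className);
                System.out.println("FOUND on classpath: " + className);
            } catch (ClassNotFoundException e) {
                System.out.println("not found: " + className);
            }
        }

        public static void main(String[] args) {
            probe("org.apache.log4j.net.JMSAppender");  // class involved in CVE-2021-4104
            probe("org.apache.log4j.net.SocketServer"); // class involved in CVE-2019-17571
        }
    }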
YOU STILL NEED TO MAKE A CHANGE
CVE-2019-17571 has a severity score of CVSS 9.8...
This is an arbitrary remote code execution vulnerability (JUST LIKE CVE-2021-44228 that you asked about in your question).
CVE-2021-4104 has a severity score of CVSS 8.1...
This is also an arbitrary remote code execution vulnerability, and the official description says it results "in remote code execution in a similar fashion to CVE-2021-44228".
CVE-2021-42550 has a severity score of CVSS 6.6...
It is also an arbitrary remote code execution vulnerability.
CVE-2021-44228 (which is the one that doesn't affect you) has a severity score of CVSS 10.
My recommendations
While it does seem possible to use logback safely if you smile at it just right, tweak 17 different configurations, upgrade a package, and manually remove a class file from a jar... I do not feel comfortable giving you all of the specifics of that. While I have been trying to help people with the Log4J 2.x vulnerabilities (CVE-2021-44228, CVE-2021-45046, CVE-2021-45105), the software you are asking about is a whole different level of complexity, and I'm not confident I could steer you in the right direction in a one-time post without me or you missing crucial steps.
This package depends on software that reached end of life in 2015. My recommendation is that it's time to bite the bullet and upgrade to something that isn't holding on by a thread. I know that isn't what you were hoping to hear... I'm sorry.

Logback does NOT offer a lookup mechanism at the message level. Thus, it is deemed safe with respect to CVE-2021-44228.

Related

What about the Terraform versioning scheme?

I've been using Terraform for some time now, and a question that always comes to my mind is: which versioning scheme is terraform-core using?
Is it semantic versioning, AKA semver? Because if it is, why does an upgrade in the minor version, such as upgrading a project from 0.11.X to 0.12.Y, rewrite the Terraform state as 0.12.x and not allow downgrading it back to 0.11.x?
Another related question: why did they opt to start their version numbers at 0.X.X rather than 1.X.X? Does it mean anything?
They do use semantic versioning, but their interpretation is a little different from most.
Here's an answer on a GitHub issue from a Hashicorp employee regarding their versioning methodology:
At HashiCorp we take the idea of a v1.0 very seriously, and once Terraform gets there it will represent a strong promise of compatibility because we believe that the configuration language, internal architecture, CLI, and other product features are right for the long haul.
The current state of Terraform is a little more subtle. We do still consider backward-compatibility very important since we know there is a lot of production infrastructure depending on Terraform today. We must therefore make compromises so we can keep making progress towards something we could make v1.0 promises about. While we keep these disruptions to a minimum, they cannot always be avoided, and so we try to be very explicit about them in the changelog and, where applicable, in upgrade guides.
With this in mind, at this time we suggest always referring to the changelog before upgrading since this is our primary means to note any special considerations that apply during an upgrade. We try to reserve significant breaking changes for increases to the second (traditionally "minor") position in the version number, which at this time represent our "major" development milestones as we work towards an eventual v1.0.
Since Terraform is an application rather than a library we do not intend to follow the Semantic Versioning conventions to the letter, but since they do indeed represent common versioning idiom we are likely to follow them in spirit, since of course we wish to be as clear as possible. As #kshep noted, v0 releases are special in the semver conventions, but the meaning of v1.0 in semver is broadly consistent with how we intend to interpret it.
I'm sorry that our version numbering practices caused confusion here; based on this feedback, we will attempt to be clearer about the significance and risk of each release when we announce it and will work on writing some more explicit documentation on what I wrote above.
Ref: https://github.com/hashicorp/terraform/issues/15839#issuecomment-323106524
Better late than never: https://www.hashicorp.com/blog/announcing-hashicorp-terraform-1-0-general-availability
From here on you can expect regular semantic versioning support.

PropsValues in Liferay

Why should we not use any classes from portal-impl.jar inside a portlet?
In my case, how can I read PropsValues without adding portal-impl to my Maven dependencies?
I'm using Liferay 6.2.
Thanks
#Origineil - in a comment to your question - gave you the alternative of what to do instead of using portal-impl.jar (e.g. use GetterUtil.getBoolean(PropsUtil.get(PropsKeys.SESSION_TIMEOUT_AUTO_EXTEND)); instead of PropsValues.SESSION_TIMEOUT_AUTO_EXTEND).
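For completeness, here is a minimal sketch of that alternative as I understand it, using only the API jar. The class name "PortalSettings" is made up, and the import paths are from memory of the 6.x portal-service API, so double-check them against your portal-service.jar:

    import com.liferay.portal.kernel.util.GetterUtil;
    import com.liferay.portal.kernel.util.PropsKeys;
    import com.liferay.portal.kernel.util.PropsUtil;

    public class PortalSettings {

        // Reads the portal property through the published API in portal-service.jar
        // instead of reaching into PropsValues from portal-impl.jar.
        public static boolean isSessionTimeoutAutoExtend() {
            return GetterUtil.getBoolean(
                PropsUtil.get(PropsKeys.SESSION_TIMEOUT_AUTO_EXTEND));
        }
    }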
Why shouldn't you add portal-impl.jar to your project? Well, there are many reasons. First of all: it doesn't work. If you add portal-impl.jar to your plugin, there are quite a lot of Spring components in there that would re-initialize - and they'd assume they're in the portal context. They'd be missing other code they depend on, and you'd basically pull in a lot of Liferay's implementation and dependency code, making your plugin ridiculously big. And the initialization can't be done twice, so it won't work anyway.
Plus, in portal-impl.jar you'll only find Liferay's implementation details - none of this code is ever promised to be stable. Not only will nobody care if you're depending on it, it will most likely break your assumptions even on minor upgrades. Of course some components in there are more stable than others, but the basic assumption is a good one.
Liferay's API (which you're encouraged to use) lives in portal-service.jar. This jar is automatically available to all plugins and contains the classes used in the alternative above. Don't depend on someone's (Liferay's) internal implementation; depend on the published API instead. If this means you'll have to implement something again - so be it. It might be slightly less elegant, but a lot more future-proof. And if you compare the size of portal-impl.jar to the amount of code you'd duplicate in the case of PropsValues, you'll see that this single expansion is actually a no-brainer. Don't pull in 30M of code just because you'd rather type 30 characters than 60.

Code instrumentation in Haskell

Suppose I maintain a complex application connected to external systems. One day it starts to return unexpected results for certain input and I need to find out why. It could be a DNS problem, a filesystem-related problem, an external system change, anything.
Assuming the amount of processing is extensive, before I can identify possible locations of the problem I would need to obtain detailed traces, which the original application does not produce.
How can I instrument existing code so that I can (for example) provide non-volatile proof (not a live debug session) that a certain component or function has a bug?
This sounds like more of an architecture/best practices type question than anything Haskell-specific, unless I'm misunderstanding something.
It sounds like your application needs to use a logging system, such as hslogger. The general approach is to have each component of your code create logging messages with an attached priority. You can then have the application handle different priority levels differently, so for example critical errors could be displayed on the console, while debug- and info-level messages go to logfiles.
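A minimal sketch of that setup with hslogger might look like this (the logger names, the log file path, and the exact console/file split are placeholders to illustrate the idea, not a prescribed layout):

    import System.Log.Logger
    import System.Log.Handler (setFormatter)
    import System.Log.Handler.Simple (fileHandler, streamHandler)
    import System.Log.Formatter (simpleLogFormatter)
    import System.IO (stderr)

    main :: IO ()
    main = do
      let fmt = simpleLogFormatter "[$time $loggername $prio] $msg"

      -- Console handler: only WARNING and above; file handler: everything down to DEBUG.
      console <- streamHandler stderr WARNING
      file    <- fileHandler "debug.log" DEBUG

      updateGlobalLogger rootLoggerName
        (setLevel DEBUG . setHandlers [ setFormatter console fmt
                                      , setFormatter file fmt ])

      -- Each component logs under its own name with a priority attached.
      infoM     "MyApp.Dns"        "resolving upstream host"       -- file only
      warningM  "MyApp.FileSystem" "state directory is read-only"  -- file and console
      criticalM "MyApp.External"   "unexpected reply from partner system"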
It's sometimes useful to use Debug.Trace.traceEvent and Debug.Trace.traceEventIO instead of a logging system, particularly if you suspect a concurrency issue, because the GHC eventlog also records information about thread spawning/switching and garbage collection. In general, though, it's not a substitute for an actual logging framework.
Also, you may want to make use of assert as a sanity check that "impossible" conditions really don't occur.
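For the eventlog/assert route, here is a small sketch (the function and data are made up for illustration):

    import Control.Exception (assert)
    import Debug.Trace (traceEventIO)

    -- Marks a processing phase in the GHC eventlog and sanity-checks an
    -- "impossible" condition. Run with +RTS -l (older GHCs also need to be
    -- linked with -eventlog) and inspect the log with ThreadScope or ghc-events.
    processRecord :: [(String, Int)] -> String -> IO Int
    processRecord table key = do
      traceEventIO ("processRecord: " ++ key)
      let value = maybe 0 id (lookup key table)
      -- Fails loudly unless the program is built with -fignore-asserts.
      return (assert (value >= 0) value)

    main :: IO ()
    main = do
      n <- processRecord [("alice", 3), ("bob", 5)] "alice"
      print n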

Conventions for Stability field of Cabal packages

Cabal allows for a freeform Stability field:
stability: freeform
The stability level of the package, e.g. alpha, experimental, provisional, stable.
What are the community conventions about these stability values? What is considered experimental and what is provisional? I see only few packages are declared as stable. What kind of stability does it refer to, stability of the exposed API or the ultimate bug-free state of the software?
The field is mostly defunct now, and shouldn't be used. As Max said, it will probably be replaced by something meaningful in the future.
If you're interested in the history, the field originated in a design proposal for the first set of Hierarchical Haskell Libraries. That document describes the original intended meanings for the values.
Currently this field is a very poor guide to the stability of the library, so is mostly ignored. Duncan Coutts (one of the main Cabal and Hackage developers) has said that he eventually plans to replace this field entirely, with something like a social voting system on Hackage.
Personally (and I'm not alone) I just always omit the stability field. Given that it's going to go away, it's probably not worth losing any sleep over what to put into it.
The original intended meanings were:
experimental: the API is unstable. It may change at any time, i.e.: any version number change;
provisional: the API is moving towards stability. It may be changed at every minor revision, but should provide deprecated versions of features;
stable: the API is stable. Only additions should be made at minor releases. After changes in the API, deprecated features should be kept for at least one major release.
As the other answers pointed out, the community seems not to be following these guidelines anymore.
As Simon Marlow points out, this is described in a design proposal for the first set of Hierarchical Haskell Libraries. The original link is dead, but you can find a copy in the Wayback Machine.

Security of scala runtime

I'm a developer of the Robocode engine. We would like to make Robocode multilingual, and Scala seems to be a good match. We have a Scala plugin prototype here.
The problem:
Because users are creative programmers, they may try to win battles in different ways. Robots are also downloaded from an online database where anyone could upload one, so a gap in security may lead to a security hole on users' computers. Robots written in Java run in a restricted sandbox: almost everything is prohibited [network, GUI, disk (limited), threads (limited), classloaders and reflection]. The sandbox is similar to a browser applet's. We use a SecurityManager, a custom ClassLoader per robot, etc.
There are three ways to host the Scala runtime in Robocode:
1) Load it together with the robot inside the sandbox. Pretty safe for us, and the preferred solution. But it will damage the Scala runtime's abilities, because the runtime uses reflection. Maybe it generates classes at runtime? Uses threads to do some internal cleanup? Accesses JVM internals? (I would not like to limit the abilities of the language.)
2) Use the Scala runtime as trusted code, outside the sandbox, with security on the same level as the JDK and visibility to the (malicious) robot. Are the Scala runtime APIs safe? Do its methods have security guards? Is there any safe mode? Is there any singleton in the Scala runtime which could be abused to communicate between robots? Any concurrency/threadpool/messaging which could simulate threads? (Is there any security audit for the Scala runtime?)
3) Something in between: some classes of the runtime inside the sandbox and some outside. Which classes/packages must be visible to the robot, and which are just private implementation? (This seems to be the future solution.)
The question:
Is it possible to enumerate and isolate the parts of the runtime which must run in trusted scope from the rest? Specific packages and classes? Or is there a better idea?
I'm looking for a specific answer which will lead to a secure solution. Random thoughts are welcome, but will not be awarded. There is an ongoing discussion on the Scala email group, with no specific answer yet.
I think #1 is your best bet and even that is a moving target. As brought up on the mailing list, structural types use reflection. I don't think structural types are common in the standard library, but I don't think anyone keeps track of where they are.
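To make the structural-types point concrete, here is a tiny illustrative example (not from Robocode or the Scala library; all names are made up for the sketch):

    // Illustrative only: a structural type in Scala 2. The call d.quack() compiles
    // to a java.lang.reflect lookup and invoke at runtime, which a restrictive
    // sandbox SecurityManager can reject, even though no "reflection" appears in
    // the source.
    import scala.language.reflectiveCalls

    object StructuralTypeDemo {
      type Quacker = { def quack(): String }

      def makeItQuack(d: Quacker): String = d.quack() // reflective call under the hood

      class Duck { def quack(): String = "quack" }

      def main(args: Array[String]): Unit =
        println(makeItQuack(new Duck))
    }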
There's also always the possibility that there are other features using reflection behind the scenes. For example, for a while in the 2.8 branch some array functionality was using reflection. I think that's been changed after benchmarking, but there's always the possibility that there's some problem where someone said "Aha! I will use reflection to solve this."
The Scala standard library is filled with singletons. Most of them are immutable, but I know that the Scheduler object in the actors library could be abused for communication because it is essentially a proxy for an actual scheduler so you can plug your own custom scheduler into it.
At this time I don't think Scala requires using a custom class loader and all of its classes are produced at compile time instead of runtime, but then again that's probably a moving target. Scala generates a lot of class files, and there is always talk of making it generate some of them at runtime when they are needed instead of at compile time.
So, in short, I do not think it's possible (within reasonable constraints on effort) to enumerate and isolate the pieces of Scala that can (and should) be trusted.
Since you mention other J* language implementations, which may all make use of reflection, banning reflection would effectively ban all of those languages as well.
I guess the real problem is that the JVM has no way to partition the scope of the reflection API, such that you could sort of "sandbox" the part of the code that can be reflected on.

Resources