Is there a way to tell if a class is an interface? - java-bytecode-asm

I'm trying to examine (at the bytecode level, with ASM) classes implementing some specific interfaces (in this case, java.sql.Connection) and find that in some cases the library has another interface extending something from my set of interfaces... and then its classes implement THAT interface. (In this case, a new extended interface com.mysql.jdbc.Connection extends java.sql.Connection, and then their implementations, e.g. ConnectionImpl, implement com.mysql.jdbc.Connection.) I therefore miss identifying ConnectionImpl as a target class.
So... the result is that I need to identify com.mysql.jdbc.Connection as an 'interesting' interface when loading the class. But I can't see how to identify the class AS an interface versus just a normal class. Is there something in the ClassReader that can give me that sort of information?

As per the title, if you want to check if a class is an interface:
ClassNode node = ...; // however you wish to load the class
if ((node.access & Opcodes.ACC_INTERFACE) != 0) {
    // is an interface
} else {
    // is not an interface
}
In your post you state you wish to find children/implementations of java.sql.Connection.
The issue you're having is this:
java.sql.Connection -> com.mysql.jdbc.Connection -> ConnectionImpl
ConnectionImpl does not directly implement java.sql.Connection, so it's not detected as a child/implementation. For an issue like this, what you have to do is travel the class hierarchy. If I were in your situation I would load a map of ClassNodes, Map<String, ClassNode>, where the string is the ClassNode's name. Then I would use a recursive method to check whether a given class is the intended type: when you send a ClassNode in, the method calls itself with the node's parent and then with each of the node's interfaces. If the initially given node really is the correct type, the java.sql.Connection interface will eventually be passed through and found. Also, if you wanted to skip redundancies, you could store the result for each ClassNode in a map so that you don't have to check the entire hierarchy over and over again (see the cached sketch after the code below).
Edit: Sorta like this
public static boolean isMatch(ClassNode cn, Map<String, ClassNode> nodes, String target) {
    if (cn.name.equals(target)) {
        return true;
    }
    if (nodes.containsKey(cn.superName) && isMatch(nodes.get(cn.superName), nodes, target)) {
        return true;
    }
    for (String interf : cn.interfaces) {
        if (nodes.containsKey(interf) && isMatch(nodes.get(interf), nodes, target)) {
            return true;
        }
    }
    return false;
}
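For the caching mentioned above, a minimal sketch could look like this (it assumes the same Map<String, ClassNode> keyed by internal class name; HierarchyMatcher is a made-up helper name for illustration):
import java.util.HashMap;
import java.util.Map;

import org.objectweb.asm.tree.ClassNode;

public class HierarchyMatcher {
    // Remembers the result per class name so a shared parent is only walked once.
    private final Map<String, Boolean> cache = new HashMap<>();
    private final Map<String, ClassNode> nodes;
    private final String target; // internal name, e.g. "java/sql/Connection"

    public HierarchyMatcher(Map<String, ClassNode> nodes, String target) {
        this.nodes = nodes;
        this.target = target;
    }

    public boolean isMatch(ClassNode cn) {
        Boolean cached = cache.get(cn.name);
        if (cached != null) {
            return cached;
        }
        boolean result = cn.name.equals(target);
        if (!result && nodes.containsKey(cn.superName)) {
            result = isMatch(nodes.get(cn.superName));
        }
        if (!result) {
            for (String interf : cn.interfaces) {
                if (nodes.containsKey(interf) && isMatch(nodes.get(interf))) {
                    result = true;
                    break;
                }
            }
        }
        cache.put(cn.name, result);
        return result;
    }
}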

As you already mentioned, you need to check every interface type for its own interfaces in order to determine such a subtype relation. This is, however, not difficult to do if you have access to all resources.
When you are using ASM, you simply need to take the interface names of your original class and find the class file for each such interface. You can then parse each of those class files for its interfaces, and so on. This way, you can determine the entire graph and decide on the subtype relationship.
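A minimal sketch of that manual walk (assuming the class files are reachable through the system class loader and that target is an internal name such as "java/sql/Connection"; the helper name is made up):
import java.io.IOException;

import org.objectweb.asm.ClassReader;

public class InterfaceWalker {

    public static boolean isSubtypeOf(String internalName, String target) throws IOException {
        if (internalName == null) {
            return false; // walked past java/lang/Object
        }
        if (internalName.equals(target)) {
            return true;
        }
        // ClassReader(String) reads the class file through the system class loader
        ClassReader reader = new ClassReader(internalName.replace('/', '.'));
        for (String itf : reader.getInterfaces()) {
            if (isSubtypeOf(itf, target)) {
                return true;
            }
        }
        return isSubtypeOf(reader.getSuperName(), target);
    }
}
For example, isSubtypeOf("com/mysql/jdbc/ConnectionImpl", "java/sql/Connection") should return true once the walk reaches com.mysql.jdbc.Connection.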
If you do not want to do this manually, you can use Byte Buddy which offers you methods similar to the reflection API for unloaded types:
ClassFileLocator cfl = ...; // can be file system, class loader, etc.
TypePool.Default.of(cfl)
        .describe("your.initial.type")
        .resolve()
        .isAssignableTo(someInterface);
You can also use the TypePool to read the target interface, if it is not available.
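For example, a sketch of resolving both the candidate class and the target interface through the same TypePool (the type names here are placeholders taken from the question, and the locator choice is just one option):
import net.bytebuddy.description.type.TypeDescription;
import net.bytebuddy.dynamic.ClassFileLocator;
import net.bytebuddy.pool.TypePool;

ClassFileLocator cfl = ClassFileLocator.ForClassLoader.of(ClassLoader.getSystemClassLoader());
TypePool pool = TypePool.Default.of(cfl);
TypeDescription candidate = pool.describe("com.mysql.jdbc.ConnectionImpl").resolve();
TypeDescription target = pool.describe("java.sql.Connection").resolve();
boolean match = candidate.isAssignableTo(target);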

Using org.objectweb.asm.ClassReader, we can identify whether a loaded class is really a class or an interface. A sample code snippet is below:
public byte[] transform(ClassLoader loader, String className, Class<?> classBeingRedefined,
        ProtectionDomain protectionDomain, byte[] classfileBuffer) throws IllegalClassFormatException {
    ClassReader classReader = null;
    try {
        classReader = new ClassReader(classfileBuffer);
        if (isInterface(classReader)) {
            // leave interfaces untouched
            return classfileBuffer;
        }
    } catch (Throwable exp) {
        return classfileBuffer;
    }
    // Remaining transformation logic here
    return classfileBuffer;
}

public boolean isInterface(ClassReader cr) {
    // 0x200 is the value of Opcodes.ACC_INTERFACE
    return (cr.getAccess() & 0x200) != 0;
}

There is a method that checks it for you: Class#isInterface()
if (yourClass.isInterface()) {
    // do something
}
http://docs.oracle.com/javase/1.5.0/docs/api/java/lang/Class.html#isInterface%28%29

You can use the isInterface() method of java.lang.Class to check whether a class is an interface or not.

Related

How do I create a custom FilterRule with an override for ElementPasses

I want to create my own Boolean operation on an element to pass in as a FilterRule. The ElementPasses member description states:
Derived classes override this method to implement the test that determines whether the given element passes this rule or not.
I have tried to create my own derived class but I can't figure out how to implement it. I would think an interface would be available, but I can't find anything. Annoyingly, I remember seeing an example of this but I can't seem to find it.
This fails with: Static class 'ParameterDefinitionExists' cannot derive from type 'FilterRule'. Static classes must derive from object.
static public class ParameterDefinitionExists : FilterRule
{
    public static bool ElementPasses(Element element)
    {
        return true;
    }
}
And this fails with: 'FilterRule' does not contain a constructor that takes 0 arguments
static public class ParameterDefinitionExists : FilterRule
{
    new public bool ElementPasses(Element element)
    {
        return true;
    }
}
What constructor arguments does it take?
There may be another way to go about it but I can't find anything for FilterRules. I'm trying to define and refine a trigger in an updater, but maybe I should query the element after it is passed in to the command. I imagine catching it with a filter rule is more efficient.
You have to use one of the Revit API classes derived from FilterRule:
Inheritance Hierarchy:
System.Object
  Autodesk.Revit.DB.FilterRule
    Autodesk.Revit.DB.FilterCategoryRule
    Autodesk.Revit.DB.FilterInverseRule
    Autodesk.Revit.DB.FilterValueRule
    Autodesk.Revit.DB.SharedParameterApplicableRule
Cf. http://www.revitapidocs.com/2017/a8f202ca-3c88-ecc4-fa93-549b26a412d7.htm
The Building Coder provides several examples of creating and using parameter filters:
http://thebuildingcoder.typepad.com/blog/2010/08/elementparameterfilter-with-a-shared-parameter.html
Here is the entire topic group on filtering.

How do I call a method of an attribute derived from a generic interface, where the specific type is not known?

Core Question:
I have a generic interface IValidatingAttribute<T>, which defines the contract bool IsValid(T value);. The interface is implemented by a variety of attributes, which all serve the purpose of determining whether the current value of the field or property they decorate is valid per the interface spec that I'm dealing with. What I want to do is create a single validation method that will scan every field and property of the given model, and if that field or property has any attributes that implement IValidatingAttribute<T>, it should validate the value against each of those attributes. So, using reflection I have the sets of fields and properties, and within those sets I can get the list of attributes. How can I determine which attributes implement IValidatingAttribute<T> and then call IsValid(T value)?
Background:
I am working on a library project that will be used to develop a range of later projects against the interface for a common third party system. (BL Server, for those interested)
BL Server has a wide range of fairly arcane command structures with varying validation requirements per command and parameter, and it costs per transaction to call these commands, so one of the library requirements is to easily define the validation requirements at the model level to catch invalid commands before they are sent. It is also intended to aid in the development of later projects by allowing developers to catch invalid models without needing to set up the BL Server connections.
Current Attempt:
Here's where I've gotten so far (IsValid is an extension method):
public interface IValidatingAttribute<T>
{
    bool IsValid(T value);
}
public static bool IsValid<TObject>(this TObject sourceObject) where TObject : class, new()
{
    var properties = typeof(TObject).GetProperties();
    foreach (var prop in properties)
    {
        var attributeData = prop.GetCustomAttributesData();
        foreach (var attribute in attributeData)
        {
            var attrType = attribute.AttributeType;
            var interfaces = attrType.GetInterfaces().Where(inf => inf.IsGenericType).ToList();
            if (interfaces.Any(infc => infc.GetGenericTypeDefinition() == typeof(IValidatingAttribute<>)))
            {
                var value = prop.GetValue(sourceObject);
                // At this point, I know that the current attribute implements 'IValidatingAttribute<>',
                // but I don't know what T is in that implementation.
                // Also, I don't know what data type 'value' is, as it's currently boxed as an object.
                // The underlying type of 'value' will match the expected T in IValidatingAttribute.
                // What I need is something like the line below:
                if (!(attribute as IValidatingAttribute<T>).IsValid(value as T)) // I know this condition doesn't work, but it's what I'm trying to do.
                {
                    return false;
                }
            }
        }
    }
    return true;
}
Example usage:
Just to better explain what I am trying to achieve:
public class SomeBLRequestObject
{
    /// <summary>
    /// Required, only allows exactly 2 alpha characters.
    /// </summary>
    [MinCharacterCount(2), MaxCharacterCount(2), IsRequired, AllowedCharacterSet(CharSets.Alpha)]
    public string StateCode { get; set; }
}
And then, later on in code:
...
var someBLObj = SomeBLRequestObjectFactory.Create();
if (!someBLObj.IsValid())
{
    throw new InvalidObjectException("someBLObj is invalid!");
}
Thank you, I'm really looking for a solution to the problem as it stands, but I'm more than willing to listen if somebody has a viable alternative approach.
I'm trying to go with a generic extension method for this because there are literally hundreds of the BL Server objects, and I'm going with attributes because each of these objects can have an upper-double-digit number of properties, and it's going to make things much, much easier if the requirements for each object are baked in and nice and readable for the next developer who has to use this thing.
Edit
Forgot to mention: this question is the closest I've found, but what I really need are the contents of //Do Something in TcKs's answer.
Well, after about 6 hours and a good night's sleep, I realized that I was over-complicating this thing. Solved it with the following (ExtValidationInfo is the class that the two extension methods below live in):
Jon Skeet's answer over here pointed me at a better approach; although it still smells a bit, this one at least works.
public static bool IsValid<TObject>(this TObject sourceObject) where TObject : class, new()
{
    var baseValidationMethod = typeof(ExtValidationInfo).GetMethod("ValidateProperty", BindingFlags.Static | BindingFlags.Public);
    var properties = TypeDataHandler<TObject>.Properties;
    foreach (var prop in properties)
    {
        var attributes = prop.GetCustomAttributes(typeof(IValidatingAttribute<>)).ToList();
        if (!attributes.Any())
        {
            continue; // No validators, skip.
        }
        var propType = prop.PropertyType;
        var validationMethod = baseValidationMethod.MakeGenericMethod(propType);
        var propIsValid = (bool)validationMethod.Invoke(null, new object[] { prop.GetValue(sourceObject), attributes });
        if (!propIsValid)
        {
            return false;
        }
    }
    return true;
}
public static bool ValidateProperty<TPropType>(TPropType value, List<IValidatingAttribute<TPropType>> validators)
{
    foreach (var validator in validators)
    {
        if (!validator.IsValid(value))
        {
            return false;
        }
    }
    return true;
}

GXT Grid ValueProvider / PropertyAccess for a Map<K,V> Datastore?

Rather than using Bean model objects, my data model is built on Key-Value pairs in a HashMap container.
Does anyone have an example of GXT's Grid ValueProvider and PropertyAccess that will work with an underlying Map?
It doesn't have one built in, but it is easy to build your own. Check out this blog post for a similar way of thinking, especially the ValueProvider section: http://www.sencha.com/blog/building-gxt-charts
The purpose of a ValueProvider is to be a simple reflection-like mechanism to read and write values in some object. The purpose of PropertyAccess<T>, then, is to autogenerate some of these value/modelkey/label provider instances based on getters and setters as found on Java beans, a very common use case. It doesn't have much more complexity than that; it is just a way to ask the compiler to do some very easy boilerplate code for you.
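For contrast, a typical bean-based PropertyAccess looks roughly like this (Person, its getters, and the property names here are hypothetical):
import com.google.gwt.core.client.GWT;
import com.sencha.gxt.core.client.ValueProvider;
import com.sencha.gxt.data.shared.ModelKeyProvider;
import com.sencha.gxt.data.shared.PropertyAccess;

public interface PersonProperties extends PropertyAccess<Person> {
    // one method per bean property; the compiler generates the providers
    ModelKeyProvider<Person> id();
    ValueProvider<Person, String> name();
}

// Usage: PersonProperties props = GWT.create(PersonProperties.class);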
As that blog post shows, you can very easily build a ValueProvider just by implementing the interface. Here's a quick example of how you could make one that reads from a Map<String, Object>. When you create each instance, you tell it which key you are working off of and the type of data it should find when it reads out that value:
public class MapValueProvider<T> implements ValueProvider<Map<String, Object>, T> {

    private final String key;

    public MapValueProvider(String key) {
        this.key = key;
    }

    @Override
    public T getValue(Map<String, Object> object) {
        return (T) object.get(key);
    }

    @Override
    public void setValue(Map<String, Object> object, T value) {
        object.put(key, value);
    }

    @Override
    public String getPath() {
        return key;
    }
}
You then build one of these for each key you want to read out, and can pass it along to ColumnConfig instances or whatever else might be expecting them.
The main point though is that ValueProvider is just an interface, and can be implemented any way you like.
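A rough usage sketch, assuming GXT 3's grid ColumnConfig/ColumnModel classes (the keys, widths, and headers below are made up):
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import com.sencha.gxt.widget.core.client.grid.ColumnConfig;
import com.sencha.gxt.widget.core.client.grid.ColumnModel;

// One ColumnConfig per map key, each backed by a MapValueProvider
ColumnConfig<Map<String, Object>, String> nameColumn =
        new ColumnConfig<Map<String, Object>, String>(new MapValueProvider<String>("name"), 150, "Name");
ColumnConfig<Map<String, Object>, Integer> ageColumn =
        new ColumnConfig<Map<String, Object>, Integer>(new MapValueProvider<Integer>("age"), 80, "Age");

List<ColumnConfig<Map<String, Object>, ?>> columns = new ArrayList<ColumnConfig<Map<String, Object>, ?>>();
columns.add(nameColumn);
columns.add(ageColumn);
ColumnModel<Map<String, Object>> columnModel = new ColumnModel<Map<String, Object>>(columns);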

Faking enums in Entity Framework 4.0

There are a lot of workarounds for the missing support for enumerations in Entity Framework 4.0. Of all of them I like this one the most:
http://blogs.msdn.com/b/alexj/archive/2009/06/05/tip-23-how-to-fake-enums-in-ef-4.aspx?PageIndex=2#comments
This workaround allows you to use enums in your LINQ queries, which is exactly what I need. However, I have a problem with it: for every complex type I'm using, I get a new autogenerated partial class. The code therefore no longer compiles, because I already have a wrapper class with that name in the same namespace, which converts between the backing integer in the database and the enum in my POCO classes. If I make my wrapper a partial class, the code still does not compile, as it now contains two properties with the same name "Value". The only possibility is to remove the Value property by hand every time the POCO classes are regenerated because the DB model changed (which during the development phase happens very often).
Do you know how to prevent a partial class from being generated for each complex property every time the EF model changes?
Can you recommend some other workarounds that support enumerations in LINQ queries?
That workaround is based on the fact that you are writing your POCO classes yourself, i.e. no autogeneration. If you want to use it with autogeneration, you must heavily modify the T4 template itself.
Another workaround is wrapping the enum conversion in custom extension methods.
public static IQueryable<MyEntity> FilterByMyEnum(this IQueryable<MyEntity> query, MyEnum enumValue)
{
    int val = (int)enumValue;
    return query.Where(e => e.MyEnumValue == val);
}
You will then call just:
var data = context.MyEntitites.FilterByMyEnum(MyEnum.SomeValue).ToList();
I am using an approach based on the one described in your link without any modifications of the T4 templates. The contents of my partial wrapper classes are as follows:
public partial class PriorityWrapper
{
    public Priority EnumValue
    {
        get
        {
            return (Priority)Value;
        }
        set
        {
            Value = (int)value;
        }
    }

    public static implicit operator PriorityWrapper(Priority value)
    {
        return new PriorityWrapper { EnumValue = value };
    }

    public static implicit operator Priority(PriorityWrapper value)
    {
        if (value == null)
            return Priority.High;
        else
            return value.EnumValue;
    }
}
The only change is that instead of a backing field holding the enum value, I am using the autogenerated int-typed Value property. Consequently, Value can be an auto-implemented property and the EnumValue property does the conversion in its getter and setter.

How to adapt the Specification pattern to evaluate a combination of objects?

I know that the Specification pattern describes how to use a hierarchy of classes implementing ISpecification<T> to evaluate if a candidate object of type T matches a certain specification (= satisfies a business rule).
My problem : the business rule I want to implement needs to evaluate several objects (for example, a Customer and a Contract).
My double question :
Are there typical adaptations of the Specification pattern to achieve this? I can only think of removing the implementation of ISpecification<T> from my specification class and taking as many parameters as I want in the isSatisfiedBy() method. But by doing this, I lose the ability to combine this specification with others.
Does this problem reveal a flaw in my design? (i.e. should what I need to evaluate using a Customer and a Contract be evaluated on another object, like a Subscription, which could contain all the necessary info?)
In that case (depending on what exactly the specification should do), I would use one of the objects as the specification subject and the other(s) as parameters.
Example:
public class ShouldCreateEmailAccountSpecification : ISpecification<Customer>
{
    public ShouldCreateEmailAccountSpecification(Contract selectedContract)
    {
        SelectedContract = selectedContract;
    }

    public Contract SelectedContract { get; private set; }

    public bool IsSatisfiedBy(Customer subject)
    {
        return false;
    }
}
Your problem is that your specification interface uses a generic type parameter, which prevents it from being used to combine evaluation logic across different specializations (Customer, Contract), because ISpecification<Customer> is in fact a different interface than ISpecification<Contract>. You could use Jeff's approach above, which gets rid of the type parameter and passes everything in as a base type (Object). Depending on what language you are using, you may also be able to pull things up a level and combine specifications with boolean logic using delegates. C# example (not particularly useful as written, but it might give you some ideas for a framework):
ISpecification<Customer> cust_spec = /* ... */;
ISpecification<Contract> contract_spec = /* ... */;

bool result = EvalWithAnd(() => cust_spec.IsSatisfiedBy(customer), () => contract_spec.IsSatisfiedBy(contract));

public bool EvalWithAnd(params Func<bool>[] specs)
{
    foreach (var spec in specs)
    {
        if (!spec())
            return false; /* if any returns false, we can short-circuit */
    }
    return true; /* all delegates returned true */
}
Paco's solution of treating one object as the subject and one as a parameter using constructor injection can work sometimes, but if both objects are constructed after the specification object, it makes things quite difficult.
One solution to this problem is to use a parameter object as in this refactoring suggestion: http://sourcemaking.com/refactoring/introduce-parameter-object.
The basic idea is that if you feel that both Customer and Contract are parameters that represent a related concept, then you just create another parameter object that contains both of them.
public class ParameterObject
{
    public Customer Customer { get; set; }
    public Contract Contract { get; set; }
}
Then your specification becomes generic over that parameter object type:
public class SomeSpecification : ISpecification<ParameterObject>
{
    public bool IsSatisfiedBy(ParameterObject candidate)
    {
        return false;
    }
}
I don't know if I understood your question. If you are using the same specification for both Customer and Contract, this means that you can send the same messages to both of them. This could be solved by making them both implement an interface and using that interface as the T type. I don't know if this makes sense in your domain.
Sorry if this is not an answer to your question.
