I have a 'document' table (very original) that I need to dynamically subset at runtime so that my API consumers can't see data that isn't legal to view, given some temporal constraints between the application and the database. jOOQ generated a nice Document class for me that represents this table.
Ideally, I'd like to create an anonymous subclass of Document that actually translates to
SELECT document.* FROM document, other_table
WHERE document.id = other_table.doc_id AND other_table.foo = 'bar'
Note that 'bar' is dynamic at runtime, hence the desire to extend the class anonymously. I can extend the Document class anonymously and everything looks great to my API consumers, but I can't figure out how to actually restrict the data: accept() is final, and overriding toSQL() doesn't seem to have any effect.
If this isn't possible and I need to extend CustomTable instead, what method do I override to provide my custom SQL? The jOOQ docs say to override accept(), but that method is marked final in TableImpl, which CustomTable extends. This is on jOOQ 3.5.3.
Thanks,
Kyle
UPDATE
I built 3.5.4 from source after removing the "final" modifier on TableImpl.accept() and was able to do exactly what I wanted. Given that the docs imply I should be able to override accept(), perhaps it's just a matter of an erroneous final declaration.
Maybe you can implement one of these interfaces:
TableLike, delegating all methods to a jOOQ implementation instance such as TableImpl (with dynamic fields, perhaps using a HashMap to store the Fields)
Field, making the field itself dynamic
In any case, keep in mind that there are different phases while jOOQ builds the query, binds values, executes it, etc. You should probably avoid changing the "foo" Field once query construction has started.
It's been a while since I worked with jOOQ. My team ended up building a customized jOOQ. Another (dirty) trick to hook into the jOOQ library was to use the same packages, as the protected modifier makes members visible within the same package as well as to subclasses...
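As a lighter-weight alternative to patching jOOQ or implementing TableLike by hand, you can usually wrap the restricted SELECT as a derived table and expose that to consumers. A minimal sketch, assuming the jOOQ-generated DOCUMENT and OTHER_TABLE references (column names taken from the SQL in the question, static imports of the generated tables assumed) and an existing DSLContext called create:
import org.jooq.DSLContext;
import org.jooq.Table;

// "bar" is the runtime value; the returned table only exposes rows that are legal to view.
public Table<?> restrictedDocuments(DSLContext create, String bar) {
    return create.select(DOCUMENT.fields())
                 .from(DOCUMENT)
                 .join(OTHER_TABLE).on(DOCUMENT.ID.eq(OTHER_TABLE.DOC_ID))
                 .where(OTHER_TABLE.FOO.eq(bar))
                 .asTable("document");   // behaves like the document table, pre-filtered
}
Consumers can then select from the returned table as if it were the document table itself, without any subclassing of the generated Document class.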
Related
I am using JAXB to generate code from an XSD.
The generated code contains a lot of annotations, on both classes and fields.
I am trying to use com.sun.tools.internal.xjc.Plugin to modify the generated code.
In the plugin's run() method we are given an Outline object, from which we can get the ClassOutlines. A ClassOutline has a final JDefinedClass member that holds the information about the actual class that will be generated.
If I want to add anything, there are APIs in JDefinedClass that can be used, but if I want to remove something, there seems to be no way.
For example, I cannot clear annotations, because the JDefinedClass.annotations() method returns an unmodifiable collection, so I cannot clear it or remove anything from it.
I tried to create another JDefinedClass by invoking the _class() method, but the ClassOutline.implClass field is final, so I cannot set it.
How do I get a JDefinedClass that does not have any annotations?
Is there another phase of code generation that I can hook into to really control the generation of the JDefinedClass?
The code model is, indeed, mostly "write-only". But, speaking of annotations, you have probably missed methods like com.sun.codemodel.JDefinedClass.removeAnnotation(JAnnotationUse) and com.sun.codemodel.JMethod.removeAnnotation(JAnnotationUse) (both implemented from com.sun.codemodel.JAnnotatable.removeAnnotation(JAnnotationUse)).
So they're there. You can remove annotations with the normal CodeModel API.
As far as I can see, you can also remove fields and methods from classes. So what exactly are you missing?
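For illustration, a minimal sketch of how this could be used from an XJC plugin, assuming a CodeModel version that exposes JAnnotatable.removeAnnotation() (the helper method name is mine):
import com.sun.codemodel.JAnnotationUse;
import com.sun.codemodel.JDefinedClass;
import java.util.ArrayList;
import java.util.List;

// Strips every class-level annotation from a generated class.
// annotations() returns an unmodifiable view, so copy it before removing.
static void stripClassAnnotations(JDefinedClass implClass) {
    List<JAnnotationUse> copy = new ArrayList<JAnnotationUse>(implClass.annotations());
    for (JAnnotationUse use : copy) {
        implClass.removeAnnotation(use);
    }
}
Inside Plugin.run(Outline, Options, ErrorHandler) you could call this on each ClassOutline's implClass; the same idea applies to JMethod and JFieldVar, which are also JAnnotatable.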
JDefinedClass.annotations() returns an unmodifiable collection, so you cannot modify it directly.
As a workaround, you can restrict which annotations are added at the class and field level before the JCodeModel is built.
You need to create a custom annotator class that extends Jackson2Annotator and override its methods according to your requirements.
The following are a few methods that are used for specific kinds of annotation properties:
propertyOrder (out of the box: JsonPropertyOrder)
propertyInclusion (out of the box: JsonInclude)
propertyField (can be used for custom-defined annotations at the field level)
You can discover more by looking at the Jackson2Annotator class to see what fits your needs.
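A rough sketch of such an annotator (the jsonschema2pojo method signatures below are from memory and may differ between versions; depending on the version you may also need a constructor that passes a GenerationConfig to super). Empty overrides simply suppress the corresponding default annotations:
import com.fasterxml.jackson.databind.JsonNode;
import com.sun.codemodel.JDefinedClass;
import com.sun.codemodel.JFieldVar;
import org.jsonschema2pojo.Jackson2Annotator;

public class QuietAnnotator extends Jackson2Annotator {

    @Override
    public void propertyOrder(JDefinedClass clazz, JsonNode propertiesNode) {
        // do nothing: no @JsonPropertyOrder on the class
    }

    @Override
    public void propertyInclusion(JDefinedClass clazz, JsonNode schema) {
        // do nothing: no @JsonInclude on the class
    }

    @Override
    public void propertyField(JFieldVar field, JDefinedClass clazz,
                              String propertyName, JsonNode propertyNode) {
        // add (or skip) field-level annotations here as needed
    }
}
The custom annotator is then registered with the generator configuration, e.g. via the customAnnotator option of the jsonschema2pojo build plugin.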
I am not sure whether this question even makes sense, so please bear with me if it sounds crazy. I am working on an XPages application where I need to do two things: add picklist functionality, and bind dynamic data like field_1, field_2, field_3, ... up to n, depending on the customer's choice. I am using composite data for both custom controls. I could drop the picklist control's composite data and pass scope variables instead, but that takes more time than composite data.
I do not get any error, but the bound documents are not saved.
Is it possible to use custom controls (CCs) that have composite data this way?
Code for the first CC:
<xc:viewpicklist datasrc="view1" dialogID="dialog1" dialogWidth="700px" dialogTitle="Pick this field value!!!">
<xc:this.viewColumn>
<xp:value>0</xp:value>
<xp:value>1</xp:value>
<xp:value>2</xp:value>
</xc:this.viewColumn>
</xc:viewpicklist>
Code for the second CC:
<xc:BOM_Partinfo BOM_Partinfo="#{document1}"
TNUM="field#{index+1}" Desc="Desc#{index+1}" quan="Ea#{index+1}"
exp="exp#{index+1}" cap="cap#{index+1}" total="price#{index+1}"
RD="RD#{index+1}" m="manufact#{index+1}"
m_n="manufactnum#{index+1}">
</xc:BOM_Partinfo>
You can read information that is set in the properties of a custom control if it was set statically in the calling page:
var x = getComponent("yourcomponentid");
x.getPropertyMap().get("parametername");
but you want to propagate a data source from the outer control to the inner control...
You need to plan carefully. If you hand over the data source, then your custom control is dependent on a fixed set of fields in the data source (that would be a parameter of type com.ibm.xsp.model.DocumentDataSource), which would violate encapsulation principles. So I would recommend you actually hand over data bindings - the advantage: you are very flexible about what to bind to (not only data sources, but also beans and scope variables work then). The trick is to provide the binding name as you would statically type it in (e.g. "document1.subject" or "requestScope.bla"). In your control you then do
${"#{compositeData.field1}"}
${"#{compositeData.field2}"}
You need one for each field.
You cannot send a document data source to a custom control using composite data parameters.
You can try to use this snippet instead:
http://openntf.org/XSnippets.nsf/snippet.xsp?id=access-datasources-of-custom-controls
Define the data source in the XPage/CC where you place those CCs. Define a parameter "dataSourceName" for both CCs. Inside each of them, use the EL expression "requestScope[compositeData.dataSourceName].fieldName" everywhere you want to bind to the data source.
What is the difference between the System.ComponentModel.BindingList methods Add(object) and AddNew()? The MSDN documentation says this:
Add: Adds an object to the end of the Collection<T>.
AddNew: Adds a new item to the collection.
It seems like both methods add an item to the collection, but Add(object) does it in one shot whereas AddNew() is slightly more complicated. My tests with Add(object) seem to be working, but I want to know if I am using the correct method.
So what is the difference between these methods?
AddNew() creates the object for you (that's why it doesn't have a parameter).
It's designed to be used by grids, which don't know how to create a new object to pass to Add().
AddNew() is very handy (it's the well-known Factory design pattern) when you implement a class derived from BindingList<T>.
It allows your code to initialize new items with values that depend on the list itself - e.g. a foreign key to the parent object if the binding list contains a list of children.
I want to serialize an Entity Framework Self-Tracking Entities full object graph (parent + children in one-to-many relationships) into JSON.
For serializing I use ServiceStack.JsonSerializer.
This is what my database looks like (for simplicity, I dropped all irrelevant fields):
I fetch a full profile graph in this way:
public Profile GetUserProfile(Guid userId)
{
    using (var db = new AcmeEntities())
    {
        return db.Profiles.Include("ProfileImages").Single(p => p.UserId == userId);
    }
}
The problem is that attempting to serialize it:
Profile profile = GetUserProfile(userId);
ServiceStack.JsonSerializer.SerializeToString(profile);
produces a StackOverflowException.
I believe that this is because EF produces a cyclic object graph that trips the serializer up. That is, I can technically call: profile.ProfileImages[0].Profile.ProfileImages[0].Profile ... and so on.
How can I "flatten" my EF object graph or otherwise prevent ServiceStack.JsonSerializer from running into a stack overflow?
Note: I don't want to project my object into an anonymous type (like these suggestions) because that would introduce a very long and hard-to-maintain fragment of code.
You have conflicting concerns: the EF model is optimized for storing your data model in an RDBMS, not for serialization - which is the role separate DTOs would play. Otherwise your clients will be bound to your database, where every change to your data model has the potential to break your existing service clients.
With that said, the right thing to do would be to maintain separate DTOs that you map to, which define the desired shape (aka wire format) that you want the models to have for the outside world.
ServiceStack.Common includes built-in mapping functions (i.e. TranslateTo/PopulateFrom) that simplify mapping entities to DTOs and vice versa. Here's an example showing this:
https://groups.google.com/d/msg/servicestack/BF-egdVm3M8/0DXLIeDoVJEJ
The alternative is to decorate the fields you want to serialize on your data model with [DataContract] / [DataMember] attributes. Any properties not attributed with [DataMember] won't be serialized - so you would use this to hide the cyclical references which are causing the StackOverflowException.
For the sake of my fellow StackOverflowers who land on this question, I'll explain what I eventually did:
In the case I described, you have to use the standard .NET serializer (rather than ServiceStack's): System.Web.Script.Serialization.JavaScriptSerializer. The reason is that you can decorate navigation properties you don't want the serializer to handle with a [ScriptIgnore] attribute.
By the way, you can still use ServiceStack.JsonSerializer for deserializing - it's faster than .NET's and you don't have the StackOverflowException issues I asked this question about.
The other problem is how to get the Self-Tracking Entities to decorate relevant navigation properties with [ScriptIgnore].
Explanation: Without [ScriptIgnore], serializing (using the .NET JavaScript serializer) will also raise an exception, about circular references (similar to the issue that raises the StackOverflowException in ServiceStack). We need to eliminate the circularity, and this is done using [ScriptIgnore].
So I edited the .TT file that came with the ADO.NET Self-Tracking Entity Generator template and set it to emit [ScriptIgnore] in the relevant places (if someone wants the code diff, write me a comment). Some say it's bad practice to edit these "external", not-meant-to-be-edited files, but heck - it solves the problem, and it's the only way that doesn't force me to re-architect my whole application (use POCOs instead of STEs, use DTOs for everything, etc.).
@mythz: I don't entirely agree with your argument about using DTOs - see my comments on your answer. I really appreciate your enormous effort building ServiceStack (all of the modules!) and making it free to use and open source. I just encourage you to either respect the [ScriptIgnore] attribute in your text serializers or come up with an attribute of your own. Otherwise, even if one does use DTOs, they can't add navigation properties from a child object back to a parent one, because they'll get a StackOverflowException.
I do mark your answer as "accepted" because, after all, it helped me find my way with this issue.
Be sure to detach the entity from the ObjectContext before serializing it.
I also used the Newtonsoft JSON serializer (Json.NET):
JsonConvert.SerializeObject(EntityObject, Formatting.Indented, new JsonSerializerSettings { PreserveReferencesHandling = PreserveReferencesHandling.Objects });
I created a plain Groovy class (i.e. a Person class) with some properties. Now I want to get those declared attributes (the ones I've defined in my class) in their declaration order, but I don't know how to do it.
I've tried Person.metaClass.getProperties(), but it retrieves not only the declared properties but also the built-in Groovy ones.
Could you please help me with this: getting just the declared properties, in the order they were declared.
Thank you so much!
I can't see a use case, but note that the compiler could reorder field declarations while creating the bytecode. I'm pretty sure ordering is not a guaranteed property of fields, though declaration order should mostly be preserved for classes that are not modified/enhanced.
As per the JVM spec, compiler-generated fields should be marked SYNTHETIC (like generated methods) in the bytecode, so you can filter with:
Person.getDeclaredFields().grep { !it.synthetic }
and then filter out the Groovy infrastructure fields like ClassInfo, metaClass and others beginning with __timestamp.
But I'm not a specialist; there may be another way I haven't thought of.
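As a concrete illustration of that filtering (written in Java against the Person class from the question; the exact set of Groovy-internal field names varies with the Groovy version, so treat the name checks as an example):
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

public class DeclaredFieldLister {

    // Keeps only the fields backing user-declared properties: drops synthetic
    // members plus Groovy bookkeeping fields (metaClass, $..., __timestamp...).
    public static List<String> declaredPropertyFields(Class<?> type) {
        List<String> names = new ArrayList<String>();
        for (Field f : type.getDeclaredFields()) {
            String name = f.getName();
            if (f.isSynthetic()) continue;
            if (name.equals("metaClass") || name.startsWith("$") || name.startsWith("__timestamp")) continue;
            names.add(name);
        }
        return names;
    }
}
Note that, as said above, this only tells you which fields are declared; it still does not guarantee they come back in declaration order.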
There was a question about this on the mailing list back in February of this year.
The answer is no: there is no way to get the properties in the order they are declared in the class without doing some extra work.
You could parse the source file for the class, and generate an ordered list of property names from that
You could write a custom annotation and annotate the fields with it, e.g. @Order(1) String prop (see the sketch after this list)
You could make all of the classes where this matters implement an interface which forces them to have a method that returns the names of the properties in order.
Other than that, you probably want to have a re-think :-(
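To make the annotation option concrete, here is a small Java sketch (the @Order annotation is hypothetical, not a Groovy or Java built-in; since it is retained at runtime, the same annotation can be applied to Groovy properties just as well):
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class OrderedPropertiesDemo {

    // Hypothetical annotation: the declaration order is stated explicitly,
    // so it no longer matters how the compiler orders the fields in bytecode.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface Order {
        int value();
    }

    static class Person {
        @Order(1) String firstName;
        @Order(2) String lastName;
        @Order(3) int age;
    }

    public static void main(String[] args) {
        List<Field> fields = new ArrayList<Field>();
        for (Field f : Person.class.getDeclaredFields()) {
            if (f.isAnnotationPresent(Order.class)) {   // also skips synthetic fields
                fields.add(f);
            }
        }
        Collections.sort(fields, new Comparator<Field>() {
            public int compare(Field a, Field b) {
                return Integer.compare(a.getAnnotation(Order.class).value(),
                                       b.getAnnotation(Order.class).value());
            }
        });
        for (Field f : fields) {
            System.out.println(f.getName());   // firstName, lastName, age
        }
    }
}
Stating the order explicitly in the annotation side-steps the bytecode-ordering problem entirely, at the cost of keeping the numbers up to date by hand.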