GXT Grid ValueProvider / PropertyAccess for a Map<K,V> Datastore?

Rather than using bean model objects, my data model is built on key-value pairs in a HashMap container.
Does anyone have an example of GXT's Grid ValueProvider and PropertyAccess that will work with an underlying Map?

It doesn't have one built in, but it is easy to build your own. Check out this blog post for a similar way of thinking, especially the ValueProvider section: http://www.sencha.com/blog/building-gxt-charts
The purpose of a ValueProvider is to be a simple reflection-like mechanism to read and write values in some object. The purpose of PropertyAccess<T> then is to autogenerate some of these value/modelkey/label provider instances based on getters and setters as found on Java beans, a very common use case. It doesn't have much more complexity than that; it is just a way to ask the compiler to write some very easy boilerplate code for you.
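For contrast with the Map-based approach below, here is a minimal sketch of the bean-based route that PropertyAccess automates (the Person bean and its getters are illustrative assumptions):
// Hypothetical Person bean assumed to have getId() and getName()/setName();
// GWT.create() generates the provider implementations at compile time.
public interface PersonProperties extends PropertyAccess<Person> {
    ModelKeyProvider<Person> id();          // backed by Person.getId()
    ValueProvider<Person, String> name();   // backed by getName()/setName()
}

// PersonProperties props = GWT.create(PersonProperties.class);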
As that blog post shows, you can very easily build a ValueProvider just by implementing the interface. Here's a quick example of how you could make one that reads a Map<String, Object>. When you create each instance, you tell it which key you are working off of, and the type of data it should find when it reads out that value:
public class MapValueProvider<T> implements
        ValueProvider<Map<String, Object>, T> {

    private final String key;

    public MapValueProvider(String key) {
        this.key = key;
    }

    @SuppressWarnings("unchecked")
    public T getValue(Map<String, Object> object) {
        return (T) object.get(key);
    }

    public void setValue(Map<String, Object> object, T value) {
        object.put(key, value);
    }

    public String getPath() {
        return key;
    }
}
You then build one of these for each key you want to read out, and can pass it along to ColumnConfig instances or whatever else might be expecting them.
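For instance, wiring one into a grid column could look like this (a sketch; the "name" key, width, and heading are assumptions for the example):
// Illustrative only: a column that reads the "name" entry of each row's Map.
ColumnConfig<Map<String, Object>, String> nameColumn =
        new ColumnConfig<Map<String, Object>, String>(
                new MapValueProvider<String>("name"), 150, "Name");

List<ColumnConfig<Map<String, Object>, ?>> columns =
        new ArrayList<ColumnConfig<Map<String, Object>, ?>>();
columns.add(nameColumn);
ColumnModel<Map<String, Object>> columnModel =
        new ColumnModel<Map<String, Object>>(columns);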
The main point though is that ValueProvider is just an interface, and can be implemented any way you like.

Related

Arguments against a generic JSF object converter with a static WeakHashMap

I want to avoid boilerplate code for creating a list of SelectItems to map my entities/DTOs between view and model, so I used this snippet of a generic object converter:
@FacesConverter(value = "objectConverter")
public class ObjectConverter implements Converter {

    private static Map<Object, String> entities = new WeakHashMap<Object, String>();

    @Override
    public String getAsString(FacesContext context, UIComponent component, Object entity) {
        synchronized (entities) {
            if (!entities.containsKey(entity)) {
                String uuid = UUID.randomUUID().toString();
                entities.put(entity, uuid);
                return uuid;
            } else {
                return entities.get(entity);
            }
        }
    }

    @Override
    public Object getAsObject(FacesContext context, UIComponent component, String uuid) {
        for (Entry<Object, String> entry : entities.entrySet()) {
            if (entry.getValue().equals(uuid)) {
                return entry.getKey();
            }
        }
        return null;
    }
}
There are already many answers to similar questions, but I want a vanilla solution (without *faces). The following points still leave me uncertain about the quality of my snippet:
If it was that easy, why isn't there a generic object converter built into JSF?
Why are so many people still using SelectItems? Isn't there more flexibility in the generic approach? E.g. #{dto.label} can be quickly changed into #{dto.otherLabel}.
Given that the scope is just to map between view and model, is there any major downside to the generic approach?
This approach is hacky and memory inefficient.
It's "okay" in a small application, but definitely not in a large application with tens or hundreds of thousands of potential entities around which could be referenced in a f:selectItems. Moreover, such a large application has generally a second level entity cache. The WeakHashMap becomes then useless and is only effective when an entity is physically removed from the underlying datastore (and thus also from second level entity cache).
It has certainly a "fun" factor, but I'd really not recommend using it in "heavy production".
If you don't want to use an existing solution from an utility library like OmniFaces SelectItemsConverter as you already found, which is basically completely stateless and doesn't use any DAO/Service call, then your best bet is to abstract all your entities with a common base interface/class and hook the converter on that instead. This only still requires a DAO/Service call. This has been fleshed out in detail in this Q&A: Implement converters for entities with Java Generics.
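A minimal sketch of that base-class approach (BaseEntity, its getId(), and the entityService lookup are illustrative assumptions, not an existing API):
@FacesConverter("baseEntityConverter")
public class BaseEntityConverter implements Converter {

    // How this gets injected depends on your setup; plain @Inject/@EJB in a
    // converter only works as of JSF 2.3, or with a workaround before that.
    private EntityService entityService;

    @Override
    public String getAsString(FacesContext context, UIComponent component, Object value) {
        // Write the stable database identifier instead of caching a random UUID.
        return (value instanceof BaseEntity)
                ? String.valueOf(((BaseEntity) value).getId())
                : null;
    }

    @Override
    public Object getAsObject(FacesContext context, UIComponent component, String value) {
        // One DAO/service lookup per conversion; no static state to leak.
        return (value == null || value.isEmpty())
                ? null
                : entityService.find(Long.valueOf(value));
    }
}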

Managed Java bean gets re-initialized at EVERY complete refresh or page reload

In my XPages application, I use a managed Java bean (scope = application) for translating strings:
public class Translator extends HashMap<String,String> implements Serializable {
    private static final long serialVersionUID = 1L;

    public String language = "en";

    public Translator() { super(); this.init(null); }
    public Translator(String language) { super(); this.init(language); }

    public boolean init(String language) {
        try {
            FacesContext context = FacesContext.getCurrentInstance();
            if (language != null) this.language = language;
            Properties data = new Properties();
            // load translation strings from properties file in WEB-INF
            data.load(new InputStreamReader(
                    context.getExternalContext().getResourceAsStream(
                            "WEB-INF/translations_" + this.language + ".properties"), "UTF-8"));
            super.putAll(new HashMap<String,String>((Map) data));
            // serializing the bean to a file on disk > this part of the code is
            // just here to easily test how often the bean is initialized
            ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(
                    "C:\\dump\\Translator_" + this.language + "_" + new Date().getTime() + ".ser"));
            out.writeObject(this);
            out.close();
            return true;
        }
        catch (Exception e) { return false; }
    }

    public String getLanguage() { return this.language; }
    public boolean setLanguage(String language) { return this.init(language); }

    // special get function which is more tolerant than HashMap.get
    public String get(Object key) {
        String s = (String) key;
        if (super.containsKey(s)) return super.get(s);
        if (super.containsKey(s.toLowerCase())) return super.get(s.toLowerCase());
        String s1 = s.substring(0, 1);
        if (s1.toLowerCase().equals(s1)) {
            s1 = super.get(s1.toUpperCase() + s.substring(1));
            if (s1 != null) return s1.substring(0, 1).toLowerCase() + s1.substring(1);
        } else {
            s1 = super.get(s1.toLowerCase() + s.substring(1));
            if (s1 != null) return s1.substring(0, 1).toUpperCase() + s1.substring(1);
        }
        return s;
    }
}
I use "extends HashMap" because in this way i only have to write "${myTranslatorBean['someText']}" (expression language) to get the translations into my XPage. The problem is that the bean is re-initialized at EVERY complete refresh or page reload. I tested this by serializing the bean to a unique file on the disk at the end of every initialisiation. In my other managed Java beans (which do not use "extends HashMap") this problem does not occur. Can anybody tell me what's wrong with my code? Thanks in advance.
EDIT: The entry for the managed Java bean in the faces-config.xml looks like this:
<managed-bean>
  <managed-bean-name>myTranslatorBean</managed-bean-name>
  <managed-bean-class>com.ic.Translator</managed-bean-class>
  <managed-bean-scope>application</managed-bean-scope>
</managed-bean>
I concur with David about the faces-config entry - if you could post it, that could shine some light on it.
In its absence, I'll take a stab at it: are you using a managed property to set the "language" value for the app? If you are, I suspect there's a high chance that the runtime calls the setLanguage(...) method excessively. Since you call this.init(...) in that method, that would re-run that method repeatedly as well.
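If such a managed-property entry exists, it would look something like this (a hypothetical sketch; the property and its EL value are illustrative, not from the original config):
<managed-bean>
  <managed-bean-name>myTranslatorBean</managed-bean-name>
  <managed-bean-class>com.ic.Translator</managed-bean-class>
  <managed-bean-scope>application</managed-bean-scope>
  <managed-property>
    <property-name>language</property-name>
    <value>#{sessionScope.language}</value>
  </managed-property>
</managed-bean>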
As a point of code style you are free to ignore, over time I (in part due to reading others' opinions) have moved away from extending collection classes directly for this kind of use. What I do instead in this situation is create an object that implements the DataObject interface and then uses a HashMap internally to store cached values. That's part of a larger industry preference called "Composition over inheritance": http://en.wikipedia.org/wiki/Composition_over_inheritance
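A minimal sketch of that composition-based shape, assuming the four methods of XPages' com.ibm.xsp.model.DataObject interface (the properties-file loading is elided):
public class Translator implements DataObject, Serializable {
    private static final long serialVersionUID = 1L;

    // cached values live in an internal map instead of the bean itself
    private final Map<String, String> translations = new HashMap<String, String>();

    public Class<?> getType(Object key) { return String.class; }

    public Object getValue(Object key) {
        String s = (String) key;
        String value = translations.get(s);
        return value != null ? value : s; // fall back to the key, like the original get()
    }

    public boolean isReadOnly(Object key) { return true; }

    public void setValue(Object key, Object value) {
        throw new UnsupportedOperationException("translations are read-only");
    }
}
EL such as ${myTranslatorBean['someText']} resolves through getValue() just as it would through the HashMap superclass.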
Just to make sure nothing's weird - I suggest you post your faces-config. I use beans all the time but haven't extended HashMap in any of them. You can add a map and still use EL.
Assuming you have a map getter like "getMyMap()" then EL might be:
AppBean.myMap["myKey"]
Truth be told I don't typically use that syntax but I BELIEVE that works. I gave it a quick test and it didn't work as I expected, so I'm missing something. I tried something like:
imageData.size["Large"].url
I THINK it didn't work for me because my bean doesn't IMPLEMENT Map. I notice you're EXTENDING HashMap. You might want to try implementing it. I found an interesting post here: http://blog.defrog.nl/2012/04/settings-bean-parameterized-method-call.html
Usually I do still use SSJS to pass parameters in. It's really not the end of the world to use SSJS for that. And I use EL for everything else.
This is an example of passing an object to a custom control and returning a TreeSet with EL:
value="#{compositeData.imageSet.allImages}">
Regarding the bigger issue of the bean re-initializing: that is odd. I don't do a ton with applicationScope, but I suggest you play with the constructor. I'm not sure what you get by calling super() there. I would suggest using a boolean to only run the init code if it hasn't already run; obviously you then set the boolean in the init code. See what that does.
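A minimal sketch of that boolean guard (illustrative; the rest of the class stays as in the original):
public class Translator extends HashMap<String, String> implements Serializable {
    private static final long serialVersionUID = 1L;

    public String language = "en";
    private boolean initialized = false;

    public boolean init(String language) {
        // Guard: skip re-initialization unless the language actually changes.
        if (initialized && (language == null || language.equals(this.language))) {
            return true;
        }
        if (language != null) this.language = language;
        // ... load the properties file exactly as in the original init() ...
        initialized = true;
        return true;
    }
}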

Abstract Azure TableServiceEntity

I want to abstract the implementation of my Azure TableServiceEntities so that I have one entity that will take an object of any type and use the properties of that object as the properties of the TableServiceEntity.
So my base object would be like:
public class SomeObject
{
    [EntityAttribute(PartitionKey=true)]
    public string OneProperty { get; set; }

    [EntityAttribute(RowKey=true)]
    public string TwoProperty { get; set; }

    public string SomeOtherProperty { get; set; }
}
public class SomeEntity<T> : TableServiceEntity
{
    public SomeEntity(T obj)
    {
        var properties = typeof(T).GetProperties();
        foreach (var propertyInfo in properties)
        {
            object[] attributes = propertyInfo.GetCustomAttributes(typeof(EntityAttribute), false);
            foreach (var attribute in attributes)
            {
                EntityAttribute ea = (EntityAttribute)attribute;
                // use the flagged property's value (not its name) as the key
                if (ea.PartitionKey)
                    PartitionKey = (string)propertyInfo.GetValue(obj, null);
                if (ea.RowKey)
                    RowKey = (string)propertyInfo.GetValue(obj, null);
            }
        }
    }
}
Then I could access the entity in the context like this:
var objects =
    (from entity in context.CreateQuery<SomeEntity>("SomeEntities") select entity);
var entityList = objects.ToList();
foreach (var obj in entityList)
{
    var someObject = new SomeObject();
    someObject.OneProperty = obj.OneProperty;
    someObject.TwoProperty = obj.TwoProperty;
}
This doesn't seem like it should be that difficult, but I have a feeling I have been looking at too many possible solutions and have just managed to confuse myself.
Thanks for any pointers.
Take a look at the Lokad Cloud O/C mapper. I think the source code imitates what you're attempting, but it also has insightful rationale about its different approach to Azure table storage.
http://lokadcloud.codeplex.com/
I have written an alternate Azure table storage client in F#, Lucifure Stash, which supports many abstractions including persisting a dictionary object. Lucifure Stash also supports large data columns > 64K, arrays & lists, enumerations, out of the box serialization, user defined morphing, public and private properties and fields and more.
It is available free for personal use at http://www.lucifure.com or via NuGet.com.
What you are attempting to achieve, a single generic class for any entity, can be implemented in Lucifure Stash by using the [StashPool] attribute on a dictionary type.
I have written a blog post about the table storage context and working with entities by specifying the entity type. Maybe it can help you: http://wblo.gs/a2G
It seems you still want to use concrete types. Thus, the SomeEntity is a bit redundant. Actually, TableServiceEntity is already an abstract class. You can derive SomeObject from TableServiceEntity. From my experience, this won't introduce any issues to your scenario.
In addition, even with your custom SomeEntity, your last piece of code fails to remove the dependency on the concrete SomeObject class anyway.
Best Regards,
Ming Xu.

Automapper and immutability

Is it possible to use AutoMapper with Immutable types?
For example, my domain type is immutable and I want to map my view type to it.
I believe it is not possible, but I just want this confirmed.
Also as it is best practice to have your domain types immutable, what is the best practice when mapping your view types to domain types?
I typically do the mapping from view types to domain types by hand, as I'll typically be working through a more complex interface, using methods and so on. If you use AutoMapper to go from view to domain, you're now locked in to an anemic domain model, whether you've intentionally decided to or not.
Suppose that you really did want an immutable property on your Domain type, say Id. Your domain type might look something like this:
public class DomainType
{
    public DomainType(int id)
    {
        Id = id;
    }

    public int Id { get; }

    // other mutable properties
    // ...
}
Then you can use ConstructUsing with a public constructor of your choice, such as:
CreateMap<ViewType, DomainType>()
    .ConstructUsing(vt => new DomainType(vt.Id));
Then map all the mutable properties in the normal way.
AutoMapper relies on property setters to do its work, so if you have read-only properties, AutoMapper won't be of much use.
You could override the mapping behaviour and, for example, configure it to invoke a specific constructor, but that basically defeats the purpose of AutoMapper because then you are doing the mapping manually, and you've only succeeded in adding a clumsy extra step in the process.
It doesn't make a lot of sense to me that your domain model is immutable. How do you update it? Is the entire application read-only? And if so, why would you ever need to map to your domain model as opposed to from? An immutable domain model sounds... pretty useless.
P.S. I'm assuming that you mean this AutoMapper and not the auto-mapping feature in Fluent NHibernate or even some other totally different thing. If that's wrong then you should be more specific and add tags for your platform/language.
We have immutable objects using the builder pattern. Mapping them takes a little more boilerplate code, but it is possible:
// Model
public class CarModel : IVehicleModel
{
    private CarModel(Builder builder)
    {
        LicensePlate = builder.LicensePlate;
    }

    public string LicensePlate { get; }

    public class Builder
    {
        public string LicensePlate { get; set; }

        public CarModel Build() { return new CarModel(this); }
    }
}

// ViewModel
public class CarViewModel : IVehicleViewModel
{
    private CarViewModel(Builder builder)
    {
        LicensePlate = builder.LicensePlate;
    }

    public ILicensePlate LicensePlate { get; }

    public class Builder
    {
        public ILicensePlate LicensePlate { get; set; }

        public CarViewModel Build() { return new CarViewModel(this); }
    }
}
Our AutoMapper Profiles have three mappings registered:
CreateMap<IVehicleModel, CarViewModel.Builder>();
CreateMap<CarViewModel.Builder, IVehicleViewModel>().ConvertUsing(x => x.Build());
CreateMap<IVehicleModel, IVehicleViewModel>().ConvertUsing<VehicleModelTypeConverter>();
The VehicleModelTypeConverter then defines a two-stage conversion:
public IVehicleViewModel Convert(IVehicleModel source, IVehicleViewModel destination,
    ResolutionContext context)
{
    var builder = context.Mapper.Map<CarViewModel.Builder>(source);
    var model = context.Mapper.Map<IVehicleViewModel>(builder);
    return model;
}
(An implementation of ITypeListConverter<string, ILicensePlate> carries out that mapping).
Usage in our system is as normal:
var result = _mapper.Map<IVehicleViewModel>(_carModel);
This is using AutoMapper v7.0.1
You can use AutoMapper with classes or records whose properties have init-only setters, which are new in C# 9.0.
AutoMapper can set the properties at object creation because the properties have init-only setters, but after AutoMapper has mapped them, they are locked in (immutable).
https://www.tsunamisolutions.com/blog/c-90-records-and-dtos-a-match-made-in-redmond

Are we all looking for the same IRepository?

I've been trying to come up with a way to write generic repositories that work against various data stores:
public interface IRepository
{
    IQueryable<T> GetAll<T>();
    void Save<T>(T item);
    void Delete<T>(T item);
}

public class MemoryRepository : IRepository {...}
public class SqlRepository : IRepository {...}
I'd like to work against the same POCO domain classes in each. I'm also considering a similar approach, where each domain class has its own repository:
public interface IRepository<T>
{
    IQueryable<T> GetAll();
    void Save(T item);
    void Delete(T item);
}

public class MemoryCustomerRepository : IRepository<Customer> {...}
public class SqlCustomerRepository : IRepository<Customer> {...}
My questions: 1) Is the first approach even feasible? 2) Is there any advantage to the second approach?
The first approach is feasible, I have done something similar in the past when I wrote my own mapping framework that targeted RDBMS and XmlWriter/XmlReader. You can use this sort of approach to ease unit testing, though I think now we have superior OSS tools for doing just that.
The second approach is what I currently use now with IBATIS.NET mappers. Every mapper has an interface and every mapper [could] provide your basic CRUD operations. The advantage is each mapper for a domain class also has specific functions (such as SelectByLastName or DeleteFromParent) that are expressed by an interface and defined in the concrete mapper. Because of this there's no need for me to implement separate repositories as you're suggesting - our concrete mappers target the database. To perform unit tests I use StructureMap and Moq to create in-memory repositories that operate as your Memory*Repository does. Its less classes to implement and manage and less work overall for a very testable approach. For data shared across unit tests I use a builder pattern for each domain class which has WithXXX methods and AsSomeProfile methods (the AsSomeProfile just returns a builder instance with preconfigured test data).
Here's an example of what I usually end up with in my unit tests:
// Moq mocking the concrete PersonMapper through the IPersonMapper interface
var personMock = new Mock<IPersonMapper>(MockBehavior.Strict);
personMock.Expect(pm => pm.Select(It.IsAny<int>())).Returns(
    new PersonBuilder().AsMike().Build()
);
// StructureMap's ObjectFactory
ObjectFactory.Inject(personMock.Object);
// now anywhere in my actual code where an IPersonMapper instance is requested from
// ObjectFactory, Moq will satisfy the requirement and return a Person instance
// set with the PersonBuilder's Mike profile unit test data
Actually there is a general consensus now that Domain repositories should not be generic. Your repository should express what you can do when persisting or retrieving your entities.
Some repositories are readonly, some are insert only (no update, no delete), some have only specific lookups...
With a GetAll returning IQueryable, your query logic will leak into your code, possibly up to the application layer.
But it's still interesting to use the kind of interface you provide to encapsulate LINQ Table<T> objects so that you can replace them with an in-memory implementation for test purposes.
So I suggest calling it ITable<T>, giving it the same interface as the LINQ Table<T> object, and using it inside your specific domain repositories (not instead of them).
You can then run your specific repositories in memory by using an in-memory ITable<T> implementation.
The simplest way to implement ITable<T> in memory is to use a List<T> and get an IQueryable<T> interface using the .AsQueryable() extension method.
public class InMemoryTable<T> : ITable<T>
{
    private List<T> list;
    private IQueryable<T> queryable;

    public InMemoryTable(List<T> list)
    {
        this.list = list;
        this.queryable = list.AsQueryable();
    }

    public void Add(T entity) { list.Add(entity); }
    public void Remove(T entity) { list.Remove(entity); }

    public IEnumerator<T> GetEnumerator() { return list.GetEnumerator(); }

    public Type ElementType { get { return queryable.ElementType; } }
    public IQueryProvider Provider { get { return queryable.Provider; } }
    ...
}
You can work in isolation of the database for testing, but with true specific repositories that give more domain insight.
This is a bit late... but take a look at the IRepository implementation at CommonLibrary.NET on codeplex. It's got a pretty good feature set.
Regarding your problem, I see a lot of people using methods like GetAllProducts() or GetAllEmployees() in their repository implementations. This is redundant and doesn't allow your repository to be generic. All you need is GetAll() or All(). The solution provided above does solve the naming problem, though.
This is taken from CommonLibrary.NET documentation online:
0.9.4 Beta 2 has a powerful Repository implementation.
* Supports all CRUD methods ( Create, Retrieve, Update, Delete )
* Supports aggregate methods Min, Max, Sum, Avg, Count
* Supports Find methods using ICriteria<T>
* Supports Distinct, and GroupBy
* Supports interface IRepository<T> so you can use an In-Memory table for unit-testing
* Supports versioning of your entities
* Supports paging, eg. Get(page, pageSize)
* Supports audit fields ( CreateUser, CreatedDate, UpdateDate etc )
* Supports the use of Mapper<T> so you can map any table record to some entity
* Supports creating entities only if they aren't already there, by checking for field values.
