Abstract Azure TableServiceEntity

I want to abstract the implementation of my Azure TableServiceEntities so that I have a single generic entity that takes an object of any type and uses that object's properties as the properties of the TableServiceEntity.
So my base object would be something like this:
public class SomeObject
{
    [EntityAttribute(PartitionKey = true)]
    public string OneProperty { get; set; }

    [EntityAttribute(RowKey = true)]
    public string TwoProperty { get; set; }

    public string SomeOtherProperty { get; set; }
}
public class SomeEntity<T> : TableServiceEntity
{
    public SomeEntity(T obj)
    {
        // Inspect the wrapped object's properties for key attributes.
        var properties = typeof(T).GetProperties();
        foreach (var propertyInfo in properties)
        {
            object[] attributes = propertyInfo.GetCustomAttributes(typeof(EntityAttribute), false);
            foreach (var attribute in attributes)
            {
                EntityAttribute ea = (EntityAttribute)attribute;
                if (ea.PartitionKey)
                    PartitionKey = (string)propertyInfo.GetValue(obj, null); // the property's value, not its name
                if (ea.RowKey)
                    RowKey = (string)propertyInfo.GetValue(obj, null);
            }
        }
    }
}
Then I could access the entity in the context like this:
var objects =
    (from entity in context.CreateQuery<SomeEntity<SomeObject>>("SomeEntities") select entity);
var entityList = objects.ToList();
foreach (var obj in entityList)
{
    var someObject = new SomeObject();
    someObject.OneProperty = obj.OneProperty;
    someObject.TwoProperty = obj.TwoProperty;
}
This doesn't seem like it should be that difficult, but I have a feeling I have been looking at too many possible solutions and have just managed to confuse myself.
Thanks for any pointers.

Take a look at the Lokad Cloud O/C mapper. I think the source code approximates what you're attempting, and it has insightful rationale about its different approach to Azure table storage.
http://lokadcloud.codeplex.com/

I have written an alternate Azure table storage client in F#, Lucifure Stash, which supports many abstractions, including persisting a dictionary object. Lucifure Stash also supports large data columns (> 64 KB), arrays and lists, enumerations, out-of-the-box serialization, user-defined morphing, public and private properties and fields, and more.
It is available free for personal use at http://www.lucifure.com or via NuGet.
What you are attempting to achieve, a single generic class for any entity, can be implemented in Lucifure Stash by using the [StashPool] attribute on a dictionary type.

I have written a blog post about querying the table storage context by specifying the entity type. Maybe it can help you: http://wblo.gs/a2G

It seems you still want to use concrete types. Thus, the SomeEntity class is a bit redundant. Actually, TableServiceEntity is already an abstract class; you can derive SomeObject from TableServiceEntity directly. From my experience, this won't introduce any issues to your scenario.
In addition, even with your custom SomeEntity, your last piece of code fails to remove the dependency on the concrete SomeObject class anyway.
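A minimal sketch of that approach (assuming the Microsoft.WindowsAzure.StorageClient library, where TableServiceEntity already provides PartitionKey, RowKey and Timestamp):

using Microsoft.WindowsAzure.StorageClient;

// SomeObject is itself the table entity; no generic wrapper is needed.
public class SomeObject : TableServiceEntity
{
    public SomeObject() { } // required for deserialization

    public SomeObject(string oneProperty, string twoProperty)
        : base(oneProperty, twoProperty) // partition key, row key
    {
        OneProperty = oneProperty;
        TwoProperty = twoProperty;
    }

    public string OneProperty { get; set; }
    public string TwoProperty { get; set; }
    public string SomeOtherProperty { get; set; }
}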
Best Regards,
Ming Xu.

Related

Passing Interfaces into Dictionary keys

I have been following coreclr for a little while, and I am new to programming. My question is: why do they pass interfaces into Dictionaries, especially as the key?
//
// Allocate a new Dictionary containing a copy of the old values, plus the new value. We have to do this manually to
// minimize allocations of IEnumerators, etc.
//
Dictionary<IAsyncLocal, object> newValues = new Dictionary<IAsyncLocal, object>(current.m_localValues.Count + (hadPreviousValue ? 0 : 1));
My understanding is that an interface is to be implemented by a class. Once implemented, the class can call/use functions or store data in its properties/variables. I am missing some understanding of interfaces and their use cases, but I do not know what it is.
Why would you declare a variable as an interface type or pass an interface as a parameter? My understanding is that you would then have a variable that still can't hold values or change state through methods.
Let me explain.
An interface is a contract. It just declares methods without implementations. That interface may then be implemented by any number of classes.
public interface IEntity
{
    int Id { get; set; }
}

public class Student : IEntity
{
    public int Id { get; set; } // interface property
}

public class Teacher : IEntity
{
    public int Id { get; set; } // interface property
}

Dictionary<IEntity, object> obj = new Dictionary<IEntity, object>();
Student s = new Student();
Teacher t = new Teacher();
obj.Add(s, new object()); // the value can be any object
obj.Add(t, new object());
It is because of the interface that your dictionary can hold references to both types (Student and Teacher).
In .NET, every object provides a GetHashCode() method, which the dictionary uses to locate keys. (You can find more detail on this on MSDN.)
Also, Dictionary does not require keys to be primitive types. This is useful if you have more than one key field (like a composite key in a database), because it allows you to identify entries uniquely based on your custom implementation.
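For example, here is a sketch of a key type that customizes Equals and GetHashCode so the dictionary compares keys by value rather than by reference (the CompositeKey name and fields are illustrative, not from the original code):

public class CompositeKey : IEntity
{
    public int Id { get; set; }
    public string Region { get; set; }

    // The dictionary calls these to decide whether two keys address the same entry.
    public override bool Equals(object obj) =>
        obj is CompositeKey other && Id == other.Id && Region == other.Region;

    public override int GetHashCode() => (Id, Region).GetHashCode();
}

With that in place, two distinct CompositeKey instances holding the same Id and Region values address the same dictionary entry.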
Now, the second topic: generics.
public class PersonInfo<T> where T : IEntity
{
    public string Name { get; set; }
    public T Entity { get; set; }
}

PersonInfo<Student> student = new PersonInfo<Student>();
student.Entity = new Student();
student.Name = "";

PersonInfo<Teacher> teacher = new PersonInfo<Teacher>();
teacher.Entity = new Teacher();
teacher.Name = "";
When you have an interface variable, you never actually have "an interface"; you always have a reference to an object that implements that interface. And that object is the one responsible for key comparison in the dictionary.
The mechanics are no different from using a class as a key: a Dictionary can be iterated like a list of KeyValuePair entries, taking each key to do some operation. But using an interface as the key type means you can store various classes that share the same interface instead of just one type, which is more decoupled and flexible.

GXT Grid ValueProvider / PropertyAccess for a Map<K,V> Datastore?

Rather than using Bean model objects, my data model is built on Key-Value pairs in a HashMap container.
Does anyone have an example of the GXT's Grid ValueProvider and PropertyAccess that will work with a underlying Map?
It doesn't have one built in, but it is easy to build your own. Check out this blog post for a similar way of thinking, especially the ValueProvider section: http://www.sencha.com/blog/building-gxt-charts
The purpose of a ValueProvider is to be a simple reflection-like mechanism to read and write values in some object. The purpose of PropertyAccess<T>, then, is to autogenerate some of these value/modelkey/label provider instances based on getters and setters as found on Java Beans, a very common use case. It doesn't have much more complexity than that; it is just a way to ask the compiler to do some very easy boilerplate code for you.
As that blog post shows, you can very easily build a ValueProvider just by implementing the interface. Here's a quick example of how you could make one that reads a Map<String, Object>. When you create each instance, you tell it which key you are working off of, and the type of data it should find when it reads out that value:
public class MapValueProvider<T> implements
        ValueProvider<Map<String, Object>, T> {

    private final String key;

    public MapValueProvider(String key) {
        this.key = key;
    }

    public T getValue(Map<String, Object> object) {
        return (T) object.get(key);
    }

    public void setValue(Map<String, Object> object, T value) {
        object.put(key, value);
    }

    public String getPath() {
        return key;
    }
}
You then build one of these for each key you want to read out, and can pass it along to ColumnConfig instances or whatever else might be expecting them.
The main point though is that ValueProvider is just an interface, and can be implemented any way you like.

Is this Object Casting pattern acceptable in SharePoint?

I'm creating a SharePoint application, and am trying some new things to create what amounts to an API for Data Access to maintain consistency and conventions.
I haven't seen this before, and that makes me think it might be bad :)
I've given the Post class a constructor that takes only an SPListItem as a parameter. I then have an embedded static method that takes an SPListItemCollection and returns a generic List of Post.
I loop through the items in a (presumably more efficient) for statement, and this means that if I ever need to add or modify how the Post object is cast, I can do it in the class definition, in a single place.
class Post
{
    public int ID { get; set; }
    public string Title { get; set; }

    public Post(SPListItem item)
    {
        ID = item.ID;
        Title = (string)item["Title"];
    }

    public static List<Post> Posts(SPListItemCollection _items)
    {
        var returnlist = new List<Post>();
        for (int i = 0; i < _items.Count; i++)
        {
            returnlist.Add(new Post(_items[i]));
        }
        return returnlist;
    }
}
This enables me to do the following:
static public List<Post> GetPostsByCommunity(string communityName)
{
    var targetList = CoreLists.SystemAccount.Posts(); // CAML query omitted for brevity
    return Post.Posts(targetList.GetItems(query)); // call the static factory, which invokes the constructor
}
Is this a bad idea?
This approach might be suitable, but that FOR loop causes some concern. _items.Count will force the SPListItemCollection to retrieve ALL those items in the list from the database. With large lists, this could either a) cause a throttling exception, or b) use up a lot of resources. Why not use a FOREACH loop? With that, I think the SPListItems are retrieved and disposed one at a time.
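A sketch of that change to the Posts method from the question (same result, but enumerating the collection instead of indexing into it):

public static List<Post> Posts(SPListItemCollection _items)
{
    var returnlist = new List<Post>();
    // foreach enumerates the items one at a time instead of
    // forcing the collection to materialize via _items.Count.
    foreach (SPListItem item in _items)
    {
        returnlist.Add(new Post(item));
    }
    return returnlist;
}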
If I were writing this I would have a 'Posts' class as well as 'Post', and give it the constructor accepting the SPListItemCollection.
To be honest, though, the few times I've seen people try and wrap SharePoint SPListItems, it's always ended up seeming more effort than it's worth.
Also, if you're using SharePoint 2010, have you considered using SPMetal?

Aggregate root and instances creation of child entities

I have an aggregate that includes the entities A, AbstractElement, X, Y and Z. The root entity is A, which also has a list of AbstractElement. Entities X, Y and Z inherit from AbstractElement. I need the ability to add instances of X, Y and Z to an instance of A. One approach is to use one method for each type, i.e. addX, addY and addZ. These methods would take as arguments the values required to create instances of X, Y and Z. But each time I add a new type that inherits from AbstractElement, I need to modify the entity A, so I think it's not the best solution.
Another approach is to use a single abstract add method, addAbstractElement, for adding AbstractElement instances. But in this case, the method would take an instance of AbstractElement as its argument. Because this method would be called by entities located outside of the aggregate, following DDD rules/recommendations, are these external entities authorized to create instances of AbstractElement? I read in the Eric Evans book that external entities are not authorized to hold references to entities of an aggregate other than the root.
What is the best practice for this kind of problem?
Thanks
From Evans' book, page 139:
"if you needed to add elements inside a preexisting AGGREGATE, you might create a FACTORY METHOD on the root of the AGGREGATE"
Meaning, you should create a factory method on the root (A) which receives the AbstractElement's details. This method will create the appropriate AbstractElement (X/Y/Z) according to some decision parameter and add it to its internal collection of AbstractElements. Finally, the method returns the id of the new element.
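A sketch of such a factory method in C# (the ElementType discriminator, the string details parameter, and the Guid id are illustrative assumptions, not from the question):

using System;
using System.Collections.Generic;

public abstract class AbstractElement
{
    public Guid Id { get; } = Guid.NewGuid();
}

public class X : AbstractElement { public X(string details) { } }
public class Y : AbstractElement { public Y(string details) { } }
public class Z : AbstractElement { public Z(string details) { } }

public enum ElementType { TypeX, TypeY, TypeZ } // hypothetical discriminator

public class A
{
    private readonly List<AbstractElement> elements = new List<AbstractElement>();

    // Factory method on the aggregate root: external code never constructs
    // X, Y or Z directly, so the root keeps control over its invariants.
    public Guid AddElement(ElementType type, string details)
    {
        AbstractElement element;
        switch (type)
        {
            case ElementType.TypeX: element = new X(details); break;
            case ElementType.TypeY: element = new Y(details); break;
            case ElementType.TypeZ: element = new Z(details); break;
            default: throw new ArgumentOutOfRangeException(nameof(type));
        }
        elements.Add(element);
        return element.Id; // the id of the new element, as described above
    }
}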
Best Regards,
Itzik Saban
A few comments. As the previous answerer said, it's a good practice to use a factory method. If you can avoid it, never create objects out of the blue. Usually, it's a pretty big smell and a missed chance to make more sense out of your domain.
I wrote a small example to illustrate this. Video is in this case the aggregate root. Inside the boundaries of the aggregate are the video object and its associated comments. Comments can be anonymous or can have been written by a known user (to simplify the example, I represented the user by a username but obviously, in a real application, you would have something like a UserId).
Here is the code:
public class Video {
    private List<Comment> comments;

    void addComment(final Comment.Builder builder) {
        this.comments.add(builder.forVideo(this).build());
        // ...
    }
}

abstract public class Comment {
    private String username;
    private Video video;

    public static class Builder {
        private String username;
        private String message;
        private Video video;

        public Builder anonymous() {
            this.username = null;
            return this;
        }

        public Builder fromUser(final String username) {
            this.username = username;
            return this;
        }

        public Builder withMessage(final String message) {
            this.message = message;
            return this;
        }

        public Builder forVideo(final Video video) {
            this.video = video;
            return this;
        }

        public Comment build() {
            if (username == null) {
                return new AnonymousComment(message);
            } else {
                return new UserComment(username, message);
            }
        }
    }
}

public class AnonymousComment extends Comment {
    // ...
}

public class UserComment extends Comment {
    // ...
}
One more thing to ponder is that aggregate boundaries contain objects, not classes. As such, it is entirely possible for certain classes (mostly value objects, but sometimes entities too) to be represented in many aggregates.

Automapper and immutability

Is it possible to use AutoMapper with Immutable types?
For example my Domain type is immutable and I want to map my view type to this.
I believe it is not possible, but I just want this confirmed.
Also, since it is best practice to have your domain types immutable, what is the best practice when mapping your view types to domain types?
I typically do the mapping from view types to domain types by hand, as I'll typically be working through a more complex interface, using methods and so on. If you use AutoMapper to go from view to domain, you're now locked in to an anemic domain model, whether you've intentionally decided to or not.
Suppose that you really did want an immutable property on your Domain type, say Id. Your domain type might look something like this:
public class DomainType
{
public DomainType(int id)
{
Id = id;
}
public int Id { get; }
// other mutable properties
// ...
}
Then you can use ConstructUsing with a public constructor of your choice, such as:
CreateMap<ViewType, DomainType>()
.ConstructUsing(vt => new DomainType(vt.Id));
Then map all the mutable properties in the normal way.
AutoMapper relies on property setters to do its work, so if you have read-only properties, AutoMapper won't be of much use.
You could override the mapping behaviour and, for example, configure it to invoke a specific constructor, but that basically defeats the purpose of AutoMapper because then you are doing the mapping manually, and you've only succeeded in adding a clumsy extra step in the process.
It doesn't make a lot of sense to me that your domain model is immutable. How do you update it? Is the entire application read-only? And if so, why would you ever need to map to your domain model as opposed to from? An immutable domain model sounds... pretty useless.
P.S. I'm assuming that you mean this AutoMapper and not the auto-mapping feature in Fluent NHibernate or even some other totally different thing. If that's wrong then you should be more specific and add tags for your platform/language.
We have immutable objects using the builder pattern. Mapping them takes a little more boilerplate code, but it is possible:
// Model
public class CarModel : IVehicleModel
{
    private CarModel(Builder builder)
    {
        LicensePlate = builder.LicensePlate;
    }

    public string LicensePlate { get; }

    public class Builder
    {
        public string LicensePlate { get; set; }

        public CarModel Build() => new CarModel(this);
    }
}

// ViewModel
public class CarViewModel : IVehicleViewModel
{
    private CarViewModel(Builder builder)
    {
        LicensePlate = builder.LicensePlate;
    }

    public ILicensePlate LicensePlate { get; }

    public class Builder
    {
        public ILicensePlate LicensePlate { get; set; }

        public CarViewModel Build() => new CarViewModel(this);
    }
}
Our AutoMapper Profiles have three mappings registered:
CreateMap<IVehicleModel, CarViewModel.Builder>();
CreateMap<CarViewModel.Builder, IVehicleViewModel>().ConvertUsing(x => x.Build());
CreateMap<IVehicleModel, IVehicleViewModel>().ConvertUsing<VehicleModelTypeConverter>();
The VehicleModelTypeConverter then defines a two-stage conversion:
public IVehicleViewModel Convert(IVehicleModel source, IVehicleViewModel destination,
        ResolutionContext context)
{
    var builder = context.Mapper.Map<CarViewModel.Builder>(source);
    var model = context.Mapper.Map<IVehicleViewModel>(builder);
    return model;
}
(An implementation of ITypeConverter<string, ILicensePlate> carries out that mapping.)
Usage in our system is as normal:
var result = _mapper.Map<IVehicleViewModel>(_carModel);
This is using AutoMapper v7.0.1.
You can use AutoMapper with classes or records whose properties have init-only setters, which are new in C# 9.0.
AutoMapper can set the properties at object creation because they are init-only; after AutoMapper has mapped them, they are locked in (immutable).
https://www.tsunamisolutions.com/blog/c-90-records-and-dtos-a-match-made-in-redmond
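A minimal sketch of that, assuming C# 9 and a recent AutoMapper (the Person and PersonDto types are illustrative):

using AutoMapper;

// Immutable DTO: init-only properties can be set while the object is being
// created (which is when AutoMapper materializes it), but not afterwards.
public record PersonDto
{
    public string Name { get; init; }
    public int Age { get; init; }
}

public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

public static class Example
{
    public static void Run()
    {
        var config = new MapperConfiguration(cfg => cfg.CreateMap<Person, PersonDto>());
        IMapper mapper = config.CreateMapper();

        var dto = mapper.Map<PersonDto>(new Person { Name = "Ada", Age = 36 });
        // dto.Name = "Bob"; // would not compile: init-only property
    }
}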
