Azure Table Storage - Customising table entity saves for persisting collections

There are already a lot of blog posts out there about hooking into the WritingEntity event to customise the XML that gets submitted to the server, such as this.
Has anything changed with this process in the newer versions of the SDK? I ask because I have the following simple entity:
public class Label : TableServiceEntity
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public string Notes { get; set; }
    public string ContactInfo { get; set; }
    public List<string> Urls { get; set; }

    public Label()
    {
        Urls = new List<string>();
    }
}
I want to be able to persist that collection of URLs, and I'm already aware that the only array/collection type supported directly is the byte array. So I thought, fine, I'll just hook into that WritingEntity event, serialize the list to JSON/XML, and add it to the properties list as per that blog post, then deserialize it back into the list while handling the ReadingEntity event.
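For context, the hook itself is nothing fancy - roughly the sketch below, where the "UrlsSerialized" property name and the pipe delimiter are my own illustrative choices rather than anything required:

using System.Linq;
using System.Xml.Linq;

// serviceContext is the TableServiceContext used for the save
serviceContext.WritingEntity += (sender, args) =>
{
    var label = args.Entity as Label;
    if (label == null)
        return;

    // Namespaces used by the ADO.NET Data Services payload
    XNamespace d = "http://schemas.microsoft.com/ado/2007/08/dataservices";
    XNamespace m = "http://schemas.microsoft.com/ado/2007/08/dataservices/metadata";

    // Append the serialized list as an extra property element
    var properties = args.Data.Descendants(m + "properties").First();
    properties.Add(new XElement(d + "UrlsSerialized",
        string.Join("|", label.Urls.ToArray())));
};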
However, when I do that, the call to SaveChanges on the TableServiceContext throws a DataServiceRequestException containing an inner NotSupportedException with the message "Only collections of entities are supported". Is this because the String class doesn't inherit from TableServiceEntity? The thing that's confusing me is that when I check the XML it has written out, it has actually managed to write the custom XML, with the additionally added property containing the serialized list, despite the exception.
When I try to retrieve the label via CreateQuery, I get the same exception thrown.
Can anybody tell me what I'm doing wrong here, and what the best practice is for dealing with this situation? I've already come across Lokad Cloud for doing the persistence, but it doesn't seem ideal to me, as its querying options for getting data back out are too limited for what I want to do.
I did have a look at past questions but none seem to address this issue directly.
Any advice would be appreciated!
Based on the response:
I don't know if you got the impression that I'm serializing the entire entity manually? The partition key is just "LABELX", where X is the first letter of the Name property of the label, and the row key is just the string representation of the GUID (I know it's wasteful to store both of those, but I'm just trying to get up and running at the moment).
If you set a breakpoint on the first line of the WritingEntity event handler and inspect the XML in the e.Data property, there is nothing to represent the Urls collection in the XML. It doesn't matter whether the Urls list is empty, null, or has entries in it - it doesn't appear in the XML at all, so it doesn't matter what list I pass in. So I think that should answer all 4 questions.
Inside the writing entity event, there really isn't anything special: just code to serialize the list to XML, and then code to add a property to the XML, as per the blog post - it all runs without any exceptions.
OK, sorry, I had neglected to mention the fact that I'm only using the development storage at the moment. The problem seemed to be that I had created some Label entities that didn't have any URLs before I created ones that did, so the schema information in the TableContainer table didn't include the additional URL property. After I cleaned out the database and added a fully populated object before adding anything else, everything worked OK!

I've got some code working on this - and it definitely seems to work with the 1.4 SDK.
My code is more based on generic entities and the sources I used for inspiration were:
Jai's post on http://social.msdn.microsoft.com/Forums/en-US/windowsazure/thread/481afa1b-03a9-42d9-ae79-9d5dc33b9297/
with modifications from Yi-Lun Luo on http://social.msdn.microsoft.com/Forums/en-US/windowsazure/thread/f57cb566-cc7a-4b31-b1ab-47b6d16604af/
and also some credit is due to ideas from http://azuretablequery.codeplex.com/
I'm guessing that something is wrong in your WritingEntity event handler. Can you post any more of your code - especially:
how are you serialising the RowKey and PartitionKey?
are you removing the raw Urls list from the serialisation?
if you're using XML for the inner serialisation, then are you fully escaping that XML?
One further debugging idea is to just try to get the code working step by step - i.e.
start with an entity with no list,
then try adding another simple test property using a WritingEntity hook,
then try adding a Urls list, and removing it during WritingEntity,
then try serialising and adding this new property in WritingEntity - a sketch of the matching read side follows below.
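For comparison, the read side of that last step would look roughly like this - a sketch only, with the "UrlsSerialized" property name and delimiter being placeholders for whatever serialisation you choose:

using System;
using System.Linq;
using System.Xml.Linq;

// serviceContext is your TableServiceContext
serviceContext.ReadingEntity += (sender, args) =>
{
    var label = args.Entity as Label;
    if (label == null)
        return;

    XNamespace d = "http://schemas.microsoft.com/ado/2007/08/dataservices";
    XNamespace m = "http://schemas.microsoft.com/ado/2007/08/dataservices/metadata";

    // Find the property that was added during WritingEntity
    var element = args.Data.Descendants(m + "properties")
                           .Elements(d + "UrlsSerialized")
                           .FirstOrDefault();
    if (element != null)
    {
        label.Urls = element.Value
            .Split(new[] { '|' }, StringSplitOptions.RemoveEmptyEntries)
            .ToList();
    }
};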

Related

Retrieving a value object without aggregate root

I'm developing an application with a Domain Driven Design approach. In a special case I have to retrieve the list of value objects of an aggregate and present them. To do that I've created a read-only repository like this:
public interface IBlogTagReadOnlyRepository : IReadOnlyRepository<BlogTag, string>
{
    IEnumerable<BlogTag> GetAllBlogTagsQuery(string tagName);
}
BlogTag is a value object in the Blog aggregate. This works fine now, but when I think about this way of handling things and the future of the project, my concerns grow! It's not a good idea to create a separate read-only repository for every value object in cases like these, is it?
Does anybody know a better solution?
You should not keep value objects in their own repository since only aggregate roots belong there. Instead you should review your domain model carefully.
If you need to keep track of value objects spanning multiple aggregates, then maybe they belong to another aggregate (e.g. a tag cloud) that could even serve as sort of a factory for the tags.
This doesn't mean you don't need a BlogTag value object in your Blog aggregate. A value object in one aggregate could be an entity in another or even an aggregate root by itself.
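A purely illustrative sketch of that distinction (none of these types come from the question):

// Inside the Blog aggregate, a tag is a value object: it has no
// identity and is compared by its value alone.
public sealed class BlogTag
{
    public BlogTag(string name)
    {
        Name = name;
    }

    public string Name { get; private set; }

    public override bool Equals(object obj)
    {
        var other = obj as BlogTag;
        return other != null && other.Name == Name;
    }

    public override int GetHashCode()
    {
        return Name.GetHashCode();
    }
}

// In a hypothetical TagCloud aggregate, the same concept is an entity:
// it has an identity and its own lifecycle.
public class Tag
{
    public Guid Id { get; private set; }
    public string Name { get; private set; }
    public int UsageCount { get; private set; }

    public void IncrementUsage()
    {
        UsageCount++;
    }
}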
Maybe you should take a look at this question. It addresses a similar problem.
I think you just need a query service, as this method serves the user interface - it's just for presentation (reporting). Do something like this:
public IEnumerable<BlogTagViewModel> GetDistinctListOfBlogTagsForPublishedPosts()
{
    var tags = new List<BlogTagViewModel>();
    // Go to the database and run the query
    // Transform the results into a collection of BlogTagViewModel
    return tags;
}
This code would be at the application layer level not the domain layer.
And notice the language I use in the method name; it makes the intent a bit more explicit and tells people using the query exactly what the method does (if this is your intent - I am guessing a little, but hopefully you get what I mean).
Cheers
Scott

Should I add an item using the repository pattern or a create event if I am using domain events?

I am trying to understand the Domain Event pattern illustrated by Udi Dahan with regard to adding new domain entities in a certain situation.
Now normally with entities I would create them and then add them via a repository. I assume I would still do this?
My example is we normally add assets to the system. Like this:
var asset = new Asset();
/* bunch of prop setting */
_assetRepository.Add(asset);
However, asset creation is an event that we want certain processes to follow on from. Therefore it was suggested by a developer that we no longer need to do this, as it could be handled by a domain event:
var asset = new Asset();
/* bunch of prop setting */
asset.Create(location);
Now the create method would raise an event that is handled by a create event handler, which basically just inserts the asset into the repository and does some other stuff, such as emailing the warehouse manager of the creation location.
However, having a create event on the asset looks pretty Active Record to me. On the other hand, in the domain people do talk about new assets being created, so we were not sure.
Thoughts?
The created domain event should be raised in the constructor of the Asset class because that is when that particular entity is created. In your current implementation, this would be erroneous because the Asset entity provides a parameterless constructor. Instead, create a constructor which has all required properties as parameters thereby preventing creation of an Asset entity in an inconsistent state. It could look like this:
public class Asset
{
    public Asset(string prop1, decimal prop2)
    {
        this.Prop1 = prop1;
        this.Prop2 = prop2;
        DomainEvents.Raise(new AssetCreated(prop1, prop2));
    }

    public string Id { get; private set; }
    public string Prop1 { get; private set; }
    public decimal Prop2 { get; private set; }
}
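Construction and persistence would then look something like this (the property values are made up):

var asset = new Asset("Forklift", 12500m); // AssetCreated is raised here
_assetRepository.Add(asset);               // persisting is still an explicit step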
You still have to persist the entity using the repository after creating it. This can be problematic because handlers for the AssetCreated event cannot reference the asset's ID, since it is not yet assigned when they are notified. If you are using event sourcing, the creation event would be explicitly stored in the underlying event store.
I've been struggling with this problem for quite a long time, but have found no good solution. I think:
A domain event shouldn't be published or handled before the aggregate it belongs to has been successfully persisted.
It's not the application layer's responsibility to publish any domain events.
So far, I think the best approach is to take advantage of AOP. We can "fire" events in the aggregate, but instead of dispatching them instantly, we keep them in a queue, and only really dispatch them after the corresponding transaction succeeds. We can define a custom @Transactional interceptor to achieve this, thus keeping the application service from knowing any concept of "event publishing".
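A minimal sketch of that queue-then-dispatch idea in C# (rather than an AOP interceptor); the Raise/PendingEvents members and the DomainEvents.Dispatch call are illustrative assumptions, not an existing API:

using System;
using System.Collections.Generic;

public abstract class Entity
{
    private readonly List<object> _pendingEvents = new List<object>();

    // Aggregates queue events instead of publishing them immediately
    protected void Raise(object domainEvent)
    {
        _pendingEvents.Add(domainEvent);
    }

    public IEnumerable<object> PendingEvents
    {
        get { return _pendingEvents; }
    }

    public void ClearPendingEvents()
    {
        _pendingEvents.Clear();
    }
}

public class UnitOfWork
{
    public void Commit(Entity entity, Action persistInsideTransaction)
    {
        persistInsideTransaction(); // transaction handling omitted

        // Only after a successful commit do the handlers get to run
        foreach (var domainEvent in entity.PendingEvents)
        {
            DomainEvents.Dispatch(domainEvent); // illustrative dispatcher
        }
        entity.ClearPendingEvents();
    }
}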

Customized Tridion Search Index Handler: Custom vs Standard field for page url?

I was playing around with custom Search Indexing Handlers for SDL Tridion 2011 (GA). I got something working using the very helpful information provided by Arjen; however, I am not sure if my execution is the best option.
The requirement is to be able to search for pages in the CMS by URL (e.g. www.example.com/news/index.html). In order to do this I have created a class using the ISearchIndexingHandler interface (code below). I am indexing the URL in the ContentText field of the item, but I am not sure whether this would normally contain something else for a page (I think a page only has metadata, so this should be OK). The advantage of using this over a custom field is that I can simply type the URL in the search box without having to use <url> IN <fieldname> or something like that.
So my question is: is there any reason not to use ContentText for pages, and is there any advantage in using a custom field? Also, bonus marks go to anyone with good ideas on how to handle BluePrinting (if I create a page in a parent publication, I want the local URLs also to be indexed in the child publications), and the case where a Structure Group path is altered (I guess I can somehow trigger a re-index of child page items from within my indexing handler).
The code:
using System;
using Tridion.ContentManager.Search;
using Tridion.ContentManager.Search.Indexing.Handling;
using Tridion.ContentManager.Search.Indexing.Service;
using Tridion.ContentManager.Search.Indexing;
using Tridion.ContentManager.Search.Fields;

namespace ExampleSearchIndexHandler
{
    public class PageUrlHandler : ISearchIndexingHandler
    {
        public void Configure(SearchIndexingHandlerSettings settings)
        {
        }

        public void ExtractIndexFields(IdentifiableObjectData subjectData, Item item, CoreServiceProxy serviceProxy)
        {
            PageData data = subjectData as PageData;
            if (data != null)
            {
                PublishLocationInfo info = data.LocationInfo as PublishLocationInfo;
                string url = GetUrlPrefix(data) + info.PublishLocationUrl;
                item.ContentText = url;
            }
        }

        private string GetUrlPrefix(PageData page)
        {
            // Hardcoded for now, but will be read from publication metadata
            return "www.example.com";
        }
    }
}
You can store the URL in the ContentText property. This field is used to index template content data.
Tridion does not index shared items in child publications.
Indexing is triggered on item modification (create, update, delete, localize and unlocalize).
Alternatively, you can use the re-indexing tool to re-index your items, but there is no way to index shared items in child publications.
I don't think you can include the URL prefix in either your search query or the indexed item. Because shared items are not indexed, you will probably index the page from the Website Structure layer, which is never published.
When a Structure Group is moved, you would have to write an event handler that triggers re-indexing of all child pages using a protected method of the TOM.NET API. This method is not part of the public API, so posting the code for that solution would probably declare me a persona non grata with R&D :)
Before you re-index anything, you should store the original publish location URL of the Structure Group in the TcmEventArgs.ContextVariables property, so you can verify whether or not a re-indexing action is necessary.

Domain Driven Development: Detecting changes (.NET)

I've just started with Domain Driven Design and am trying to apply it to my current project.
I've started with a pure domain model and am now stuck with my data access layer. I have a completely home-made data access layer, so none of the well-known ORM tools can be applied here.
I cannot figure out how to deal with updates. Let's say I have the following objects:
public class Document : Entity
{
    public IPropertiesCollection Properties { get; set; }
    public IContents Contents { get; set; }
}

public class PostalDocumentsPackage : Entity
{
    public String Name { get; set; }
    public DateTime DeliverDate { get; set; }
    public ICollection<Document> Documents { get; set; }
}
I have corresponding repositories IDocumentsRepository and IPostalDocumentPackagesRepository for retrieving objects (for now).
The problem I am dealing with now is the situation where I want to add a new document to the Documents collection of a PostalDocumentsPackage. Basically I see two possible approaches here:
1) Implement a collection that tracks changes to the original collection and holds lists of the items that were updated/removed.
2) Implement separate methods in the repository for adding documents to the package.
Are these approaches OK, or can they cause problems in the future? Or is there an alternative?
Typically change tracking would be handled by an ORM such as NHibernate. In your case you may be able to do the following:
Select new documents based on the value of the identity property.
Issue a SQL delete statement before re-inserting into the table.
A problem with either approach is that the documents collection may be big, such that loading all documents for each PostalDocumentsPackage may be a bottleneck. Also you must consider whether you need change tracking on the Document entity in addition to the documents collection. If so, then you would need to implement change tracking for the Document class as well. Given that you're not using an ORM I would suggest solution #2 since solution #1 will lead you down a path of re-implementing change tracking, which among other things would pollute your domain classes. You may also consider a CQRS/Event Sourcing architecture in which change tracking is made explicit.
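A rough sketch of option 2 under those constraints, assuming a hand-rolled ADO.NET layer; the table and column names, the OpenConnection helper, and the Id property on the entities are all illustrative assumptions:

using System;
using System.Data;

public class PostalDocumentPackagesRepository
{
    // Persists just the new document and links it to the package,
    // so no change tracking on the collection is needed.
    public void AddDocument(PostalDocumentsPackage package, Document document)
    {
        using (IDbConnection connection = OpenConnection())
        using (IDbCommand command = connection.CreateCommand())
        {
            command.CommandText =
                "INSERT INTO Documents (Id, PackageId) VALUES (@id, @packageId)";

            IDbDataParameter id = command.CreateParameter();
            id.ParameterName = "@id";
            id.Value = document.Id; // assumes Entity exposes an Id
            command.Parameters.Add(id);

            IDbDataParameter packageId = command.CreateParameter();
            packageId.ParameterName = "@packageId";
            packageId.Value = package.Id;
            command.Parameters.Add(packageId);

            command.ExecuteNonQuery();
        }

        // Keep the in-memory aggregate consistent with the store
        package.Documents.Add(document);
    }

    private IDbConnection OpenConnection()
    {
        // Hypothetical: open a connection via your home-made data layer
        throw new NotImplementedException();
    }
}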

Adding properties to an existing object retrieved using SubSonic

I think this is more of a polymorphism question but it applies to SubSonic table objects...
Here's the thing (and I love this one):
TblUser userObj = new TblUser(1);
Which fills userObj's properties with all of PK=1's goodies.
Now, I'd like to add more properties to the existing user object, for example an ArrayList property of, say, account numbers.
I've seen questions like this around ("add a property to an existing object..."), but in this case, would the recommended approach be to create a user wrapper object that has a property of type TblUser, plus my own additional properties?
OK, so it looks like once again I have come up with a solution to this, but I am still curious about the possibility of adding properties to existing objects.
All the generated SubSonic classes are partials, so all you need to do to add extra properties/methods to them is create your own partial class with the same name in the same namespace, and the two will be merged at compile time. For example, for your TblUser class:
public partial class TblUser
{
    private List<AccountNumber> _accountNumbers = new List<AccountNumber>();

    public List<AccountNumber> AccountNumbers
    {
        get
        {
            // Get and return the AccountNumbers
            // (populate _accountNumbers from your data source as needed)
            return _accountNumbers;
        }
    }
}
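Usage is then unchanged, because the compiler merges the two partial definitions into a single class:

TblUser userObj = new TblUser(1);

// The property added in the partial class sits alongside the generated ones
List<AccountNumber> accounts = userObj.AccountNumbers;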
