Capitalization of Custom Entities, Fields in CRM 2011

In migrating to CRM 2011, we're discovering that different developers used different capitalization for custom entities and custom fields. This creates a headache for custom programming using the early bound methods (http://msdn.microsoft.com/en-us/library/gg327844.aspx). Is there a way to normalize the entity/field names before (or after) the migration?

As far as I'm aware, the only way to achieve the capitalisation you desire is to recreate the entities with the appropriate names.
Before or after, the migration is mostly the upgrade of the CRM Server installation and the modification of the database schema to reflect the upgrade, while still maintaining the current data and customisation data - there is no supported step that normalises names.
That's as far as the "supported" spiel goes.
As for an actual workaround: if you're looking to upgrade anyway, I'd be tempted to restore your current CRM 4 system to a test domain, then look at how feasible it is to change the schema names in the actual "untouchable" CRM database. I believe there is a MetadataSchema.Entity table where this is centrally stored, so I'd test this to see how usable it is and what impact it has on, say, the web service.
So you face a choice similar to the one I have faced on multiple occasions while working with Dynamics CRM: go with the supported way, or with a bit of "Yee-Ha" development. Sorry, it's probably not what you want to hear!
Edit:
In regard to what to change, I can't say for sure, as I haven't got a CRM 4.0 system to hand at this moment in time, only a 2011. However, as an example, there will be a MetadataSchema.Entity table in the [OrganisationName_MSCRM] database, in which certain columns will jump out: Name, PhysicalName and LogicalName.
LogicalName is the one that CRM uses; it defaults to lowercase no matter how you enter it.
I believe PhysicalName and Name are the ones you would be looking to change into lower case.
The actual "Name" of the entity, eg logical name is "account" whereas in CRM where it is displayed in a user friendly way is related through a table called MetadataSchema.LocalizedLabel through the foreign key "ObjectId" which in this case would be the "EntityId" field.
This is where I would be looking to do the changes as it shouldn't have an impact on the rest of the data due to the "logicalname" field being the one CRM probably uses.
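If it helps to see the scale of the problem first, you can audit the casing through the supported metadata API rather than poking at the database. A minimal sketch against the CRM 2011 organisation service (it assumes you already have a connected IOrganizationService; nothing here is specific to your system):

using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;
using Microsoft.Xrm.Sdk.Metadata;

public static class CasingAudit
{
    // Lists custom entities whose SchemaName contains upper-case
    // characters, i.e. differs from the all-lowercase LogicalName.
    public static void Run(IOrganizationService service)
    {
        var request = new RetrieveAllEntitiesRequest
        {
            EntityFilters = EntityFilters.Entity,
            RetrieveAsIfPublished = true
        };
        var response = (RetrieveAllEntitiesResponse)service.Execute(request);

        foreach (EntityMetadata entity in response.EntityMetadata)
        {
            if (entity.IsCustomEntity == true
                && entity.SchemaName != entity.LogicalName)
            {
                Console.WriteLine("{0} (schema name: {1})",
                    entity.LogicalName, entity.SchemaName);
            }
        }
    }
}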
As far as your generation of strongly typed classes goes:
If you use late bound, such as
relatedEntity.LogicalName = "new_related_account";
relatedEntity["relatedaccountid"] = entity["accountid"];
then all the properties and logical names need to be lower case, as this will use the "logicalname" property previously identified in the MetadataSchema table.
However, using SvcUtil, I can only assume it looks at the "Name" and "PhysicalName" attributes to give a slightly more user-friendly coding experience when it generates the file for use in custom applications.
Though if you are looking to use early bound class generation, it shouldn't be a problem: the generated definition file will provide IntelliSense on the correct capitalisation of attributes and properties, and if you are using late bound, as in the example above, it's all lower case. So it's more that it will be a bit untidy to look at than completely impractical =)
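To illustrate the difference, here's a rough sketch. "new_RelatedAccount" and its property are hypothetical names that CrmSvcUtil would generate from your schema names, so that part only compiles alongside the generated file:

using System;
using Microsoft.Xrm.Sdk;

public static class BindingStyles
{
    public static void Demo(Guid accountId)
    {
        // Late bound: entity and attribute names are always the lowercase
        // logical names, whatever casing the schema names were created with.
        var lateBound = new Entity("new_related_account");
        lateBound["new_relatedaccountid"] = new EntityReference("account", accountId);

        // Early bound: the generated class and property names mirror the
        // schema names, so each developer's casing shows up in IntelliSense.
        // "new_RelatedAccount" is a hypothetical CrmSvcUtil-generated class.
        var earlyBound = new new_RelatedAccount
        {
            new_RelatedAccountId = new EntityReference("account", accountId)
        };
    }
}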

Related

Complex Finds in Domain Driven Design

I'm looking into converting part of a large existing VB6 system into .NET. I'm trying to use domain-driven design, but I'm having a hard time getting my head around some things.
One thing that I'm completely stumped on is how I should handle complex find statements. For example, we currently have a screen that displays a list of saved documents, that the user can select and print off, email, edit or delete. I have a SavedDocument object that does the trick for all the actions, but it only has the properties relevant to it, and I need to display the client name that the document is for and their email address if they have one. I also need to show the policy reference that this document may have come from. The Client and Policy are linked to the SavedDocument but are their own aggregate roots, so are not loaded at the same time the SavedDocuments are.
The user is also allowed to specify several filters to reduce the list down. These too can be from properties that are stored on the SavedDocument or on the Client and Policy.
I'm not sure how to handle this from a Domain driven design point of view.
Do I have a function on a repository that takes the filters and returns me a list of SavedDocuments, which I then have to turn into a different object or DTO and fill with the additional client and policy information? That seems a little slow, as I have to load all the details using multiple calls.
Do I have a function on a repository that takes the filters and returns me a list of SavedDocumentsForList objects that contain just the information I want? This seems the quickest but doesn't feel like I'm using DDD.
Do I load everything from their objects and do all the filtering and column selection in a service? This seems the slowest, but also appears to be very domain orientated.
I'm just really confused about how to handle these situations, and I've not really seen any other people asking questions about it, which makes me feel that I'm missing something.
Queries can be handled in a few ways in DDD. Sometimes you can use the domain entities themselves to serve queries. This approach can become cumbersome in scenarios such as yours when queries require projections of multiple aggregates. In this case, it is easier to use objects explicitly designed for the respective queries - effectively DTOs. These DTOs will be read-only and won't have any behavior. This can be referred to as the read-model pattern.
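A minimal sketch of what that read model might look like for your listing screen (all names invented for illustration):

using System;
using System.Collections.Generic;

// Read-only projection shaped for the screen, not an aggregate.
public class SavedDocumentListItem
{
    public int DocumentId { get; set; }
    public string Description { get; set; }
    public string ClientName { get; set; }
    public string ClientEmail { get; set; }      // null if the client has none
    public string PolicyReference { get; set; }  // null if not from a policy
}

// Filters may span document, client and policy fields; the finder
// translates them into a single query rather than loading aggregates.
public class SavedDocumentFilter
{
    public string ClientNameContains { get; set; }
    public string PolicyReference { get; set; }
    public DateTime? SavedAfter { get; set; }
}

public interface ISavedDocumentFinder
{
    IList<SavedDocumentListItem> Find(SavedDocumentFilter filter);
}

The finder's implementation is free to be a single SQL join or a denormalised view; the point is that nothing here loads the Client or Policy aggregates just to render a list.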

Can I use MODEL-FIRST in EF5 withOUT losing the data in DB?

I am wondering about the model-first approach. I wish to design a new database using the model designer in VS2012. The new features of the model designer, such as coloring and splitting up model sections, are wonderful. Hopefully there will be a purpose for using the model designer beyond initially creating a new database.
I would like to perform the following steps...
using the model designer, visually design and push the model to create the initial database and a table
add data to the table
make a change to the table in the model designer (e.g. add a field)
push the changes to the database (i.e. update the database)
NOT LOSE MY DATA FROM STEP 2. Also, just to clear any confusion... did I mention that I DON'T WISH TO LOSE THE DATA?
Please, please tell me this obvious need (i.e. the need to evolve the tables and their fields without losing data or starting from scratch) has not been overlooked in iteration FIVE of EF.
This page on EF (http://msdn.microsoft.com/en-us/data/ee712907.aspx) makes it sound as though the developer has an equal choice between coding first and modeling first. To me, the intro video on the page creates a similar impression.
It would be nice if there were a simple menu option, or better yet a way to establish "automatic pushes to DB" upon changes to the model. That way, whenever changes are made and the Save button is clicked, a dialog could appear: "Update database?"
I see that using code-first there is a migrations option. I cannot seem to find the same for model-first. And I don't understand why this wouldn't be possible... after all the code that I would have written in code-first does indeed exist - it was created by the model-first code generation.
I'm keeping my fingers crossed in hopes someone will have a simple solution, perhaps something I've just overlooked and all this rambling/venting is in vain. :-)
You really have to use code first if you want to modify your database when the model changes. Even then it's not some magical automated process; you'll have to script the changes.
With model first, your best option is to generate a new database each time and create a change script (DDL) using a tool like Redgate's SQL Compare or a Visual Studio SQL Server Database Project.
I'd like to add that it is virtually impossible to synchronize a database automatically with a model. Some changes require manual intervention - e.g. removing a field and adding another field cannot be distinguished from renaming/retyping a field. Some changes can easily be made in a model but would require a table rebuild script in SQL Server (e.g. changing field order), or a combination of modified content and structure (e.g. making a field not null, or adding a foreign key).
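For contrast, this is roughly what the code-first migrations route gives you in EF 5 - a scaffolded class you can inspect and tweak before applying. "Clients" and "MiddleName" are invented example names:

using System.Data.Entity.Migrations;

// Scaffolded by Add-Migration after a property is added to the model;
// Update-Database then applies it in place, leaving existing rows intact.
public partial class AddClientMiddleName : DbMigration
{
    public override void Up()
    {
        AddColumn("dbo.Clients", "MiddleName", c => c.String());
    }

    public override void Down()
    {
        DropColumn("dbo.Clients", "MiddleName");
    }
}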
At the moment the only thing to do is:
Copy your database file (i.e. take a backup).
Allow EF to recreate the database according to the model.
Per table, copy-paste your records from the backup to your new db.
This is not that easy, as you need to copy-paste in a specific order because of relations, and it will only be good for minor changes such as adding columns and new tables, or removing scalar columns or tables.
But I am certain that this is the beginning of a correct approach to dealing with the problem, which later on could be automated by writing a more generic migration app between two databases which share the same table names and relations.
Deeper problems begin when the relations are not the same, or table names or column names have changed.
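If you do end up automating the copy-per-table step, a rough sketch of the idea follows. You supply the connection strings and a table order that respects the foreign keys; identity columns would additionally need SqlBulkCopyOptions.KeepIdentity:

using System.Data.SqlClient;

public static class TableCopier
{
    // Copies rows table-by-table from the backed-up database into the
    // freshly recreated one. Assumes both databases share table names
    // and column shapes, per the scenario above.
    public static void Copy(string backupConnStr, string newConnStr,
                            string[] tablesInDependencyOrder)
    {
        foreach (string table in tablesInDependencyOrder)
        {
            using (var source = new SqlConnection(backupConnStr))
            using (var cmd = new SqlCommand("SELECT * FROM " + table, source))
            {
                source.Open();
                using (var reader = cmd.ExecuteReader())
                using (var bulk = new SqlBulkCopy(newConnStr))
                {
                    bulk.DestinationTableName = table;
                    bulk.WriteToServer(reader);
                }
            }
        }
    }
}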

How do I add a set of strings to an Entity?

This is a simple requirement: I want to add a set of strings to Accounts in Dynamics 2011. The strings are external IDs for other systems. All the strings should be unique across all entities.
The only way I can see to do this is to define the strings as an entity (say 'ExternalCode') and set up a 1:N relationship between Account and ExternalCode, but this seems incredibly overweight. Also, defining it as an entity insists that I give the 'ExternalCode' a name, which it obviously doesn't have.
What's the best way to implement this?
Thank you
Ryan
It may seem overweight, but think about entities as if they were tables. Would you create a second table inside MS SQL? If so, then you should create another entity. CRM is very well optimized, so I wouldn't worry about this additional overhead.
Alternatively, you could always carry the GUID in the other system.
How are these unique references entering your CRM system? Are you importing the data from each of the external systems? If so, I assume the references are unique in the external systems, and once imported you want to make sure that none of these references are duplicated?
Additionally, how many strings are we talking about here? If it is a small number, then it would make sense to just define attributes to manage them and check for duplicates in one of the following ways:
1) Some JavaScript could be used to make an OData query to confirm the 'uniqueness' of your external reference number before the record is committed. (But this is not sufficient if records will be created programmatically in the system as well.)
2) A plug-in which fires on pre-create to query the system for other records which match the same unique reference number and handles the event of a match accordingly - see the sketch below.
However, if there are many of them, then it may make more sense to define a separate entity as you say, and then, as above, you could associate a new 'reference record' with the entity via a plug-in - but again, check if the record already exists and then either handle an exception or merely associate with the existing record, if that is appropriate.
I think the key is what you want to do if you do find a duplicate, and how these records are going to be created in the system (e.g. via the UI or programmatically, or potentially both).
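For illustration, a rough sketch of the plug-in route from option 2, registered on pre-create. The attribute name "new_externalcode" is an example, and this version only checks one entity type - uniqueness across several entities would need one query per entity:

using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public class ExternalCodeDuplicateCheck : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)
            serviceProvider.GetService(typeof(IPluginExecutionContext));
        var factory = (IOrganizationServiceFactory)
            serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        var service = factory.CreateOrganizationService(context.UserId);

        if (!context.InputParameters.Contains("Target"))
            return;
        var target = (Entity)context.InputParameters["Target"];
        if (!target.Attributes.Contains("new_externalcode"))
            return;

        // Look for any existing record carrying the same external code.
        var query = new QueryExpression(target.LogicalName);
        query.Criteria.AddCondition("new_externalcode",
            ConditionOperator.Equal, target["new_externalcode"]);

        if (service.RetrieveMultiple(query).Entities.Count > 0)
            throw new InvalidPluginExecutionException(
                "This external reference is already in use.");
    }
}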
Happy to provide some more assistance if you have some more details.

SharePoint InfoPath best practices for persisting large forms

I am currently architecting a large SharePoint deployment.
This deployment has the potential to grow to petabytes in size over the course of several years.
One of the current issues we are discussing is the option of storing our data in SharePoint using InfoPath forms. Some of these forms contain hundreds of fields and require a lot of mapping to content types for persistence and search. Our search requirement is primarily a single identifier and NOT the contents of the forms, although I am told I should preempt the "want" to search in the future.
We require our information to be utilised for secondary purposes (such as reporting etc). The information MUST be accessible instantly after persisting to the system.
My core questions therefore are:
1. What are the benefits/risks of this approach compared to storing our data in a singular relational store using web-service persistence?
2. If we decided on this approach, what would be the impact of changing the forms and content types over time?
3. What happens when our farm grows beyond a single web application / site collection - how accessible will the information be? Will I know where it is, and how portable will the information be over time?
1.)
Benefits:
- Form templates can be created & deployed (relatively) easily
- You can easily configure field validation
- Probably no code involved
Risks:
- Hitting SharePoint 2010 limits (not so uncommon as you might think)
- Needs careful form design/planning (correct XML structure)
- Information only accessible via the SharePoint object model or web services (very slow)
2.) Well, this is a tough one. Changing the form template and re-deploying it is easy and only takes a few minutes. However, changing the structure (underlying XML) of the template can get you in trouble very easily, because older (filled-out) forms will be invalid - there is an option to "upgrade" older forms out of the box, but in my experience it never worked as it was supposed to.
Content types behave very similarly: say you want to delete a column from a content type because it's no longer needed - you'll have to remove all references to it, which means removing all items before you can delete the column.
3.) Well, portability is definitely an issue with InfoPath, because it relies heavily on the corresponding URL structure. You absolutely can add more site collections, but this means you have to deploy your form template to each site collection. Information (filled-out forms) can't easily be shared between site collections, because each form contains the SourceURL (where it came from) and the namespace of the template (which changes constantly once you deploy).
Considering your requirements, I would strongly recommend a relational store instead of InfoPath - simply because it is not designed to be a data store.
I would use a SQL database to store the data and a custom UI (web part or application page) to perform CRUD operations. This means that the information is not actually stored in SharePoint - just displayed (which also means that it can't be searched with the built-in SharePoint search). There is also the possibility of using the Business Connectivity Services (which basically does all of the above without you needing to create a custom UI - however, it is very slow with large amounts of data).
If you do need the information just in SharePoint, why not just make all this happen with Lists only?
This is going to be a long one and may not have an answer just because there's no silver bullet for what you're looking for. It's mostly insight and ultimately the choice is up to you.
the option of storing our data in SharePoint using InfoPath Forms
This statement throws me a little. SharePoint data is stored in SharePoint (well, SQL technically) but InfoPath is just a UI layer for accessing any part of that data.
Some of these forms contain hundreds of fields and require a lot of mapping to content types for persistence and search
From this I assume there are multiple forms, which would mean different types of data being accessed (and probably different purposes). Hundreds of fields is no problem, and it really boils down to managing the form and view design.
From the form side you should check out the cxpartners form design crib sheet. This gives you a nice standard to follow to manage all those fields. Another thing would be to look at breaking the form up into tabs or views itself (in InfoPath) based on what the user needs to fill out. Basically it comes down to not creating a form with hundreds of fields on one massively scrolling screen the user will just freak out over.
The same goes for the views on the form or document library you're storing the form data in. InfoPath forms are just XML stored in a library (so regardless of how many fields you have, the footprint is pretty minimal). You don't want to map and surface every field in the form, nor do you want to have a view with 100 columns on it. You should look at breaking down the views so they're fit for purpose, with only a few hundred items and a few columns in each view. It's a balancing act, too, as you don't want to create hundreds of views either, so you need to find out what's right. A good BA or information architect will help with this, with the SharePoint/InfoPath guru and business user helping out.
We require our information to be utilised for secondary purposes (such as reporting etc). The information MUST be accessible instantly
This is another requirement that's going to be a little difficult to meet exactly. If the library has thousands of items (or tens of thousands) and a view has dozens of fields, then expect the view to come to a crawl (especially if the user is insistent on "seeing everything" and wants the limit of each view set to 1000 items, as if anyone could process that much information at once). Instant access is difficult if you're keeping everything online for a long time (like for reporting). There's the operational side, where users are filling out forms, finding forms, editing them, etc., and for that you only want a few hundred items to be live at any given moment (up to a few thousand, but you need to be careful with the views). If you have a list with 100,000 items in it and users are using it for daily activities while trying to run reports for trending or long-term operations against it, you're going to lose the performance battle. Look at doing reporting offline, potentially shipping the reportable data to a second source like SQL and using SSRS against it. PerformancePoint is an option but adds a layer of complexity to the architecture. The question ultimately falls to what reporting looks like and how important it is in relation to daily operations.
To try to answer your questions directly:
The benefits of using SharePoint over a database are that the data can easily be viewed and sliced and diced up. Creating a view is child's play and can quickly show you useful information like the number of sales in a month or customer feedback grouped by call-centre person. SharePoint makes it easy to view this information and even set up dashboards, hook in KPIs, etc. without having to get some developer to craft custom web pages. As far as risks go, you need to be careful about letting things grow organically and out of control. Don't let the users design views of the data; they generally want something but aren't sure what, and will ask for all columns to be available, which they'll just export to Excel to slice and dice. Make sure there's a good design around the views and lists, that they're fit for purpose, and that they meet the needs the user is trying to get out of the data. Ask the question of what they're looking for and why; that will help shape what to expose.
Any change needs to be thought out and planned and tested. It's no different in SharePoint if you add a column to a list as you would by adding a column to a SQL database. Form updates should be considered and while you won't get it 100% right the first time, you should try to get as much as possible without going overboard and putting in crazy things like 100 "blank" fields that are players to be named later. Strike a balance by understanding the needs of the users and company and where things are going. Hopefully someone will have a vision of what this thing might be when it grows up and that'll go a long way to understanding the impact of change.
Data is just XML, and as long as you're not doing stupid stuff in the form like hard-coding absolute paths to services (use data connection libraries), the impact of growth will be minimal. Growing beyond one web application into multiple ones is a pretty big change and not something to be taken lightly. Even splitting site collections out is big, and there needs to be a really good reason for it. Site collections can handle thousands of sites and millions of documents without issue. Web applications are really there for dividing up areas of interest or separation of purpose (like team sites on one web app and a publishing portal on another) and not really meant for splitting data due to growth concerns.
Like I said, there's no silver bullet here and what you're asking for is an architecture for a solution that nobody here has all the requirements for. Hope this helps.

How to see before/after state of entities in SubSonic?

I have a few large forms for which I need to provide visual cues about the before/after state, so the person approving the form can see what has been modified (not the previous value, though that would be a plus). This is currently being done with an extra column for each column of data (Name, Name_IsModified, Phone, Phone_IsModified, etc.). I'm curious whether there is a better way of getting around this, leveraging SubSonic?
The initial load is done by grabbing data from 6 source tables on 3 different servers. This data is saved in the form tables, where it resides until it is approved by various people, who will manually update it into the live systems that then update the 6 source tables. The visual cues are primarily used during the approval process, but are occasionally used to research when a change was made in the past.
Since I have to make a bunch of updates, I thought this might be a good time to break away from the legacy 2000+ lines of code, making my job a bit easier!!!
Thanks,
Zach
All of the properties on SubSonic objects are actually backed by collections, and you can pull these out and review changes - all without reflection.
We have a "DirtyColumns" collection (not sure if it's public or not) that we use to run updates - this would be the thing you'd want to have a look at.
