Multiple Result Sets in EF 5

So I am following the instructions found here: http://msdn.microsoft.com/en-US/data/jj691402 concerning how to handle multiple result sets in EF.
I am trying to avoid the second solution, as it involves changing the EDMX by hand, and I do not want to have to worry about other members of my team overwriting those changes.
But the first example still seems to be lacking. It refers to the ObjectContext.Translate<TEntity> method, but nowhere does it say how the <TEntity> is created. Any time I create an entity by hand, I of course get Error 2062, "no mapping between entity set and association set". Is there a step that I am missing? Or does the first solution not work with a database-first approach?

If you create an entity by hand in the EF designer without mapping it to an existing table or database view, you will indeed receive that error. Try creating a complex type instead.
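For what it's worth, here is a minimal sketch of the article's first approach combined with that suggestion; the stored procedure and all type names below are illustrative, not from the original question. The key point is that the Translate<T>(reader) overload without an entity set name materializes any class or complex type whose properties match the reader's columns, so no entity/table mapping is needed:

    // usings assumed: System.Data, System.Data.Entity.Infrastructure
    using (var db = new BloggingContext())
    {
        var cmd = db.Database.Connection.CreateCommand();
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.CommandText = "[dbo].[GetBlogsAndPosts]"; // hypothetical proc

        try
        {
            db.Database.Connection.Open();
            var reader = cmd.ExecuteReader();

            // First result set: no entity set name, so BlogResult can be
            // a complex type or any plain class with matching columns.
            var blogs = ((IObjectContextAdapter)db)
                .ObjectContext.Translate<BlogResult>(reader).ToList();

            // Advance to the second result set and materialize it the same way.
            reader.NextResult();
            var posts = ((IObjectContextAdapter)db)
                .ObjectContext.Translate<PostResult>(reader).ToList();
        }
        finally
        {
            db.Database.Connection.Close();
        }
    }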

Related

Does models.Base.metadata.create_all(bind=engine) stay in code after creating tables?

I am working on a project built by some people before me. When I was creating some new tables using SQLAlchemy, I noticed that the method call which creates the tables had been removed, so I had to put it back in the code to create my tables. What I am wondering is whether it needs to stay in the code after the tables have been created. Is there any problem with keeping it there?
Here is the line code that I am talking about:
models.Base.metadata.create_all(bind=engine)
MetaData.create_all takes a checkfirst keyword argument which determines whether SQLAlchemy should check whether a table already exists before trying to create it. The default value of this argument is True, so once the tables have been created, future invocations will have no effect beyond emitting a few inspection queries.
You can leave the code in place - it will be useful when a developer needs to create a fresh environment.

How to use database views in EF Core 3.0?

I know the question was asked before, but at the time it was, we had EF Core 2.x. The short answer was "no, you can't", which obviously was not very helpful.
The other answers involved ugly hacks like changing migration files after they were created by the tool.
I am making an application Code First. I created my models with lots of foreign keys and database joins in mind.
But here comes the unpleasant surprise (I'm a little new to EF): the joins written in LINQ are pretty slow. As a matter of fact, they do not produce database JOINs, but fetch whole tables instead.
Of course that's totally unacceptable. I import an old database with millions of records; with the joins I get results in milliseconds, without them I get lags of a couple of seconds, and that is on my very fast internet connection (in a real-world scenario it would be much worse).
I need views, and AFAIK EF won't create them for me. Is this STILL true in EF Core 3.0?
Then, what would be the best and cleanest way to create views in SQL and to make entities for them? I mean, considering that the database models would change over time and the database structure would have to be updated.
Well, I would prefer not doing my joins in SQL views; I would rather just have queries return "JOIN" statement results, especially for some non-obvious joins. Let's say table B has a column that is a foreign key referencing table A. I want to get results from table A joined with B for details, with normal SQL JOIN performance.
I checked the database: there is no significant performance difference between "select * from A" and "select * from A join B...". In LINQ - the difference is huge.
I figured out that in Code First database views are redundant.
The "views" can be created as models (ordinary classes) having a field or a property set to joined entity. I use private fields for that purpose. Then I use LINQ Join() to create my view entity. The query may refer ONLY to the fields set to joined entities, nothing else. Such query, if written properly translates clearly to SQL JOIN and works with full speed. In my application it's equivalent of a database view.
Why private fields and not properties, you may ask. Partly because the joined entities are "implementation details", but another reason is that my presentation code uses reflection to operate on an entity's public properties, so it's good to have those entities hidden from it. Otherwise I would probably need to use attributes to hide those "columns".
BTW, such views can be ordered with OrderBy() and filtered with Where() at virtually no cost. The constraint is to maintain the collection's IQueryable interface and never refer to joined entities indirectly. So even if X refers to A.B, never refer to X in a LINQ query, always to A.B, where A is the direct entity reference assigned in the Join() query.
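Here is a minimal sketch of that pattern, assuming EF Core 3.x and two illustrative entities, Order and Customer, where Order.CustomerId references Customer (none of these names are from the original post):

    // db is the application's DbContext.
    var query = db.Orders.Join(
        db.Customers,
        o => o.CustomerId,
        c => c.Id,
        (o, c) => new { Order = o, Customer = c });

    // Staying on IQueryable keeps everything server-side: this translates
    // to a single SQL JOIN with WHERE and ORDER BY instead of fetching
    // whole tables.
    var rows = query
        .Where(x => x.Customer.Country == "US")
        .OrderBy(x => x.Order.CreatedAt)
        .Select(x => new { x.Order.Id, x.Customer.Name })
        .ToList();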
To build dynamic queries at runtime, one must use expression trees.
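For example, a Where() predicate built at runtime might look like this sketch (again with illustrative names):

    // using System.Linq.Expressions;
    var status = "Shipped"; // example runtime value

    // Build the predicate o => o.Status == status as an expression tree.
    var param = Expression.Parameter(typeof(Order), "o");
    var body = Expression.Equal(
        Expression.Property(param, "Status"),
        Expression.Constant(status));
    var predicate = Expression.Lambda<Func<Order, bool>>(body, param);

    // EF Core translates the expression tree to SQL as with any query.
    var shipped = db.Orders.Where(predicate).ToList();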
These features of EF Core 3.0 allow you to build a database application without writing SQL, while maintaining full SQL speed. However, the database / entity structure must be relatively simple to achieve that.

NSRangeException following Core Data migration

After adding a new Core Data model version to my app, I performed a lightweight migration, apparently successfully. The migrated file loaded fine, but upon the first attempt to access an attribute via a particular relationship, the app crashes with an NSRangeException: '*** -[__NSArrayM objectAtIndex:]: index 4294967295 beyond bounds [0 .. 35]'. This relationship worked fine prior to the migration. I know from other posts here that 4294967295 is really -1, but the only thing I can identify with 36 items in my app/data is that there are 36 total entities in the data model (for reference, the relationship that's being fetched has 58 items in its table).
The question:
My question is: based on the error I'm getting and the troubleshooting I've done below, is there a type of schema change that could pass the lightweight migration, but corrupt the data along the way, leading to the noted exception? I'm going to try breaking down the migration into smaller chunks over several versions to either isolate or avoid the issue, but it would be nice to be able to focus on specific schema changes that might be at fault.
The failure:
The failure occurs with the following code in "myobject":
[[self object2] text];
The object2 relationship is to-one, non-optional both ways and neither the forward nor inverse relationship was changed between data models. The text attribute is likely not relevant because when the error occurs, awakeFromFetch is not reached in object2. If I assign [self object2] to a variable prior to the above statement, the assignment is successful and reports data: <fault>.
The database:
Looking at the database in sqlite3, I notice the following:
The index values for the forward and inverse relationships appear to be correct in each table.
The object2 table has two columns for the inverse relationship instead of the one prior to migration (ZMYOBJECT as before and the additional Z2_MYOBJECT, which is empty for all rows). No other relationships were added that would explain this column.
In the Z_PRIMARYKEY table, all entries post-migration show -1 for Z_MAX, whereas prior to migration they showed zero for empty tables and the maximum row number for populated tables. Manually updating Z_MAX to the proper values did not help with the exception. All Z_SUPER values were correct.
I set up a mapping model to see if anything looked awry with the automatic mappings, but everything looked fine.
Overall schema changes:
In the source version of the data model, there were fourteen entities, of which only four had been populated with data (the app is still in development). Seven were top-level entities and seven were sub-entities of three of the top-level entities.
In the target version of the data model, twenty-two entities were added, some top-level and some sub-entities, with dozens of relationships, including some added to existing entities.
Some attributes and relationships were removed from existing entities and others were added. No data types or relationship settings were changed, no attributes or relationships were renamed, and no special mappings were required.
Update (2/25/12): As I started working on a new intermediate model, I remembered that I had changed the class (representedClassName) for a number of entities from NSManagedObject to an NSManagedObject subclass, but hadn't generated the class files. I didn't suspect that would cause an issue and, indeed, creating all of the class files did not help with the exception. I just wanted to note that as another change between models.
Conclusions:
This is a wild guess, but if the 36 entity count is not a coincidence, it seems that when "myobject" attempts to fault in "object2" it does not have a valid reference for the table and is attempting to load table number -1, causing the exception. The fact that a simple assignment of [self object2] is successful, however, doesn't jibe with that conclusion.
Any ideas?
By working through several incremental migrations I was able to determine what is causing the issue, and a solution.
The problem:
One of the existing entities with data has no child entities in the current model. If I create a new model that simply adds a child entity, containing no attributes or relationships, and makes no other changes, the NSRangeException, Z_MAX observation, and doubling of the inverse relationship noted in my question all occur.
The solution:
After observing the failures following a "successful" lightweight migration for the case above, I created a mapping model. Since the only change was one additional entity, all but one of the entity mappings were straightforward. The question was what to do with the single added entity.
By default, the added entity with no attributes or relationships of its own was showing attribute and relationship mappings for all of the parent's properties. All of the mappings had empty value expressions by default, which I assumed meant that it would just skip them during the migration. Not true, apparently. By deleting all of the attribute and relationship mappings within the entity mapping and then turning off inferred mapping, the migration proceeded successfully.
I still have to tackle all of the remaining entities and will be trying this approach to do the rest in bulk, with all planned attributes and relationships intact.
Your posts were helpful when I encountered this problem. Thank you. [Have you reported the bug yet?]
Here are some more experimental results but, alas, not a great solution.
My schema change similarly added an entity subtype that has no additional attributes or relationships. The error message is the same as yours except the bounds are [0 .. 19]. That does correspond to 20 entity types, validating your hypothesis. Like your situation, the error happened when attempting to access an entity property after migration completed.
Adding a dummy attribute and a dummy self-relationship to the new entity type didn't avoid the post-migration crash. (However, I didn't test with that new entity type as the only schema change since I previously pushed that schema change to alpha testers.)
I observe the Z2_MYOBJECT column and Z_PRIMARYKEY.Z_MAX = -1 symptoms after successful migrations for other schema changes, so those may not be problematic at all. The -1 values get replaced lazily by the proper max values. The extra column might be used during migration.
In my case, the new entity's supertype has an ordered to-many relationship. In the very simple case where the entire data store contains just one object instance (an instance of that entity type with no outgoing relationship links), the schema migration succeeds. It does have the extra Z2_MYOBJECT column and Z_PRIMARYKEY.Z_MAX = -1 values and yet the resulting data store works fine when adding objects from there.
I tried creating a mapping model but was unsuccessful in getting Core Data to apply it. Turning off inferred mapping just made Core Data unable to migrate at all. Is there a trick to it? Do I have to write custom migration code to invoke a mapping model? This is Xcode 4.6.2 so the older bug is long gone.
When using git to roll the code & data model backwards or forwards to conduct an experiment, it seems to be necessary to (1) close & reopen the Xcode project and (2) do a clean build. Otherwise Xcode may crash and/or leave confounding state around.
To experimentally roll backwards, you must delete the .momd/ directory or the entire app from the target iOS simulator/device (or deploy the app via iTunes or TestFlight) since redeploying via Xcode won't remove obsolete files (like .mom and .omo data model definitions) which in turn lets the app do lightweight migrations that the actual deployed app can't do.
About the entity mapping to use for the added entity type, note that when Core Data applies a mapping model, it's copying entities from the old data store to a new one. It's not modifying the tables in place. You don't want it to "skip" properties (including inherited properties) unless you want to drop them.
However, since the schema change added an entity type, that entity has no instances to migrate so its custom mapping model rules do not matter.
Thus I wonder if something else caused your crashes to stop, like leftover experimental .mom files or custom migration code. Did your workaround hold up?
After 2 days of experimenting I decided my alpha testers would have to live without data migration this time. Fortunately this happened without production customers. But it doesn't give me confidence in Core Data.
I had the same sort of NSRangeException after adding a core data model version when accessing any instance of a particular entity after automatic lightweight migration. In my case also the range corresponded to the number of entities in my model.
I generated a mapping model with Xcode 4.6 (4H127) using File > New > File... and then selecting Core Data > Mapping Model. This caused the crash to (d)evolve into -[NSSymbolicExpression length]: unrecognized selector sent to instance...
Solution
The issue in my case was that the entity causing the original crash had a relationship named size, which is a reserved word listed in Apple's Predicate Programming Guide. An examination of the mapping model revealed that the reserved word had been capitalized in the Value Expression for the relationship:
FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:" , "PNSizeOptionToPNSizeOption", $source.SIZE)
I found the solution in Core Data Model Versioning and Data Migration Programming Guide:
Reserved words in custom value expressions: If you use a custom value expression, you must escape reserved words such as SIZE, FIRST, and LAST using a # (for example, $source.#size).
Unfortunately, Xcode's algorithm for generating the mapping model did not recognize the reserved word and I had to change the expression's key path in the Relationship Mapping inspector to $source.#size. This solved the problem. I assume that core data's inferred mapping model ran into a similar problem during lightweight migration.
There may be other causes of this kind of crash, so this solution may not apply, but it may be worth checking the property names in your model against the list of reserved words in the Predicate Programming Guide.

How do I set default values of fields added to existing entities?

I'm adding 4 new checkboxes to an entity and its form. There are already instances of this entity created in production. I need to have these checkboxes defaulted to checked on these existing entity instances. I set the default value of the field but apparently this only gets used when a new instance of the entity is created. Is there an easy way to set these on all the existing instances?
I could use a one-off workflow, but I don't know how many instances of this entity there are and due to auditing requirements I can't access the production environment.
You could execute some JavaScript on the load of the form. Of course, this wouldn't update all of the values in the database, but it would update them before a user is able to view them. Do you need all the values in the database defaulted, or will JavaScript work?
Edit
Your best options are either Update Multiple (you can increase the number of records an Advanced Find returns to 250 records per page) and continue to update all records manually, or perform a batch update. See this related SO question.
There's another way, too. You could write a console application that connects to your server (not that hard if you've done it before and you don't have to make it general). Then you simply execute an update on the service, fetching all the existing entities and updating them after the change is made.
As @Daryl says, there's probably a way to do that from the GUI too, but real programmers do it the hard way. :)
Of course I'm kidding. I just love to type code, hehe. Nevertheless, once you start coding, you have full freedom to affect the data any way you need, forever.
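As a rough sketch of that console-app approach, assuming the Dynamics CRM 2011 SDK (Microsoft.Xrm.Sdk) and made-up entity/attribute names:

    // using Microsoft.Xrm.Sdk;
    // using Microsoft.Xrm.Sdk.Query;
    // 'service' is an IOrganizationService, e.g. an OrganizationServiceProxy
    // created with the organization URL and your credentials.
    var query = new QueryExpression("new_widget")
    {
        ColumnSet = new ColumnSet("new_checkbox1") // fetch only what we set
    };

    // Note: RetrieveMultiple returns at most one page of records (5000 by
    // default); add PagingInfo handling if there could be more.
    foreach (var record in service.RetrieveMultiple(query).Entities)
    {
        record["new_checkbox1"] = true; // default the new checkbox to checked
        service.Update(record);
    }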
You can also do this by exporting the records, changing the value of the field, and then re-importing them.

How do I add a set of strings to an Entity?

This is a simple requirement: I want to add a set of strings to Accounts in Dynamics 2011. The strings are external IDs for other systems. All the strings should be unique across all entities.
The only way I can see to do this is to define the strings as entities (say 'ExternalCode') and set up a 1:N relationship between Account and ExternalCode, but this seems incredibly overweight. Also, defining it as an entity insists that I give the 'ExternalCode' a name, which it obviously doesn't have.
What's the best way to implement this?
Thank you
Ryan
It may seem overweight, but think about entities as if they were tables. Would you create a second table inside MS SQL? If so, then you should create another entity. CRM is very well optimized, so I wouldn't worry about the additional overhead.
Alternatively, you could always carry the GUID in the other system.
How are these unique references entering your CRM system? Are you importing the data from each of the external systems? If so, I assume the references are unique in the external system? And once imported, you want to make sure that none of these references are duplicated?
Additionally, how many strings are we talking about here? If it is a small number, then it would make sense to just define attributes to manage them and check for duplicates in one of the following ways:
1) Some JavaScript could be used to make an OData query to confirm the 'uniqueness' of your external reference number before the record is committed. (But this is not sufficient if records will also be created programmatically in the system.)
2) A plug-in which fires on pre-create to query the system for other records matching the same unique reference number, and handles a match accordingly (see the sketch below).
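A bare-bones sketch of the plug-in in option 2 might look like the following, assuming the CRM 2011 SDK; the attribute name new_externalcode is made up for illustration:

    using System;
    using Microsoft.Xrm.Sdk;
    using Microsoft.Xrm.Sdk.Query;

    public class CheckExternalCodePlugin : IPlugin
    {
        // Registered on the pre-create stage of the account entity.
        public void Execute(IServiceProvider serviceProvider)
        {
            var context = (IPluginExecutionContext)
                serviceProvider.GetService(typeof(IPluginExecutionContext));
            var factory = (IOrganizationServiceFactory)
                serviceProvider.GetService(typeof(IOrganizationServiceFactory));
            var service = factory.CreateOrganizationService(context.UserId);

            var target = (Entity)context.InputParameters["Target"];
            var code = target.GetAttributeValue<string>("new_externalcode");

            // Look for any account that already carries this external code.
            var query = new QueryExpression("account");
            query.Criteria.AddCondition("new_externalcode",
                ConditionOperator.Equal, code);

            if (service.RetrieveMultiple(query).Entities.Count > 0)
                throw new InvalidPluginExecutionException(
                    "External code already in use: " + code);
        }
    }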
However, if there are many of them, then it may make more sense to define a separate entity as you say. Then, as above, you could associate a new 'reference record' with the entity via a plug-in, but again, check whether the record already exists and then either handle the exception or merely associate with an existing record if that is appropriate.
I think the key is what you want to do if you do find a duplicate, and how these records are going to be created in the system (e.g. via the UI, programmatically, or potentially both).
Happy to provide some more assistance if you have some more details.