I have been using SubSonic, and the issue I keep running into is that when I make schema changes, I have to recompile everything, and at times SubSonic does not recognise some of the schema changes.
Is there a better OR/M I can use with ASP.NET that handles schema changes more efficiently?
I never had any problems with class generation in SubSonic. Are you sure your schema is a good one? Do you follow the conventions? If some tables are not generated, you may be missing primary keys, but if you look into the generated classes they will tell you (in a comment) that this is the case. That's all the advice I can give based on the information you've provided. I still think it's not SubSonic that's the problem...
If you are using the BuildProvider, there are known issues with it, and you may sometimes have to touch the web.config to force a rebuild. You can also try using SubCommander (sonic.exe) to generate your classes for you.
You can also try NHibernate; check out http://www.summerofnhibernate.com/
What database are you using, and which schema changes are not showing up? In my experience with SubSonic, I have never had a schema change fail to appear in the generated classes. The biggest mistake I have seen people make is to add a table to their database, run SubCommander to generate their classes, and then forget to include the newly generated class in their project.
The only other related issue I have seen is when the classes are generated into a separate project in C# while the project using them is written in VB. VB cannot read into an unbuilt C# project, so you first need to build the C# project to see the schema changes in your VB project. That is one of the limitations of VB.
I suggest LLBLGen Pro. It is not free, but well worth the $$$. I have been using it for more than 4 years on both web and Windows applications.
I have been trying to find a standard way to include database schema patches in my Azure continuous deployment flow.
The problem I am looking to solve is that as an application evolves, so does the database: every so often there are changes to the database to support new functionality.
In earlier work situations I have used proprietary solutions that hold the changes to the database as a linked list in an XML document. The database knows the latest patch it has applied, and if any newer patches are present it applies them. That way it is easy to keep all environments synchronised, and the changes follow the code.
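The pattern looks roughly like this (a minimal sketch in C#; the XML layout, the SchemaVersion table, and every name in it are made up for illustration, not part of any standard tooling):

```csharp
// Reads an ordered list of patches from an XML file, asks the database which
// patch it applied last, and runs any newer patches in order.
using System.Data.SqlClient;
using System.Linq;
using System.Xml.Linq;

public static class SchemaPatcher
{
    public static void ApplyPendingPatches(string connectionString, string patchFile)
    {
        // Expected layout: <Patches><Patch Id="1">ALTER TABLE ...</Patch>...</Patches>
        var patches = XDocument.Load(patchFile)
            .Descendants("Patch")
            .OrderBy(p => (int)p.Attribute("Id"));

        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();

            // The SchemaVersion table is how the database "knows" its latest patch.
            var lastApplied = (int)new SqlCommand(
                "SELECT ISNULL(MAX(PatchId), 0) FROM SchemaVersion", conn).ExecuteScalar();

            foreach (var patch in patches.Where(p => (int)p.Attribute("Id") > lastApplied))
            {
                using (var tx = conn.BeginTransaction())
                {
                    new SqlCommand(patch.Value, conn, tx).ExecuteNonQuery();
                    new SqlCommand(
                        "INSERT INTO SchemaVersion (PatchId) VALUES (" + (int)patch.Attribute("Id") + ")",
                        conn, tx).ExecuteNonQuery();
                    tx.Commit();
                }
            }
        }
    }
}
```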
While these proprietary solutions have worked great, I was thinking that before I implement yet another tool to do this, I would check whether SQL Azure provides a standard solution to this problem. So far I haven't been able to find one.
Is there one or do I need to create a tool myself?
Visual Studio Database Projects support deploying to Azure SQL Database so this is a good way to incorporate it into a CI workflow. If you are used to traditional deployment methods it is a bit of a mindset change; these projects work out at deploy-time what to deploy. For example, if you want to create a table, add a Table to the project and fill out the columns. Then, say months later, you want to add a column, simply add the column to the CREATE TABLE script. When you deploy, it will work out that the only schema change is a new column and it will add it.
This is a nice little series on that topic:
https://channel9.msdn.com/Blogs/Have-you-tried-turning-it-off-and-on-again/Creating-a-Database-Project-for-Artificial-Intelligence
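If you would rather drive the deployment from code (for example from a CI step), the .dacpac that a database project builds can also be deployed through the DacFx API directly. A minimal sketch, assuming the project has been built to MyDatabase.dacpac and the Microsoft.SqlServer.Dac assembly is referenced:

```csharp
// Deploys a compiled database project (.dacpac) to a target database.
// DacFx computes the diff at deploy time, so a new column in a CREATE TABLE
// script arrives at the target as an ALTER TABLE.
using Microsoft.SqlServer.Dac;

class DeployDacpac
{
    static void Main()
    {
        var services = new DacServices(
            "Server=myserver.database.windows.net;Initial Catalog=master;User ID=me;Password=secret;");

        using (var package = DacPackage.Load(@"bin\Release\MyDatabase.dacpac"))
        {
            // upgradeExisting: true lets DacFx alter an existing schema in place.
            services.Deploy(package, "MyDatabase", upgradeExisting: true);
        }
    }
}
```

SqlPackage.exe is built on this same API, so a command-line step in your deployment pipeline gives you the same diff-based behaviour.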
Before: we had (and still have working) a couple of CRM 4.0 servers: a production one and a test one. We would perform any changes on the test server first and, after testing, replicate them on the production server. For entities (custom or not) this meant using the "Export Customizations"/"Import Customizations" functionality. Pretty straightforward stuff.
Now: we're testing CRM 2013 and trying to do the same with a couple of servers. We set up our data structure by hand (which took some time), including the creation of all our custom entities, which are not few in number.
My question then is: how can I perform a bulk entity export/import in the same manner as in 4.0? I've tried saving the entities to a Solution package, exporting the package from one server and importing it onto the other. System entities appear in the target server's import list, but the custom entities do not, even though they are part of the original solution package (I checked both through CRM itself and in the package file's XML directly).
The lack of online help on this may imply that I'm not approaching it the right way, and I presume this was already standard practice in CRM 2011.
Can someone give me a hint?
Thanks in advance!
OK, I have no time to delve into the reasons and explanations, but things got solved.
I tried exporting ONLY the custom entities and their related entities, and that worked.
Afterwards, trying again to export ALL entities worked just fine too!
Therefore, I'm still not fully convinced I wasn't doing something wrong. Most likely I just missed some essential small step or detail that no one thought to mention due to its "self-evident" nature.
(I guess being too stuck in CRM 4.0's modus operandi takes its toll when upgrading...)
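As a side note for anyone scripting this: the same export/import can also be done against the Organization Service instead of the UI, which is handy when moving solutions between servers repeatedly. A minimal sketch using the CRM SDK (the solution name is an assumption; error handling omitted):

```csharp
// Exports an unmanaged solution from one organization and imports it into another.
using System.IO;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;

public static class SolutionMover
{
    public static void Move(IOrganizationService source, IOrganizationService target)
    {
        // Export the solution (which carries the custom entities) from the source org.
        var export = (ExportSolutionResponse)source.Execute(new ExportSolutionRequest
        {
            SolutionName = "MyCustomEntities",   // the solution's unique name
            Managed = false
        });

        // Optionally keep a copy of the package on disk.
        File.WriteAllBytes("MyCustomEntities.zip", export.ExportSolutionFile);

        // Import the package into the target org.
        target.Execute(new ImportSolutionRequest
        {
            CustomizationFile = export.ExportSolutionFile
        });
    }
}
```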
I'm new to MVC, and my actual intention is to learn the 'code first' technique, using MVC 4. I've been through several 'MVC 4 Entity Framework' tutorials, and no doubt the tutorials are really simple and understandable, but in every tutorial I followed, the
EF will look for the database in the default location and if it doesn’t exist, it will create it for you
part is not working for me. Do I have to add an ADO.NET Entity Framework model to the project and design the database entities first? Or am I missing something? It's been more than 15 days and I'm stuck.
(I tried creating the database manually first, and that worked perfectly. But I want to learn it the way the tutorials describe: the database is created automatically when we specify a connection string and Entity Framework finds that no such database exists at that path, so it creates one.)
I've been through several tutorials; let's say I'm following THIS one.
(I'm using Visual Studio Express 2012 for Web, EF 5.0.0)
Your question: Do I have to add an ADO.NET Entity Framework model to the project, and design the database entities first?
Yes, in the code-first approach you need to tell Entity Framework what tables you are going to create, but you do it by writing POCO classes (Plain Old CLR Objects) rather than a designer model. At run time Entity Framework builds a model (an XML description) from those classes, generates queries against that model, and executes them. You can read more about that under CSDL, SSDL and MSL in the EF documentation.
So, in summary: you write your domain classes (POCO classes) and EF will create the database for you. By the way, there is another way in Entity Framework code first called [Code First to an existing database](http://weblogs.asp.net/scottgu/archive/2010/08/03/using-ef-code-first-with-an-existing-database.aspx), which reverse engineers the database and creates the model for you.
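To make the convention concrete, here is a minimal code-first sketch (the class and context names are just examples): EF builds its model from these classes at run time and, with the default CreateDatabaseIfNotExists initializer, creates the database the first time the context is used.

```csharp
using System.Collections.Generic;
using System.Data.Entity;

public class Blog
{
    public int BlogId { get; set; }          // convention: <ClassName>Id becomes the primary key
    public string Name { get; set; }
    public virtual ICollection<Post> Posts { get; set; }
}

public class Post
{
    public int PostId { get; set; }
    public string Title { get; set; }
    public int BlogId { get; set; }          // convention: foreign key to Blog
    public virtual Blog Blog { get; set; }
}

public class BloggingContext : DbContext
{
    // EF looks for a connection string named "BloggingContext"; if there is
    // none, EF 5 falls back to its default server (SQL Express or LocalDB)
    // and creates the database there on first access.
    public DbSet<Blog> Blogs { get; set; }
    public DbSet<Post> Posts { get; set; }
}
```

Note that the database is only created when the context is first used, for example on the first db.Blogs.Add(...) followed by db.SaveChanges(), not merely when the project compiles.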
Do I have to add an ADO.NET Entity Framework model to the project, and design the database entities first?
That is really the more traditional approach, aptly called the database-first approach; the code-first approach is an alternative to it. They mean exactly what they say: in code-first, the code is written first and the database is, in effect, generated from it automatically; in database-first, the database is created first and you can then use EF to generate the code required to work with it, mainly CRUD and connection management.
The built-in scaffolder in ASP.NET MVC should work fine with either code-first or database-first, as long as your models are already established in your project. If you find the generated code too simplistic, an alternative scaffolder you can use is CamoteQ, an online ASP.NET MVC scaffolding tool based on DDD principles and recognized patterns.
We have a very large solution for our MVC structure where I work. I am trying to filter my Solution Explorer down to only the relevant files with a custom filter. Microsoft has an article on making a custom filter here, but when I try to build the sample source code they provide, it says one of the .NET Framework namespaces is not available (I have already reinstalled .NET). The namespace that won't resolve is System.ComponentModel.Composition. I am hoping fixing this will let me build the filter correctly (there are 5 errors in the project in total).
I definitely have the 2012 SDK installed (you won't make it far in the tutorial without it).
The Funnel extension may be what you're looking for:
Decrease solution loading and re-compilation times dramatically by filtering projects not relevant for the current task.
The extension loads just the projects defined in filters (load filters must be defined before using them).
You have to add a reference to the "System.ComponentModel.Composition" assembly. It is part of .NET Framework 4.0.
If you have trouble referencing or finding this assembly, refer to this answer:
https://stackoverflow.com/a/6310236/2617201
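For background: System.ComponentModel.Composition is MEF (the Managed Extensibility Framework), which the filter sample uses to export its components to Visual Studio. Here is a minimal, self-contained sketch of the [Export]/[Import] mechanism that assembly provides (the types are illustrative, not taken from the filter sample):

```csharp
using System;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Reflection;

public interface IGreeter { string Greet(); }

[Export(typeof(IGreeter))]
public class Greeter : IGreeter
{
    public string Greet() { return "Hello from MEF"; }
}

class Program
{
    [Import]
    public IGreeter Greeter { get; set; }

    static void Main()
    {
        var program = new Program();
        using (var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly()))
        using (var container = new CompositionContainer(catalog))
        {
            container.ComposeParts(program);   // wires the [Export] into the [Import]
        }
        Console.WriteLine(program.Greeter.Greet());
    }
}
```

If this compiles, the reference is in place and the filter sample's MEF attributes should resolve as well.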
I am evaluating SubSonic for use in Phase 2 of a large project. This is an ASP.NET project, with 700 tables in a SQL Server database.
We are planning for our domain model to consist of POCO classes, to help with some offline-access requirements we have. I believe the SimpleRepository pattern would be among my best options.
Since I already have a database, however, the migration assistance doesn't help me. Are there T4 templates for SimpleRepository that I just overlooked? How do I 'turn off' migration? If I missed something in the wiki, point me there; otherwise, get me started and I'll write up a wiki entry for y'all when we get there.
I'd suggest you look at the LINQ templates. They're generated from your database just like the ActiveRecord templates, but they give you POCOs instead.
Alternatively, you can just use the simple templates and never run migrations. Migrations only happen when you explicitly ask for them (by specifying SimpleRepositoryOptions.RunMigrations when creating your repository), so it's not so much that you need to turn them off as that you just don't turn them on.
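A minimal sketch of that distinction (the connection-string name and the Product class are assumptions for the example):

```csharp
using SubSonic.Repository;

public class Product   // a plain POCO; no SubSonic base class required
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public class Example
{
    public void Save()
    {
        // Default: SubSonic never touches the existing schema.
        var repo = new SimpleRepository("NorthwindConnection",
                                        SimpleRepositoryOptions.Default);

        // Only this opt-in flag makes SubSonic create/alter tables to match the POCOs:
        // var repo = new SimpleRepository("NorthwindConnection",
        //                                 SimpleRepositoryOptions.RunMigrations);

        repo.Add(new Product { Name = "Chai", Price = 18.00m });
    }
}
```

With Default, your 700-table schema stays exactly as it is; repository calls simply assume the matching tables already exist.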