Entity Framework migrations on legacy database - entity-framework-5

We have several legacy SQL Server databases that we occasionally make schema changes to. We currently have a utility written in C++ that allows users to update their DBs with these schema changes. The utility currently generates dynamic SQL to create all DB objects.

I am looking into redoing this and thought EF migrations might be a good way to go. I have read up a bit on the subject and have a general idea of how it works, but I'm having a hard time figuring out how I would set it up to replace our current procedure (or whether that is even possible).

Currently, a client could be on any one of a number of previous versions. I'm assuming I would have to go back to the oldest possible version, create my model/initial migration from that, and then generate incremental migrations for each version change in order to support updates from all versions. Is that a correct assumption?

Also, our clients could currently be using SQL Server 2000, 2005, or 2008. Would this have any effect on how I would set things up (or on whether I even could)?

Further, the goal is to create a utility with a (C#, probably WPF) UI that the user can use to apply the migrations (up or down, preferably). I've seen plenty of examples of manipulating migrations from the Package Manager Console, but not much on building a utility with a friendly UI for upgrading/downgrading production DBs.

Finally, I have not seen anything that shows how to create stored procedures in a migration (our DBs rely on some stored procedures). I'm assuming that, if nothing else, I can use the Sql() method to run a query that creates a stored procedure. Is that correct? Is there a better way?
I know my questions are a bit non-specific, and I apologize for that, but I'm still in the early stages of learning this and I'd like to get an idea of whether this is a good way to go. Any guidance would be greatly appreciated.
Thanks,
Dennis

Firstly, on SQL Server support, Entity Framework doesn't really support SQL Server 2000. See this question:
EntityFramework SQL Server 2000?
On the question of supporting all the multiple versions, you have the right idea: generate an initial migration for the oldest version first, then incrementally alter the model and generate migrations to support the later versions. This will be a pain, as the migrations are opinionated about how they represent the model in the database, and you will do a lot of messing about to end up with a model and a set of migrations that fully represent it. Specific areas of concern are indexes, column lengths, data types, stored procedures, triggers, functions, and partitioning.
The Sql() method gets you around most of these issues; also helpful in migrations are methods like CreateIndex and AlterColumn.
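For example, a migration that uses Sql() to create one of those stored procedures alongside the first-class helpers might look like this (the table, column, and procedure names are made up for illustration):

    using System.Data.Entity.Migrations;

    public partial class AddGetCustomerProcedure : DbMigration
    {
        public override void Up()
        {
            // Sql() runs arbitrary SQL, so it covers objects the migrations
            // API has no first-class helper for (procedures, triggers, ...).
            Sql(@"CREATE PROCEDURE dbo.GetCustomer
                      @CustomerId int
                  AS
                  BEGIN
                      SELECT * FROM dbo.Customers WHERE Id = @CustomerId;
                  END");

            // First-class helpers cover the common schema tweaks.
            CreateIndex("dbo.Customers", "LastName", name: "IX_Customers_LastName");
            AlterColumn("dbo.Customers", "LastName", c => c.String(maxLength: 100));
        }

        public override void Down()
        {
            AlterColumn("dbo.Customers", "LastName", c => c.String());
            DropIndex("dbo.Customers", "IX_Customers_LastName");
            Sql("DROP PROCEDURE dbo.GetCustomer");
        }
    }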
For automating this, the Package Manager commands are PowerShell cmdlets, but they are thin wrappers over ordinary .NET objects, so the same machinery can be called programmatically from your own UI.
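In particular, the DbMigrator class is the engine behind the cmdlets and exposes everything a WPF front end would need. A minimal sketch, assuming the Configuration class that Enable-Migrations generates already exists in your migrations assembly:

    using System;
    using System.Data.Entity.Migrations;

    class MigrationRunner
    {
        static void Main()
        {
            // "Configuration" is the DbMigrationsConfiguration subclass that
            // Enable-Migrations generated in the migrations project.
            var migrator = new DbMigrator(new Configuration());

            // Enough information to present upgrade/downgrade choices in a UI.
            foreach (var name in migrator.GetDatabaseMigrations())
                Console.WriteLine("applied: " + name);
            foreach (var name in migrator.GetPendingMigrations())
                Console.WriteLine("pending: " + name);

            // Migrate up to the latest version...
            migrator.Update();

            // ...or up/down to a specific migration by name.
            // migrator.Update("201301010000000_SomeMigration");
        }
    }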
As this question is a year old, I assume you will have made a decision on whether to do this. My opinion is that it is hard to see that it's worth the effort. If you were re-platforming the code base that uses this database to Entity Framework then it would make sense. Otherwise there are bound to be better tools out there for database version management. My first port of call would be Redgate.

Related

Migrate UniData database which is multivalue to SQL using .NET code

I want to migrate a UniData database, which is multivalue, to SQL using .NET code. Is this possible? One possibility is SSIS, but this will consume a lot of time because we would have to build an ETL process for every table in the DB. So I was looking for .NET code with which I can connect to the UniData DB and migrate the data to SQL.
You're probably getting downvoted because this is an awfully general question, and it's not particularly a programming question, but rather a big project.
One piece of advice is to flip things around and extract the information from the UniData side, "exploding" the multivalues out into flat tables that your ETL process can consume. The challenge there (apart from writing UniBasic code) is identifying which multivalued fields are associated with each other; unless you have very good documentation, that can be tough to do.
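To illustrate the "exploding" step on the .NET side, here is a minimal, hypothetical sketch. It assumes the record's fields have already been read into strings in which multiple values are separated by UniData's value mark (character 253), and it flattens one record into one flat row per value position:

    using System;
    using System.Collections.Generic;

    class MultivalueExploder
    {
        // UniData separates the values within a field with the "value mark".
        const char ValueMark = (char)253;

        // Explodes associated multivalued fields (fields whose values line up
        // position-for-position) into flat rows an ETL process can consume.
        static IEnumerable<string[]> Explode(string key, params string[] fields)
        {
            var split = Array.ConvertAll(fields, f => f.Split(ValueMark));

            int rows = 0;
            foreach (var values in split)
                rows = Math.Max(rows, values.Length);

            for (int i = 0; i < rows; i++)
            {
                var row = new string[fields.Length + 1];
                row[0] = key; // carry the record key onto every flat row
                for (int j = 0; j < split.Length; j++)
                    row[j + 1] = i < split[j].Length ? split[j][i] : null;
                yield return row;
            }
        }

        static void Main()
        {
            // One order record with three line items packed into two fields.
            string qty = "1" + ValueMark + "5" + ValueMark + "2";
            string price = "9.99" + ValueMark + "4.50" + ValueMark + "12.00";

            foreach (var row in Explode("ORDER100", qty, price))
                Console.WriteLine(string.Join(" | ", row));
        }
    }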

Is it a good idea to perform data migration with generic language?

There are two kinds of migration. One is to update database schema during the development period. The other is to migrate existing data into a new system (with different schema).
There are a lot of tools available for the former scenario, such as Flyway and Liquibase. However, I am not aware of tools for the latter purpose.
We are currently using PL/SQL to do the migration. However, not all of our Java developers have a DBA background. I wonder if anyone has experience using generic languages (Java, Scala, C#, etc.) with database access libraries (Hibernate, NHibernate, etc.) to perform the migration.
I'm not sure exactly what the question is, but if I understand you correctly:
Sure, you can develop an application in a(ny) language that reads data from a data source and puts it into a data target.
A data migration between data sources does not have to be SQL-to-SQL only (in the case where source and target are relational databases).
In fact, it often makes sense to have an application between the source and target when there is logic that needs to handle or transform data between various structures or various data sources. An example would be migrating data from an ERP system into an e-commerce system.
Another advantage of doing it via an application is that you can often include more tools/features for reporting and error handling. Especially if the integration/migration has to run often, such error handling/reporting to verify the data movement is beneficial.
Also, if the data source and data target are located in different areas/on different servers, it can be easier to do the migration via an application, to avoid needlessly opening up the servers and linking them together.
So basically, such an application (Java, C#, anything) would read data from a data source, transform the data into the data structure of the target, and then store it in the data target.
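As a minimal sketch of that shape (the connection strings, tables, and columns are made up, and the ADO.NET SQL Server provider stands in for whatever your actual source and target are):

    using System;
    using System.Data.SqlClient;

    class CustomerMigration
    {
        static void Main()
        {
            // Hypothetical source (old system) and target (new system).
            const string sourceCs = "Server=old;Database=Erp;Integrated Security=true";
            const string targetCs = "Server=new;Database=Shop;Integrated Security=true";

            using (var source = new SqlConnection(sourceCs))
            using (var target = new SqlConnection(targetCs))
            {
                source.Open();
                target.Open();

                var read = new SqlCommand("SELECT FirstName, LastName FROM Customers", source);
                using (var reader = read.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // The transformation step: the target schema
                        // stores a single combined name column.
                        string fullName = reader.GetString(0) + " " + reader.GetString(1);

                        using (var write = new SqlCommand(
                            "INSERT INTO Clients (Name) VALUES (@name)", target))
                        {
                            write.Parameters.AddWithValue("@name", fullName);
                            write.ExecuteNonQuery();
                        }
                    }
                }
            }
        }
    }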
Building an application to do this is just another tool in a developer's toolbox.
However, if the data migration is basically a 1:1 movement of data from one structure to a duplicate of that structure, with no transformation, then it will be faster and easier to handle directly in SQL or with a data-sync program.
That holds even without a "DBA background": not many developers have one, but that shouldn't prevent anyone from learning SQL, as it's also just another language, and you don't need to be a DBA to write SQL effectively.
So, in conclusion: yes, you can write an application, and yes, it can be a good idea. But as with almost everything in our field, whether it is the "better" way is a case-by-case, situation-by-situation evaluation.

node-mongo-native migration framework

I'm working on a node.js server, and using MongoDB with node-mongo-native.
I'm looking for a db migration framework, similar to Rails migrations. Any recommendations?
I'm not aware of a specific native Node.js tool for doing MongoDB migrations, but you do have the option of using tools written in other languages (for example, Mongoid Rails Migrations).
It's worth noting that the approach to schema design and data modelling in MongoDB is different from relational databases. In particular, there is no requirement for a collection to have a consistent or predeclared schema, so many of the traditional migration actions, such as adding and removing columns, are not required.
However, migrations which involve data transformations can still be useful.
If your application is expecting data to be in a certain format (eg. you want to split a "name" field into "first name" and "last name") there are several strategies you could use if the idea of using migration tools written in another programming language isn't appealing:
- handle data differences in your application logic, so old and new data formats are both acceptable (perhaps "upgrading" records to match a newer format as they are updated)
- write a script to do a once-off data migration (see the sketch after this list)
- contribute MongoDB helpers to node-migrate
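This thread is about Node, but the once-off transformation strategy looks much the same through any driver. A minimal, hypothetical sketch using the MongoDB .NET driver (database, collection, and field names assumed), splitting a combined "name" field into first/last fields:

    using MongoDB.Bson;
    using MongoDB.Driver;

    class NameSplitMigration
    {
        static void Main()
        {
            var client = new MongoClient("mongodb://localhost:27017");
            var users = client.GetDatabase("app").GetCollection<BsonDocument>("users");

            // Only touch documents still in the old format.
            var legacy = Builders<BsonDocument>.Filter.Exists("name");

            foreach (var doc in users.Find(legacy).ToEnumerable())
            {
                // Split on the first space; the remainder becomes the last name.
                var parts = doc["name"].AsString.Split(new[] { ' ' }, 2);

                var update = Builders<BsonDocument>.Update
                    .Set("firstName", parts[0])
                    .Set("lastName", parts.Length > 1 ? parts[1] : "")
                    .Unset("name");

                users.UpdateOne(new BsonDocument("_id", doc["_id"]), update);
            }
        }
    }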
I've just finished writing a basic migration framework based on node-mongo-native: https://github.com/afloyd/mongo-migrate. It will allow you to migrate up & down, as well as migrating up/down to a specific revision number. It was initially based on node-migrate, but obviously needed to be changed a bit to make it work.
The revision history is stored in mongodb and not on the file system like node-migrate, allowing collaboration on the same project using a single database. Otherwise each developer running migrations could cause migrations to run more than once against a database.
The migrations themselves are file-based, which also helps with collaboration on a single project where each developer may or may not be using the same database. When each dev runs the migration, all migration files not already run against his/her database will be run.
Check out the documentation for more info.

Subsonic - let customers switch the database

I am new to subsonic and I'd like to know about the best practices regarding the following scenario:
Subsonic supports multiple database systems, e.g. SQLServer and MySQL. Our customers need to decide while deploying our application to their servers, which database system should be used. Long story short: the providerName, normally specified within the application configuration, should be configurable after the application is finished.
How can this be done? Do I have to generate separate data libraries for each database system I want to support?
Thank you in advance
Marco
No, you do not need to generate separate libraries.
However, as you would expect, you cannot use direct SQL strings; you always need to go through SubSonic's SQL generation code (see the sketch below).
It is also a good idea to run some tests against the different databases, because not all of the code has been 100% tested in every case.
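For illustration, here is a query written against SubSonic 2.x's query builder rather than as a raw SQL string ("Products" and "CategoryID" are made-up names). Because the builder produces the final SQL, the same statement should render in the right dialect for whichever provider the configuration selects:

    using System;
    using SubSonic;

    class ProviderAgnosticQuery
    {
        static void Main()
        {
            // The provider, not this code, renders the SQL text, so no
            // dialect-specific strings are baked into the application.
            SqlQuery query = new Select()
                .From("Products")
                .Where("CategoryID").IsEqualTo(5);

            using (var reader = query.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader["ProductName"]);
            }
        }
    }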

SubSonic-based app that connects to multiple databases

I developed an app that currently connects to a SQL Server 2005 database, so my DAL objects were generated using information from that DB.
It will also be possible to connect to Oracle and MySQL DBs, all with the same table structures (aside from the normal differences in data types, such as varbinary(max) in SQL Server versus BLOB in Oracle, and so on). For this purpose, I have already defined multiple connection strings and multiple SubSonic providers for the different DBs the app will run on.
My question is: since I generated my objects using a SQL Server database, should the generated objects work transparently with the other DBs, or do I need to generate a different DAL for each database engine I use? Should I be aware of any bugs I may encounter while performing these operations?
Thanks in advance for any advice on this issue.
I'm using SubSonic 2.2 by the way....
From what I've been able to test so far, I can't see an easy way to achieve what I'm trying to do.
The ideal situation for me would have been to generate SubSonic objects using SQL Server, for example, and then be able to switch dynamically to MySQL by just creating the correct provider at runtime along with its connection string. I got to the point where my app would correctly connect from SQL Server to a MySQL DB, but then the app fails because SubSonic internally generates queries of the form

    SELECT * FROM dbo.MyTable

which MySQL obviously doesn't support. I also noticed queries that enclosed table names in brackets ([]), so it seems there are a number of factors that limit the use of one provider across multiple DB engines.
I guess my only other option is to sort it out with multiple generated providers, although I must admit it doesn't make me comfortable knowing that I'll have N copies of basically the same classes in my project.
I would really love to hear from anyone else if they've had similar experiences. I'll be sure to post my results once I get everything sorted out and working for my project.
Has any of this changed in 3.0? This would definitely be a worthy reason for me to upgrade if life is any easier on this matter...
