How to remove Loopback model and references from database - node.js

I removed a model completely from my app: I deleted the model.js and model.json files from models, deleted a relation to it in another model, and erased its entry from model-config.json.
However, the table created for the model and the column in the other model remain in the DB (in all environments). I tried auto-migrating, but they're still there.
Do I need to go through all the databases and drop the table and column manually, or can I tell LB to pick up the changes on its own somehow?

Do I need to go through all the databases and drop the table and column manually, or can I tell LB to pick up the changes on its own somehow?
LoopBack is not able to detect which models were removed and drop the corresponding database tables.
As you have discovered yourself, the solution is to go through the databases and drop the tables (and columns) manually.
BTW, I don't recommend using LoopBack's autoupdate/automigrate functionality in production; I highly advocate maintaining a set of migration scripts instead, as described e.g. in Martin Fowler's excellent article Evolutionary Database Design.
LoopBack does not support migration scripts yet, but we are discussing how to implement them for LoopBack 4+, see https://github.com/strongloop/loopback-next/issues/487
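If you'd rather script the cleanup than open a SQL console in every environment, here's a minimal sketch of a one-off Node script. It assumes a LoopBack 3 app with a SQL data source named "db" whose connector exposes connector.execute() (MySQL, PostgreSQL, etc.); the table and column names are placeholders for whatever your deleted model and relation left behind.

    // cleanup-orphaned-model.js -- one-off script, run once per environment:
    //   node cleanup-orphaned-model.js
    // Assumes a LoopBack 3 app exported from server/server.js and a SQL
    // data source named "db"; table/column names below are placeholders.
    var app = require('./server/server');
    var ds = app.dataSources.db;

    var statements = [
      // table left behind by the deleted model
      'DROP TABLE IF EXISTS orphanedmodel',
      // foreign-key column left behind by the removed relation
      'ALTER TABLE othermodel DROP COLUMN orphanedmodelid'
    ];

    function runNext() {
      var sql = statements.shift();
      if (!sql) {
        console.log('cleanup done');
        return ds.disconnect();
      }
      // connector.execute() runs raw SQL on the SQL connectors (MySQL, PostgreSQL, ...)
      ds.connector.execute(sql, [], function (err) {
        if (err) {
          console.error('failed:', sql, err);
          return ds.disconnect();
        }
        runNext();
      });
    }

    ds.connect(function (err) {
      if (err) throw err;
      runNext();
    });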

Have you looked into using the built-in API?
https://apidocs.strongloop.com/loopback/#app-deletemodelbyname
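For reference, roughly how that call looks (the model name is a placeholder); as far as I can tell it only detaches the model from the running app and does not drop the table, so the database cleanup still has to happen separately:

    // Removes the model from the running LoopBack app; it does not touch
    // the underlying database table or column.
    app.deleteModelByName('OrphanedModel');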

Ended up doing it manually - in 3 databases :(
I'm closing the question, but willing to reopen if someone has a good answer.

Related

dynamic ORM in node.js+mongodb

Is it possible to create a model where the relationships are dynamically generated by the application?
I saw the KeystoneJS project that does a nice job of defining the model (see: http://keystonejs.com/docs/database/#relationship-definitions)
But these need to be defined by node; I'm interested in creating them within the application. Are there any ORMs or framework projects that already do that? I've seen frameworks like MODxCMS that allow users to create additional fields by putting everything from the custom (templatevar) values into one table. I think mongodb would be great for setting this up without that single-table approach.
Any idea how to go about setting this kind of system up? I'm not sure where to start.
I guess mongoose might help you here. And you may want to have a look at mongo-relation too.
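To make that concrete, here's a rough sketch (the model name, field list, and "User" ref are made up) of how an application could build a mongoose model at runtime, including a dynamically declared relationship via ref, from field definitions supplied by users:

    // Sketch: building a mongoose model at runtime from user-defined fields.
    var mongoose = require('mongoose');

    function createDynamicModel(name, fieldDefs) {
      // fieldDefs e.g. [{ name: 'title', type: 'String' },
      //                 { name: 'author', type: 'ObjectId', ref: 'User' }]
      var shape = {};
      fieldDefs.forEach(function (f) {
        if (f.type === 'ObjectId' && f.ref) {
          // a relationship declared dynamically by the application
          shape[f.name] = { type: mongoose.Schema.Types.ObjectId, ref: f.ref };
        } else {
          shape[f.name] = mongoose.Schema.Types[f.type] || String;
        }
      });
      return mongoose.model(name, new mongoose.Schema(shape));
    }

    // usage: an end user adds a "Book" type with a relation to User at runtime
    var Book = createDynamicModel('Book', [
      { name: 'title', type: 'String' },
      { name: 'author', type: 'ObjectId', ref: 'User' }
    ]);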

Entity Framework Model first and Database first Model Design

We use model first for tables and relations and database first for views and stored procedures.
If we change the model we have to:
1) generate the database
2) create the views and procedures
3) add the procedures and the views to the model
4) remap the function calls of the procedures manually
This costs a lot of time because the model changes often or contains errors.
Does anybody know a workaround to automatically integrate the views and procedures into the model?
You could automate the process by creating your own template for generating DDL from SSDL. By default the EF designer uses the SSDLToSQL10.tt file, but you could create your own .tt file that generates DDL better suited to your needs. This should address 1) and 2). Once you have the database you can then update your model from the database, which should address 3). Finally, to address 4), you could write a Model Generation Extension that tweaks the model the designer builds from the database in the OnAfterModelGenerated/OnAfterModelUpdated method. (Be aware: some of the extension points in the designer are weird, to say the least, and might be confusing/hard to implement.)
Another option you may want to explore is to use Code First and Migrations. With Migrations you can evolve your database instead of constantly creating/deleting it. If you need to, you can use SQL to define a migration, so you have full control over what your database looks like. Code First does not support some of the features supported by Model First/Database First (e.g. TVFs/FunctionImports), so you may want to check first whether what's supported is enough for you.

Can I use MODEL-FIRST in EF5 withOUT losing the data in DB?

I am wondering about the model-first approach. I wish to design a new database using the model designer in VS2012. The new features of the model designer, such as coloring and splitting up model sections, are wonderful. Hopefully there will be a purpose for using the model designer beyond initially creating a new database.
I would like to perform the following steps...
1. using the model designer, visually design and push the model to create the initial database and a table
2. add data to the table
3. make a change to the table in the model designer (e.g. add a field)
4. push the changes to the database (i.e. update the database)
5. NOT LOSE MY DATA FROM STEP 2. Also, just to clear any confusion... did I mention that I DON'T WISH TO LOSE THE DATA?
Please, please tell me this obvious need (i.e. the need to evolve the tables and their fields without losing data or starting from scratch) has not been overlooked in iteration FIVE of EF.
This page on EF (http://msdn.microsoft.com/en-us/data/ee712907.aspx) makes it sound as though the developer has an equal choice between coding first and modeling first. To me, the intro video on the page creates a similar impression.
It would be nice if there were a simple menu option or better yet just a way to establish "automatic pushes to DB" upon changes to the model. That way whenever changes are made and the SAVE button is clicked, a dialog could appear "Update database?".
I see that using code-first there is a migrations option. I cannot seem to find the same for model-first. And I don't understand why this wouldn't be possible... after all the code that I would have written in code-first does indeed exist - it was created by the model-first code generation.
I'm keeping my fingers crossed in hopes someone will have a simple solution, perhaps something I've just overlooked and all this rambling/venting is in vain. :-)
You really have to use code-first if you want to modify your database when the model changes. Even then it's not some magical automated process but you'll have to script the changes.
With model first your best option is to generate a new database each time and create a change script (DDL) by using a tool like Redgate's SQL Compare or a Visual Studio Sql Server Database Project.
I'd like to add that it is virtually impossible to synchronize a database automatically with a model. Some changes require manual intervention, e.g. removing a field and adding another field cannot be distinguished from renaming/retyping a field. Some changes can easily be made in the model but would require a table rebuild script in Sql Server (e.g. changing field order), or a combination of modified content and structure (e.g. making a field not null, or adding a foreign key).
At the moment the only thing to do is:
Copy your database file... (backup)
Allow EF to recreate the database according to model
Per table, copy-paste your records from the backup into your new db.
This is not that easy, as you need to copy-paste in a specific order because of relations, and it will only work for minor changes such as adding columns and new tables, or removing scalar columns or tables.
But I am certain that this is the beginning of a correct approach to dealing with the problem, which can later be automated by writing a more generic migration app between two databases that share the same table names and relations.
Deeper problems begin when the relations are not the same, or table names or column names have changed.

How can I alter the incoming documents on replication in CouchDB

I need to replicate data in CouchDB from one database to another, but in the process I want to alter the documents being replicated,
mostly stripping out particular fields (though other applications are mentioned in the comments).
The replication would always be 100% one way (though the other applications mentioned in the comments could use bi-directional sync).
I would prefer if this process did not increment their revision IDs, but that might be asking for too much.
But I don't see any design document function that does what I am trying to do.
As CouchDB doesn't seem to do this, what plans are there for adding it? And meanwhile, what workarounds are there?
No, there is no out-of-the-box solution, as this would defy the whole purpose of the multi-master, MVCC logic.
The only option I can see here is to create your own solution, but I would not call this replication, rather ETL (Extract, Transform, Load). And for ETL there are tools available that will let you do the trick, like (mixing open source and commercial here):
Scriptella
CloverETL
Pentaho Data Integration, or to be more specific Kettle
Jaspersoft ETL
Talend has some tools as well
There are plenty more ETL tools on the market.
I believe the best approach here would be to break out the fields you want to strip into a separate document, and then filter those documents out during replication.
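As a rough sketch of that idea, assuming the stripped-out fields live in companion documents marked type: 'private' (the design-document and filter names below are made up), a filtered replication could look like this:

    // Design-document filter on the source database: documents of type "private"
    // (the companion docs holding the stripped-out fields) never get replicated.
    var designDoc = {
      _id: '_design/filters',
      filters: {
        public_only: "function (doc, req) { return doc.type !== 'private'; }"
      }
    };

    // One-way replication referencing the filter, e.g. the body of a
    // POST /_replicate request (or a document in the _replicator database):
    var replication = {
      source: 'source_db',
      target: 'http://other-host:5984/target_db',
      filter: 'filters/public_only',
      continuous: false   // the question only needs one-shot, one-way replication
    };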
Of course the best way would be to have built-in support for this, but a workaround that occurs to me is, instead of using the built-in replication, to code and use a custom replication that performs the needed alterations/transformations while still relying on (rather than bypassing) the other built-ins; with good coding, in many situations (especially if each master can push to its slaves), this could be nearly as efficient.
This requires an efficient way to detect changes on each source/master, which CouchDB offers through its changes feed (PouchDB appears to expose it as well); the custom replicator would then copy each change to the other location, applying the full alterations along the way.
If the source of a change is unable to push the change to the final destination, the transformed copies may have to be stored locally so the destination can pull them, which could get pretty expensive, especially in multi-master, as each location then has to store and maintain not only its own data but also the outgoing data for everyone it sends to.
This replication would also place each source document's revision ID in the document's copy...
...ideally, and essentially if the copy is itself going to be updated (i.e. act as a master), too.
...in one of two forms:
ideally the normal "_rev" property. Indeed this looks quite possible, since preserving revision IDs is already done by the normal replication algorithm using the built-in Bulk Docs API, which our variant would seemingly use too
otherwise a new copy object (with its own _rev) plus another field such as "_rev_original" recording the original revision. But would that work?
Clearly such a copy could be created without a problem.
Probably no big deal if the destination is only reading the data.
It seems hairy if the destination is also writing the data, as we'd then have to merge with these non-standard revisions. But doable.
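Putting the above together, here's a rough sketch (not a drop-in solution) of such a custom replication using PouchDB's changes feed, stripping fields on the way and recording the source revision in an ordinary field (underscore-prefixed fields are reserved by CouchDB, so the "_rev_original" idea is spelled rev_original here); the database URLs and field names are placeholders:

    // Custom "replication" sketch: follow the source's changes feed, strip the
    // unwanted fields, and write the transformed copy to the target database.
    var PouchDB = require('pouchdb');

    var source = new PouchDB('http://localhost:5984/source_db');
    var target = new PouchDB('http://localhost:5984/target_db');
    var FIELDS_TO_STRIP = ['secret', 'internalNotes'];

    source.changes({ since: 'now', live: true, include_docs: true })
      .on('change', function (change) {
        if (change.deleted) return;   // deletions are ignored in this sketch
        var doc = {};
        Object.keys(change.doc).forEach(function (k) {
          if (FIELDS_TO_STRIP.indexOf(k) === -1) doc[k] = change.doc[k];
        });

        doc.rev_original = doc._rev;  // remember which source revision this copy reflects
        delete doc._rev;              // let the target manage its own revisions

        // overwrite any previous copy of the document on the target
        target.get(doc._id)
          .then(function (existing) { doc._rev = existing._rev; })
          .catch(function () { /* first copy of this document */ })
          .then(function () { return target.put(doc); })
          .catch(function (err) { console.error('copy failed', doc._id, err); });
      })
      .on('error', function (err) { console.error('changes feed error', err); });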
Relevant to this (coding a custom/improved replication to provide this apparently-missing functionality, ideally without altering the PouchDB and especially the CouchDB source code), as starter/basis material there is the standard method: the normal CouchDB replication algorithm, which unfortunately doesn't clearly say it only uses built-in operations, but it looks like it does, and also the official overview of what it does; I suspect PouchDB implements this, likely in PouchDB's replicate.js (latest release as of 2014.07).
Further implementation particulars? Those who would know, please put them here.
This is a "community wiki" answer so please extend it.
Also, please comment with links & details of anyone or any system already doing or trying to do this or something similar.

Subsonic 3.0 LINQ Templates with Multiple Databases

I'm evaluating SubSonic 3.0 for use in our business as a replacement for our POCO objects. I'm new to SubSonic, literally installing it yesterday. I've gotten to the point where I can connect to one database using the 3.0 LINQ T4 Templates, and have been wooed by the promise of being able to connect to multiple databases in one application using SubSonic.
My issue is I can't find any documentation on how to use the T4 Templates with multiple databases (e.g. adding another connection string, setting up the Settings.ttinclude etc).
I've searched Google and Stack Overflow for an answer to see how this would be done or if it's even possible. Any help would be appreciated.
So I seem to be able to make it work by adding another connection string to the web.config and then adding a second set of templates for that connection string. It works, but it doesn't seem 'clean' or even really that DRY to me.
It also seems that I could do almost the same thing with the .NET Built in LINQ by adding multiple .dbml files.
Can anyone give me some reasoning at this point why we shouldn't just use the built in LINQ support over a 3rd party ORM like SubSonic?
Cross posting from the subsonic mailing list:
Oh yeah, I do this all the time. The trick is two copies of the templates (easy) or editing the templates to iterate over two sets of tables (harder). In the second settings.tt, change the name of the connection string to reflect the other database. You might also want to change the namespace so that you don't have conflicts where table names are the same. It seems hacky, but I don't think it is, because it allows you to make changes to the templates for each database independently.
If you really want only one set of templates the easiest way to go about it is to edit SQLServer.tt (or your choice of database) and override how LoadTables works such that it will accept a list of connections rather than a single one. I have to say this is a pain and it is going to be much harder than having 2 copies of the files.
(In reply to your answer to your own question)
Can anyone give me some reasoning at this point why I shouldn't just use the built in LINQ over a 3rd party ORM like SubSonic?
One immediate thought: SubSonic supports more than just Microsoft SQL Server.
