I wonder if there's a tool for automatic migrations between different DBMSs using the Persistent package. In theory it should be relatively easy to do, so I assumed a tool to do it already existed.
One (maybe hacky) solution is to create a program that uses mkPersist twice in different modules, with the same definitions but different backend configurations, and then perform the copy operation manually.
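A minimal sketch of that approach; in practice a single set of definitions can serve two SQL backends, so one module suffices here (the Person entity and the connection strings are illustrative):

{-# LANGUAGE DataKinds, DerivingStrategies, FlexibleContexts, GADTs,
             GeneralizedNewtypeDeriving, MultiParamTypeClasses, OverloadedStrings,
             QuasiQuotes, StandaloneDeriving, TemplateHaskell, TypeFamilies,
             UndecidableInstances #-}
import Control.Monad.Logger (runNoLoggingT)
import Database.Persist
import Database.Persist.Postgresql (withPostgresqlConn)
import Database.Persist.Sql (runSqlConn)
import Database.Persist.Sqlite (runSqlite)
import Database.Persist.TH

share [mkPersist sqlSettings, mkMigrate "migrateAll"] [persistLowerCase|
Person
  name String
  age  Int
|]

main :: IO ()
main = do
  -- Pull every row out of the old store...
  people <- runSqlite "old.db" $ selectList ([] :: [Filter Person]) []
  -- ...then create the schema in the new store and replay the rows, keys intact.
  runNoLoggingT $
    withPostgresqlConn "host=localhost dbname=new" $ \conn ->
      runSqlConn
        (runMigration migrateAll >> mapM_ (\(Entity k v) -> insertKey k v) people)
        conn

Using insertKey rather than insert preserves the original primary keys, which matters if other tables reference them.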
As far as I know, though, there is currently no ready-made tool that does this.
I would like to use Terraform programmatically, like an API/function calls, to create and tear down infrastructure in multiple specific steps, e.g. reserve a couple of EIPs, add an instance to a region, and assign one of the IPs, all in separate steps. Terraform will currently run locally and not on a server.
I would like to know if there is a recommended way/best practices for creating the configuration to support this? So far it seems that my options are:
Properly define input/output, heavily rely on resource separation, modules, the count parameter and interpolation.
Generate the configuration files as JSON, which appears to be less common
Thanks!
Instead of using Terraform directly, I would recommend a third-party build/deploy tool such as Jenkins, Bamboo, or Travis CI to manage the release of your Terraform-managed infrastructure. The reason is that you should treat your Terraform code exactly as you would application code (i.e. give it a proper build/release pipeline). As an added bonus, these tools come with APIs that can be used to trigger your build and deploy processes.
If you choose not to create a build/deploy pipeline, another option is a tool such as Rundeck, which allows you to execute arbitrary commands on a server and has the added bonus of an excellent privilege-control system that lets only specified users execute commands. You could also upgrade from the open-source version of Terraform to the Pro/Premium version, which includes an integrated GUI and an extensive API.
As for best practices for automating creation/teardown of your infrastructure with Terraform via an API, they are the same regardless of the tools you use. The practices you mentioned, clearly defined inputs/outputs and separation of concerns, are excellent ones. Some others I can recommend are:
Create all of your infrastructure code with idempotency in mind.
Use modules to separate the common shared portions of your code. This reduces the number of places that you will have to update code and therefore the number of points of error when pushing an update.
Write your code with scalability in mind from the beginning. It is much simpler to start that way than to retrofit it later, when it is too late.
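As a minimal sketch of the first option from your question (inputs/outputs, resource separation, count, and interpolation; the resource names and the AMI variable are illustrative):

variable "ami_id" {}

variable "eip_count" {
  default = 2
}

# Reserve a pool of EIPs up front.
resource "aws_eip" "reserved" {
  count = "${var.eip_count}"
  vpc   = true
}

resource "aws_instance" "app" {
  ami           = "${var.ami_id}"
  instance_type = "t2.micro"
}

# Attach the first reserved IP to the instance.
resource "aws_eip_association" "app_ip" {
  instance_id   = "${aws_instance.app.id}"
  allocation_id = "${element(aws_eip.reserved.*.id, 0)}"
}

# Expose what later steps need.
output "reserved_ips" {
  value = ["${aws_eip.reserved.*.public_ip}"]
}

The separate steps you describe can then be driven with targeted runs, e.g. terraform apply -target=aws_eip.reserved first, followed by a later apply for the instance and the association.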
I have an app with multiple updates on the App Store already. A funny thing happened: I thought that lightweight migration happens automatically. However, I recently discovered that I need to add
NSDictionary *storeOptions = @{NSMigratePersistentStoresAutomaticallyOption: @YES, NSInferMappingModelAutomaticallyOption: @YES};
to my persistentStoreCoordinator, which shook my confidence when I realized I already have 5 Core Data model versions.
The question is: when I add the above line to the next version of the app, is it going to work for everyone when they update? Because right now, everything that happens when they open the app is a fancy CRASH.
Thx
It will work if automatic lightweight migration is possible for the migration you're trying to perform, which depends on the differences between the model used for the existing data and the current version of the model. Many common changes permit automatic lightweight migration, but not all. You'll need to review the docs on this kind of migration and decide whether it will work in your case.
If it doesn't work, there are other ways to handle it, for example by creating a mapping model to tell Core Data how to make changes that it can't infer automatically.
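For reference, here is a sketch of where the options line goes, in the usual persistentStoreCoordinator accessor (storeURL stands in for wherever your .sqlite file lives):

NSDictionary *storeOptions = @{NSMigratePersistentStoresAutomaticallyOption: @YES,
                               NSInferMappingModelAutomaticallyOption: @YES};

NSError *error = nil;
NSPersistentStoreCoordinator *coordinator =
    [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:self.managedObjectModel];
if (![coordinator addPersistentStoreWithType:NSSQLiteStoreType
                               configuration:nil
                                         URL:storeURL
                                     options:storeOptions
                                       error:&error]) {
    // A failed migration lands here; log the error instead of crashing,
    // so you can see why the lightweight migration could not be inferred.
    NSLog(@"Migration error: %@, %@", error, [error userInfo]);
}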
We're running several apps against the same memcached, so I'd like to configure different prefixes for the apps using Rack::Attack. By default, the apps would overwrite each other's cache entries.
I've seen the prefix accessor in Rack::Attack::Cache, and there's even a low-level spec for it, but there are no examples of how to use it.
According to the README and the introductory blogpost, I never have to deal with Rack::Attack::Cache but always with the higher-level Rack::Attack.
So, how can two or more apps use the same memcached for Rack::Attack without overwriting each other's cache keys?
Rack::Attack.cache.prefix = "custom-prefix"
Rack::Attack.cache is an instance of the Rack::Attack::Cache class.
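So each app can set its own prefix at boot, for example in an initializer (a sketch; the prefix values are illustrative and the store wiring assumes the dalli gem):

# config/initializers/rack_attack.rb in app one; use "app-two" etc. in the others
require "rack/attack"

Rack::Attack.cache.store  = Dalli::Client.new("localhost:11211")
Rack::Attack.cache.prefix = "app-one"  # replaces the default "rack::attack" namespace

# Throttle state is now keyed under "app-one:...", so another app using
# prefix "app-two" can share the same memcached without collisions.
Rack::Attack.throttle("req/ip", limit: 100, period: 60) do |req|
  req.ip
end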
We have several legacy SQL Server databases that we occasionally make schema changes to. We currently have a utility written in C++ that allows users to update their DBs with these schema changes; it generates dynamic SQL to create all DB objects. I am looking into redoing this, and EF migrations seemed like they might be a good way to go. I have read up a bit on the subject and have a general idea of how it works, but I'm having a hard time figuring out how I would set it up to replace our current procedure (or whether that is even possible).

Currently, a client could be on any one of a number of previous versions. I'm assuming I would have to go back to the oldest possible version and create my model/initial migration from that, then generate incremental migrations for each version change in order to support updates from all versions. Is that a correct assumption?

Also, our clients could currently be using SQL Server 2000, 2005, or 2008. Would this have any effect on how I would set things up (or whether I even could)?

Further, the goal is to create a utility with a (C#, probably WPF) UI that the user can use to apply the migrations (up or down, preferably). I've seen a lot of examples of manipulating migrations from the Package Manager Console, but not much on building a utility with a friendly UI for upgrading/downgrading DBs in production.

Finally, I have not seen anything that shows how to create stored procedures in a migration (our DBs rely on some stored procedures). I'm assuming that, if nothing else, I can use the Sql() method to generate a SQL query to create an SP. Is that correct? Is there a better way?
I know my questions are a bit non-specific and I apologize for that. But I'm still in the beginning processes of learning this and I'd like to get an idea of whether or not this is a good way to go. Any guidance would be greatly appreciated.
Thanks,
Dennis
Firstly, on SQL Server support, Entity Framework doesn't really support SQL Server 2000. See this question:
EntityFramework SQL Server 2000?
On the question of supporting multiple versions, you have the right idea: generate an initial migration for the oldest version first, then incrementally alter the model and generate a migration for each version change. This will be a pain, as migrations are opinionated about how they represent the model in the database, and you will do a lot of messing about to end up with a model and a set of migrations that fully represent it. Specific areas of concern are indexes, column lengths, data types, stored procedures, triggers, functions, and partitioning.
The Sql() method gets you around most of these issues; methods like CreateIndex and AlterColumn are also helpful in migrations.
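For instance, a hand-edited migration can ship a stored procedure alongside ordinary schema changes (a sketch; the object names are illustrative):

using System.Data.Entity.Migrations;

public partial class AddGetOrdersProc : DbMigration
{
    public override void Up()
    {
        // Sql() runs arbitrary DDL, so stored procedures can ride along.
        Sql(@"CREATE PROCEDURE dbo.GetOrders @CustomerId int AS
              SELECT * FROM dbo.Orders WHERE CustomerId = @CustomerId;");
        CreateIndex("dbo.Orders", "CustomerId");
    }

    public override void Down()
    {
        DropIndex("dbo.Orders", new[] { "CustomerId" });
        Sql("DROP PROCEDURE dbo.GetOrders;");
    }
}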
For automating this, the migration commands are available as PowerShell cmdlets, and the same functionality is exposed as plain .NET objects (notably the DbMigrator class), so it can be called programmatically from your utility.
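A sketch of what that could look like from, say, a WPF button handler (Configuration is the migrations configuration class that Enable-Migrations generates; the migration name is illustrative):

using System;
using System.Data.Entity.Migrations;

var migrator = new DbMigrator(new Configuration());

// Show the user what would run before touching the database.
foreach (var name in migrator.GetPendingMigrations())
    Console.WriteLine("Pending: " + name);

migrator.Update();  // upgrade to the latest migration
// migrator.Update("201305120000000_InitialCreate");  // or target a specific
// version; targeting an earlier migration runs the Down() methods for you.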
As this question is a year old, I assume you will have made a decision on whether to do this. My opinion is that it is hard to see that it's worth the effort. If you were re-platforming the code base that uses this database to Entity Framework then it would make sense. Otherwise there are bound to be better tools out there for database version management. My first port of call would be Redgate.
Ok, here's the thing.
I have a good JS background, had my share of JS in the past, and have lots of cool bare-bones tools I take with me from project to project that act like a library.
I'm trying to work out how to structure my work with CouchDB.
Now, after getting used to the luxury of cool tools that you wrote yourself and that simplify the language for you, I find it a little frustrating to write so many things in a bare-bones manner.
I'm looking for a way to load into the database context a limited, highly efficient, generic set of tools that focus on the pure language and make working with it much more groovy (and gosh, no, I'm not talking about jQuery or any of the even bulkier libraries out there).
If, on top of that, there were a way to add some of my own logic tools (BL model functions) to the execution context of the CouchDB JS engine, that would be a great and admirable power and would make CouchDB the new home for a JavaScripter like me.
Maybe I'm aiming too high.
I'd be satisfied with a way to allocate a set of extensions even for a specific database, and I don't mind doing it for every database separately. Or, worse, adding it to every design document, so I can teach, for example, several views in the same design doc what a Person is and what a Worker is, and use their methods to retrieve data according to logic coded in a reusable manner.
Can anybody point me the way?
Whatever direction you can point me in, I'll be very, very grateful.
If there are ways to do all of these - then great.
Trust me to know which logic belongs in which layer...
You open up my possibilities - I promise to use them :D
CouchDB now supports code sharing as CommonJS modules.
http://docs.couchbase.org/couchdb-release-1.1/index.html#couchdb-release-1.1-commonjs
http://caolanmcmahon.com/posts/commonjs_modules_in_couchdb
In this way, you can share your javascript modules between views, lists, and shows in the same design doc. (Server-side)
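For example, a design document can keep a module under views/lib and require() it from a map function (the layout follows CouchDB's CommonJS support; the Person logic is illustrative):

{
  "_id": "_design/people",
  "views": {
    "lib": {
      "person": "exports.isWorker = function (doc) { return doc.type === 'person' && !!doc.employer; }"
    },
    "workers": {
      "map": "function (doc) { var person = require('views/lib/person'); if (person.isWorker(doc)) emit(doc.name, null); }"
    }
  }
}

Note that map functions can only require modules stored under views/lib of the same design doc; shows, lists, and other functions can require modules from anywhere in the design doc.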
Also, you can load these modules on the browser side with this library:
https://github.com/couchapp/couchapp/blob/master/couchapp/templates/vendor/couchapp/_attachments/jquery.couch.app.js
You also might want to look at Kanso:
http://kansojs.org/
It does a really good job of making your JavaScript work seamlessly between the server and client.
You can find some helpful tools here: https://github.com/vivekpathak/casters
The running examples and test cases may particularly help you.