I am using Prisma with a data pipeline. I am fetching data from a different database and creating a new portal with a few changes to the database. The database can also change in production, and I don't want to lose any data from the data factory. Is it possible to run migrations without causing data loss?
You have to use prisma migrate resolve to mark certain migrations as already applied before running prisma migrate deploy; see the Prisma documentation for more information.
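For example, a minimal sketch (the migration name is a placeholder for whichever migration you want to mark as already applied):

    # mark the migration as already applied so deploy will skip it instead of re-running it
    npx prisma migrate resolve --applied "20240101000000_example_migration"
    # apply any remaining pending migrations
    npx prisma migrate deploy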
I want to automate my RDB. I usually use SQL Developer to compile, execute, and save my PL/SQL scripts to the database. Now I wish to build and deploy the scripts directly through GitLab, using a CI/CD pipeline. I am supposed to use Oracle Cloud for this purpose. I don't know how to achieve this; any help would be greatly appreciated.
Requirements: build and deploy PL/SQL scripts to the database using GitLab, where the username and password for the database connection are picked up from a vault in the cloud, not hardcoded. Oracle Cloud should be used for this purpose.
If anyone knows how to achieve this, please guide.
There are tools like Liquibase and Flyway. Those tools don't do miracles.
Liquibase has a list of changes (XML or YAML) to be applied to a database schema (optionally with an undo step).
It also keeps a journal table in each database environment, so it can track which changes were applied and which were not.
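For illustration, a minimal YAML changelog might look like this (the table and column names are invented; the journal Liquibase maintains is the DATABASECHANGELOG table it creates in each target schema):

    databaseChangeLog:
      - changeSet:
          id: create-customers
          author: demo
          changes:
            - createTable:
                tableName: customers
                columns:
                  - column:
                      name: id
                      type: number
                      constraints:
                        primaryKey: true
                  - column:
                      name: name
                      type: varchar2(100)
          # optional undo step, used when this changeSet is rolled back
          rollback:
            - dropTable:
                tableName: customers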
It cannot do powerful schema comparisons the way SQL Developer or Toad can.
It also cannot prevent the situation where a DML change applied to the production database blows up because it was only tested successfully on a data set 1000x smaller.
But it is still better than nothing, and it can be integrated with Ansible/GitLab and other CI/CD tools; a rough sketch of a GitLab job is below.
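This is only a sketch under assumptions: the job name, Docker image, changelog file, and variable names are illustrative, and the credentials are expected to be injected as masked CI/CD variables or pulled from a vault rather than hardcoded:

    deploy-db:
      image: liquibase/liquibase:latest
      script:
        # DB_URL, DB_USER and DB_PASS are placeholders supplied by the pipeline, not stored in the repo
        - liquibase --changeLogFile=changelog.yaml --url="$DB_URL" --username="$DB_USER" --password="$DB_PASS" update

For an Oracle database you may also need to make the JDBC driver available to the job.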
There is a working sample using the Liquibase integration with SQLcl in my project Oracle CI/CD demo.
To be totally honest:
It's a little out of date, because I use a trick for rollbacks; at the time of writing, Liquibase tagging was not supported. It is supported now.
The final integration with Jenkins is not done, but it's straightforward.
I am struggling with a problem with knex migrations in the production environment (Heroku). I have made some changes and created 8 new migrations to update the model; after creating them I have 47 migration files in total. After uploading the changes to Heroku I ran knex migrate:latest as usual and there was no effect; I received the message Already up to date.
I decided to take a look at the knex_migrations table to see if maybe something is wrong there. The result is that the new migrations were not written into that table (please see the picture for reference).
When we take a look at the migrations it looks like this:
Clearly we can see that knex was able to run "20200216191040_receiver_company.js" as the last one.
When I take a look at the knex_migrations_lock table, the only entry that exists there looks like this:
Does anyone know how to force knex to run the migrations? The version of knex that I use is 0.16.3.
EDIT:
I want to add that on my local machine I have no problems running the migrations. Additionally, I have checked that the migration files are present on the server: after pulling the current state out of Heroku I can see all of the migration files that were not executed.
If the migration's change is not actually implemented in the database, you can try to manually remove the record of that migration from the knex_migrations (or your configured migrations) table,
and then re-run the knex migrate:latest command.
But before you try this, make sure to keep a backup, or run it on your local machine or a development server first.
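A minimal sketch of that approach, assuming the default journal table name (the migration file name is just a placeholder):

    -- take a backup first
    -- remove the journal row for a migration whose change is not actually present in the schema,
    -- so that knex will pick it up again on the next run
    DELETE FROM knex_migrations WHERE name = '20200301120000_example_migration.js';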
When creating a second migration and attempting to migrate up to add it, the task fails because the first migration was already run. It seems to be running (and failing) rather than skipping previously successful/complete migrations.
I have tried with --no-check-order, but I don't want to do that every time I have new migrations to add.
I have my database locally.
Using node-pg-migrate and PostgreSQL.
In general, the order of migrations matters: you can, for example, alter the same type, and the last-run migration wins.
This can happen, e.g., if you have pull requests and merge them into the base branch in a different order than the migrations were created.
You now have two options: either rename the migration so it is sorted on disk in the correct order (before the not-yet-run migrations) and update the corresponding records in the pgmigrations table, or put the new migrations aside (so they do not trigger this error), run as many down migrations as needed, then restore the migration definitions on disk and run the up migrations.
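For the first option, a hedged sketch: pgmigrations is the default table node-pg-migrate records runs in, and both migration names below are placeholders, so check how the names are actually stored in your table before running anything like this:

    -- after renaming an already-run migration file on disk, keep the journal in sync with the new file name
    UPDATE pgmigrations
    SET name = '1600000000000_add-users'
    WHERE name = '1650000000000_add-users';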
Consider the following deployment workflow, where staging and production use the same database:
Deploy application to Azure Website staging slot
Run EF code first migrations in staging (updating the shared database)
Test application in staging
Swap endpoints so that what was deployed to staging is now in the production slot
This process worked for a few releases, but today it didn't. The issue is around migrations run in the Staging slot against the same database as the Production slot. We're very careful not to rename or remove any columns that would break production between steps 2 and 4. Today's migrations included a new foreign key and a script to create some tables for a forum control (there are no entities for these new tables; it's managed entirely by a third-party control, and we only added the migration so that we didn't have to run the install script manually).
We got the standard AutomaticMigrationsDisabledException below in the production slot after step 2.
System.Data.Entity.Migrations.Infrastructure.AutomaticMigrationsDisabledException:
Unable to update database to match the current model because there are
pending changes and automatic migration is disabled. Either write the
pending model changes to a code-based migration or enable automatic
migration. Set DbMigrationsConfiguration.AutomaticMigrationsEnabled to
true to enable automatic migration.
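For reference, the "code-based migration" the message asks for is normally scaffolded from the Package Manager Console, roughly like this (the migration name is just a placeholder):

    # scaffold an explicit migration for whatever pending model changes EF has detected
    Add-Migration PendingModelChanges
    # apply it to the target database
    Update-Database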
Is there a list of known changes that cause this exception? Or are migrations just not designed to be used this way?
We've also considered something like what SO does, but we'd rather not reinvent the wheel if we don't have to.
I have a Node.js app with a PostgreSQL database.
I've modified the source code, and now, to deploy it, I'll need to push the source code to the server from the git repo. The changes will be applied, the server will be restarted, and that's all.
Is there a similar approach for deploying Postgres changes? Or will I always need to manually write a bunch of queries that modify the database schema?