Sequelize-cli how to create seed files from an existing database? - node.js

Issue:
In order to start from a clean environment when developing features for a web app, I would like to be able to retrieve some data from an existing DB (say, the first 10 rows of every table) in order to create one Sequelize seed file per table. It would then be possible to seed an empty DB with this data, matching the corresponding models and migrations.
I have found the tool named sequelize-auto, which seems to work fine for generating a model file from an existing DB (beware of already having, for example, a users.js model; it will be overwritten!): https://github.com/sequelize/sequelize-auto.
This tool will create a model file, but neither a migration nor a seed file.
Question:
Is there a way to build a seed file from an existing database?

Found this cool module. You can create (dump) a seed with the command:
npx sequeliseed generate table_name --config
https://www.npmjs.com/package/sequeliseed
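
If you prefer to write the seeder by hand, a sequelize-cli seed file is just a module with up/down functions that call queryInterface.bulkInsert / queryInterface.bulkDelete. A minimal sketch (the users table, its columns, and the file name are assumptions, not from the question):

// seeders/20240101000000-users.js (hypothetical file name)
'use strict';
module.exports = {
  async up(queryInterface, Sequelize) {
    // Rows copied (or dumped) from the existing database.
    await queryInterface.bulkInsert('users', [
      { id: 1, name: 'Alice', createdAt: new Date(), updatedAt: new Date() },
      { id: 2, name: 'Bob', createdAt: new Date(), updatedAt: new Date() },
    ]);
  },
  async down(queryInterface, Sequelize) {
    // Remove everything this seeder inserted.
    await queryInterface.bulkDelete('users', null, {});
  },
};

Running npx sequelize-cli db:seed:all then applies every seeder in the seeders folder against the (empty) target DB.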

Related

Knex JS Check if Knex seed file already seeds

I don't want to duplicate records in a table, so if I run a seed file more than once it should not duplicate records in the database.
Is there a way to check whether the file has already been seeded?
Or, within the seed file, perhaps we can check whether the record already exists and skip the insert if it does, something like that.
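
A sketch of the "check whether the record already exists, and skip the insert" idea from the question (the users table, its columns, and the file name are assumptions):

// seeds/01_users.js (hypothetical seed file)
exports.seed = async function (knex) {
  const rows = [
    { email: 'alice@example.com', name: 'Alice' },
    { email: 'bob@example.com', name: 'Bob' },
  ];
  for (const row of rows) {
    // Insert only if a row with this email does not exist yet,
    // so running the seed twice does not duplicate records.
    const existing = await knex('users').where({ email: row.email }).first();
    if (!existing) {
      await knex('users').insert(row);
    }
  }
};

On databases with upsert support you can get the same effect with knex('users').insert(rows).onConflict('email').ignore(), provided your knex version supports onConflict.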

How to initialize Alembic on an existing DB

I have an existing app which uses SQLAlchemy for DB access. It works well.
Now I want to introduce DB migrations, and I read alembic is the recommended way. Basically I want to start with the "current DB state" (not empty DB!) as revision 0, and then track further revisions going forward.
So I installed alembic (version 1.7.3) and put
from my_project.db_tables import Base
target_metadata = Base.metadata
into my env.py. The Base is just the standard SQLAlchemy Base = sqlalchemy.ext.declarative.declarative_base() within my project (which, again, works fine).
Then I ran alembic revision --autogenerate -m "revision0", expecting to see an upgrade() method that gets me to the current DB state from an empty DB. Or maybe an empty upgrade(), since it's the first revision, I don't know.
Instead, the upgrade() method is full of op.drop_index and op.drop_table calls, while downgrade() is all op.create_index and op.create_table. Basically the opposite of what I expected.
Any idea what's wrong?
What's the recommended way to "initialize" migrations from an existing DB state?
OK, I figured it out.
The existing production DB has lots of stuff that alembic revision --autogenerate is not picking up on. That's why its generated migration scripts are full of op.drops in upgrade(), and op.creates in downgrade().
So I'll have to manually clean up the generated scripts every time, or automate this cleanup somehow programmatically, outside of Alembic.

point of making a stub file- knex

For a test project I avoided explicitly creating a stub, but looking at the docs, they mention creating a stub file with something like:
knex migrate:make --stub
which, according to the configuration we specify, creates <filename>.stub. I looked at someone's starter pack and saw that it was a file similar to a migration file.
So, from my novice experience, altering and creating tables is possible from migrations, and filling in test data is possible from seeds. So why do we need a .stub file for both migrations and seeds (which, by the way, seem similar)? What is the whole concept of the .stub file?
The stub file is just a template that gets copied as your new migration.
E.g. you call knex migrate:make new_migration
and knex will create a file $timestamp_new_migration.$extension whose contents will be the contents of your stub file.
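
For illustration, a custom migration stub could be the skeleton below (the file name is an assumption), pointed at via the --stub flag from the question so every new migration starts from this template instead of the built-in one:

// migration.stub (hypothetical template file)
exports.up = async function (knex) {
  // TODO: schema changes go here, e.g. knex.schema.createTable(...)
};

exports.down = async function (knex) {
  // TODO: reverse whatever up() did
};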

Reverse bulk_insert alembic

So I have a case where the migration script only adds data from a specific seed file. This is written in the upgrade part of the migration script.
But in the downgrade part of the migration script, I want to reverse the inserts of that seed data file.
Just as there is bulk_insert, is there anything that will allow me to do the reverse?

Disable wrapping migration in a transaction with Node db-migrate

I need to use db-migrate to add an index to a Postgres database with CREATE INDEX CONCURRENTLY. However, db-migrate wraps all migrations in a transaction by default, and trying to create a concurrent index inside a transaction results in this error:
CREATE INDEX CONCURRENTLY cannot run inside a transaction block
I can't find any way to disable transactions as part of the db-migrate options, either CLI options or (preferably) as a configuration directive on the migration itself. Any idea if this can be accomplished?
It turns out that this can be solved on the command line by using --non-transactional. Reading the source, I can see that this sets an internal flag called notransactions, but it's not clear to me whether this can be set as part of the migration configuration or must be passed on the command line.
I kept getting the errors even when running with the --non-transactional flag.
The solution for me was to run with --non-transactional AND to put each CREATE INDEX CONCURRENTLY statement in its own separate migration file. It turns out you can't have more than one of them in the same file (same transaction block).
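
A sketch of what that looks like with db-migrate (index, table, and file names are made up); each migration contains exactly one CREATE INDEX CONCURRENTLY and is run with the flag:

// migrations/20240101000000-add-users-email-index.js (hypothetical)
exports.up = function (db) {
  // Must be the only statement in this migration, and it must be run
  // with: db-migrate up --non-transactional
  return db.runSql('CREATE INDEX CONCURRENTLY idx_users_email ON users (email);');
};

exports.down = function (db) {
  return db.runSql('DROP INDEX CONCURRENTLY idx_users_email;');
};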
