I would like to include Postgres interaction in my integration tests, i.e. not mock the database part, and I need help figuring out the best way to do the test cleanup.
My setup is Node.js, Postgres, Sequelize, and Karma+Mocha. Currently, before running the tests a new database is created and migrated; after each test I run a raw query that truncates all the tables; and after all test cases are finished the test database is dropped. As you probably guessed, the execution time for running tests like this is pretty slow.
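For context, the per-test cleanup currently looks roughly like the sketch below; the helper names (createAndMigrateTestDb, dropTestDb) and the tableNames list are placeholders standing in for what my setup actually does:

before(async () => {
  await createAndMigrateTestDb(); // create a fresh database and run the migrations
});

afterEach(async () => {
  // run a raw TRUNCATE for every table, one statement at a time
  for (const table of tableNames) {
    await db.sequelize.query(`TRUNCATE TABLE "${table}" RESTART IDENTITY CASCADE;`);
  }
});

after(async () => {
  await dropTestDb(); // drop the test database once the whole suite is done
});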
I was wondering if there is a way to speed the process up. Is there an in-memory Postgres database that I could use for my test cases (I've searched for one for a while but couldn't find it), or something like that?
To be more precise, I'm looking for a way to clear the database after a test has written something to it, in a way that does not require truncating all the tables after every test case.
Incorporated https://stackoverflow.com/a/12082038/2018521 into my cleanup:
afterEach(async () => {
  await db.sequelize.query(`
    DO
    $func$
    BEGIN
      EXECUTE
        (SELECT 'TRUNCATE TABLE ' || string_agg(oid::regclass::text, ', ') || ' RESTART IDENTITY CASCADE'
         FROM pg_class
         WHERE relkind = 'r' -- only tables
           AND relnamespace = 'public'::regnamespace
        );
    END
    $func$;
  `);
});
Truncate now runs almost instantly.
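One caveat I should note as my own assumption rather than something from the linked answer: this truncates every table in the public schema, including Sequelize's migration bookkeeping table (typically SequelizeMeta); if that table needs to survive the cleanup, it can be excluded with an extra condition such as AND relname <> 'SequelizeMeta' in the WHERE clause.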
Does Prisma support running one-time DML statements, such as UPDATE, automatically?
For example, let's say we want all emails in a table to be lowercase. We make a change in our API so that all future accounts/emails are lowercase; however, we want to update EXISTING emails to be lowercase too.
Running npx prisma generate and npx prisma migrate executes DDL to keep your schema in sync. However, I do not see a place to hold database "patch" files, which are generally run once in order to update existing records in a database.
Prisma doesn't support running one-time DML statements automatically.
You would need to use something like a cron job if you want to run some function at specific intervals.
For a one-time change you could just invoke the function once.
As for your particular use case, you could achieve it with a function like the one below.
// assumes an instantiated client in scope, e.g. const prisma = new PrismaClient()
async function main() {
  const result = await prisma.$executeRaw`UPDATE "User" SET email = lower(email)`;
  console.log(result); // number of rows affected
}
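Assuming the function lives in a small standalone script (say scripts/lowercase-emails.js, a hypothetical name), you would run it once with node scripts/lowercase-emails.js after deploying the API change; Prisma itself will not schedule or re-run it for you.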
You can learn more about raw database access in this guide.
I have created my first backend server using Node, Express, Sequelize and nodemon. I have also set up the tables I want and initialised the data I want inserted into those tables when it runs (to save me time entering it again). There is also separate data that I have entered manually in the pgAdmin browser, and other data I have entered via the CRUD functions. At the minute I am unable to enter large volumes of data, as my tables keep flagging errors and it looks like they are being overwritten.
I am having trouble getting my server to run stably, and I wasn't sure if it is related to the bit of code below in my server.js file. Does force: true mean that the existing table data will be overwritten each time the server is run, or do I need to add an "if tables exist" type function? If I were to add, say, a foreign key, would the whole table be overwritten?
I assumed the best practice was to have the code for creating my tables in the project; I have previously created my own tables in phpMyAdmin when I used PHP. However, this is my first time creating a backend server and using the Sequelize ORM, and I don't want to keep losing the data I have entered.
db.sequelize.sync({ force: true })
  .then(() => {
    console.log(`Drop and resync database with { force: true }`);
    initial();
  });
You are using { force: true }, which is equivalent to DROP TABLE IF EXISTS, so whenever you restart your server it drops your existing tables along with their data. If you don't want to lose your data, I suggest you remove { force: true }.
{ force: true } creates a clean state whenever you restart, so it is expected that your data goes away. This can be handy in development, when you are testing things out and changing your schema as you build the application, but it is not ideal for production.
You can simply remove it, or you can add a check so that your tables are not dropped when running in production mode.
Here is a good example:
if (process.env.NODE_ENV === "production") {
  // do a non-forced sync in production so existing tables and data are kept
  db.sequelize.sync();
} else {
  db.sequelize.sync({ force: true });
}
I'm wondering if there is a way to have Sequelize prefix the table name with the database name in a specific query.
When Sequelize runs a query it looks like this:
SELECT "desired_field" FROM "user_account" AS "user_account" WHERE "user_account"."username" = 'jacstrong' LIMIT 1;
And it returns nothing.
However when I run a manual query from the command line it returns the data I want.
SELECT "desired_field" FROM database_name."user_account" AS "user_account" WHERE "user_account"."username" = 'jacstrong' LIMIT 1;
Is there any way to make Sequelize do this?
Note: everything is running fine in my production environment, but I exported the production DB and ran pg_restore on my local machine, and the application isn't connecting to it correctly.
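For what it's worth, and purely as an assumption on my part (not something from the original question or answers): in Postgres a single query cannot reference another database, so the database_name prefix above is really a schema. If that is the case, Sequelize can generate the prefix via the model's schema option, roughly like this:

const { Sequelize, DataTypes } = require('sequelize');
const sequelize = new Sequelize(process.env.DATABASE_URL); // connection details are placeholders

const UserAccount = sequelize.define('user_account', {
  username: DataTypes.STRING,
  desired_field: DataTypes.STRING, // placeholder attributes for illustration
}, {
  tableName: 'user_account',
  schema: 'database_name', // generated SQL becomes database_name."user_account"
});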
I do this all the time. Before you do the backup, make sure you set Dump Options --> Do Not Save --> Owner to yes. Sometimes mine still looks like it fails, but really it doesn't. I also don't bother dropping the whole database every time; I just drop the schema I am restoring. So in reality you can just create your database locally with whatever credentials your dev environment is using, drop the desired schema or schemas, and restore the dump with no owner whenever you want to blow your data away.
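For reference, a rough command-line equivalent of that workflow (database names and file paths are placeholders, not from the original answer):

$ pg_dump -Fc -f /tmp/prod.dump production_db
$ psql -d local_db -c 'DROP SCHEMA public CASCADE; CREATE SCHEMA public;'
$ pg_restore --no-owner -d local_db /tmp/prod.dump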
I use knex.js and it's a good query builder for PostgreSQL. I haven't found any docs that explain how to do a migration rollback the right way.
For now, I just write the reverse migration operation in the down function to do the rollback. Is this the correct way?
import * as Knex from 'knex';

exports.up = async (knex: Knex): Promise<any> => {
  await knex.schema.raw(`
    ALTER TABLE IF EXISTS "GOOGLE_CHANNEL"
      ADD COLUMN IF NOT EXISTS google_channel_ad_group_cpc_bid INTEGER NOT NULL DEFAULT 0;
  `);
  await knex.schema.raw(`
    UPDATE "GOOGLE_CHANNEL" as gc
    SET
      google_channel_ad_group_cpc_bid = 7
    FROM "CAMPAIGN_TEMPLATE" as ct
    WHERE ct.campaign_channel_id = gc.campaign_channel_id;
  `);
};

exports.down = async (knex: Knex): Promise<any> => {
  // TODO: migration rollback
  await knex.schema.raw(``);
};
I have two concerns:
If there are a lot of SQL statements in the up function, I have to write a lot of SQL statements in the down function too in order to roll back the migration.
Why doesn't knex.js do the migration rollback for us without our writing the reverse operations? I mean, couldn't knex.js take a snapshot or record a savepoint of the database?
Yes, to roll back you use the down function of a migration script. When you run knex migrate:rollback, the down function will run. Knex has meta tables in the database that are used to figure out which migrations have run and which have not.
For example:
exports.up = function (knex, Promise) {
  return knex.schema
    .createTable('role', function (table) {
      table.increments('role_id').primary();
      table.string('title').notNullable().unique();
      table.string('description');
      table.integer('level').notNullable();
    })
    .createTable('user_account', function (table) {
      table.increments('user_id').primary();
      table.integer('role_id').references('role_id').inTable('role').notNullable();
      table.string('username').notNullable().unique();
      table.string('passwordHashed').notNullable();
      table.string('email', 50).notNullable().unique();
    });
};

exports.down = function (knex, Promise) {
  return knex.schema
    .dropTable('user_account')
    .dropTable('role');
};
Here I create two tables in the up function. The user_account table has a foreign key constraint linking it to the role table, which means I have to drop the user_account table before the role table in the down function.
In your case, you use an update statement. In the down function you have to either run a new update with a hard-coded value (the old one from before the migration), or make sure you store the old values in a history table; see the sketch below.
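For the migration in the question, a minimal sketch of what down could look like is shown here; this is only an illustration, not code from the original post. Because the up function both added the column and populated it, simply dropping the column reverses everything. If the update had targeted a pre-existing column, the old values would instead have to be restored from a hard-coded value or from a history table.

exports.down = async (knex) => {
  // dropping the column also discards the values written by the UPDATE in up()
  await knex.schema.raw(`
    ALTER TABLE IF EXISTS "GOOGLE_CHANNEL"
      DROP COLUMN IF EXISTS google_channel_ad_group_cpc_bid;
  `);
};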
As for your concerns:
Yes, if you add a lot of stuff, you also have to add a lot of code to reverse whatever you are doing. However, you can skip writing the down scripts, but then you won't be able to roll back. Some (many?) choose to only go forward and never roll back. If they have to fix something, they don't roll back but make a new migration script with the fix.
I would recommend that you create the down functions in the beginning. You can consider skipping them when the time is right. People who don't write down functions usually have to test their migrations more thoroughly in a test or staging environment before deploying to production. This is to make sure everything works, because they can't roll back after all.
I can't really answer for the Knex creators here. However, what you are describing as a potential solution is basically a backup of the database before a migration is done. After all, a migration does more than just change the layout of the tables, etc.; a migration script will typically add or remove rows as well. You can use the backup approach, but you have to take the backups yourself.
Knex is a fairly simple query builder. If you want the migration scripts to be written for you, you might want to go for a full-blown OR mapper.
I have been using PhantomJS to do some heavy lifting for me in a server-side DOM environment. Until now I have been keeping my data structures in memory (i.e. doing nothing special with them) and everything was fine.
But recently, under some use cases, I started running into the following problems:
memory usage becoming too high, making swap kick in and seriously affecting my performance
not being able to resume from the last save point, since in-memory data structures are not persistent (obviously)
This forced me to look for a database solution to use from Phantom, but again I am running into issues while deciding on a solution:
I don't want my performance to get too affected.
it has to be persistent and queryable
how do I even connect to a database from inside a Phantom script?
Can anyone guide me to a satisfactory solution?
Note: I have almost decided on SQLite, but connecting to it from Phantom is still an issue. Node.js provides the sqlite3 module; I am trying to browserify it for Phantom.
Another note: Browserify didn't work! Back to square one! :-(
Thanks in advance!
PhantomJS's filesystem API allows you to read and write binary files with buf = fs.read(FILENAME, 'b') and fs.write(FILENAME, buf, 'b').
sql.js (https://github.com/kripken/sql.js/) gives you a JavaScript SQLite implementation you can run in PhantomJS.
Combine the two and you have a fast, persistent, queryable SQL database.
Example walkthrough
Get javascript SQLite implementation (saving to /tmp/sql.js)
$ wget https://raw.githubusercontent.com/kripken/sql.js/master/js/sql.js -O /tmp/sql.js
Create a test SQLite database using the command-line sqlite3 app (showing it is persistent and external to your PhantomJS application).
sqlite3 /tmp/eg.db
sqlite> CREATE TABLE IF NOT EXISTS test(id INTEGER PRIMARY KEY AUTOINCREMENT, created INTEGER NOT NULL DEFAULT CURRENT_TIMESTAMP);
sqlite> .quit
Save this test PhantomJS script, which adds entries to the test database so you can verify the behaviour.
$ cat /tmp/eg.js
var fs = require('fs'),
    sqlite3 = require('./sql.js'),
    dbfile = '/tmp/eg.db',
    sql = 'INSERT INTO test(id) VALUES (NULL)',
    // fs.read returns binary 'string' (not 'String' or 'Uint8Array')
    read = fs.read(dbfile, 'b'),
    // Database argument must be a 'string' (binary) not 'Uint8Array'
    db = new sqlite3.Database(read),
    write,
    uint8array;

try {
    db.run(sql);
} catch (e) {
    console.error('ERROR: ' + e);
    phantom.exit();
}

// db.export() returns 'Uint8Array' but we must pass a binary 'string' to write
uint8array = db.export();
write = String.fromCharCode.apply(null, Array.prototype.slice.apply(uint8array));
fs.write(dbfile, write, 'b');

db.close();
phantom.exit();
Run the PhantomJS script to test it.
$ /usr/local/phantomjs-2.0.0-macosx/bin/phantomjs /tmp/eg.js
Use an external tool to verify the changes were persisted.
sqlite3 /tmp/eg.db
sqlite> SELECT * FROM test;
id created
1 2015-03-28 10:21:09
sqlite>
Some things to keep in mind:
The database is modified on disk only when you call fs.write.
Any changes you make are invisible to external programs accessing the same SQLite database file until you call fs.write.
The entire database is read into memory with fs.read.
You may want to have different OS files for different tables -- or versions of tables -- depending on your application and the amount of data in the tables, to address the memory requirements you mentioned.
Passing what is returned by sqlite3.export() to fs.write will corrupt the SQLite database file on disk (it will no longer be a valid SQLite database file).
Uint8Array is NOT the correct type for the fs.write parameter.
Writing binary data in PhantomJS works like this:
var db_file = fs.open(db_name, {mode: 'wb', charset: ''});
db_file.write(String.fromCharCode.apply(null, db.export()));
db_file.close();
You have to set the charset to an empty string ('') because otherwise the write goes wrong.