Knex JS: check if a Knex seed file has already been run - node.js

I don't want to duplicate records in the table, so if I run the seed file more than once, it should not duplicate records in the database.
Is there a way to check whether the file has already been seeded?
Alternatively, inside the seed file, could we check whether the record already exists and skip the insert if it does? Something like that.
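One way to make a seed idempotent is to check for each record before inserting it. A minimal sketch, assuming a hypothetical users table with a unique email column:

exports.seed = async function (knex) {
  const rows = [
    { email: 'alice@example.com', name: 'Alice' },
    { email: 'bob@example.com', name: 'Bob' },
  ];
  for (const row of rows) {
    // Skip the insert if a row with this email already exists.
    const exists = await knex('users').where({ email: row.email }).first();
    if (!exists) {
      await knex('users').insert(row);
    }
  }
};

On recent Knex versions, and databases that support it, you can instead lean on a unique constraint and write knex('users').insert(rows).onConflict('email').ignore(), letting the database discard duplicates in one statement.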

Related

How to write "Drop index xxx if exists" in spanner?

I don't see a way to do so; does anyone know how to achieve this in Spanner?
drop index testindex1 if exists
Our scenario:
On day 10, we created an index testindex1, and this change (schema file) may have been deployed to some or all production environments.
On day 30, we decided we actually don't need testindex1, so we would like to drop it wherever it exists. We are not sure which production databases it has been created in.
Is there a way to ignore a not-found error in the middle of running a batch of DDL statements?
DROP ... IF EXISTS is not supported in Cloud Spanner.
What is the use case of this? Can you ignore the not-found error from the drop?
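If the batch cannot tolerate a not-found error, one workaround is to query INFORMATION_SCHEMA first and only issue the DROP when the index actually exists. A rough sketch with the Node.js client (the instance/database IDs and function name are placeholders):

const { Spanner } = require('@google-cloud/spanner');

async function dropIndexIfExists(instanceId, databaseId, indexName) {
  const spanner = new Spanner();
  const database = spanner.instance(instanceId).database(databaseId);

  // Check the catalog for the index before trying to drop it.
  const [rows] = await database.run({
    sql: 'SELECT index_name FROM information_schema.indexes WHERE index_name = @name',
    params: { name: indexName },
  });

  if (rows.length > 0) {
    // Spanner DDL does not take query parameters, so the (trusted)
    // index name is interpolated directly into the statement.
    const [operation] = await database.updateSchema([`DROP INDEX ${indexName}`]);
    await operation.promise(); // wait for the long-running DDL operation
  }
}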

PostgreSQL ERROR: could not open file "base/.../..."

There are many queue_promotion_n tables where n is from 1 to 100.
There is an error on table 73 with a fairly simple query:
SELECT count(DISTINCT queue_id)
FROM "queue_promotion_73"
WHERE status_new > NOW() - interval '3 days';
ERROR: could not open file "base/16387/357386324.1" (target block
200005): No such file or directory
The DB has been up for 23 days. How can I fix this?
Check that you have up-to-date backups (or verify that your DB replica is in sync).
The PostgreSQL wiki recommends stopping the DB and rsyncing all the PostgreSQL files to a safe location.
The file where the table is physically stored seems to be missing. You can check where PostgreSQL stores the data on disk using:
SELECT pg_relation_filepath('queue_promotion_73');
pg_relation_filepath
----------------------
base/16387/357386324
(1 row)
If you are sure that your hard drives and RAID controller are working fine, you can try rebuilding the table. It is a good idea to try this on a replica or on a backup snapshot of the database first.
VACUUM FULL queue_promotion_73;
Check the relation path again:
SELECT pg_relation_filepath('queue_promotion_73');
It should be different, and hopefully all the required files will be in place.
The cause could be a hardware issue, so make sure to check the consistency of the DB.
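Since there are 100 queue_promotion_n tables, it may be worth listing the on-disk path of each one so you can check for other missing files on the server. A small sketch with the node-postgres (pg) client; connection settings are assumed to come from the usual PG* environment variables:

const { Client } = require('pg');

async function listRelationPaths() {
  const client = new Client(); // reads PGHOST, PGUSER, etc. from the environment
  await client.connect();
  for (let n = 1; n <= 100; n++) {
    const table = `queue_promotion_${n}`;
    const { rows } = await client.query(
      'SELECT pg_relation_filepath($1::regclass) AS path',
      [table]
    );
    console.log(`${table}: ${rows[0].path}`);
  }
  await client.end();
}

listRelationPaths().catch(console.error);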

Sequelize-cli how to create seed files from an existing database?

Issue:
In order to start with a clean environment when developing a web app, I would like to be able to retrieve some data from an existing DB (say, the first 10 rows of every table) in order to create one Sequelize seed file per table. It would then be possible to seed an empty DB with these data, matching the corresponding models and migrations.
I have found the tool named sequelize-auto, which seems to work fine for generating a model file from an existing DB (beware of already having, for example, a users.js model; it will be overwritten!): https://github.com/sequelize/sequelize-auto.
This tool will create a model file, but neither a migration nor a seed file.
Question:
Is there a way to build a seed file from an existing database?
Found this cool module; you can create (dump) a seed with the command:
npx sequeliseed generate table_name --config
https://www.npmjs.com/package/sequeliseed
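If you would rather not add a dependency, a seed file can also be generated by hand: query the first rows of a table and write them out as a bulkInsert seeder. A rough sketch (the connection URL, table name, output path, and Postgres-style identifier quoting are assumptions):

const fs = require('fs');
const { Sequelize } = require('sequelize');

async function dumpSeed(tableName) {
  const sequelize = new Sequelize(process.env.DATABASE_URL);
  // Grab the first 10 rows as plain objects.
  const rows = await sequelize.query(
    `SELECT * FROM "${tableName}" LIMIT 10`,
    { type: Sequelize.QueryTypes.SELECT }
  );
  // Emit a standard sequelize-cli seeder that bulk-inserts those rows.
  const seed = `module.exports = {
  up: (queryInterface) =>
    queryInterface.bulkInsert('${tableName}', ${JSON.stringify(rows, null, 2)}),
  down: (queryInterface) =>
    queryInterface.bulkDelete('${tableName}', null, {}),
};\n`;
  fs.writeFileSync(`seeders/${Date.now()}-${tableName}.js`, seed);
  await sequelize.close();
}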

CouchDB bulk update won't create document if needed

I am optimizing a script I wrote last year that reads documents from a source Couch db, modifies each doc, and writes the new doc into a destination Couch db.
So the previous version of the script did the following:
1. read a document from the source db
2. modify the document
3. write the doc into the destination db
What I'm trying to do is pile up the docs to write in a list and then write them in bulk (let's say 100 at a time) to the destination db to improve performance.
What I found out is that when the bulk upload writes a list of docs to the destination db, if the list contains a doc whose "_id" does not exist in the destination db, then that document won't be written.
The return value reports "success: true" even though, after the copy happened, there is no such doc in the destination db.
I tried disabling "delayed_commits" and using the "all_or_nothing" flag, but nothing changed. I cannot find any info on Stack Overflow or in the documentation, so I'm quite lost.
Thanks
To future generations: what I was experiencing is a known bug, and it should be fixed in the next release.
https://issues.apache.org/jira/browse/COUCHDB-1415
The actual workaround is to write a document that is slightly different each time. I added an otherwise useless field called "timestamp" whose value is the timestamp of when I run my script.
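A sketch of that workaround with the nano CouchDB client; the package choice, database names, and batch size are assumptions:

const nano = require('nano')('http://localhost:5984');
const source = nano.db.use('source_db');
const dest = nano.db.use('dest_db');

async function copyInBatches(batchSize = 100) {
  const { rows } = await source.list({ include_docs: true });
  const runStamp = Date.now();
  const docs = rows
    .filter(({ id }) => !id.startsWith('_design/')) // skip design docs
    .map(({ doc }) => ({
      ...doc,
      _rev: undefined,     // dropped on serialization so the destination assigns its own revision
      timestamp: runStamp, // the "slightly different each time" field
    }));
  for (let i = 0; i < docs.length; i += batchSize) {
    await dest.bulk({ docs: docs.slice(i, i + batchSize) });
  }
}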

Clearing XHProf Data not working

I got XHProf working with XHGui. I would like to clear the data and restart profiling fresh, for a certain site or globally. How do I clear/reset XHProf? I assume I have to delete the logs in MongoDB, but I am not familiar with Mongo and I don't know which collections it stores its info in.
To clear XHProf with XHGui, log into the mongo console and drop the results collection as follows:
mongo
db.results.drop()
The first line logs into the mongo db console. The last command drops the results collection, which is going to be recreated by XHGui on the next request that is profiled.
Some other useful commands:
show collections  // shows all collections
use results       // switches database, the same meaning as in MySQL I believe
db.results.help() // lists all commands available for the results collection
I hope this helps.
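To clear the profiles of a single site rather than everything, you can remove only the matching documents instead of dropping the whole collection. How the host is recorded varies with the XHGui version, so inspect a document first; the meta.SERVER.SERVER_NAME field below is an assumption:

db.results.findOne()  // inspect one document to confirm the field layout
db.results.remove({ 'meta.SERVER.SERVER_NAME': 'example.com' })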
I had a similar issue in my Drupal setup, using the Devel module in Drupal.
After a few checks and some reading on how the xhprof library saves its data, I was able to figure out where the data is being saved.
The library checks whether a path is defined in php.ini:
xhprof.output_dir
If there is nothing defined in your php.ini, it falls back to the system temp dir:
sys_get_temp_dir()
In short, print out these values to find the xhprof data:
$xhprof_dir = ini_get("xhprof.output_dir");
$systemp_dir = sys_get_temp_dir();
If $xhprof_dir doesn't return any value, check $systemp_dir; the xhprof data should be there in files with a .xhprof extension.
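Once the directory is known, clearing the file-based profiles is just a matter of deleting those files. A minimal Node sketch; the directory argument is whatever the PHP snippet above printed, and the /tmp fallback is an assumption:

const fs = require('fs');
const path = require('path');

const dir = process.argv[2] || '/tmp'; // pass the xhprof.output_dir found above
for (const name of fs.readdirSync(dir)) {
  if (name.endsWith('.xhprof')) {
    fs.unlinkSync(path.join(dir, name)); // remove one stored profile run
  }
}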
