How do I insert records in a node-pg-migrate migration? - node.js

I'm trying to use node-pg-migrate to handle migrations for an ExpressJS app. I can translate most of the SQL dump into pgm.func() type calls, but I can't see any method for handling actual INSERT statements for initial data in my solution's lookup tables.

It is possible using the pgm.sql catch-all:
pgm.sql(`INSERT INTO users (username, password, created, forname, surname, department, reviewer, approver, active) VALUES
('rd@example.com', 'salty', '2019-12-31 11:00:00', 'Richard', 'Dyce', 'MDM', 'No', 'No', 'Yes');`)
Note the use of backticks (`) to allow breaking the SQL statement across multiple lines.

You can use raw SQL if you need to.
Create a migration file with the extension .sql and write the usual statements.
This article has a great example.
My example:
-- Up Migration
CREATE TABLE users
(
    id BIGSERIAL NOT NULL PRIMARY KEY,
    username VARCHAR(50) NOT NULL,
    email VARCHAR(50) NOT NULL,
    password VARCHAR(50) NOT NULL,
    class_id INTEGER NOT NULL,
    created_at DATE NOT NULL,
    updated_at DATE NOT NULL
);
CREATE TABLE classes
(
    id INTEGER NOT NULL PRIMARY KEY,
    name VARCHAR(50) NOT NULL,
    health INTEGER NOT NULL,
    damage INTEGER NOT NULL,
    attack_type VARCHAR(50) NOT NULL,
    ability VARCHAR(50) NOT NULL,
    created_at DATE NOT NULL,
    updated_at DATE NOT NULL
);
INSERT INTO classes (id, name, health, damage, attack_type, ability, created_at, updated_at)
VALUES (0, 'Thief', 100, 25, 'Archery Shot', 'Run Away', NOW(), NOW());
-- Down Migration
DROP TABLE users;
DROP TABLE classes;
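If a migration only seeds lookup data into tables created by an earlier migration, the down migration can delete just the seeded rows instead of dropping tables. A minimal sketch in the same .sql format, reusing the classes row from the example above:
-- Up Migration
INSERT INTO classes (id, name, health, damage, attack_type, ability, created_at, updated_at)
VALUES (0, 'Thief', 100, 25, 'Archery Shot', 'Run Away', NOW(), NOW());
-- Down Migration
DELETE FROM classes WHERE id = 0;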

Related

SyntaxException: line 2:10 no viable alternative at input 'UNIQUE' (...NOT EXISTS books ( id [UUID] UNIQUE...)

I am trying the following code to create a keyspace and a table inside of it:
CREATE KEYSPACE IF NOT EXISTS books WITH REPLICATION =
    { 'class': 'SimpleStrategy', 'replication_factor': 3 };
CREATE TABLE IF NOT EXISTS books (
    id UUID PRIMARY KEY,
    user_id TEXT UNIQUE NOT NULL,
    scale TEXT NOT NULL,
    title TEXT NOT NULL,
    description TEXT NOT NULL,
    reward map<INT,TEXT> NOT NULL,
    image_url TEXT NOT NULL,
    video_url TEXT NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
But I do get:
SyntaxException: line 2:10 no viable alternative at input 'UNIQUE'
(...NOT EXISTS books ( id [UUID] UNIQUE...)
What is the problem and how can I fix it?
I see three syntax issues. They are mainly related to the fact that CQL != SQL.
The first is that NOT NULL is not valid at column definition time. Cassandra doesn't enforce constraints like that at all, so for this case, just get rid of all of them.
Next, Cassandra CQL does not allow default values, so this won't work:
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
Providing the current timestamp for created_at is something that will need to be done at write time. Fortunately, CQL has a few built-in functions to make this easier:
INSERT INTO books (id, user_id, created_at)
VALUES (uuid(), 'userOne', toTimestamp(now()));
In this case, I've invoked the uuid() function to generate a Type-4 UUID. I've also invoked now() for the current time. However now() returns a TimeUUID (Type-1 UUID) so I've nested it inside of the toTimestamp function to convert it to a TIMESTAMP.
Finally, UNIQUE is not valid.
user_id TEXT UNIQUE NOT NULL,
It looks like you're trying to make sure that duplicate user_ids are not stored with each id. You can help to ensure uniqueness of the data in each partition by adding user_id to the end of the primary key definition as a clustering key:
CREATE TABLE IF NOT EXISTS books (
    id UUID,
    user_id TEXT,
    ...
    PRIMARY KEY (id, user_id));
This PK definition will ensure that data for books will be partitioned by id, containing multiple user_id rows.
I'm not sure what the relationship between books and users is, though. If one book can have many users, then this will work. If one user can have many books, then you'll want to switch the order of the keys to this:
PRIMARY KEY (user_id, id));
In summary, a working table definition for this problem looks like this:
CREATE TABLE IF NOT EXISTS books (
    id UUID,
    user_id TEXT,
    scale TEXT,
    title TEXT,
    description TEXT,
    reward map<INT,TEXT>,
    image_url TEXT,
    video_url TEXT,
    created_at TIMESTAMP,
    PRIMARY KEY (id, user_id));

Integrity error while running migrate to add new column in Django

I have a MySQL table with a few million entries. While trying to add a new column to this table using a Django migration, it fails after some time (possibly due to the huge amount of data) with the error below:
Error django.db.utils.IntegrityError: (1062, "Duplicate entry '123456-softwareengineer' for key 'api_experience_entity_id_123a45b6c789def0_uniq'")
I checked manually, but the entity_id (123456) didn't have any duplicate entries. I was able to see that this entry was being updated while the migration was running.
What could be a possible solution to perform the migration without affecting the data and with near-zero downtime for the system?
Below are the details of my table:
SHOW CREATE TABLE api_experience;

CREATE TABLE `api_experience` (
    `id` int(11) NOT NULL AUTO_INCREMENT,
    `type` varchar(100) NOT NULL,
    `duration` varchar(200) NOT NULL,
    `created_at` datetime NOT NULL,
    `updated_at` datetime NOT NULL,
    `entity_id` int(11) NOT NULL,
    `designation` varchar(100),
    PRIMARY KEY (`id`),
    UNIQUE KEY `api_experience_entity_id_123a45b6c789def0_uniq` (`entity_id`,`type`),
    KEY `api_experience_599dcce2` (`type`),
    KEY `api_experience_5527459a` (`designation`),
    KEY `api_experience_created_at_447d412d906baea1_uniq` (`created_at`),
    CONSTRAINT `api_entity_id_3fbe8eb2deb42063_fk_api_entity_id` FOREIGN KEY (`entity_id`) REFERENCES `api_entity` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=773570676 DEFAULT CHARSET=utf8
I have tried running the migration at midnight during low traffic, but it didn't help.

Postgres: complex foreign key constraints

I have this schema
CREATE TABLE public.item (
    itemid integer NOT NULL,
    itemcode character(100) NOT NULL,
    itemname character(100) NOT NULL,
    constraint PK_ITEM primary key (ItemID)
);
create unique index ak_itemcode on Item(ItemCode);
CREATE TABLE public.store (
    storeid character(20) NOT NULL,
    storename character(80) NOT NULL,
    constraint PK_STORE primary key (StoreID)
);
CREATE TABLE public.storeitem (
    storeitemid integer NOT NULL,
    itemid integer NOT NULL,
    storeid character(20) NOT NULL,
    constraint PK_STOREITEM primary key (ItemID, StoreID),
    foreign key (StoreID) references Store(StoreID),
    foreign key (ItemID) references Item(ItemID)
);
create unique index ak_storeitemid on StoreItem (StoreItemID);
And here is the data in those tables:
insert into Item (ItemID, ItemCode,ItemName)
Values (1,'abc','abc');
insert into Item (ItemID, ItemCode,ItemName)
Values (2,'def','def');
insert into Item (ItemID, ItemCode,ItemName)
Values (3,'ghi','ghi');
insert into Item (ItemID, ItemCode,ItemName)
Values (4,'lmno','lmno');
insert into Item (ItemID, ItemCode,ItemName)
Values (5,'xyz','xyz');
insert into Store (StoreID, StoreName)
Values ('B1','B1');
insert into StoreItem (StoreItemID, StoreID, ItemID)
Values (1,'B1',1);
insert into StoreItem (StoreItemID, StoreID, ItemID)
Values (2,'B1',2);
insert into StoreItem (StoreItemID, StoreID, ItemID)
Values (3,'B1',3);
Now I have created this new table:
CREATE TABLE public.szdata (
    storeid character(20) NOT NULL,
    itemcode character(100) NOT NULL,
    textdata character(20) NOT NULL,
    constraint PK_SZDATA primary key (ItemCode, StoreID)
);
I want to have foreign key constraints set up so that inserting a record which is not in StoreItem will fail. For example, this must fail:
insert into SZData (StoreID, ItemCode, TextData)
Values ('B1', 'xyz', 'text123');
and this must pass:
insert into SZData (StoreID, ItemCode, TextData)
Values ('B1', 'abc', 'text123');
How do I achieve this using table constraints, without complex triggers?
I prefer a solution without triggers. The SZData table is just for accepting input from the external world, and it serves a single purpose.
Also, database import/export must not be impacted.
I figured out that having a function executed by a CHECK constraint solves this issue.
The function is_storeitem does the validation. I believe this feature can be used for even more complex validations.
create or replace function is_storeitem(pItemcode nchar(40), pStoreId nchar(20)) returns boolean as $$
    select exists (
        select 1
        from public.storeitem si, public.item i, public.store s
        where si.itemid = i.itemid
          and i.itemcode = pItemcode
          and s.storeid = pStoreId
          and s.storeid = si.storeid
    );
$$ language sql;
create table SZData
(
    StoreID NCHAR(20) not null,
    ItemCode NCHAR(100) not null,
    TextData NCHAR(20) not null,
    constraint PK_SIDATA primary key (ItemCode, StoreID),
    foreign key (StoreID) references Store(StoreID),
    foreign key (ItemCode) references Item(ItemCode),
    CONSTRAINT ck_szdata_itemcode CHECK (is_storeitem(ItemCode, StoreID))
);
This works perfectly with Postgres 9.6 or greater.
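To see the constraint in action against the sample data above (a sketch; the exact error text may vary by Postgres version): 'abc' is linked to store 'B1' through StoreItem, while 'xyz' is not.
-- Passes: StoreItem links ItemID 1 ('abc') to store 'B1'
insert into SZData (StoreID, ItemCode, TextData)
Values ('B1', 'abc', 'text123');
-- Fails the CHECK: no StoreItem row links 'xyz' (ItemID 5) to 'B1':
-- ERROR: new row for relation "szdata" violates check constraint "ck_szdata_itemcode"
insert into SZData (StoreID, ItemCode, TextData)
Values ('B1', 'xyz', 'text123');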

pgadmin 4 container : How to import servers and make them visible to every LDAP user?

I have a pgadmin container running in ECS. The container is using a specific LDAP_BIND_USER user to bind to the LDAP server but then every user is connecting to pgadmin with their own credentials.
I have modified the entrypoint.sh to fetch the list of RDS instances and generate the servers.json file when the container is first started. This is done by this command, defined in entrypoint.sh:
/venv/bin/python3 /pgadmin4/setup.py --load-servers "${PGADMIN_SERVER_JSON_FILE}" --user ${PGADMIN_DEFAULT_EMAIL}
The problem
As the container is running in SERVER mode, I need to specify a user (PGADMIN_DEFAULT_EMAIL) to whom these servers are going to be visible when doing the import. This means that only this user will see the server list.
Is there a way to import the servers for every user that connects?
So, sadly, it is not possible! You can only import servers for a particular user, and as the users are coming from LDAP, they are created in the SQLite db only once each user connects for the first time.
There is a workaround which is not nice, but it does the job:
Create and import the db tables (servergroup and server) manually!
This is what the user table looks like:
CREATE TABLE user (
    id INTEGER NOT NULL,
    username VARCHAR(256) NOT NULL,
    email VARCHAR(256),
    password VARCHAR(256),
    active BOOLEAN NOT NULL,
    confirmed_at DATETIME,
    masterpass_check VARCHAR(256),
    auth_source VARCHAR(256) NOT NULL DEFAULT 'internal',
    PRIMARY KEY (id),
    UNIQUE (username, auth_source),
    CHECK (active IN (0, 1))
);
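Since LDAP users normally show up in this table only after their first login, the workaround needs the user rows to exist before importing servers. A hedged sketch (the id, username, and the 'ldap' auth_source value are assumptions about your setup):
-- Pre-create an LDAP user row (hypothetical values):
INSERT INTO user (id, username, email, active, auth_source)
VALUES (2, 'jdoe', 'jdoe@example.com', 1, 'ldap');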
This is what the server table looks like:
CREATE TABLE server (
    id INTEGER NOT NULL,
    user_id INTEGER NOT NULL,
    servergroup_id INTEGER NOT NULL,
    name VARCHAR(128) NOT NULL,
    host VARCHAR(128),
    port INTEGER,
    maintenance_db VARCHAR(64) NOT NULL,
    username VARCHAR(64),
    password VARCHAR(64),
    role VARCHAR(64),
    ssl_mode VARCHAR(16) NOT NULL CHECK(ssl_mode IN
        ( 'allow' , 'prefer' , 'require' , 'disable' ,
          'verify-ca' , 'verify-full' )
    ),
    comment VARCHAR(1024),
    discovery_id VARCHAR(128),
    hostaddr TEXT(1024),
    db_res TEXT,
    passfile TEXT,
    sslcert TEXT,
    sslkey TEXT,
    sslrootcert TEXT,
    sslcrl TEXT,
    sslcompression INTEGER DEFAULT 0,
    bgcolor TEXT(10),
    fgcolor TEXT(10),
    service TEXT,
    use_ssh_tunnel INTEGER DEFAULT 0,
    tunnel_host TEXT,
    tunnel_port TEXT,
    tunnel_username TEXT,
    tunnel_authentication INTEGER DEFAULT 0,
    tunnel_identity_file TEXT,
    connect_timeout INTEGER DEFAULT 0,
    tunnel_password TEXT(64),
    save_password INTEGER DEFAULT 0,
    shared BOOLEAN,
    PRIMARY KEY(id),
    FOREIGN KEY(user_id) REFERENCES "user_old"(id),
    FOREIGN KEY(servergroup_id) REFERENCES servergroup(id)
);
And this is what the servergroup table looks like:
CREATE TABLE servergroup (
    id INTEGER NOT NULL,
    user_id INTEGER NOT NULL,
    name VARCHAR(128) NOT NULL,
    PRIMARY KEY (id),
    FOREIGN KEY(user_id) REFERENCES "user_old" (id),
    UNIQUE (user_id, name)
);
Looking more closely at the server table, we can see that each server id has a user_id and a servergroup_id associated with it, so we can create these tables manually and import the values before the server is started.
The catch: there has to be a server row for each user, so just iterate over the list of users, inserting one row per user, and everyone will see the servers in their console.
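A hedged sketch of that iteration done directly in SQL (the server name, host, and ids are hypothetical; real inserts must satisfy the ssl_mode CHECK and avoid colliding with existing ids):
-- One server group per user, reusing the user id as the group id for simplicity:
INSERT INTO servergroup (id, user_id, name)
SELECT u.id, u.id, 'Servers' FROM user u;
-- One server row per user, all pointing at the same physical server:
INSERT INTO server (id, user_id, servergroup_id, name, host, port, maintenance_db, username, ssl_mode)
SELECT u.id, u.id, u.id, 'my-rds-instance', 'mydb.abc123.eu-west-1.rds.amazonaws.com', 5432, 'postgres', 'postgres', 'prefer'
FROM user u;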

Bad distributed join plan: result table shard keys do not match

We are very new to MemSQL/MySQL and we are trying to play around with a MemSQL installation.
It is installed on a CentOS 7 virtual machine and we are running version 5.1.0 of MemSQL.
We are receiving the following error from one of the queries we are attempting:
ERROR 1889 (HY000): Bad distributed join plan: result table shard keys do not match. Please contact MemSQL support at support@memsql.com.
We have two tables:
CREATE TABLE `MyObjects` (
    `Id` INT NOT NULL AUTO_INCREMENT,
    `Name` VARCHAR(128) NOT NULL,
    `Description` VARCHAR(256) NULL,
    `Boolean` BIT NOT NULL,
    `Int8` TINYINT NOT NULL,
    `Int16` SMALLINT NOT NULL,
    `Int32` MEDIUMINT NOT NULL,
    `Int64` INT NOT NULL,
    `Float` DOUBLE NOT NULL,
    `DateCreated` TIMESTAMP NOT NULL,
    SHARD KEY (`Id`),
    PRIMARY KEY (`Id`)
);
CREATE TABLE `MyObjectDetails` (
    `MyObjectId` INT,
    `Int32` MEDIUMINT NOT NULL,
    SHARD KEY (`MyObjectId`),
    INDEX (`MyObjectId`)
);
And here is the SQL we are executing that produces the error:
memsql> SELECT mo.`Id`, mo.`Name`, mo.`Description`, mo.`Boolean`, mo.`Int8`, mo.`Int16`,
               mo.`Int32`, mo.`Int64`, mo.`Float`, mo.`DateCreated`,
               mods.`MyObjectId`, mods.`Int32`
        FROM ( SELECT mo.`Id`, mo.`Name`, mo.`Description`, mo.`Boolean`, mo.`Int8`,
                      mo.`Int16`, mo.`Int32`, mo.`Int64`, mo.`Float`, mo.`DateCreated`
               FROM `MyObjects` mo LIMIT 10 ) AS mo
        LEFT JOIN `MyObjectDetails` mods ON mo.`Id` = mods.`MyObjectId`
        ORDER BY `Name` DESC;
ERROR 1889 (HY000): Bad distributed join plan: result table shard keys do not match. Please contact MemSQL support at support@memsql.com.
Does anyone know why we are receiving this error, and if there is a possible change we can make to help alleviate this issue?
The one thing we do know is that it has something to do with the inner SELECT: if I pull it out and do the join without it, the query works, but then we only get 10 total rows from the join. What we are attempting is to get the top 10 rows from the main table and include all of the matching details from the right side.
We also tried changing the MyObjectDetails table to have an empty SHARD KEY, but that resulted in the same error.
SHARD KEY()
We also added an auto-incrementing Id column to the details table and put the shard on that column, and yet still received the same error.
Thanks in advance for any help.
UPDATE:
I contacted MemSQL through email (huge props to their customer service, by the way -- very fast response time, less than a couple of hours).
Based on what Mike stated, I changed the table to be a REFERENCE table and removed the SHARD KEY part of the CREATE TABLE statement. Once I did this, I was able to run the queries. I am not 100% sure what ramifications this will have, but it fixed my issue at hand. Thanks.
CREATE REFERENCE TABLE `MyObjects` (
    `Id` INT NOT NULL AUTO_INCREMENT,
    `Name` VARCHAR(128) NOT NULL,
    `Description` VARCHAR(256) NULL,
    `Boolean` BIT NOT NULL,
    `Int8` TINYINT NOT NULL,
    `Int16` SMALLINT NOT NULL,
    `Int32` MEDIUMINT NOT NULL,
    `Int64` INT NOT NULL,
    `Float` DOUBLE NOT NULL,
    `DateCreated` TIMESTAMP NOT NULL,
    PRIMARY KEY (`Id`)
);
Thanks to Mike Gallegos for looking into this, adding a summary of his answer here:
The error message here is bad, but the reason for the error is that MemSQL does not currently support a distributed left join where the left side (the Limit subquery in this case) has a LIMIT operator. If you cannot rewrite the query to do the limit after the join, then you could change the MyObjects table to a reference table to work around the issue.
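For completeness, a sketch of the "limit after the join" rewrite Mike mentions. Note the changed semantics: this limits the joined result to 10 rows rather than taking 10 objects with all of their detail rows, which is why the reference-table workaround was used instead:
SELECT mo.`Id`, mo.`Name`, mo.`Description`, mo.`Boolean`, mo.`Int8`, mo.`Int16`,
       mo.`Int32`, mo.`Int64`, mo.`Float`, mo.`DateCreated`, mods.`MyObjectId`, mods.`Int32`
FROM `MyObjects` mo
LEFT JOIN `MyObjectDetails` mods ON mo.`Id` = mods.`MyObjectId`
ORDER BY mo.`Name` DESC
LIMIT 10;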
