Sequelize.js hasMany ternary relationship - node.js

I want to model a ternary relationship in Sequelize, as shown in the image below:
https://www.dropbox.com/s/v8bgsir2qqw6ccv/relationship%20.jpg
If I apply this code in Sequelize:
A.hasMany(B);
A.hasMany(C);
B.hasMany(C);
C.hasMany(A);
C.hasMany(B);
The resulting SQL code is as follows
CREATE TABLE IF NOT EXISTS `a_b_` (
`BId` int(11) NOT NULL DEFAULT '0',
`AId` int(11) NOT NULL DEFAULT '0',
PRIMARY KEY (`BId`,`AId`)
)
CREATE TABLE IF NOT EXISTS `a_c_` (
`CId` int(11) NOT NULL DEFAULT '0',
`AId` int(11) NOT NULL DEFAULT '0',
PRIMARY KEY (`CId`,`AId`)
)
CREATE TABLE IF NOT EXISTS `b_c_` (
`CId` int(11) NOT NULL DEFAULT '0',
`BId` int(11) NOT NULL DEFAULT '0',
PRIMARY KEY (`CId`,`BId`)
)
But the result should be
CREATE TABLE IF NOT EXISTS `a_b_` (
`BId` int(11) NOT NULL DEFAULT '0',
`AId` int(11) NOT NULL DEFAULT '0',
PRIMARY KEY (`BId`,`AId`)
)
CREATE TABLE IF NOT EXISTS `a_b_c_` (
`AId` int(11) NOT NULL DEFAULT '0',
`BId` int(11) NOT NULL DEFAULT '0',
`CId` int(11) NOT NULL DEFAULT '0',
PRIMARY KEY (`AId`,`BId`,`CId`)
)
I just can't get Sequelize to create a table with the primary key (AId, BId, CId). Could someone point me in the right direction or tell me what I can do?
Thanks so much.
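One way to get that composite key (a sketch, not from the original thread; it assumes a recent Sequelize where you wire associations through an explicitly defined join model, and reuses the models A, B, C from the question):
const { DataTypes } = require('sequelize');
// Define the three-way join table yourself so you control its primary key.
const ABC = sequelize.define('ABC', {
  AId: { type: DataTypes.INTEGER, primaryKey: true },
  BId: { type: DataTypes.INTEGER, primaryKey: true },
  CId: { type: DataTypes.INTEGER, primaryKey: true },
}, { tableName: 'a_b_c_', timestamps: false });
// Each key column is also a foreign key to its parent table.
ABC.belongsTo(A); A.hasMany(ABC);
ABC.belongsTo(B); B.hasMany(ABC);
ABC.belongsTo(C); C.hasMany(ABC);
Syncing this should emit CREATE TABLE `a_b_c_` with PRIMARY KEY (`AId`,`BId`,`CId`); Sequelize only auto-generates two-column join tables for pairwise associations, so the three-way table has to be declared explicitly.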

Related

Integrity error while running migrate to add new column in Django

I have a MySQL table with a few million entries. While I am trying to add a new column to this table using a Django migration, it fails after some time (possibly due to the huge amount of data) with the error below:
Error django.db.utils.IntegrityError: (1062, "Duplicate entry '123456-softwareengineer' for key 'api_experience_entity_id_123a45b6c789def0_uniq'")
I checked manually, but entity_id 123456 didn't have any duplicate entries. I was able to see that this entry was being updated while the migration was running.
What could be a possible solution to perform the migration without affecting the data and with near-zero downtime for the system?
Below are the details of my table:
SHOW CREATE TABLE api_experience;

CREATE TABLE `api_experience` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `type` varchar(100) NOT NULL,
  `duration` varchar(200) NOT NULL,
  `created_at` datetime NOT NULL,
  `updated_at` datetime NOT NULL,
  `entity_id` int(11) NOT NULL,
  `designation` varchar(100),
  PRIMARY KEY (`id`),
  UNIQUE KEY `api_experience_entity_id_123a45b6c789def0_uniq` (`entity_id`,`type`),
  KEY `api_experience_599dcce2` (`type`),
  KEY `api_experience_5527459a` (`designation`),
  KEY `api_experience_created_at_447d412d906baea1_uniq` (`created_at`),
  CONSTRAINT `api_entity_id_3fbe8eb2deb42063_fk_api_entity_id` FOREIGN KEY (`entity_id`) REFERENCES `api_entity` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=773570676 DEFAULT CHARSET=utf8
I have tried running the migration during low traffic at midnight, but it didn't help.
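One avenue to investigate (a sketch, not a verified fix for this exact failure): on MySQL 5.6+ InnoDB, ADD COLUMN can run as online DDL, which keeps the table writable instead of fighting concurrent updates during a blocking copy. Issued by hand, or wrapped in a Django RunSQL migration, with an illustrative column name:
-- Assumption: MySQL 5.6+; test on a staging copy first.
-- LOCK=NONE keeps the table fully readable and writable during the change.
ALTER TABLE api_experience
  ADD COLUMN new_field varchar(100) NULL,
  ALGORITHM=INPLACE, LOCK=NONE;
Specifying ALGORITHM and LOCK makes the statement fail immediately if the in-place path is unavailable, rather than silently falling back to a locking copy. On MySQL 8.0, ALGORITHM=INSTANT avoids the rebuild entirely, and external tools such as pt-online-schema-change or gh-ost are the usual fallback for near-zero-downtime changes on tables this large.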

Postgres: complex foreign key constraints

I have this schema
CREATE TABLE public.item (
itemid integer NOT NULL,
itemcode character(100) NOT NULL,
itemname character(100) NOT NULL,
constraint PK_ITEM primary key (ItemID)
);
create unique index ak_itemcode on Item(ItemCode);
CREATE TABLE public.store (
storeid character(20) NOT NULL,
storename character(80) NOT NULL,
constraint PK_STORE primary key (StoreID)
);
CREATE TABLE public.storeitem (
storeitemid integer NOT NULL,
itemid integer NOT NULL,
storeid character(20) NOT NULL,
constraint PK_STOREITEM primary key (ItemID, StoreID),
foreign key (StoreID) references Store(StoreID),
foreign key (ItemID) references Item(ItemID)
);
create unique index ak_storeitemid on StoreItem (StoreItemID);
And here is the data in those tables:
insert into Item (ItemID, ItemCode,ItemName)
Values (1,'abc','abc');
insert into Item (ItemID, ItemCode,ItemName)
Values (2,'def','def');
insert into Item (ItemID, ItemCode,ItemName)
Values (3,'ghi','ghi');
insert into Item (ItemID, ItemCode,ItemName)
Values (4,'lmno','lmno');
insert into Item (ItemID, ItemCode,ItemName)
Values (5,'xyz','xyz');
insert into Store (StoreID, StoreName)
Values ('B1','B1');
insert into StoreItem (StoreItemID, StoreID, ItemID)
Values (1,'B1',1);
insert into StoreItem (StoreItemID, StoreID, ItemID)
Values (2,'B1',2);
insert into StoreItem (StoreItemID, StoreID, ItemID)
Values (3,'B1',3);
Now I created this new table
CREATE TABLE public.szdata (
storeid character(20) NOT NULL,
itemcode character(100) NOT NULL,
textdata character(20) NOT NULL,
constraint PK_SZDATA primary key (ItemCode, StoreID)
);
I want to have the foreign key constraints set up so that an insert fails when the record is not in StoreItem. For example, this must fail:
insert into SZData (StoreID, ItemCode, TextData)
Values ('B1', 'xyz', 'text123');
and this must pass
insert into SZData (StoreID, ItemCode, TextData)
Values ('B1', 'abc', 'text123');
How do I achieve this using table constraints rather than complex triggers?
I prefer a solution without triggers. The SZData table just accepts input from the external world, and it serves a single purpose.
Also, database import and export must not be impacted.
I figured out that having a function executed by a constraint solves this issue.
The function is_storeitem does the validation. I believe this feature can be used for even more complex validations.
create or replace function is_storeitem(pItemcode nchar(100), pStoreId nchar(20)) returns boolean as $$
    select exists (
        select 1
        from public.storeitem si, public.item i, public.store s
        where si.itemid = i.itemid
          and i.itemcode = pItemcode
          and s.storeid = pStoreId
          and s.storeid = si.storeid
    );
$$ language sql;
create table SZData
(
StoreID NCHAR(20) not null,
ItemCode NCHAR(100) not null,
TextData NCHAR(20) not null,
constraint PK_SIDATA primary key (ItemCode, StoreID),
foreign key (StoreID) references Store(StoreID),
foreign key (ItemCode) references Item(ItemCode),
CONSTRAINT ck_szdata_itemcode CHECK (is_storeitem(Itemcode,StoreID))
);
This works perfectly with Postgres 9.6 or greater.
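Replaying the two inserts from the question shows the constraint at work (expected outcomes as comments; the exact error text may vary by Postgres version):
-- 'xyz' (ItemID 5) has no StoreItem row for store B1, so this is rejected:
insert into SZData (StoreID, ItemCode, TextData)
Values ('B1', 'xyz', 'text123');
-- ERROR: new row for relation "szdata" violates check constraint "ck_szdata_itemcode"
-- 'abc' (ItemID 1) is listed in StoreItem for B1, so this succeeds:
insert into SZData (StoreID, ItemCode, TextData)
Values ('B1', 'abc', 'text123');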

Possible to interleave a new table into a secondary index table?

I'm gonna guess no, but secondary indexes seem a lot like tables, in that you can directly select from them with FORCE_INDEX and even JOIN on them:
JOIN MyTable#{FORCE_INDEX=anIndexToUseFromMyTable} AS myTable
So maybe you can create a new table interleaved into an index?
Example
CREATE TABLE Foo (
primaryId STRING(64) NOT NULL,
secondaryId STRING(64) NOT NULL,
modifiedAt TIMESTAMP NOT NULL OPTIONS (allow_commit_timestamp=true),
) PRIMARY KEY (primaryId);
-- Index we would like to interleave into for another table
CREATE INDEX FooSecondaryIdIndex ON Foo(secondaryId);
-- interleave this table into the index above
-- and support DELETE CASCADE
CREATE TABLE Bar (
secondaryId STRING(64) NOT NULL,
extraData STRING(64) NOT NULL,
modifiedAt TIMESTAMP NOT NULL OPTIONS (allow_commit_timestamp=true),
) PRIMARY KEY (secondaryId),
INTERLEAVE IN PARENT Foo#{FORCE_INDEX=FooSecondaryIdIndex} ON DELETE CASCADE;
Well... it doesn’t look like that is supported:
Error parsing Spanner DDL statement: CREATE TABLE Bar ( secondaryId STRING(64) NOT NULL, extraData STRING(64) NOT NULL, modifiedAt TIMESTAMP NOT NULL OPTIONS (allow_commit_timestamp=true), ) PRIMARY KEY (secondaryId), INTERLEAVE IN PARENT Foo#{FORCE_INDEX=FooSecondaryIdIndex} ON DELETE CASCADE : Syntax error on line 6, column 25: Expecting 'EOF' but found '#'
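That matches the documented rule: a table can only be interleaved in a parent table, and the child's primary key must start with the parent's full primary key. A sketch of the closest supported layout (reusing the example's names) keys Bar by (primaryId, secondaryId), interleaves it into Foo itself, and adds a separate index for lookups by secondaryId:
-- Child key extends the parent table's key, so DELETE CASCADE still works.
CREATE TABLE Bar (
primaryId STRING(64) NOT NULL,
secondaryId STRING(64) NOT NULL,
extraData STRING(64) NOT NULL,
modifiedAt TIMESTAMP NOT NULL OPTIONS (allow_commit_timestamp=true),
) PRIMARY KEY (primaryId, secondaryId),
INTERLEAVE IN PARENT Foo ON DELETE CASCADE;
-- Secondary lookups go through Bar's own index instead of Foo's.
CREATE INDEX BarSecondaryIdIndex ON Bar(secondaryId);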

How do I insert records in a node-pg-migrate migration?

I'm trying to use node-pg-migrate to handle migrations for an ExpressJS app. I can translate most of the SQL dump into pgm.func() type calls, but I can't see any method for handling actual INSERT statements for initial data in my solution's lookup tables.
It is possible using the pgm.sql catch-all:
pgm.sql(`INSERT INTO users (username, password, created, forname, surname, department, reviewer, approver, active) VALUES
('rd#example.com', 'salty', '2019-12-31 11:00:00', 'Richard', 'Dyce', 'MDM', 'No', 'No', 'Yes');`)
Note the use of backtick (`) to allow breaking the SQL statement across multiple lines.
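For context, a complete migration file wrapping that call might look like this (a sketch; the filename and the down-migration are illustrative):
// migrations/1576000000000_seed-users.js
exports.up = (pgm) => {
  pgm.sql(`INSERT INTO users (username, password, created, forname, surname, department, reviewer, approver, active) VALUES
    ('rd#example.com', 'salty', '2019-12-31 11:00:00', 'Richard', 'Dyce', 'MDM', 'No', 'No', 'Yes');`);
};

exports.down = (pgm) => {
  pgm.sql(`DELETE FROM users WHERE username = 'rd#example.com';`);
};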
You can use raw SQL if you need to.
Create a migration file with the .sql extension and write the usual statements.
This article has a great example.
My example:
-- Up Migration
CREATE TABLE users
(
id BIGSERIAL NOT NULL PRIMARY KEY,
username VARCHAR(50) NOT NULL,
email VARCHAR(50) NOT NULL,
password VARCHAR(50) NOT NULL,
class_id INTEGER NOT NULL,
created_at DATE NOT NULL,
updated_at DATE NOT NULL
);
CREATE TABLE classes
(
id INTEGER NOT NULL PRIMARY KEY,
name VARCHAR(50) NOT NULL,
health INTEGER NOT NULL,
damage INTEGER NOT NULL,
attack_type VARCHAR(50) NOT NULL,
ability VARCHAR(50) NOT NULL,
created_at DATE NOT NULL,
updated_at DATE NOT NULL
);
INSERT INTO classes (id,
name,
health,
damage,
attack_type,
ability,
created_at,
updated_at)
VALUES (0,
'Thief',
100,
25,
'Archery Shot',
'Run Away',
NOW(),
NOW());
-- Down Migration
DROP TABLE users;
DROP TABLE classes;

Using MATERIALIZED VIEW in Cassandra gives error

I have the following table.
CREATE TABLE test_x (id text PRIMARY KEY, type frozen<mycustomtype>);
mycustomtype is defined as follows (a user-defined type, since a table cannot be used as a frozen<> column type):
CREATE TYPE mycustomtype (
id uuid,
name text
);
And I have created the following materialized view for queries based on the mycustomtype field.
CREATE MATERIALIZED VIEW test_x_by_mycustomtype_name AS
SELECT id, type
FROM test_x
WHERE type IS NOT NULL
PRIMARY KEY (id, type)
WITH CLUSTERING ORDER BY (type ASC);
With the above view, I hoped to execute the following query.
select id from test_x_by_mycustomtype_name where type =
{id: a3e64f8f-bd44-4f28-b8d9-6938726e34d4, name: 'Sample'};
But the query fails, saying I need to use ALLOW FILTERING. I created the view precisely so that I would not need ALLOW FILTERING. Why is this error happening when I have used part of the primary key of the view?
In your view, the type column is still a clustering key, not the partition key, so ALLOW FILTERING is required to query on it alone. You can change the view as below and retry:
CREATE MATERIALIZED VIEW test_x_by_mycustomtype_name_2 AS
SELECT id, type
FROM test_x
WHERE type IS NOT NULL
PRIMARY KEY (type, id)
WITH CLUSTERING ORDER BY (id ASC);
cqlsh:test> select id from test_x_by_mycustomtype_name_2 where type = {id: a3e64f8f-bd44-4f28-b8d9-6938726e34d4, name: 'Sample'};
id
----
Change the order of the primary key of the materialized view:
CREATE MATERIALIZED VIEW test_x_by_mycustomtype_name AS
SELECT id, type
FROM test_x
WHERE type IS NOT NULL
PRIMARY KEY (type, id)
WITH CLUSTERING ORDER BY (id ASC);
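(Note: the clustering order must name the clustering column id; type is now the partition key.) With type as the partition key, the original lookup should work without ALLOW FILTERING once the view has finished building:
select id from test_x_by_mycustomtype_name where type = {id: a3e64f8f-bd44-4f28-b8d9-6938726e34d4, name: 'Sample'};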
