OrchardCMS developers/users,
I have
public class MyContentPartRecord : ContentPartRecord
{ ... }
I want to change it to
public class MyContentPartRecord : ContentPartVersionRecord
{ ... }
In order to be able to create drafts for this part.
I add this to Migrations.cs:
SchemaBuilder.AlterTable(
    "MyContentPartRecord",
    table => table
        .AddColumn<int>("ContentItemRecord_id")
);
I run the app, and the result is that getting a content item with MyContentPart creates an empty published version.
In the DB, the MyContentPartRecord table contains:

Id    [..fields..]     ContentItemRecord_Id
657   NULL, ..         67
67    'MYDATA', ...    NULL
How do I create a valid, draftable MyContentPart?
UPDATE: I've tried, with no success, to add:
SchemaBuilder.ExecuteSql(@"
    UPDATE t1
    SET ContentItemRecord_id = t2.ContentItemRecord_id
    FROM MyContentPartRecord t1
    INNER JOIN Orchard_Framework_ContentItemVersionRecord t2 ON t1.Id = t2.Id
");
It seems that Orchard can't work with the old records in the MyContentPartRecord table, as they don't have ContentItemRecord_id set.
You won't be able to do that without a manual SQL script.
The Id means a different thing for those two:
for ContentPartRecord it's a foreign key to an Id of a ContentItemRecord
for ContentPartVersionRecord it's a foreign key to an Id of a ContentItemVersionRecord
So after adding the new ContentItemRecord_id column you need to:
first, copy the existing data from the Id column to ContentItemRecord_id, and then
fill the Id column with the proper Ids of the latest version of each of those items (version records are kept in the Orchard_Framework_ContentItemVersionRecord table); a sketch of those two steps follows below.
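A minimal T-SQL sketch of those two steps, assuming SQL Server, the hypothetical table name MyModuleName_MyContentPartRecord, and exactly one Latest version per item (not the exact script from this thread):

-- step 1: the old Id was the ContentItemRecord id; preserve it
UPDATE MyModuleName_MyContentPartRecord
SET ContentItemRecord_id = Id;

-- step 2: repoint Id at the latest version record of each item
UPDATE t
SET t.Id = v.Id
FROM MyModuleName_MyContentPartRecord t
INNER JOIN Orchard_Framework_ContentItemVersionRecord v
    ON v.ContentItemRecord_id = t.ContentItemRecord_id AND v.Latest = 1;

In practice the primary key on Id complicates updating it in place, which is why the solutions below drop the constraint, insert fixed-up copies of the rows, and delete the old ones instead.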
With Piotr's help, here is a solution. The following goes into Migrations.cs:
SchemaBuilder.AlterTable(
    "MyContentPart",
    table => table
        .AddColumn<int>("ContentItemRecord_id")
);
SchemaBuilder.ExecuteSql(@"
    ALTER TABLE MyModuleName_MyContentPart
    DROP CONSTRAINT PK__MyModuleName_W__3214EC072C83793F
");
SchemaBuilder.ExecuteSql(@"
    INSERT INTO MyModuleName_MyContentPart
        (Id, ContentItemRecord_id, Field1, Field2)
    SELECT t3.Id AS Id, t2.Id AS ContentItemRecord_id, t2.Field1, t2.Field2
    FROM MyModuleName_MyContentPart AS t2
    LEFT OUTER JOIN Orchard_Framework_ContentItemVersionRecord AS t3
        ON t2.Id = t3.ContentItemRecord_id
    WHERE (t3.Latest = 1) AND (t3.Id IS NOT NULL)
");
SchemaBuilder.ExecuteSql(@"
    DELETE FROM MyModuleName_MyContentPart
    WHERE ContentItemRecord_id IS NULL
");
SchemaBuilder.ExecuteSql(@"
    ALTER TABLE MyModuleName_MyContentPart
    ADD CONSTRAINT PK_MyModuleName_MyContentPart_ID PRIMARY KEY (Id)
");
UPDATE: The final solution:
SchemaBuilder.AlterTable(
    "MyContentPart",
    table => table
        .AddColumn<int>("ContentItemRecord_id")
);
SchemaBuilder.ExecuteSql(@"
    ALTER TABLE MyModuleName_MyContentPart
    DROP CONSTRAINT PK__MyModule_W__3214EC072C83793F
");
SchemaBuilder.ExecuteSql(@"
    INSERT INTO MyModuleName_MyContentPart
        (Id, ContentItemRecord_id, Field1)
    SELECT V.Id AS Id,
           T.Id AS ContentItemRecord_id,
           Field1
    FROM MyModuleName_MyContentPart T
    LEFT OUTER JOIN Orchard_Framework_ContentItemVersionRecord AS V
        ON V.Id IN (SELECT TOP(1) Id
                    FROM Orchard_Framework_ContentItemVersionRecord
                    WHERE ContentItemRecord_id = T.Id
                    ORDER BY Latest DESC, Id DESC)
");
SchemaBuilder.ExecuteSql(@"
    DELETE FROM MyModuleName_MyContentPart
    WHERE ContentItemRecord_id IS NULL
");
SchemaBuilder.ExecuteSql(@"
    ALTER TABLE MyModuleName_MyContentPart
    ADD CONSTRAINT PK_MyModuleName_MyContentPart_ID PRIMARY KEY (Id)
");
Here's a more generic solution that can go in your Migrations.cs file, based on @Artjom's and @PiotrSzmyd's solutions. It handles the fact that the automatically generated primary key may be named differently for each user of your module. The table name may also be prefixed if a user has defined a global database prefix (e.g. when using multi-tenancy).
// Manually add the column that is required for the part to be a ContentPartVersionRecord
SchemaBuilder.AlterTable("MyCustomPartRecord", table => table.AddColumn<int>("ContentItemRecord_id"));
// Get table name
var tablePrefix = String.IsNullOrEmpty(_shellSettings.DataTablePrefix) ? "" : _shellSettings.DataTablePrefix + "_";
var tableName = tablePrefix + "MyModule_MyCustomPartRecord";
// Drop the primary key
SchemaBuilder.ExecuteSql(string.Format(@"
    DECLARE @primaryKeyName NVARCHAR(MAX)
    SELECT @primaryKeyName = constraint_name
    FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS
    WHERE CONSTRAINT_TYPE = 'PRIMARY KEY' AND TABLE_NAME = '{0}'
    EXEC(N'ALTER TABLE {0} DROP CONSTRAINT ' + @primaryKeyName)
", tableName));
// Migrate IDs to ContentItemRecord_id
SchemaBuilder.ExecuteSql(string.Format(@"
    INSERT INTO {0} (Id, ContentItemRecord_id, Category_Id, ItemCode, Name, Description, DisplayOrder, Location, MaintenanceFrequency, MaintenanceFrequencyMultiplier, MaintenanceStartDate, Notes, IsEnabled)
    SELECT V.Id AS Id, T.Id AS ContentItemRecord_id, Category_Id, ItemCode, Name, Description, DisplayOrder, Location, MaintenanceFrequency, MaintenanceFrequencyMultiplier, MaintenanceStartDate, Notes, IsEnabled
    FROM {0} T
    LEFT OUTER JOIN {1}Orchard_Framework_ContentItemVersionRecord AS V
        ON V.Id IN (SELECT TOP(1) Id
                    FROM {1}Orchard_Framework_ContentItemVersionRecord
                    WHERE ContentItemRecord_id = T.Id
                    ORDER BY Latest DESC, Id DESC)
", tableName, tablePrefix));
// Remove old rows (no ContentItemRecord_id value)
SchemaBuilder.ExecuteSql(string.Format(@"
    DELETE FROM {0}
    WHERE ContentItemRecord_id IS NULL
", tableName));
// Re-add the primary key
SchemaBuilder.ExecuteSql(string.Format(@"
    ALTER TABLE {0}
    ADD CONSTRAINT PK_{0}_Id PRIMARY KEY (Id)
", tableName));
I have a Node server accessing a Postgres database through an npm package, pg, and have a working query that returns the data, but I think it could be optimized. The data model is versions and features; one version has many feature children. This query pattern works in a few contexts for my app, but it looks clumsy. Is there a cleaner way?
SELECT
    v.*,
    coalesce(
        (SELECT array_to_json(array_agg(row_to_json(x)))
         FROM (SELECT f.* FROM app_feature f WHERE f.version = v.id) x),
        '[]'
    ) AS features
FROM app_version v;
CREATE TABLE app_version(
id SERIAL PRIMARY KEY,
major INT NOT NULL,
mid INT NOT NULL,
minor INT NOT NULL,
date DATE,
description VARCHAR(256),
status VARCHAR(24)
);
CREATE TABLE app_feature(
id SERIAL PRIMARY KEY,
version INT,
description VARCHAR(256),
type VARCHAR(24),
CONSTRAINT FK_app_feature_version FOREIGN KEY(version) REFERENCES app_version(id)
);
INSERT INTO app_version (major, mid, minor, date, description, status) VALUES (0,0,0, current_timestamp, 'initial test', 'PENDING');
INSERT INTO app_feature (version, description, type) VALUES (1, 'store features', 'New Feature');
INSERT INTO app_feature (version, description, type) VALUES (1, 'return features as json', 'New Feature');
The subquery in the FROM clause may not be needed:
select v.*,
coalesce((select array_to_json(array_agg(row_to_json(f)))
from app_feature f
where f.version = v.id), '[]') as features
from app_version v;
And my five cents: note that id is the primary key of app_version, so it's possible to group by app_version.id alone; Postgres then allows selecting the remaining app_version columns because they are functionally dependent on the primary key.
select v.*, coalesce(json_agg(to_json(f)), '[]') as features
from app_version v join app_feature f on f.version = v.id
group by v.id;
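If the order of the features inside the JSON array matters, the aggregate order can be pinned explicitly (a small variation on the query above, not from the original answer):

select v.*, coalesce(json_agg(to_json(f) order by f.id), '[]') as features
from app_version v join app_feature f on f.version = v.id
group by v.id;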
You could move the JSON aggregation into a view, then join to the view:
create view app_features_json
as
select af.version,
json_agg(row_to_json(af)) as features
from app_feature af
group by af.version;
Then use that view in a join:
SELECT v.*,
       afj.features
FROM app_version v
join app_features_json afj on afj.version = v.id;
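Note that the inner join drops versions that have no features. If those should still appear with an empty array, a left join plus coalesce restores them (a sketch building on the view above):

SELECT v.*,
       coalesce(afj.features, '[]'::json) AS features
FROM app_version v
LEFT JOIN app_features_json afj ON afj.version = v.id;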
I have this schema
CREATE TABLE public.item (
itemid integer NOT NULL,
itemcode character(100) NOT NULL,
itemname character(100) NOT NULL,
constraint PK_ITEM primary key (ItemID)
);
create unique index ak_itemcode on Item(ItemCode);
CREATE TABLE public.store (
storeid character(20) NOT NULL,
storename character(80) NOT NULL,
constraint PK_STORE primary key (StoreID)
);
CREATE TABLE public.storeitem (
storeitemid integer NOT NULL,
itemid integer NOT NULL,
storeid character(20) NOT NULL,
constraint PK_STOREITEM primary key (ItemID, StoreID),
foreign key (StoreID) references Store(StoreID),
foreign key (ItemID) references Item(ItemID)
);
create unique index ak_storeitemid on StoreItem (StoreItemID);
And here is the data on those tables
insert into Item (ItemID, ItemCode,ItemName)
Values (1,'abc','abc');
insert into Item (ItemID, ItemCode,ItemName)
Values (2,'def','def');
insert into Item (ItemID, ItemCode,ItemName)
Values (3,'ghi','ghi');
insert into Item (ItemID, ItemCode,ItemName)
Values (4,'lmno','lmno');
insert into Item (ItemID, ItemCode,ItemName)
Values (5,'xyz','xyz');
insert into Store (StoreID, StoreName)
Values ('B1','B1');
insert into StoreItem (StoreItemID, StoreID, ItemID)
Values (1,'B1',1);
insert into StoreItem (StoreItemID, StoreID, ItemID)
Values (2,'B1',2);
insert into StoreItem (StoreItemID, StoreID, ItemID)
Values (3,'B1',3);
Now I created this new table
CREATE TABLE public.szdata (
storeid character(20) NOT NULL,
itemcode character(100) NOT NULL,
textdata character(20) NOT NULL,
constraint PK_SZDATA primary key (ItemCode, StoreID)
);
I want to have foreign key constraints set so that inserting a record whose store/item combination is not in StoreItem fails. For example, this must fail:
insert into SZData (StoreID, ItemCode, TextData)
Values ('B1', 'xyz', 'text123');
and this must pass
insert into SZData (StoreID, ItemCode, TextData)
Values ('B1', 'abc', 'text123');
How do I achieve this using table constraints, without complex triggers?
I prefer a solution without triggers. The SZData table just accepts input from the external world, and it serves a single purpose.
Also, database import/export must not be impacted.
I figured out that having a CHECK constraint execute a function solves this issue.
The function is_storeitem does the validation. I believe this feature can be used for even more complex validations:
create or replace function is_storeitem(pItemcode nchar(40), pStoreId nchar(20)) returns boolean as $$
select exists (
select 1
from public.storeitem si, public.item i, public.store s
where si.itemid = i.itemid and i.itemcode = pItemcode and s.Storeid = pStoreId and s.storeid = si.storeid
);
$$ language sql;
create table SZData
(
StoreID NCHAR(20) not null,
ItemCode NCHAR(100) not null,
TextData NCHAR(20) not null,
constraint PK_SIDATA primary key (ItemCode, StoreID),
foreign key (StoreID) references Store(StoreID),
foreign key (ItemCode) references Item(ItemCode),
CONSTRAINT ck_szdata_itemcode CHECK (is_storeitem(Itemcode,StoreID))
);
This works perfectly with Postgres 9.6 or greater.
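With the sample data above, the constraint behaves as the question requires: 'abc' is linked to store 'B1' through StoreItem, while 'xyz' is not.

insert into SZData (StoreID, ItemCode, TextData)
values ('B1', 'abc', 'text123');  -- passes

insert into SZData (StoreID, ItemCode, TextData)
values ('B1', 'xyz', 'text123');  -- fails ck_szdata_itemcode

One caveat: unlike a real foreign key, a function-backed CHECK is only evaluated when SZData is written, so later deletes from StoreItem will not re-validate existing SZData rows.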
I have the following table.
CREATE TABLE test_x (id text PRIMARY KEY, type frozen<mycustomtype>);
mycustomtype is a user-defined type, defined as follows:
CREATE TYPE mycustomtype (
    id uuid,
    name text
);
And I have created the following materialized view for queries based on the mycustomtype field.
CREATE MATERIALIZED VIEW test_x_by_mycustomtype_name AS
SELECT id, type
FROM test_x
WHERE type IS NOT NULL
PRIMARY KEY (id, type)
WITH CLUSTERING ORDER BY (type ASC)
With the above view I hoped to execute the following query.
select id from test_x_by_mycustomtype_name where type =
{id: a3e64f8f-bd44-4f28-b8d9-6938726e34d4, name: 'Sample'};
But the query fails, saying I need to use ALLOW FILTERING. I created the view precisely to avoid ALLOW FILTERING. Why is this error happening, since I have used part of the primary key of the view?
In your view, the type column is still a clustering key (id is the partition key), hence ALLOW FILTERING is required. You can change the view as below and retry:
CREATE MATERIALIZED VIEW test_x_by_mycustomtype_name_2 AS
SELECT id, type
FROM test_x
WHERE type IS NOT NULL AND id IS NOT NULL
PRIMARY KEY (type, id)
WITH CLUSTERING ORDER BY (id ASC);
cqlsh:test> select id from test_x_by_mycustomtype_name_2 where type = {id: a3e64f8f-bd44-4f28-b8d9-6938726e34d4, name: 'Sample'};
id
----
Change the order of the primary key of the materialized view:
CREATE MATERIALIZED VIEW test_x_by_mycustomtype_name AS
SELECT id, type
FROM test_x
WHERE type IS NOT NULL AND id IS NOT NULL
PRIMARY KEY (type, id)
WITH CLUSTERING ORDER BY (id ASC);
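With type first in the primary key, the original query runs without ALLOW FILTERING:

select id from test_x_by_mycustomtype_name where type = {id: a3e64f8f-bd44-4f28-b8d9-6938726e34d4, name: 'Sample'};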
I have a table with the schema below:
create table xx(
bucket_id int,
like_count int,
photo_id int,
username text,
PRIMARY KEY(bucket_id,like_count,photo_id)
) WITH CLUSTERING ORDER BY (like_count DESC)
Here I can fetch all records in descending order of like_count. But I need to update like_count at some point in my app, which I am not able to do because it's part of the primary key.
If I remove it from the primary key, I cannot get sorted results based on like_count. What would be the correct way to tackle this problem in Cassandra?
I am afraid Cassandra is not a good fit for dealing with mutable orders. (Consider Redis Sorted Sets instead)
With that said, you can actually achieve this using CAS-like semantics (compare-and-set) and light-weight transactions which will make your update around 20x slower.
You will also need an additional table that will serve as a lookup for current like_count per bucket_id/photo_id.
create table yy (
bucket_id int,
photo_id int,
like_count int,
PRIMARY KEY((bucket_id,photo_id))
)
Then do a lightweight-transactional delete from xx, followed (if successful) by a re-insert into xx and an update to yy.
Some pseudocode:
// CAS loop (supposedly in a function with args: bucket_id, photo_id, username, new_score)
for (;;) {
    // read the current score (the assumption here is that the bucket_id/photo_id entry
    // already exists in both xx and yy)
    ResultSet rs1 = select like_count from yy where bucket_id = ? and photo_id = ?
    int old_score = rs1.one().getInt(0)

    // same score: don't do anything
    if (new_score == old_score) break;

    // attempt to delete using a lightweight transaction (note the usage of IF EXISTS)
    ResultSet rs2 = delete from xx where bucket_id = ? and like_count = old_score and photo_id = ? IF EXISTS
    if (rs2.one().getBool(0)) {
        // if the delete was successful, re-insert with the new score
        insert into xx (bucket_id, like_count, photo_id, username) values (?, new_score, ?, ?)
        // update the lookup table
        update yy set like_count = new_score where bucket_id = ? and photo_id = ?
        // we are done!
        break;
    }
    // the delete was not successful: someone already updated the score,
    // so try again in the next CAS iteration
}
Remove like_count from the PRIMARY KEY definition and perform the sorting in the application (a sketch follows below). If this change happens very rarely and on few keys, you can consider removing the whole entry and rewriting it with the updated value, but I don't recommend that solution.
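A minimal CQL sketch of that alternative (xx2 is a hypothetical table name; like_count becomes a regular, updatable column and the client sorts after reading the partition):

create table xx2 (
    bucket_id int,
    photo_id int,
    like_count int,
    username text,
    PRIMARY KEY (bucket_id, photo_id)
);

-- like_count can now be updated in place
update xx2 set like_count = 42 where bucket_id = 1 and photo_id = 10;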
HTH,
Carlo
Here's the issue: I have 2 tables that I am currently using in a pivot to return a single value, MAX(Date). I have been asked to return additional values associated with that particular MAX(Date). I know I can do this with OVER (PARTITION BY ...), but it would require about 8 or 9 LEFT JOINs to get the desired output. I was hoping there is a way to get my existing PIVOT to return these values. More specifically, let's say each MAX(Date) has a data source, and we want that particular source to become part of the output. Here is a simple sample of what I am talking about:
Create table #Email
(
pk_id int not null identity primary key,
email_address varchar(50),
optin_flag bit default(0),
unsub_flag bit default(0)
)
Create table #History
(
pk_id int not null identity primary key,
email_id int not null,
Status_Cd char(2),
Status_Ds varchar(20),
Source_Cd char(3),
Source_Desc varchar(20),
Source_Dttm datetime
)
Insert into #Email
Values
('test@test.com',1,0),
('blank@blank.com',1,1)
Insert into #History
values
(1,'OP','OPT-IN','WB','WEB','1/2/2015 09:32:00'),
(1,'OP','OPT-IN','WB','WEB','1/3/2015 10:15:00'),
(1,'OP','OPT-IN','WB','WEB','1/4/2015 8:02:00'),
(2,'OP','OPT-IN','WB','WEB','2/1/2015 07:22:00'),
(2,'US','UNSUBSCRIBE','EM','EMAIL','3/2/2015 09:32:00'),
(2,'US','UNSUBSCRIBE','ESP','SERVICE PROVIDER','3/2/2015 09:55:00'),
(2,'US','UNSUBSCRIBE','WB','WEB','3/2/2015 10:15:00')
;with dates as
(
select
email_id,
[OP] as [OptIn_Dttm],
[US] as [Unsub_Dttm]
from
(
select
email_id,
status_cd,
source_dttm
from #history
) as src
pivot (min(source_dttm) for status_cd in ([OP],[US])) as piv
)
select
e.pk_id as email_id,
e.email_address,
e.optin_flag,
/*WANT TO GET THE OPTIN SOURCE HERE*/ /*<-------------*/
d.OptIn_Dttm,
e.unsub_flag,
d.Unsub_Dttm
/*WANT TO GET THE UNSUB SOURCE HERE*/ /*<-------------*/
from #Email e
left join dates d on e.pk_id = d.email_id
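A hedged sketch of the OVER (PARTITION BY ...) route the question mentions, kept to two joins instead of eight or nine; rn = 1 with an ascending sort matches the min(source_dttm) the sample pivot takes (flip the ORDER BY for the max):

;with ranked as
(
    select
        email_id,
        status_cd,
        source_desc,
        source_dttm,
        row_number() over (partition by email_id, status_cd
                           order by source_dttm) as rn
    from #History
)
select
    e.pk_id as email_id,
    e.email_address,
    e.optin_flag,
    op.source_desc as OptIn_Source,
    op.source_dttm as OptIn_Dttm,
    e.unsub_flag,
    us.source_desc as Unsub_Source,
    us.source_dttm as Unsub_Dttm
from #Email e
left join ranked op on op.email_id = e.pk_id and op.status_cd = 'OP' and op.rn = 1
left join ranked us on us.email_id = e.pk_id and us.status_cd = 'US' and us.rn = 1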