What is the best way to partition this archive table in MySQL?

I have the table below and want to partition it; please suggest the best way. The table will mostly be used in joins on the ProblemId column. It holds the history of each ProblemId/ticket, i.e. at what time the ticket moved to which state, and so on.
CREATE TABLE `sd_servicerequest_history` (
`ProblemId` int(11) NOT NULL,
`CurrentTime` datetime NOT NULL,
`NatureOfChange` varchar(255) NOT NULL,
`ActionPerformedBy` varchar(255) NOT NULL,
`HistoryID` int(10) unsigned NOT NULL AUTO_INCREMENT,
`OldValue` varchar(5120) NOT NULL,
`NewValue` varchar(5120) NOT NULL,
`Parameter` varchar(255) NOT NULL,
`FIELDID` int(10) unsigned NOT NULL DEFAULT '0',
`ChildOf` int(10) unsigned NOT NULL DEFAULT '0',
`OldStateID` int(10) unsigned NOT NULL DEFAULT '0',
`NewStateID` int(10) unsigned NOT NULL DEFAULT '0',
`Userid` int(10) DEFAULT '0',
PRIMARY KEY (`HistoryID`),
KEY `FK_servicehistory_ProblemId` (`ProblemId`),
KEY `ChildOfIndex` (`ChildOf`),
KEY `Userid` (`Userid`),
CONSTRAINT `FK_servicehistory_1` FOREIGN KEY (`Userid`) REFERENCES `userdetails` (`userid`) ON DELETE CASCADE,
CONSTRAINT `FK_servicehistory_ProblemId` FOREIGN KEY (`ProblemId`) REFERENCES `sd_servicereqmaster` (`ProblemId`) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8
I have tried partitioning using ProblemId, but I cannot always pass ProblemId in the WHERE clause.
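Given that most queries join on ProblemId, one commonly suggested approach is HASH partitioning on that column. A minimal sketch follows, with two MySQL caveats that force schema changes: every unique key (including the primary key) must contain the partitioning column, and partitioned InnoDB tables cannot carry FOREIGN KEY constraints, so both existing FKs would have to be dropped. The partition count of 16 is an arbitrary example value.

```sql
-- Partitioned InnoDB tables cannot have foreign keys: drop them first.
ALTER TABLE sd_servicerequest_history
    DROP FOREIGN KEY FK_servicehistory_1,
    DROP FOREIGN KEY FK_servicehistory_ProblemId;

-- The partitioning column must be part of every unique key,
-- so widen the primary key to include ProblemId.
ALTER TABLE sd_servicerequest_history
    DROP PRIMARY KEY,
    ADD PRIMARY KEY (HistoryID, ProblemId);

-- 16 partitions is an arbitrary example value.
ALTER TABLE sd_servicerequest_history
    PARTITION BY HASH (ProblemId) PARTITIONS 16;
```

Note that partition pruning only helps queries that actually filter on ProblemId; for queries that cannot, the existing ProblemId index may serve the joins just as well without partitioning at all.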

Related

While migrating the database using npm run migrate:latest, some of the tables are migrated but the remaining ones are not. Here is the error:

Using environment: development
migration file "20210506103358_table_account_master.js" failed
migration failed with error: create table account_master (account_id int unsigned not null auto_increment primary key, mobile_no varchar(12) not null, password varchar(256) not null, user_first_name varchar(100)
not null, user_last_name varchar(100) not null, phase_no varchar(50) not null, building_no varchar(50) not null, flat_no varchar(50) not null, owner varchar(3) not null, email_id varchar(100) not null, birth_year varchar(10) not null, is_active tinyint(1) not null default '0', update_date timestamp not null default CURRENT_TIMESTAMP, verify_by varchar(100), verify_date timestamp, approve_status tinyint(1) not null default '0', gender varchar(6) not null, reset_password_token varchar(256)) - ER_INVALID_DEFAULT: Invalid default value for 'verify_date'
Error: ER_INVALID_DEFAULT: Invalid default value for 'verify_date'
at Query.Sequence._packetToError (C:\gcpl\gcpl-sports\node_modules\mysql\lib\protocol\sequences\Sequence.js:47:14)
at Query.ErrorPacket (C:\gcpl\gcpl-sports\node_modules\mysql\lib\protocol\sequences\Query.js:79:18)
at Protocol._parsePacket (C:\gcpl\gcpl-sports\node_modules\mysql\lib\protocol\Protocol.js:291:23)
at Parser._parsePacket (C:\gcpl\gcpl-sports\node_modules\mysql\lib\protocol\Parser.js:433:10)
at Parser.write (C:\gcpl\gcpl-sports\node_modules\mysql\lib\protocol\Parser.js:43:10)
at Protocol.write (C:\gcpl\gcpl-sports\node_modules\mysql\lib\protocol\Protocol.js:38:16)
at Socket.<anonymous> (C:\gcpl\gcpl-sports\node_modules\mysql\lib\Connection.js:88:28)
at Socket.<anonymous> (C:\gcpl\gcpl-sports\node_modules\mysql\lib\Connection.js:526:10)
at Socket.emit (node:events:526:28)
at addChunk (node:internal/streams/readable:315:12)
--------------------
at Protocol._enqueue (C:\gcpl\gcpl-sports\node_modules\mysql\lib\protocol\Protocol.js:144:48)
at Connection.query (C:\gcpl\gcpl-sports\node_modules\mysql\lib\Connection.js:198:25)
at C:\gcpl\gcpl-sports\node_modules\knex\lib\dialects\mysql\index.js:132:18
at new Promise (<anonymous>)
at Client_MySQL._query (C:\gcpl\gcpl-sports\node_modules\knex\lib\dialects\mysql\index.js:126:12)
at executeQuery (C:\gcpl\gcpl-sports\node_modules\knex\lib\execution\internal\query-executioner.js:37:17)
at Client_MySQL.query (C:\gcpl\gcpl-sports\node_modules\knex\lib\client.js:144:12)
at C:\gcpl\gcpl-sports\node_modules\knex\lib\execution\transaction.js:363:24
at new Promise (<anonymous>)
at Client_MySQL.trxClient.query (C:\gcpl\gcpl-sports\node_modules\knex\lib\execution\transaction.js:358:12)
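The failing column is verify_date. In the generated SQL it is a bare `timestamp`; when explicit_defaults_for_timestamp is disabled, MySQL makes such a column (when it is not the first TIMESTAMP in the table) implicitly NOT NULL with the default '0000-00-00 00:00:00', which strict mode with NO_ZERO_DATE rejects as ER_INVALID_DEFAULT. A hedged sketch of the relevant part of the migration file, with unrelated columns elided, giving verify_date an explicit nullable type instead:

```javascript
// Sketch only: column list abridged to the columns involved in the error.
exports.up = (knex) =>
  knex.schema.createTable('account_master', (table) => {
    table.increments('account_id');
    // ... other columns as in the original migration ...
    table.timestamp('update_date').notNullable().defaultTo(knex.fn.now());
    table.string('verify_by', 100);
    // was: table.timestamp('verify_date');  -- implicit zero default, rejected
    table.specificType('verify_date', 'timestamp NULL DEFAULT NULL');
  });

exports.down = (knex) => knex.schema.dropTable('account_master');
```

Giving the column an explicit default with `.defaultTo(knex.fn.now())` is an alternative if a NULL value is not acceptable.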

Sequelize multiple foreign keys added instead of one

I have ended up with multiple foreign keys for some reason. It's weird because I'm trying to use recipeID as the foreign key for a left join later on. For that operation, Sequelize chooses to use RecipeURL, which is not defined anywhere in the schema below.
const Recipe = sequelize.define('Recipe', {
URL: {
type: DataTypes.STRING(512),
allowNull: false,
unique: true,
primaryKey: true
},
contentID: {
type: DataTypes.UUID,
allowNull: true
},
source: {
type: DataTypes.STRING,
allowNull: false
},
title: {
type: DataTypes.STRING,
allowNull: true
},
isRecipe: {
type: DataTypes.BOOLEAN,
allowNull: true,
defaultValue: null
},
ContentsURL: {
type: DataTypes.STRING(512),
allowNull: true
},
ScreenshotURL: {
type: DataTypes.STRING(512),
allowNull: true
},
});
const Comment = sequelize.define('Comment', {
ID: {
type: DataTypes.STRING,
primaryKey: true,
allowNull: false
},
text: {
type: DataTypes.TEXT,
allowNull: false
},
name: {
type: DataTypes.TEXT,
allowNull: true
},
date: {
type: DataTypes.DATE,
allowNull: true
}
});
Recipe.hasMany(Comment, { as: "comments" });
Comment.belongsTo(Recipe, {
foreignKey: "recipeID",
as: "recipe",
});
(async () => {
await sequelize.sync({alter: true, force: true})
process.exit(1)
})();
Running it:
$ node db.js
Executing (default): DROP TABLE IF EXISTS `Comments`;
Executing (default): DROP TABLE IF EXISTS `Recipes`;
Executing (default): DROP TABLE IF EXISTS `Recipes`;
Executing (default): CREATE TABLE IF NOT EXISTS `Recipes` (`URL` VARCHAR(512) NOT NULL UNIQUE , `contentID` CHAR(36) BINARY, `source` VARCHAR(255) NOT NULL, `title` VARCHAR(255), `isRecipe` TINYINT(1) DEFAULT NULL, `ContentsURL` VARCHAR(512), `ScreenshotURL` VARCHAR(512), `createdAt` DATETIME NOT NULL, `updatedAt` DATETIME NOT NULL, PRIMARY KEY (`URL`)) ENGINE=InnoDB;
Executing (default): SHOW FULL COLUMNS FROM `Recipes`;
Executing (default): SELECT CONSTRAINT_NAME as constraint_name,CONSTRAINT_NAME as constraintName,CONSTRAINT_SCHEMA as constraintSchema,CONSTRAINT_SCHEMA as constraintCatalog,TABLE_NAME as tableName,TABLE_SCHEMA as tableSchema,TABLE_SCHEMA as tableCatalog,COLUMN_NAME as columnName,REFERENCED_TABLE_SCHEMA as referencedTableSchema,REFERENCED_TABLE_SCHEMA as referencedTableCatalog,REFERENCED_TABLE_NAME as referencedTableName,REFERENCED_COLUMN_NAME as referencedColumnName FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE where TABLE_NAME = 'Recipes' AND CONSTRAINT_NAME!='PRIMARY' AND CONSTRAINT_SCHEMA='recipe' AND REFERENCED_TABLE_NAME IS NOT NULL;
Executing (default): ALTER TABLE `Recipes` CHANGE `contentID` `contentID` CHAR(36) BINARY;
Executing (default): ALTER TABLE `Recipes` CHANGE `source` `source` VARCHAR(255) NOT NULL;
Executing (default): ALTER TABLE `Recipes` CHANGE `title` `title` VARCHAR(255);
Executing (default): ALTER TABLE `Recipes` CHANGE `isRecipe` `isRecipe` TINYINT(1) DEFAULT NULL;
Executing (default): ALTER TABLE `Recipes` CHANGE `ContentsURL` `ContentsURL` VARCHAR(512);
Executing (default): ALTER TABLE `Recipes` CHANGE `ScreenshotURL` `ScreenshotURL` VARCHAR(512);
Executing (default): ALTER TABLE `Recipes` CHANGE `createdAt` `createdAt` DATETIME NOT NULL;
Executing (default): ALTER TABLE `Recipes` CHANGE `updatedAt` `updatedAt` DATETIME NOT NULL;
Executing (default): SHOW INDEX FROM `Recipes`
Executing (default): DROP TABLE IF EXISTS `Comments`;
Executing (default): CREATE TABLE IF NOT EXISTS `Comments` (`ID` VARCHAR(255) NOT NULL , `text` TEXT NOT NULL, `name` TEXT, `date` DATETIME, `createdAt` DATETIME NOT NULL, `updatedAt` DATETIME NOT NULL, `RecipeURL` VARCHAR(512), `recipeID` VARCHAR(512), PRIMARY KEY (`ID`), FOREIGN KEY (`RecipeURL`) REFERENCES `Recipes` (`URL`) ON DELETE SET NULL ON UPDATE CASCADE, FOREIGN KEY (`recipeID`) REFERENCES `Recipes` (`URL`) ON DELETE SET NULL ON UPDATE CASCADE) ENGINE=InnoDB;
Executing (default): SHOW FULL COLUMNS FROM `Comments`;
Executing (default): SELECT CONSTRAINT_NAME as constraint_name,CONSTRAINT_NAME as constraintName,CONSTRAINT_SCHEMA as constraintSchema,CONSTRAINT_SCHEMA as constraintCatalog,TABLE_NAME as tableName,TABLE_SCHEMA as tableSchema,TABLE_SCHEMA as tableCatalog,COLUMN_NAME as columnName,REFERENCED_TABLE_SCHEMA as referencedTableSchema,REFERENCED_TABLE_SCHEMA as referencedTableCatalog,REFERENCED_TABLE_NAME as referencedTableName,REFERENCED_COLUMN_NAME as referencedColumnName FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE where TABLE_NAME = 'Comments' AND CONSTRAINT_NAME!='PRIMARY' AND CONSTRAINT_SCHEMA='recipe' AND REFERENCED_TABLE_NAME IS NOT NULL;
Executing (default): ALTER TABLE `Comments` CHANGE `text` `text` TEXT NOT NULL;
Executing (default): ALTER TABLE `Comments` CHANGE `name` `name` TEXT;
Executing (default): ALTER TABLE `Comments` CHANGE `date` `date` DATETIME;
Executing (default): ALTER TABLE `Comments` CHANGE `createdAt` `createdAt` DATETIME NOT NULL;
Executing (default): ALTER TABLE `Comments` CHANGE `updatedAt` `updatedAt` DATETIME NOT NULL;
Executing (default): SELECT CONSTRAINT_CATALOG AS constraintCatalog, CONSTRAINT_NAME AS constraintName, CONSTRAINT_SCHEMA AS constraintSchema, CONSTRAINT_TYPE AS constraintType, TABLE_NAME AS tableName, TABLE_SCHEMA AS tableSchema from INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE table_name='Comments' AND constraint_name = 'Comments_ibfk_1' AND TABLE_SCHEMA = 'recipe';
Executing (default): ALTER TABLE `Comments` DROP FOREIGN KEY `Comments_ibfk_1`;
Executing (default): SELECT CONSTRAINT_CATALOG AS constraintCatalog, CONSTRAINT_NAME AS constraintName, CONSTRAINT_SCHEMA AS constraintSchema, CONSTRAINT_TYPE AS constraintType, TABLE_NAME AS tableName, TABLE_SCHEMA AS tableSchema from INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE table_name='Comments' AND constraint_name = 'Comments_ibfk_2' AND TABLE_SCHEMA = 'recipe';
Executing (default): ALTER TABLE `Comments` DROP FOREIGN KEY `Comments_ibfk_2`;
Executing (default): ALTER TABLE `Comments` ADD FOREIGN KEY (`RecipeURL`) REFERENCES `Recipes` (`URL`) ON DELETE SET NULL ON UPDATE CASCADE;
Executing (default): ALTER TABLE `Comments` ADD FOREIGN KEY (`recipeID`) REFERENCES `Recipes` (`URL`) ON DELETE SET NULL ON UPDATE CASCADE;
Look at the last two lines. Why are two foreign keys added? I think this might be a caching issue or something similar, so if I could just edit some internal schema file and comment out this RecipeURL reference, it would perhaps solve the problem.
Because you have declared it like that.
Recipe.hasMany(Comment, { as: "comments" });
Here, Sequelize automatically picks up your primary key as the table's foreign key.
Your primary key is not id; the primary key you have set is URL.
Comment.belongsTo(Recipe, {
foreignKey: "recipeID",
as: "recipe",
});
In this code you are overriding the default foreignKey and calling it recipeID.
Specify the same foreignKey on the Recipe association and you'll see only one foreign key.
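Concretely, a sketch that names the key on both sides of the association, using the models from the question, so Sequelize reuses the single recipeID column instead of also generating the default RecipeURL key:

```javascript
Recipe.hasMany(Comment, { as: "comments", foreignKey: "recipeID" });
Comment.belongsTo(Recipe, { as: "recipe", foreignKey: "recipeID" });
```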

External table in Azure Synapse doesn't return data

I'm trying to create an EXTERNAL TABLE, mapping it from Blob Storage by following this tutorial: Load Contoso retail data to Synapse SQL.
But I'm getting this error when I query the table:
Failed to execute query. Error: HdfsBridge::recordReaderFillBuffer - Unexpected error encountered filling record reader buffer: HadoopExecutionException: Too many columns in the line.
My configuration is as follows:
A: Create a database scoped credential
CREATE DATABASE SCOPED CREDENTIAL ServiceNow_AzureStorageCredential_ADSL
WITH
IDENTITY = 'usr_adsl_servicnow'
,SECRET = 'my_key'
;
B: Create an external data source
CREATE EXTERNAL DATA SOURCE ServiceNowBlobStorage
WITH (
TYPE = HADOOP,
LOCATION = 'abfss://<servicenow container>@<account storage>.blob.core.windows.net',
CREDENTIAL = ServiceNow_AzureStorageCredential_ADSL
);
C: Create the file format to be read from blob storage
CREATE EXTERNAL FILE FORMAT ServiceNowFileFormatCSV
WITH
( FORMAT_TYPE = DELIMITEDTEXT
, FORMAT_OPTIONS ( FIELD_TERMINATOR = ','
, STRING_DELIMITER = '"'
, FIRST_ROW = 2
, DATE_FORMAT = 'dd/MM/yyyy HH:mm:ss'
, USE_TYPE_DEFAULT = TRUE
, Encoding = 'UTF8'
)
);
D: Create the external table
CREATE EXTERNAL TABLE [asb].[incidents] (
[number] [nvarchar](30) NOT NULL,
[opened] [datetime] NOT NULL,
[resolved] [datetime] NULL,
[updated] [datetime] NULL,
[short_description] [nvarchar](2000) NOT NULL,
[urgency] [nvarchar](65) NOT NULL,
[resolve_time] [nvarchar](100) NULL,
[business_service] [nvarchar](100) NULL,
[what_is_the_system] [nvarchar](100) NULL,
[problem] [nvarchar](65) NULL,
[parent] [nvarchar](65) NULL,
[where_is_the_problem] [nvarchar](100) NULL,
[child_incidents] [nvarchar](30) NULL,
[parent_incident] [nvarchar](65) NULL,
[impact] [nvarchar](65) NULL,
[severity] [nvarchar](65) NULL,
[incident_state] [nvarchar](100) NULL,
[company] [nvarchar](30) NULL,
[business_duration] [nvarchar](65) NULL,
[duration] [nvarchar](65) NULL,
[created] [nvarchar](65) NULL,
[catalog_item] [nvarchar](100) NULL,
[priority] [nvarchar](100) NULL,
[state] [nvarchar](65) NULL,
[category] [nvarchar](30) NULL,
[assignment_group] [nvarchar](100) NULL,
[location] [nvarchar](200) NULL,
[ETLLoadID] [nvarchar](65) NULL,
[LoadDate] [nvarchar](65) NULL,
[UpdateDate] [nvarchar](65) NULL
)
WITH
(
LOCATION='servicenow-tables/incidents/incidents.csv'
, DATA_SOURCE = ServiceNowBlobStorage
, FILE_FORMAT = ServiceNowFileFormatCSV
, REJECT_TYPE = VALUE
, REJECT_VALUE = 0
);
For this test I am only using the nvarchar data type, to avoid the conversion error Error converting data type VARCHAR to DATETIME.
My file looks like this:
number,opened,resolved,updated,short_description,urgency,task_type,resolve_time,business_service,what_is_the_system,problem,parent,where_is_the_problem,child_incidents,parent_incident,impact,severity,incident_state,company,business_duration,duration,created,catalog_item,priority,state,category,assignment_group,location
"INC0020620","15/05/2020 10:42:39","19/05/2020 12:49:36","26/05/2020 13:00:02","Problemas de divergência nos valores","3 - Baixo(a)","Incidente","45,2620486","PDV",,"PRB0040714","",,"0","","1 - Alto(a)","3 - Baixo(a)","Encerrado","Lojas S.A.","18 Horas 6 Minutos","4 Dias 2 Horas 6 Minutos","15/05/2020 10:42:39","Problemas de divergência nos valores","3 - Moderado","Encerrado","Sistemas/Aplicações","TI_N2_SIS","ADMINISTRACAO"
I have tried many things to fix this, but without success.
A simple search for "," on the header line gives 27 matches, which means 28 fields. The same search on the data line gives 28 matches, i.e. 29 fields if you split naively on every comma. So the data line has one value too many, hence the error. The extra comma is the one inside the quoted value
"45,2620486"
which a naive comma split treats as two fields.
Hope this helps.
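One quick way to spot such ragged lines is a small field counter that honors double quotes; a sketch in Node follows (the sample line is a shortened, synthetic version of the data above):

```javascript
// Count top-level fields in a CSV line: a comma inside a double-quoted
// value is part of the value, not a field separator.
function countCsvFields(line) {
  let fields = 1;
  let inQuotes = false;
  for (const ch of line) {
    if (ch === '"') inQuotes = !inQuotes; // toggle quoted state
    else if (ch === ',' && !inQuotes) fields++;
  }
  return fields;
}

// Synthetic example: the quoted "45,2620486"-style value adds a field
// under a naive split but not under quote-aware counting.
const line = '"INC0020620","45,2620486","PDV"';
console.log(countCsvFields(line));   // 3 (quote-aware)
console.log(line.split(',').length); // 4 (naive split)
```

Running every line of the file through this counter and comparing against the header count shows exactly which rows will trip the loader.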

How Do I Fix this Sequelize Database Error when trying to do a POST method?

I am using Postman to test a POST route for my React application using Sequelize, node and express. I am getting this error below
{
"name": "SequelizeDatabaseError",
"parent": {
"code": "ER_NO_DEFAULT_FOR_FIELD",
"errno": 1364,
"sqlState": "HY000",
"sqlMessage": "Field 'title' doesn't have a default value",
"sql": "INSERT INTO `Trips` (`id`,`createdAt`,`updatedAt`) VALUES (DEFAULT,?,?);",
"parameters": [
"2019-12-01 00:50:42",
"2019-12-01 00:50:42"
]
},
"original": {
"code": "ER_NO_DEFAULT_FOR_FIELD",
"errno": 1364,
"sqlState": "HY000",
"sqlMessage": "Field 'title' doesn't have a default value",
"sql": "INSERT INTO `Trips` (`id`,`createdAt`,`updatedAt`) VALUES (DEFAULT,?,?);",
"parameters": [
"2019-12-01 00:50:42",
"2019-12-01 00:50:42"
]
},
"sql": "INSERT INTO `Trips` (`id`,`createdAt`,`updatedAt`) VALUES (DEFAULT,?,?);",
"parameters": [
"2019-12-01 00:50:42",
"2019-12-01 00:50:42"
]
}
The schema for my table is as follows
CREATE TABLE Trips (
id INT NOT NULL AUTO_INCREMENT,
title varchar(255) NOT NULL,
location varchar(255) DEFAULT NULL,
Description varchar(255) DEFAULT NULL,
tripDate datetime DEFAULT NULL,
image varchar(255) DEFAULT NULL,
createdAt timestamp default current_timestamp,
updatedAt timestamp,
PRIMARY KEY (id)
);
I tried changing the various columns to DEFAULT NULL, but even when I input data into those fields, I get back null in the database. I have added images of my code.
React Form
Trips Controller
Trips Model
Trips Router
-Sam
You have two problems:
1) In your SQL declaration and in your model declaration you are missing a default value for the title column.
Your SQL table declaration should look like this:
CREATE TABLE Trips (
id INT NOT NULL AUTO_INCREMENT,
title varchar(255) DEFAULT NULL, -- Or any other default value
location varchar(255) DEFAULT NULL,
Description varchar(255) DEFAULT NULL,
tripDate datetime DEFAULT NULL,
image varchar(255) DEFAULT NULL,
createdAt timestamp default current_timestamp,
updatedAt timestamp,
PRIMARY KEY (id)
);
According to this declaration your model declaration should be:
const Trips = sequelize.define('Trips', {
title: {
type: DataTypes.STRING,
defaultValue: null // or whatever you would like
},
location:DataTypes.STRING,
Description:DataTypes.STRING,
tripDate:DataTypes.DATEONLY,
image:DataTypes.STRING
});
If you still get this error after modifying these two, your problem is on the client side: for some reason the title data doesn't reach the server and is therefore undefined in req.body.
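If it does turn out to be client side, one common culprit is a missing JSON body parser on the server, which leaves req.body undefined for JSON POSTs. A hedged sketch (the route path and app setup below are assumptions, not from the question):

```javascript
const express = require('express');
const app = express();

// Without this middleware, req.body is undefined for JSON POSTs,
// and Sequelize then tries to insert a row with no title.
app.use(express.json());

app.post('/trips', async (req, res) => {
  const { title, location, Description, tripDate, image } = req.body;
  const trip = await Trips.create({ title, location, Description, tripDate, image });
  res.json(trip);
});
```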
1. By default Sequelize looks for createdAt and updatedAt in your model schema,
so either add createdAt and updatedAt, or, if you don't want these two fields, set timestamps: false in the model options.
2. You should also add the id field to your model schema. Sequelize adds an integer id primary key automatically when a model defines no primary key, so declaring it explicitly keeps the model in sync with your existing table.
const Trips = sequelize.define('Trips',{
id: {
allowNull: false,
autoIncrement: true,
primaryKey: true,
type: Sequelize.INTEGER
},
title:DataTypes.STRING,
location:DataTypes.STRING,
Description:DataTypes.STRING,
tripDate:DataTypes.DATEONLY,
image:DataTypes.STRING
},
{timestamps:false});

Implementing DDD (Red Book): Why did he make the Collaborators Value Objects?

I'm referring to the Red Book by Vaughn Vernon.
In the Collaboration Bounded Context he made Author, Member, Participant, Creator, etc. Value Objects whose fields are stored inline with the Entity they are bound to.
Say you create a Discussion that has one Creator: the fields of the Creator (id, name, email) will be stored in the same table (tbl_vw_discussion).
This is also true for the Forum (see the schema below).
DROP DATABASE IF EXISTS iddd_collaboration;
CREATE DATABASE iddd_collaboration;
USE iddd_collaboration;
SET FOREIGN_KEY_CHECKS=0;
CREATE TABLE `tbl_dispatcher_last_event` (
`event_id` bigint(20) NOT NULL,
PRIMARY KEY (`event_id`)
) ENGINE=InnoDB;
CREATE TABLE `tbl_es_event_store` (
`event_id` bigint(20) NOT NULL auto_increment,
`event_body` text NOT NULL,
`event_type` varchar(250) NOT NULL,
`stream_name` varchar(250) NOT NULL,
`stream_version` int(11) NOT NULL,
KEY (`stream_name`),
UNIQUE KEY (`stream_name`, `stream_version`),
PRIMARY KEY (`event_id`)
) ENGINE=InnoDB;
CREATE TABLE `tbl_vw_calendar` (
`calendar_id` varchar(36) NOT NULL,
`description` varchar(500),
`name` varchar(100) NOT NULL,
`owner_email_address` varchar(100) NOT NULL,
`owner_identity` varchar(50) NOT NULL,
`owner_name` varchar(200) NOT NULL,
`tenant_id` varchar(36) NOT NULL,
KEY `k_owner_identity` (`owner_identity`),
KEY `k_tenant_id` (`name`,`tenant_id`),
PRIMARY KEY (`calendar_id`)
) ENGINE=InnoDB;
CREATE TABLE `tbl_vw_calendar_entry` (
`calendar_entry_id` varchar(36) NOT NULL,
`alarm_alarm_units` int(11) NOT NULL,
`alarm_alarm_units_type` varchar(10) NOT NULL,
`calendar_id` varchar(36) NOT NULL,
`description` varchar(500),
`location` varchar(100),
`owner_email_address` varchar(100) NOT NULL,
`owner_identity` varchar(50) NOT NULL,
`owner_name` varchar(200) NOT NULL,
`repetition_ends` datetime NOT NULL,
`repetition_type` varchar(20) NOT NULL,
`tenant_id` varchar(36) NOT NULL,
`time_span_begins` datetime NOT NULL,
`time_span_ends` datetime NOT NULL,
KEY `k_calendar_id` (`calendar_id`),
KEY `k_owner_identity` (`owner_identity`),
KEY `k_repetition_ends` (`repetition_ends`),
KEY `k_tenant_id` (`tenant_id`),
KEY `k_time_span_begins` (`time_span_begins`),
KEY `k_time_span_ends` (`time_span_ends`),
PRIMARY KEY (`calendar_entry_id`)
) ENGINE=InnoDB;
CREATE TABLE `tbl_vw_calendar_entry_invitee` (
`id` int(11) NOT NULL auto_increment,
`calendar_entry_id` varchar(36) NOT NULL,
`participant_email_address` varchar(100) NOT NULL,
`participant_identity` varchar(50) NOT NULL,
`participant_name` varchar(200) NOT NULL,
`tenant_id` varchar(36) NOT NULL,
KEY `k_calendar_entry_id` (`calendar_entry_id`),
KEY `k_participant_identity` (`participant_identity`),
KEY `k_tenant_id` (`tenant_id`),
PRIMARY KEY (`id`)
) ENGINE=InnoDB;
CREATE TABLE `tbl_vw_calendar_sharer` (
`id` int(11) NOT NULL auto_increment,
`calendar_id` varchar(36) NOT NULL,
`participant_email_address` varchar(100) NOT NULL,
`participant_identity` varchar(50) NOT NULL,
`participant_name` varchar(200) NOT NULL,
`tenant_id` varchar(36) NOT NULL,
KEY `k_calendar_id` (`calendar_id`),
KEY `k_participant_identity` (`participant_identity`),
KEY `k_tenant_id` (`tenant_id`),
PRIMARY KEY (`id`)
) ENGINE=InnoDB;
CREATE TABLE `tbl_vw_discussion` (
`discussion_id` varchar(36) NOT NULL,
`author_email_address` varchar(100) NOT NULL,
`author_identity` varchar(50) NOT NULL,
`author_name` varchar(200) NOT NULL,
`closed` tinyint(1) NOT NULL,
`exclusive_owner` varchar(100),
`forum_id` varchar(36) NOT NULL,
`subject` varchar(100) NOT NULL,
`tenant_id` varchar(36) NOT NULL,
KEY `k_author_identity` (`author_identity`),
KEY `k_forum_id` (`forum_id`),
KEY `k_tenant_id` (`tenant_id`),
KEY `k_exclusive_owner` (`exclusive_owner`),
PRIMARY KEY (`discussion_id`)
) ENGINE=InnoDB;
CREATE TABLE `tbl_vw_forum` (
`forum_id` varchar(36) NOT NULL,
`closed` tinyint(1) NOT NULL,
`creator_email_address` varchar(100) NOT NULL,
`creator_identity` varchar(50) NOT NULL,
`creator_name` varchar(200) NOT NULL,
`description` varchar(500) NOT NULL,
`exclusive_owner` varchar(100),
`moderator_email_address` varchar(100) NOT NULL,
`moderator_identity` varchar(50) NOT NULL,
`moderator_name` varchar(200) NOT NULL,
`subject` varchar(100) NOT NULL,
`tenant_id` varchar(36) NOT NULL,
KEY `k_creator_identity` (`creator_identity`),
KEY `k_tenant_id` (`tenant_id`),
KEY `k_exclusive_owner` (`exclusive_owner`),
PRIMARY KEY (`forum_id`)
) ENGINE=InnoDB;
CREATE TABLE `tbl_vw_post` (
`post_id` varchar(36) NOT NULL,
`author_email_address` varchar(100) NOT NULL,
`author_identity` varchar(50) NOT NULL,
`author_name` varchar(200) NOT NULL,
`body_text` text NOT NULL,
`changed_on` datetime NOT NULL,
`created_on` datetime NOT NULL,
`discussion_id` varchar(36) NOT NULL,
`forum_id` varchar(36) NOT NULL,
`reply_to_post_id` varchar(36),
`subject` varchar(100) NOT NULL,
`tenant_id` varchar(36) NOT NULL,
KEY `k_author_identity` (`author_identity`),
KEY `k_discussion_id` (`discussion_id`),
KEY `k_forum_id` (`forum_id`),
KEY `k_reply_to_post_id` (`reply_to_post_id`),
KEY `k_tenant_id` (`tenant_id`),
PRIMARY KEY (`post_id`)
) ENGINE=InnoDB;
Now if the user changes his email, you have to update/synchronize all of these tables to reflect the change.
I'm just wondering why he came up with that solution. What were the things he considered?
Also, are there any alternatives, like keeping the same code but persisting the data differently?
From the book:
There is no effort made to keep Collaborator Value instances
synchronized with the Identity and Access Context. They are immutable
and can only be fully replaced, not modified. p.468 para. 1
That means synchronization basically happens when the value gets replaced.
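To make the trade-off concrete: because each table inlines the Collaborator's fields, fully replacing, say, an author's e-mail address means one UPDATE per table that carries it. A sketch against the schema above (the literal values are placeholders):

```sql
UPDATE tbl_vw_discussion
   SET author_email_address = 'new.address@example.com'
 WHERE author_identity = 'participant-1234';

UPDATE tbl_vw_post
   SET author_email_address = 'new.address@example.com'
 WHERE author_identity = 'participant-1234';
```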
If a Collaborator name or e-mail address changes in the Identity
and Access Context, such changes won't be automatically updated in
the Collaboration Context. Those kinds of changes rarely occur, so
the team made the decision to keep this particular design simple and
not attempt to synchronize changes in the remote Context with objects
in their local Context. p.469 para. 1
You will also want to read p. 476, para. 2, Can You Handle the Responsibility. In this section Vaughn demonstrates how complex it can be to keep data synchronized between Bounded Contexts when you have to deal with out-of-order message consumption. It is also outlined that, to guarantee the order of messages, we need not rely on a complex messaging infrastructure but can simply pull messages from the remote Context (e.g. through a RESTful notification service, see note 2).
It's always a question of trade-offs and what is acceptable for your domain.
2. Such an approach is described at p. 312, sec. Publishing Notifications as RESTful Resources.
