AzureMobileClient DatabaseFirst: manually created data table fails? - azure

I'm trying to use AzureMobileClient in my Xamarin.Forms app with a database-first model. For now I am not using offline sync.
So I used this script to create the table in my Azure SQL database:
CREATE TABLE [dbo].[TodoItems] (
    -- This must be a string suitable for a GUID
    [Id] NVARCHAR (128) NOT NULL,
    -- These are the system properties
    [Version] ROWVERSION NOT NULL,
    [CreatedAt] DATETIMEOFFSET (7) NOT NULL,
    [UpdatedAt] DATETIMEOFFSET (7) NULL,
    [Deleted] BIT NOT NULL,
    -- These are the properties of our DTO not included in EntityFramework
    [Text] NVARCHAR (MAX) NULL,
    [Complete] BIT NOT NULL
);
CREATE CLUSTERED INDEX [IX_CreatedAt]
ON [dbo].TodoItems([CreatedAt] ASC);
ALTER TABLE [dbo].[TodoItems]
ADD CONSTRAINT [PK_dbo.TodoItems] PRIMARY KEY NONCLUSTERED ([Id] ASC);
CREATE TRIGGER [TR_dbo_TodoItems_InsertUpdateDelete] ON [dbo].[TodoItems]
AFTER INSERT, UPDATE, DELETE AS
BEGIN
    UPDATE [dbo].[TodoItems]
    SET [dbo].[TodoItems].[UpdatedAt] = CONVERT(DATETIMEOFFSET, SYSUTCDATETIME())
    FROM INSERTED
    WHERE INSERTED.[Id] = [dbo].[TodoItems].[Id]
END;
This is based on the sample TodoItem provided by Azure. I can call GetAllItems without any problem (the table is empty for now), but when I try to insert an item I get this error from my Azure backend:
{[Message, The operation failed with the following error: 'Cannot insert the value NULL into column 'CreatedAt', table 'TechCenterCentaur.dbo.TodoItems'; column does not allow nulls. INSERT fails.The statement has been terminated.'.]}
Isn't Azure supposed to take care of that automatically?
All I do in my Xamarin.Forms code is:
TodoItem cl = new TodoItem();
cl.Name = "Test";
await _todoTable.InsertAsync(cl);
The call reaches the backend with a TodoItem containing only "Test"; all the other fields are null. The exception occurs in the backend:
public async Task<IHttpActionResult> PostTodoItem(TodoItem item)
{
    try
    {
        TodoItem current = await InsertAsync(item); // crash here
        return CreatedAtRoute("Tables", new { id = current.Id }, current);
    }
    catch (System.Exception e)
    {
        throw;
    }
}
Any suggestions?

OK, I found the solution. The problem was in my SQL table: I was missing the two ALTER TABLE statements that add defaults for a new GUID on Id and for CreatedAt.
Here is my new script:
USE [TechCenterCentaur]
GO
/****** Object: Table [dbo].[TodoItems] Script Date: 2017-11-08 11:09:14 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[TodoItems](
[Id] [nvarchar](128) NOT NULL,
[Text] [nvarchar](max) NULL,
[Complete] [bit] NOT NULL,
[Version] [timestamp] NOT NULL,
[CreatedAt] [datetimeoffset](7) NOT NULL,
[UpdatedAt] [datetimeoffset](7) NULL,
[Deleted] [bit] NOT NULL,
CONSTRAINT [PK_dbo.TodoItems] PRIMARY KEY NONCLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
)
GO
ALTER TABLE [dbo].[TodoItems] ADD DEFAULT (newid()) FOR [Id]
GO
ALTER TABLE [dbo].[TodoItems] ADD DEFAULT (sysutcdatetime()) FOR [CreatedAt]
GO
CREATE TRIGGER [TR_dbo_TodoItems_InsertUpdateDelete] ON [dbo].[TodoItems]
AFTER INSERT, UPDATE, DELETE AS
BEGIN
    UPDATE [dbo].[TodoItems]
    SET [dbo].[TodoItems].[UpdatedAt] = CONVERT(DATETIMEOFFSET, SYSUTCDATETIME())
    FROM INSERTED
    WHERE INSERTED.[Id] = [dbo].[TodoItems].[Id]
END;
GO

Related

Why does this happen when inserting thousands of rows into an Oracle table with Node.js?

I am new to Oracle and I am looking for the best way to insert thousands (maybe millions) of records into a table.
I have seen other questions and answers about this situation, but in that answer the PL/SQL code uses TWO associative arrays of scalar types (PLS_INTEGER) that work as table columns; I need the same but with ONE nested table of a record/complex type, so each element is inserted into the table as a row.
First of all, I have this code in Node.js (TypeScript) using the oracledb package (v5.1.0):
let data: Array<DataModel>;
// The data variable is populated elsewhere and 'DataModel' is an interface;
// data is an array that matches the table's structure exactly:
// [
//   { C_ONE: 'mike', C_TWO: 'hugman', C_THREE: '34', ... with 12 other columns },
//   { C_ONE: 'robert', C_TWO: 'zuck', C_THREE: '34', ... with 12 other columns },
//   { C_ONE: 'john', C_TWO: 'gates', C_THREE: '34', ... with 12 other columns }
// ]
let context;
try {
context = await oracledb.getConnection({
user: 'admin',
password: 'admin',
connectString: 'blabla'
});
const result = await context.execute(
// My SP
'BEGIN PACKAGE_TEST.SP_TEST_STRESS(:p_data, :p_status); END;',
{
// My JSON Array
p_data: {
type: 'PACKAGE_TEST.T_STRESS',
val: data
},
// Variable to check whether everything succeeded or failed... this doesn't matter :)
p_status: {
type: oracledb.NUMBER,
val: 1,
dir: oracledb.BIND_OUT
}
},
{ autoCommit: true }
);
console.log(result);
if ((result.outBinds as { p_status: number }).p_status === 0) {
// Correct
}
else {
// Failed
}
} catch (error) {
// bla bla for errors
} finally {
if (context) {
try {
await context.close();
} catch (error) {
// bla bla for errors
}
}
}
And the PL/SQL code for my stored procedure:
CREATE OR REPLACE PACKAGE PACKAGE_TEST
IS
TYPE R_STRESS IS RECORD
(
C_ONE VARCHAR(50),
C_TWO VARCHAR(500),
C_THREE VARCHAR(10),
C_FOUR VARCHAR(100),
C_FIVE VARCHAR(10),
C_SIX VARCHAR(100),
C_SEVEN VARCHAR(50),
C_EIGHT VARCHAR(50),
C_NINE VARCHAR(50),
C_TEN VARCHAR(50),
C_ELEVEN VARCHAR(50),
C_TWELVE VARCHAR(50),
C_THIRTEEN VARCHAR(300),
C_FOURTEEN VARCHAR(100),
C_FIVETEEN VARCHAR(300),
C_SIXTEEN VARCHAR(50)
);
TYPE T_STRESS IS VARRAY(213627) OF R_STRESS;
PROCEDURE SP_TEST_STRESS
(
P_DATA_FOR_PROCESS T_STRESS,
P_STATUS OUT NUMBER
);
END;
/
CREATE OR REPLACE PACKAGE BODY PACKAGE_TEST
IS
PROCEDURE SP_TEST_STRESS
(
P_DATA_FOR_PROCESS T_STRESS,
P_STATUS OUT NUMBER
)
IS
BEGIN
DBMS_OUTPUT.put_line('started');
BEGIN
FORALL i IN 1 .. P_DATA_FOR_PROCESS.COUNT
INSERT INTO TEST_STRESS
(
C_ONE,
C_TWO,
C_THREE,
C_FOUR,
C_FIVE,
C_SIX,
C_SEVEN,
C_EIGHT,
C_NINE,
C_TEN,
C_ELEVEN,
C_TWELVE,
C_THIRTEEN,
C_FOURTEEN,
C_FIVETEEN,
C_SIXTEEN
)
VALUES
(
P_DATA_FOR_PROCESS(i).C_ONE,
P_DATA_FOR_PROCESS(i).C_TWO,
P_DATA_FOR_PROCESS(i).C_THREE,
P_DATA_FOR_PROCESS(i).C_FOUR,
P_DATA_FOR_PROCESS(i).C_FIVE,
P_DATA_FOR_PROCESS(i).C_SIX,
P_DATA_FOR_PROCESS(i).C_SEVEN,
P_DATA_FOR_PROCESS(i).C_EIGHT,
P_DATA_FOR_PROCESS(i).C_NINE,
P_DATA_FOR_PROCESS(i).C_TEN,
P_DATA_FOR_PROCESS(i).C_ELEVEN,
P_DATA_FOR_PROCESS(i).C_TWELVE,
P_DATA_FOR_PROCESS(i).C_THIRTEEN,
P_DATA_FOR_PROCESS(i).C_FOURTEEN,
P_DATA_FOR_PROCESS(i).C_FIVETEEN,
P_DATA_FOR_PROCESS(i).C_SIXTEEN
);
EXCEPTION
WHEN OTHERS THEN
p_status := 1;
END;
P_STATUS := 0;
END;
END;
And my target table:
CREATE TABLE TEST_STRESS
(
C_ONE VARCHAR(50),
C_TWO VARCHAR(500),
C_THREE VARCHAR(10),
C_FOUR VARCHAR(100),
C_FIVE VARCHAR(10),
C_SIX VARCHAR(100),
C_SEVEN VARCHAR(50),
C_EIGHT VARCHAR(50),
C_NINE VARCHAR(50),
C_TEN VARCHAR(50),
C_ELEVEN VARCHAR(50),
C_TWELVE VARCHAR(50),
C_THIRTEEN VARCHAR(300),
C_FOURTEEN VARCHAR(100),
C_FIVETEEN VARCHAR(300),
C_SIXTEEN VARCHAR(50)
);
An interesting behavior happens with this scenario:
If I send my JSON array with 200 rows, it works perfectly; I don't know the exact time it takes to complete successfully, but I can tell it's milliseconds.
If I send my JSON array with 200,000 rows, it takes three or four minutes before the promise resolves, and then it throws an exception of type ORA-04036: PGA memory used by the instance exceeds PGA_AGGREGATE_LIMIT.
This happens when passing the JSON array to the procedure parameter; it seems that processing it is what costs so much.
Why does this happen in the second scenario?
Is there a limitation on the number of rows for nested table types, or is there some (default) configuration in Node.js?
Oracle suggests increasing pga_aggregate_limit, but when I check it in SQL Developer with "show parameter pga;" it is 3G. Does that mean the data I am sending exceeds 3 GB? Is that normal?
Is there a more viable solution that does not affect the performance of the database?
Appreciate your help.
Each server process gets its own PGA, so I'm guessing this is causing the total aggregate PGA, over all the processes currently running, to go over 3 GB.
I assume this is happening because of what's going on inside your package, but you only show the specification, so there's no way to tell what's happening there.
You're not using a nested table type. You're using a varray. A varray has a maximum length of 2,147,483,647.
It sounds like you're doing something inside your procedure that uses too much memory. Maybe you need to process the 200,000 rows in chunks? Without more information about what you're trying to do: can you use some other process to load your data, like sqlldr?
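If the size of the single bind collection is what drives the PGA usage up, one way to apply the chunking idea is on the Node.js side: slice the array and call the procedure once per slice. The following is only a rough sketch under that assumption, reusing PACKAGE_TEST.SP_TEST_STRESS and the DataModel interface from the question; the chunk size of 10,000 is an arbitrary starting point to tune.
import oracledb from 'oracledb';

// Hypothetical chunked loader: same procedure as above, but each call binds a
// smaller collection so no single call has to hold 200,000 records at once.
async function insertInChunks(data: DataModel[], chunkSize = 10000): Promise<void> {
  const connection = await oracledb.getConnection({
    user: 'admin',
    password: 'admin',
    connectString: 'blabla'
  });
  try {
    for (let i = 0; i < data.length; i += chunkSize) {
      const slice = data.slice(i, i + chunkSize);
      const result = await connection.execute(
        'BEGIN PACKAGE_TEST.SP_TEST_STRESS(:p_data, :p_status); END;',
        {
          p_data: { type: 'PACKAGE_TEST.T_STRESS', val: slice },
          p_status: { type: oracledb.NUMBER, dir: oracledb.BIND_OUT }
        },
        { autoCommit: true } // or commit once at the end if partial loads are a concern
      );
      if ((result.outBinds as { p_status: number }).p_status !== 0) {
        throw new Error(`Chunk starting at row ${i} failed`);
      }
    }
  } finally {
    await connection.close();
  }
}
Whether this stays under the PGA limit still depends on what the procedure does with each chunk, so it is worth testing with increasing chunk sizes.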

Inserting multiple rows into SQL Server from Node.js

I am working on a project that will upload some records to SQL Server from a node.js program. Right now, this is my approach (inside an async function):
con = await sql.connect(`mssql://${SQL.user}:${SQL.password}@${SQL.server}/${SQL.database}?encrypt=true`);
for (r of RECORDS) {
    columns = `([column1], [column2], [column3])`;
    values = `(@col1, @col2, @col3)`;
    await con
        .request()
        .input("col1", sql.Int, r.col1)
        .input("col2", sql.VarChar, r.col2)
        .input("col3", sql.VarChar, r.col3)
        .query(`INSERT INTO [dbo].[table1] ${columns} VALUES ${values}`);
}
Where RECORDS is an array of objects of the form:
RECORDS = [
{ col1: 1, col2: "asd", col3: "A" },
{ col1: 2, col2: "qwerty", col3: "B" },
// ...
];
This code works; nevertheless, I have the feeling that it is not efficient at all. An upload of around 4k records takes roughly 10 minutes, which does not look good.
I believe that if I can create a single query with all the record values - instead of wrapping single inserts inside a for loop - it will be faster, and I know there is syntax in SQL for reaching that:
INSERT INTO table1 (column1, column2, column3) VALUES (1, 'asd', 'A'), (2, 'qwerty', 'B'), (...);
However, I cannot find any documentation in the node mssql module on how to prepare the parameterized inputs to do everything in a single transaction.
Can anyone point me in the right direction?
Thanks in advance.
Also, very similar to the bulk insert, you can use a table-valued parameter.
sql.connect("mssql://${SQL.user}:${SQL.password}#${SQL.server}/${SQL.database}?encrypt=true")
.then(() => {
const table = new sql.Table();
table.columns.add('col1', sql.Int);
table.columns.add('col2', sql.VarChar(20));
table.columns.add('col3', sql.VarChar(20));
// add data
table.rows.add(1, 'asd', 'A');
table.rows.add(2, 'qwerty', 'B');
const request = new sql.Request();
request.input('table1', table);
request.execute('procMyProcedure', function (err, recordsets, returnValue) {
console.dir(JSON.stringify(recordsets[0][0]));
res.end(JSON.stringify(recordsets[0][0]));
});
});
And then, on the SQL side, create a user-defined table type:
CREATE TYPE typeMyType AS TABLE
(
Col1 int,
Col2 varchar(20),
Col3 varchar(20)
)
And then use it in the stored procedure:
CREATE PROCEDURE procMyProcedure
    @table1 typeMyType READONLY
AS
BEGIN
    INSERT INTO table1 (Col1, Col2, Col3)
    SELECT Col1, Col2, Col3
    FROM @table1
END
This gives you more control over the data and lets you do more with the data in sql before you actually insert.
As pointed out by @JoaquinAlvarez, bulk insert should be used, as answered here: Bulk inserting with Node mssql package
In my case, the code looked like this:
return await sql.connect(`mssql://${SQL.user}:${SQL.password}@${SQL.server}/${SQL.database}?encrypt=true`).then(() => {
    table = new sql.Table("table1");
    table.create = true;
    table.columns.add("column1", sql.Int, { nullable: false });
    table.columns.add("column2", sql.VarChar, { length: Infinity, nullable: true });
    table.columns.add("column3", sql.VarChar(250), { nullable: true });
    // add here rows to insert into the table
    for (r of RECORDS) {
        table.rows.add(r.col1, r.col2, r.col3);
    }
    return new sql.Request().bulk(table);
});
The SQL data types have to match (obviously) the column types of the existing table table1. Note the case of column2, which is defined in SQL as varchar(max).
Thanks Joaquin! The time went down significantly, from 10 minutes to a few seconds.

Import Export data from a CSV file to SQL Server DB?

I am working on a web application where I have to implement import/export functionality. I am new to this, so I want your suggestions; these are the issues I am facing.
First, I have a JSON array coming from the frontend (AngularJS):
[
  {
    "MyName": "Shubham",
    "UserType": "Premium",
    "DialCode": "India",
    "ContactNumber": "9876543210",
    "EmailAddress": "Contact@Shubh.com",
    "Country": "India",
    "Notes": "Notes-Notes-Notes-Notes"
  },
  {
    "MyName": "Shubham 2",
    "UserType": "Free Trial",
    "DialCode": "India",
    "ContactNumber": "123456789",
    "EmailAddress": "Contact2@Shubh.com",
    "Country": "India",
    "Notes": "Notes-Notes-Notes-Notes"
  }
]
Now I am converting this array to XML in Node.js using xmlbuilder, like this:
<UserXML>
<MyName>Shubham</MyName>
<UserType>Premium</UserType>
<DialCode>India</DialCode>
<ContactNumber>9876543210</ContactNumber>
<EmailAddress>Contact@Shubh.com</EmailAddress>
<Country>India</Country>
<Notes>Notes-Notes-Notes-Notes</Notes>
</UserXML>
<UserXML>
<MyName>Shubham 2</MyName>
<UserType>Free Trial</UserType>
<DialCode>India</DialCode>
<ContactNumber>123456789</ContactNumber>
<EmailAddress>Contact2@Shubh.com</EmailAddress>
<Country>India</Country>
<Notes>Notes2-Notes2-Notes2-Notes2</Notes>
</UserXML>
Now I am using this XML in SQL Server to insert these 2 records into my user table.
The issue is that I have another table where the country code and country name are stored, and my user table references it with a foreign key.
I have made some sample code:
DROP TABLE IF EXISTS #Country
DROP TABLE IF EXISTS #user
DROP TABLE IF EXISTS #UserType
CREATE TABLE #Country
(
Id INT PRIMARY KEY identity(1 ,1),
Name VARCHAR(50) NOT NULL,
DialCode VARCHAR(50) NOT NULL
)
CREATE TABLE #UserType
(
Id INT PRIMARY KEY identity(1 ,1),
Name VARCHAR(50) NOT NULL
)
CREATE TABLE #user
(
Id INT PRIMARY KEY IDENTITY(1 ,1),
Name VARCHAR(50) NOT NULL,
UserTypeId INT NOT NULL,
DialCodeId INT NOT NULL,
ContactNumber VARCHAR(50) NOT NULL,
EmailAddress VARCHAR(50) NOT NULL,
CountryId INT NOT NULL,
Notes VARCHAR(50) NOT NULL
FOREIGN KEY(CountryId) REFERENCES #Country(Id),
FOREIGN KEY(UserTypeId) REFERENCES #UserType(Id)
);
INSERT INTO #Country (Name,DialCode)
VALUES ('India','+91'),
('Dubai','+971'),
('U.S','+1') ;
INSERT INTO #UserType (Name)
VALUES ('Premium'),
('Free Trial');
CASE 1 (working fine): if I have a single record, there is no issue with this approach:
declare @xml xml = '<UserXML>
<Name>Shubham CASE-1</Name>
<UserType>Premium</UserType>
<DialCode>India</DialCode>
<ContactNumber>9876543210</ContactNumber>
<EmailAddress>Contact@Shubh.com</EmailAddress>
<Country>India</Country>
<Notes>Notes-Notes-Notes-Notes CASE-1</Notes>
</UserXML>'
Now I have to check the country name / user type and match it against the country table to get the id:
DECLARE @CountryId INT
       ,@UserType INT
SELECT @CountryId = id FROM #Country WHERE Name LIKE ''+(select U.Items.value('./DialCode[1]','NVARCHAR(200)') as DialCode FROM @xml.nodes('/UserXML') U(Items))+'%'
SELECT @UserType = id FROM #UserType WHERE Name LIKE ''+(select U.Items.value('./UserType[1]','NVARCHAR(200)') as UserType FROM @xml.nodes('/UserXML') U(Items))+'%'
INSERT INTO #user
SELECT
    U.Item.query('./Name').value('.','VARCHAR(100)') Name,
    @UserType,
    @CountryId,
    U.Item.query('./ContactNumber').value('.','VARCHAR(100)') ContactNumber,
    U.Item.query('./EmailAddress').value('.','VARCHAR(100)') EmailAddress,
    @CountryId,
    U.Item.query('./Notes').value('.','VARCHAR(100)') Notes
FROM @xml.nodes('/UserXML') AS U(Item)
CASE 2 (well, the issue is here): if I have multiple records, how can I check every node and then do a join or something similar so that my insert query works correctly?
declare @xml2 xml = '<UserXML>
<Name>Shubham CASE-2</Name>
<UserType>Premium</UserType>
<DialCode>India</DialCode>
<ContactNumber>9876543210</ContactNumber>
<EmailAddress>Contact@Shubh.com</EmailAddress>
<Country>India</Country>
<Notes>Notes-Notes-Notes-Notes CASE-2</Notes>
</UserXML>
<UserXML>
<Name>Shubham 2 CASE-2</Name>
<UserType>Free Trial</UserType>
<DialCode>Dubai</DialCode>
<ContactNumber>123456789</ContactNumber>
<EmailAddress>Contact2@Shubh.com</EmailAddress>
<Country>Dubai</Country>
<Notes>Notes2-Notes2-Notes2-Notes2 CASE-2</Notes>
</UserXML>'
DECLARE @CountryId2 INT
       ,@UserType2 INT
SELECT @CountryId2 = id FROM #Country WHERE Name LIKE ''+(select U.Items.value('./DialCode[1]','NVARCHAR(200)') as DialCode FROM @xml2.nodes('/UserXML') U(Items))+'%'
SELECT @UserType2 = id FROM #UserType WHERE Name LIKE ''+(select U.Items.value('./UserType[1]','NVARCHAR(200)') as UserType FROM @xml2.nodes('/UserXML') U(Items))+'%'
INSERT INTO #user
SELECT
    U.Item.query('./Name').value('.','VARCHAR(100)') Name,
    @UserType,
    @CountryId,
    U.Item.query('./ContactNumber').value('.','VARCHAR(100)') ContactNumber,
    -- U.Item.query('./EmailAddress').value('.','VARCHAR(100)') EmailAddress,
    @CountryId,
    U.Item.query('./Notes').value('.','VARCHAR(100)') Notes
FROM @xml2.nodes('/UserXML') AS U(Item)
Please, if you have any suggestions or a better approach for this task, help me out; I am new to this, so I don't know the best approach for doing it.
You can read the JSON directly and perform the JOIN:
SELECT entity.*
FROM OPENROWSET (BULK N'd:\temp\file.json', SINGLE_CLOB) as j
CROSS APPLY OPENJSON(BulkColumn)
WITH(
MyName nvarchar(100)
,UserType nvarchar(100)
,DialCode nvarchar(100)
,ContactNumber nvarchar(100)
,EmailAddress nvarchar(100)
,Country nvarchar(100)
,Notes nvarchar(500)
) AS entity
More info about importing JSON documents and OPENJSON.
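Since the data already arrives as JSON in Node.js, the same OPENJSON idea can also cover the lookup problem from CASE 2: pass the JSON string as a parameter and join the parsed rows to the country and user-type tables in one INSERT ... SELECT. This is only a sketch, assuming permanent Users, Country and UserType tables shaped like the temp tables above and the node mssql package from the previous question; the connection details and column sizes are placeholders, and OPENJSON needs SQL Server 2016+ (compatibility level 130).
import sql from "mssql";

// Hypothetical helper: inserts every user in the JSON array, resolving
// CountryId/DialCodeId and UserTypeId through joins on the lookup tables.
async function importUsers(jsonArray: string): Promise<void> {
  const pool = await sql.connect({
    server: "myserver",        // placeholder connection details
    database: "mydb",
    user: "admin",
    password: "admin",
    options: { encrypt: true }
  });
  try {
    await pool.request()
      .input("json", sql.NVarChar(sql.MAX), jsonArray)
      .query(`
        INSERT INTO Users (Name, UserTypeId, DialCodeId, ContactNumber, EmailAddress, CountryId, Notes)
        SELECT j.MyName, ut.Id, c.Id, j.ContactNumber, j.EmailAddress, c.Id, j.Notes
        FROM OPENJSON(@json)
        WITH (
          MyName        nvarchar(100),
          UserType      nvarchar(100),
          DialCode      nvarchar(100),
          ContactNumber nvarchar(100),
          EmailAddress  nvarchar(100),
          Country       nvarchar(100),
          Notes         nvarchar(500)
        ) AS j
        JOIN Country  AS c  ON c.Name  = j.Country
        JOIN UserType AS ut ON ut.Name = j.UserType`);
  } finally {
    await pool.close();
  }
}
Because the joins run per parsed row, this handles any number of records in the array without the per-record variable lookups used in CASE 1.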

How to join table that contains no data yet with sqlite

I am trying to join two tables: users and favourites. There is a possibility that a user has no favourites yet, and when I tried to INNER JOIN the two I didn't get back the user without any favourites. Is there any way to join even if the second table has no rows for that user?
I created the users table with the following code:
db.run(`CREATE TABLE Users(
UserId INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
Name TEXT NOT NULL,
Password TEXT NOT NULL,
Phone VARCHAR,
Email TEXT,
RestaurantId INTEGER,
FOREIGN KEY(RestaurantId) REFERENCES Restaurants(RestaurantId))`, (err) => {
if(err) {
console.error(err.message);
} else {
//insert some values
var insert = 'INSERT INTO Users (Name, Password, Phone, Email, RestaurantId) VALUES (?, ?, ?, ?, ?)';
db.run(insert, [
'Liam',
'blabla',
'+32412345678',
'email#email.com',
1
]);
}
}
);
And the favourites table with:
db.run(`CREATE TABLE Favourites(
FavouriteId INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
UserId INTEGER NOT NULL,
RestaurantId INTEGER NOT NULL,
FOREIGN KEY(UserId) REFERENCES Users(UserId),
FOREIGN KEY(RestaurantId) REFERENCES Restaurants(RestaurantId))`, (err) => {
if(err) {
console.error(err.message);
} else {
//insert some values
var insert = 'INSERT INTO Favourites (UserId, RestaurantId) VALUES (?, ?)';
db.run(insert, [
1,
1
]);
}
}
);
There is no problem with the data that was generated by these inserts. The problem only appears when a new user without favourites is added to the database.
You are looking for a LEFT JOIN. Take a look at the documentation: https://www.w3resource.com/sqlite/sqlite-left-join.php.
A LEFT JOIN returns all the records from the left side of the join, together with the matched records from the right side (or NULLs where there is no match).
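A minimal sketch of that query for the tables above, reusing the same sqlite3 db handle from the question; users with no favourites simply come back with NULL in the favourite columns.
// Every user appears at least once; users without favourites get NULL
// FavouriteId/RestaurantId because of the LEFT JOIN.
const query = `
  SELECT u.UserId, u.Name, f.FavouriteId, f.RestaurantId
  FROM Users u
  LEFT JOIN Favourites f ON f.UserId = u.UserId`;

db.all(query, [], (err, rows) => {
  if (err) {
    console.error(err.message);
  } else {
    console.log(rows);
  }
});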

error while mapping database schema

When I try to map my database with SubSonic 3.0.0.3, I get this error:
"Running transformation: System.InvalidOperationException: Sequence contains more than one matching element..."
Where should I look for the error?
SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0;
SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0;
SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='TRADITIONAL';
CREATE SCHEMA IF NOT EXISTS osm2 DEFAULT CHARACTER SET cp1251 COLLATE cp1251_general_ci;
USE osm2;
-- Table mydb.sw_profile
CREATE TABLE IF NOT EXISTS osm2.sw_profile (
id INT NOT NULL ,
name VARCHAR(45) NOT NULL ,
configuration VARCHAR(500) NOT NULL ,
comments VARCHAR(45) NULL ,
PRIMARY KEY (id) )
ENGINE = InnoDB;
-- Table mydb.sw_type
CREATE TABLE IF NOT EXISTS osm2.sw_type(
id INT NOT NULL,
name VARCHAR (45) NOT NULL,
ports_num INT NOT NULL,
trunc_ports VARCHAR (45) NOT NULL,
supports_dhcp TINYINT (1) NOT NULL,
PRIMARY KEY (id)
)
ENGINE = INNODB;
-- Table mydb.sw
CREATE TABLE IF NOT EXISTS osm2.sw(
id INT NOT NULL,
sn VARCHAR (45) NULL,
mac VARCHAR (45) NOT NULL,
ip VARCHAR (45) NOT NULL,
comments VARCHAR (45) NULL,
sw_profile_id INT NOT NULL,
sw_type_id INT NOT NULL,
PRIMARY KEY (id),
INDEX fk_sw_sw_profile (sw_profile_id ASC),
INDEX fk_sw_sw_type1 (sw_type_id ASC),
CONSTRAINT fk_sw_sw_profile
FOREIGN KEY (sw_profile_id)
REFERENCES mydb.sw_profile (id)
ON DELETE NO ACTION
ON UPDATE NO ACTION,
CONSTRAINT fk_sw_sw_type1
FOREIGN KEY (sw_type_id)
REFERENCES mydb.sw_type (id)
ON DELETE NO ACTION
ON UPDATE NO ACTION
)
ENGINE = INNODB
DEFAULT CHARACTER SET = cp1251;
-- Table mydb.port
CREATE TABLE IF NOT EXISTS osm2.port(
id INT NOT NULL,
name VARCHAR (45) NOT NULL,
state TINYINT (1) NOT NULL,
user_id INT NULL,
sw_id INT NOT NULL,
PRIMARY KEY (id),
INDEX fk_port_sw1 (sw_id ASC),
CONSTRAINT fk_port_sw1
FOREIGN KEY (sw_id)
REFERENCES mydb.sw (id)
ON DELETE NO ACTION
ON UPDATE NO ACTION
)
ENGINE = INNODB;
-- Table mydb.vlan
CREATE TABLE IF NOT EXISTS osm2.vlan(
id INT NOT NULL,
name VARCHAR (45) NOT NULL,
tag VARCHAR (45) NOT NULL,
comments VARCHAR (500) NULL,
PRIMARY KEY (id)
)
ENGINE = INNODB;
-- Table mydb.address
CREATE TABLE IF NOT EXISTS osm2.address(
id INT NOT NULL,
name VARCHAR (45) NOT NULL,
short_name VARCHAR (45) NOT NULL,
comments VARCHAR (45) NULL,
sw_id INT NOT NULL,
PRIMARY KEY (id),
INDEX fk_address_sw1 (sw_id ASC),
CONSTRAINT fk_address_sw1
FOREIGN KEY (sw_id)
REFERENCES mydb.sw (id)
ON DELETE NO ACTION
ON UPDATE NO ACTION
)
ENGINE = INNODB;
-- Table mydb.tariff
CREATE TABLE IF NOT EXISTS osm2.tariff(
id INT NOT NULL,
name VARCHAR (45) NOT NULL,
price DOUBLE NOT NULL,
speed VARCHAR (45) NOT NULL,
PRIMARY KEY (id)
)
ENGINE = INNODB;
-- Table mydb.client
CREATE TABLE IF NOT EXISTS osm2.client(
id INT NOT NULL,
utm_id VARCHAR (45) NULL,
utm_login VARCHAR (45) NULL,
ip VARCHAR (45) NOT NULL,
ip_second VARCHAR (45) NULL,
contacts VARCHAR (500) NULL,
comments VARCHAR (500) NULL,
act VARCHAR (500) NULL,
vlan_id INT NOT NULL,
address_id INT NOT NULL,
tariff_id INT NOT NULL,
PRIMARY KEY (id),
INDEX fk_client_vlan1 (vlan_id ASC),
INDEX fk_client_address1 (address_id ASC),
INDEX fk_client_tariff1 (tariff_id ASC),
CONSTRAINT fk_client_vlan1
FOREIGN KEY (vlan_id)
REFERENCES mydb.vlan (id)
ON DELETE NO ACTION
ON UPDATE NO ACTION,
CONSTRAINT fk_client_address1
FOREIGN KEY (address_id)
REFERENCES mydb.address (id)
ON DELETE NO ACTION
ON UPDATE NO ACTION,
CONSTRAINT fk_client_tariff1
FOREIGN KEY (tariff_id)
REFERENCES mydb.tariff (id)
ON DELETE NO ACTION
ON UPDATE NO ACTION
)
ENGINE = INNODB;
-- Table mydb.port_has_vlan
CREATE TABLE IF NOT EXISTS osm2.port_has_vlan(
port_id INT NOT NULL,
vlan_id INT NOT NULL,
PRIMARY KEY (port_id, vlan_id),
INDEX fk_port_has_vlan_port1 (port_id ASC),
INDEX fk_port_has_vlan_vlan1 (vlan_id ASC),
CONSTRAINT fk_port_has_vlan_port1
FOREIGN KEY (port_id)
REFERENCES mydb.port (id)
ON DELETE NO ACTION
ON UPDATE NO ACTION,
CONSTRAINT fk_port_has_vlan_vlan1
FOREIGN KEY (vlan_id)
REFERENCES mydb.vlan (id)
ON DELETE NO ACTION
ON UPDATE NO ACTION
)
ENGINE = INNODB;
-- Table mydb.request_state
CREATE TABLE IF NOT EXISTS osm2.request_state(
id INT NOT NULL,
name VARCHAR (45) NOT NULL,
PRIMARY KEY (id)
)
ENGINE = INNODB;
-- Table mydb.request_type
CREATE TABLE IF NOT EXISTS osm2.request_type(
id INT NOT NULL,
name VARCHAR (45) NOT NULL,
comments VARCHAR (500) NULL,
PRIMARY KEY (id)
)
ENGINE = INNODB;
-- Table mydb.request
CREATE TABLE IF NOT EXISTS osm2.request(
id INT NOT NULL,
date DATETIME NOT NULL,
comments VARCHAR (500) NULL,
log VARCHAR (500) NULL,
request_state_id INT NOT NULL,
request_type_id INT NOT NULL,
client_id INT NOT NULL,
PRIMARY KEY (id),
INDEX fk_request_request_state1 (request_state_id ASC),
INDEX fk_request_request_type1 (request_type_id ASC),
INDEX fk_request_client1 (client_id ASC),
CONSTRAINT fk_request_request_state1
FOREIGN KEY (request_state_id)
REFERENCES mydb.request_state (id)
ON DELETE NO ACTION
ON UPDATE NO ACTION,
CONSTRAINT fk_request_request_type1
FOREIGN KEY (request_type_id)
REFERENCES mydb.request_type (id)
ON DELETE NO ACTION
ON UPDATE NO ACTION,
CONSTRAINT fk_request_client1
FOREIGN KEY (client_id)
REFERENCES mydb.client (id)
ON DELETE NO ACTION
ON UPDATE NO ACTION
)
ENGINE = INNODB;
-- Table mydb.department
CREATE TABLE IF NOT EXISTS osm2.department(
id INT NOT NULL,
name VARCHAR (45) NOT NULL,
comments VARCHAR (45) NOT NULL,
PRIMARY KEY (id)
)
ENGINE = INNODB;
-- Table mydb.group
CREATE TABLE IF NOT EXISTS osm2.`group`(
id INT NOT NULL,
name VARCHAR (45) NOT NULL,
PRIMARY KEY (id)
)
ENGINE = INNODB;
-- Table mydb.account
CREATE TABLE IF NOT EXISTS osm2.account(
id INT NOT NULL,
login VARCHAR (45) NOT NULL,
password VARCHAR (45) NOT NULL,
group_id INT NOT NULL,
PRIMARY KEY (id),
INDEX fk_account_group1 (group_id ASC),
CONSTRAINT fk_account_group1
FOREIGN KEY (group_id)
REFERENCES mydb.`group` (id)
ON DELETE NO ACTION
ON UPDATE NO ACTION
)
ENGINE = INNODB;
-- Table mydb.staff
CREATE TABLE IF NOT EXISTS osm2.staff(
id INT NOT NULL,
name VARCHAR (45) NOT NULL,
contacts VARCHAR (45) NOT NULL,
department_id INT NOT NULL,
account_id INT NOT NULL,
PRIMARY KEY (id),
INDEX fk_staff_department1 (department_id ASC),
INDEX fk_staff_account1 (account_id ASC),
CONSTRAINT fk_staff_department1
FOREIGN KEY (department_id)
REFERENCES mydb.department (id)
ON DELETE NO ACTION
ON UPDATE NO ACTION,
CONSTRAINT fk_staff_account1
FOREIGN KEY (account_id)
REFERENCES mydb.account (id)
ON DELETE NO ACTION
ON UPDATE NO ACTION
)
ENGINE = INNODB;
-- Table mydb.fid
CREATE TABLE IF NOT EXISTS osm2.fid(
id INT NOT NULL,
text VARCHAR (500) NOT NULL,
comments VARCHAR (500) NULL,
PRIMARY KEY (id)
)
ENGINE = INNODB;
-- Table mydb.group_has_fid
CREATE TABLE IF NOT EXISTS osm2.group_has_fid(
group_id INT NOT NULL,
fid_id INT NOT NULL,
PRIMARY KEY (group_id, fid_id),
INDEX fk_group_has_fid_group1 (group_id ASC),
INDEX fk_group_has_fid_fid1 (fid_id ASC),
CONSTRAINT fk_group_has_fid_group1
FOREIGN KEY (group_id)
REFERENCES mydb.`group` (id)
ON DELETE NO ACTION
ON UPDATE NO ACTION,
CONSTRAINT fk_group_has_fid_fid1
FOREIGN KEY (fid_id)
REFERENCES mydb.fid (id)
ON DELETE NO ACTION
ON UPDATE NO ACTION
)
ENGINE = INNODB;
SET SQL_MODE=@OLD_SQL_MODE;
SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS;
SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS;
It looks like I found the problem.
Some tables in my schema have a composite primary key made of 2 fields (these tables implement many-to-many relationships).
Is it a bug?

Resources