I've installed MySQL Workbench on Linux:
MySQL Workbench version
Using Export -> Forward Engineer I get this test script:
-- MySQL Script generated by MySQL Workbench
-- Sat 06 Feb 2021 01:14:50 PM EET
-- Model: New Model Version: 1.0
-- MySQL Workbench Forward Engineering
SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0;
SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0;
SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION';
-- -----------------------------------------------------
-- Schema mydb
-- -----------------------------------------------------
-- -----------------------------------------------------
-- Schema mydb
-- -----------------------------------------------------
CREATE SCHEMA IF NOT EXISTS `mydb` DEFAULT CHARACTER SET utf8 ;
USE `mydb` ;
-- -----------------------------------------------------
-- Table `mydb`.`test`
-- -----------------------------------------------------
CREATE TABLE IF NOT EXISTS `mydb`.`test` (
`idtest` INT NOT NULL,
`testcoltes` VARCHAR(45) NULL,
PRIMARY KEY (`idtest`),
UNIQUE INDEX `idtest_UNIQUE` (`idtest` ASC) VISIBLE)
ENGINE = InnoDB;
SET SQL_MODE=@OLD_SQL_MODE;
SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS;
SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS;
And got this error:
CREATE TABLE IF NOT EXISTS `mydb`.`test` ( `idtest` INT NOT NULL, `testcoltes` VARCHAR(45) NULL, PRIMARY KEY (`idtest`), UNIQUE INDEX `idtest_UNIQUE` (`idtest` ASC) VISIBLE) ENGINE = InnoDB Error Code: 1064. You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near ') ENGINE = InnoDB' at line 5 0,00027 sec
It has created the db, but is unable to create any table. I wasn't even involved in writing this script (it was exported straight from the model), so why am I getting this error?
The script was generated for MySQL, but the error message shows you are running it against MariaDB. The `VISIBLE` index attribute is MySQL 8.0 syntax that MariaDB does not recognize, which is why the parser fails right before `) ENGINE = InnoDB`. Remove that keyword:
UNIQUE INDEX `idtest_UNIQUE` (`idtest` ASC))
ENGINE = InnoDB;
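For reference, here is the table statement from the generated script with only that keyword removed; it should then run on MariaDB:

```sql
CREATE TABLE IF NOT EXISTS `mydb`.`test` (
  `idtest` INT NOT NULL,
  `testcoltes` VARCHAR(45) NULL,
  PRIMARY KEY (`idtest`),
  -- same index as before, minus the MySQL-8-only VISIBLE attribute
  UNIQUE INDEX `idtest_UNIQUE` (`idtest` ASC))
ENGINE = InnoDB;
```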
Based on the documentation below,
https://learn.microsoft.com/en-us/azure/data-factory/connector-azure-sql-database
There is a feature to run a post SQL script. Would it be possible to run a stored procedure from there?
I have tried; it does not seem to be working, and I am currently investigating.
Thanks in advance for your information.
I created a test to prove that the stored procedure can be called in the Post SQL scripts.
I created two tables:
CREATE TABLE [dbo].[emp](
id int IDENTITY(1,1),
[name] [nvarchar](max) NULL,
[age] [nvarchar](max) NULL
)
CREATE TABLE [dbo].[emp_stage](
id int,
[name] [nvarchar](max) NULL,
[age] [nvarchar](max) NULL
)
I created a stored procedure.
create PROCEDURE [dbo].[spMergeEmpData]
AS
BEGIN
SET IDENTITY_INSERT dbo.emp ON
MERGE [dbo].[emp] AS target
USING [dbo].[emp_stage] AS source
ON (target.[id] = source.[id])
WHEN MATCHED THEN
UPDATE SET name = source.name,
age = source.age
WHEN NOT matched THEN
INSERT (id, name, age)
VALUES (source.id, source.name, source.age);
TRUNCATE TABLE [dbo].[emp_stage]
END
I will copy the csv file into my Azure SQL staging table [dbo].[emp_stage], then use the stored procedure [dbo].[spMergeEmpData] to transfer data from [dbo].[emp_stage] to [dbo].[emp].
Enter the stored procedure name exec [dbo].[spMergeEmpData] in the Post SQL scripts field.
I successfully debugged.
I can see the data are all in TABLE [dbo].[emp].
I have two SQL Azure databases - DatabaseA and DatabaseB on a server hosted in Azure.
I need to access a view on DatabaseA from DatabaseB - namely I need the sys.identity_columns in DatabaseA to be available to me on DatabaseB. So I am creating an external table on DatabaseB that links to this information like this (I didn't include all the columns but I included the one causing the problem)
CREATE EXTERNAL TABLE [SOURCE_SYS].[identity_columns](
[object_id] int not null
,[name] nvarchar(128) null
,[column_id] int not null
,[system_type_id] tinyint not null
,[seed_value] sql_variant null
)
WITH
(
DATA_SOURCE = MyElasticDBQueryDataSrc,
SCHEMA_NAME = 'sys',
OBJECT_NAME = 'identity_columns'
);
When I run this - it works. But when I try to use the result - select * from [SOURCE_SYS].[identity_columns] - I get this error:
Msg 46823, Level 16, State 1, Line 50
Error retrieving data from MyServer.database.windows.net.DatabaseA. The underlying error message received was: 'Index was out of range. Must be non-negative and less than the size of the collection.
Parameter name: index'.
If I comment out the fields in this table that have the sql_variant datatypes - it works fine but I do need the information in that field and the other two sql_variant fields that exist in the same table. MyElasticDBQueryDataSrc works fine on other similar tables without the sql_variant type.
Can anyone suggest what I might be doing wrong? Or suggest a workaround? I tried using bigints as it is mostly seed values that are either integers or null but that didn't work because it told me it wasn't the same datatype.
Any help much appreciated.
Well - after a weekend of sleep I figured out the answer!
If you use nvarchar(30) in the external table definition, you can then convert it to a bigint in any query where you use it:
CREATE EXTERNAL TABLE [SOURCE_SYS].[identity_columns](
[object_id] int not null
,[name] nvarchar(128) null
,[column_id] int not null
,[system_type_id] tinyint not null
,[seed_value] nvarchar(30) null
)
WITH
(
DATA_SOURCE = MyElasticDBQueryDataSrc,
SCHEMA_NAME = 'sys',
OBJECT_NAME = 'identity_columns'
);
Now I can access the value like this:
select cast(isnull([seed_value], 0) as bigint) from SOURCE_SYS.identity_columns
Beware that if you do a select * from this table, you will need to query the variant columns separately from the rest; otherwise you'll get this error:
Msg 46825, Level 16, State 1, Line 58
The data type of the column 'seed_value' in the external table is different than the column's data type in the underlying standalone or sharded table present on the external source.
Hope this is helpful to someone!
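One way to avoid repeating the cast in every query (a sketch; the view name is made up) is to wrap the external table in a view on DatabaseB that does the conversion once:

```sql
-- Hypothetical view over the external table defined above;
-- queries can then select * from it without hitting the
-- sql_variant type-mismatch error.
CREATE VIEW [SOURCE_SYS].[identity_columns_v]
AS
SELECT [object_id],
       [name],
       [column_id],
       [system_type_id],
       CAST(ISNULL([seed_value], 0) AS bigint) AS [seed_value]
FROM   [SOURCE_SYS].[identity_columns];
```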
Added an extra column xyz in CashSchemaV1. I am able to start node with H2 db, but it gives the following error when using PostgreSQL:
[ERROR] 14:52:11+0530 [main] internal.NodeStartupLogging.invoke - Exception during node startup: Incompatible schema change detected. Please run the node with database.initialiseSchema=true. Reason: Schema-validation: missing column [xyz] in table [contract_cash_states]
Followed https://docs.corda.r3.com/database-management.html#database-management-scripts
Added the xyz column in https://github.com/corda/corda/blob/master/finance/workflows/src/main/resources/migration/cash.changelog-init.xml
<column name="pennies" type="BIGINT"/>
<column name="xyz" type="NVARCHAR(130)"/>
Then added database migration scripts retrospectively to the existing CorDapp.
After this tried to start the node but getting following error:
[ERROR] 14:52:11+0530 [main] internal.NodeStartupLogging.invoke - Exception during node startup: Incompatible schema change detected. Please run the node with database.initialiseSchema=true. Reason: Schema-validation: missing column [xyz] in table [contract_cash_states]
CashSchemaV1.kt https://github.com/corda/corda/blob/master/finance/contracts/src/main/kotlin/net/corda/finance/schemas/CashSchemaV1.kt
@Type(type = "corda-wrapper-binary")
var issuerRef: ByteArray,
@Column(name = "xyz")
var xyz: String
) : PersistentState()
}
Migration script generated cash-schema-v1.changelog-master.sql
--liquibase formatted sql
--changeset R3.Corda.Generated:initial_schema_for_CashSchemaV1
create table contract_cash_states (
output_index int4 not null,
transaction_id varchar(64) not null,
ccy_code varchar(3) not null,
issuer_key_hash varchar(130) not null,
issuer_ref bytea not null,
owner_name varchar(255),
pennies int8 not null,
xyz varchar(255),
primary key (output_index, transaction_id)
);
create index ccy_code_idx on contract_cash_states (ccy_code);
create index pennies_idx on contract_cash_states (pennies);
The schema should be created with all the columns specified in CashSchemaV1.
Steps performed to add the extra column:
1)Added <column name="xyz" type="NVARCHAR(130)"/> in cash.changelog-init.xml
2)Added <addNotNullConstraint tableName="abc_states" columnName="xyz" columnDataType="NVARCHAR(130)"/> in cash.changelog-v1.xml
Built the CorDapp and then ran the node with it; the node started successfully.
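The two steps above can also be expressed as a single retrospective Liquibase changeset (a sketch; the `author` and `id` values are placeholders, and the table name matches the generated script above):

```xml
<changeSet author="my_company" id="add_xyz_to_contract_cash_states">
    <!-- Adds the new column to the existing table instead of
         editing the original init changelog -->
    <addColumn tableName="contract_cash_states">
        <column name="xyz" type="NVARCHAR(130)"/>
    </addColumn>
</changeSet>
```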
I created a table in TiDB with an int field.
While inserting the value '' into this field, I got the error 'Data Truncated'.
My code like this:
CREATE TABLE test(
i1 INT(11),
s1 VARCHAR(16)
)
INSERT INTO test(i1,s1) VALUES ('11','aa');  -- ok
INSERT INTO test(i1,s1) VALUES ('','aa');    -- Error 'Data Truncated'
INSERT INTO test(i1,s1) VALUES (NULL,'aa');  -- ok
While in MySQL 5.7, the following SQL returns ok:
INSERT INTO test(i1,s1) VALUES ('','aa');
My TiDB version is :
Release Version: v1.0.6-1-g17c1319
Git Commit Hash: 17c13192136c1f0bf26db6dec994b9f1b43c90f0
Git Branch: release-1.0
UTC Build Time: 2018-01-09 09:07:08
https://github.com/pingcap/tidb/issues/6317
In the case you present, TiDB behaves the same as MySQL. This error is caused by the strict SQL mode. As a workaround, you can:
set @@sql_mode='';
INSERT INTO test(i1,s1) VALUES ('','aa');
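If you'd rather not relax the SQL mode globally, two session-local alternatives (using the same `test` table from the question) are to insert NULL explicitly, which strict mode accepts for a nullable INT column, or to clear the mode for the current session only:

```sql
-- Valid under strict mode: NULL is an acceptable value for a nullable INT
INSERT INTO test(i1, s1) VALUES (NULL, 'aa');

-- Or relax strict mode for this session only, leaving other sessions unaffected
SET @@session.sql_mode = '';
INSERT INTO test(i1, s1) VALUES ('', 'aa');
```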
I have a large MySQL database that I back up each night via a cron job:
/usr/bin/mysqldump --opt USERNAME -e -h SERVERNAME -uUSER -pPASSWORD > /home/DIRECTORY/backup.sql
It is working well, except when I go to restore the sql file on another server it takes a long time (about 3 minutes).
This is in contrast to using phpMyAdmin: if I export the same MySQL database and then import that sql file into another server, it only takes 10 seconds.
Question: how do I make mysqldump create the same type of sql file that phpMyAdmin does?
Example of some FAST version sql (not all of it):
CREATE TABLE IF NOT EXISTS `absence_type` (
`absence_type_ID` int(16) NOT NULL AUTO_INCREMENT,
`name` varchar(30) COLLATE utf8_unicode_ci NOT NULL,
PRIMARY KEY (`absence_type_ID`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=14 ;
--
-- Dumping data for table `absence_type`
--
INSERT INTO `absence_type` (`absence_type_ID`, `name`) VALUES
(1, 'Sick Leave'),
(2, 'Personal Carers'),
(3, 'Other');
Example of some SLOW version sql (not all of it):
--
-- Table structure for table `absence_type`
--
DROP TABLE IF EXISTS `absence_type`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `absence_type` (
`absence_type_ID` int(16) NOT NULL AUTO_INCREMENT,
`name` varchar(30) COLLATE utf8_unicode_ci NOT NULL,
PRIMARY KEY (`absence_type_ID`)
) ENGINE=MyISAM AUTO_INCREMENT=14 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
/*!40101 SET character_set_client = @saved_cs_client */;
--
-- Dumping data for table `absence_type`
--
LOCK TABLES `absence_type` WRITE;
/*!40000 ALTER TABLE `absence_type` DISABLE KEYS */;
INSERT INTO `absence_type` VALUES
(1,'Sick Leave'),
(2,'Personal Carers'),
(3,'Other');
/*!40000 ALTER TABLE `absence_type` ENABLE KEYS */;
UNLOCK TABLES;
From my comments…
Most likely the options used by mysqldump and by the phpMyAdmin export don't match: for example, inclusion of DROP TABLE, LOCK TABLES, extended INSERTs, and so on.
I suggest comparing the two files; something obvious should stand out. Then adjust the options either for mysqldump or in phpMyAdmin.
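As a sketch (these are real mysqldump flags, but the right combination depends on what your file comparison actually shows), the following suppresses the statements that appear only in the "slow" dump above, namely DROP TABLE, LOCK/UNLOCK TABLES, and the DISABLE/ENABLE KEYS directives, while keeping extended multi-row INSERTs:

```shell
# Same cron command as before, minus the statements the phpMyAdmin
# "fast" export does not emit. SERVERNAME/USER/PASSWORD/USERNAME and the
# output path are the placeholders from the original command.
/usr/bin/mysqldump --opt \
  --skip-add-drop-table \
  --skip-add-locks \
  --skip-disable-keys \
  -e -h SERVERNAME -uUSER -pPASSWORD USERNAME > /home/DIRECTORY/backup.sql
```

Note that the differences mostly affect restore behavior on large tables, so it is worth timing a restore of each variant before settling on one.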