Migrate ENUM columns from MySQL to TiDB - data-migration

I ran into a problem when migrating data (via TiDB DM) from MySQL.
I have a table in MySQL with an ENUM column:
`xxx` enum('offer','payment') NOT NULL DEFAULT 'offer',
I created a migration task; the dump and load stages completed, but I hit this error during the sync stage:
"ErrCode": 10006,
"ErrClass": "database",
"ErrScope": "not-set",
"ErrLevel": "high",
"Message": "startLocation: [position: (, 0), gtid-set: ], endLocation: [position: (xxx-bin|000001.000035, 522223249), gtid-set: ]: execute statement failed: UPDATE `ti_xxx`.`api_logs` SET `l_id` = ?, `a_id` = ?, `l_datetime` = ?, `l_api` = ?, `l_request` = ?, `l_response_status` = ?, `l_response` = ?, `l_status` = ?, `l_currency` = ?, `l_type` = ?, `cl_id` = ?, `l_test` = ?, `l_parent_id` = ?, `l_attempt` = ?, `l_cl_type` = ?, `l_parameters` = ?, `l_api_type` = ?, `l_group_status` = ?, `l_group_attempt` = ? WHERE `l_id` = ? LIMIT 1",
"RawCause": "Error 1105: item 2 is not in enum [offer payment]",
"Workaround": ""
TiDB version: 5.7.25-TiDB-v5.2.0 TiDB Server
DM version: v2.0.6
DM task configuration:
name: "xxx-api_logs"
task-mode: "all"
routes:
xxx:
schema-pattern: "xxx"
target-schema: "ti_xxx"
target-database:
host: "127.0.0.1"
port: 30306
user: "root"
password: "<removed>"
mysql-instances:
- source-id: "dba5-xxx"
block-allow-list: "xxx"
route-rules: ["xxx"]
block-allow-list:
xxx:
do-tables:
- db-name: "xxx"
tbl-name: "api_logs"
On the TiDB side, I tested some operations on an ENUM column, and they work:
mysql> CREATE TABLE t1 (
-> a enum('offer','payment') NOT NULL DEFAULT 'offer'
-> );
Query OK, 0 rows affected (0.27 sec)
mysql> INSERT INTO t1 VALUES ('payment');
Query OK, 1 row affected (0.03 sec)
mysql> INSERT INTO t1 VALUES (2);
Query OK, 1 row affected (0.00 sec)
mysql> select * from t1;
+---------+
| a       |
+---------+
| payment |
| payment |
+---------+
2 rows in set (0.01 sec)
mysql> update t1 set a=1 where a=2;
Query OK, 2 rows affected (0.01 sec)
Rows matched: 2 Changed: 2 Warnings: 0
mysql> select * from t1;
+-------+
| a     |
+-------+
| offer |
| offer |
+-------+
2 rows in set (0.00 sec)
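Since the failing statement in the error above is a parameterized UPDATE (? placeholders), maybe the prepared-statement path behaves differently from the literal statements I tested. A minimal sketch to exercise that path in plain SQL (the statement name and values are my own):
PREPARE st FROM 'UPDATE t1 SET a = ? WHERE a = ?';
SET @new = 1, @old = 2;  -- numeric ENUM indexes, as in the relay log below
EXECUTE st USING @new, @old;
DEALLOCATE PREPARE st;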
I use a worker with relay log enabled.
I parsed the relay log, and it contains a valid index value:
...
###   @15=2
...
Please suggest some ways to troubleshoot.
Thanks so much!

Related

How to insert into table with max(ordering) + 1 in ordering column?

I have a chapters table with an ordering column (integer) in Postgres. How do I insert/create a new row with ordering set to MAX(ordering) + 1 in Diesel? I've tried using raw SQL as follows:
INSERT INTO chapters
(id, name, user_id, book_id, ordering, date_created, date_modified)
SELECT ?, ?, ?, ?, IFNULL(MAX(ordering), 0) + 1, ?, ?
FROM chapters
WHERE book_id = ?;
and I insert with:
let id = Uuid::new_v4().to_string();
let now = Utc::now().naive_utc();
diesel::sql_query(INSERT_CHAPTER)
.bind::<Text, _>(&id)
.bind::<Text, _>(name)
.bind::<Text, _>(uid)
.bind::<Text, _>(bid)
.bind::<Timestamp, _>(&now)
.bind::<Timestamp, _>(&now)
.bind::<Text, _>(bid)
.execute(conn)?;
It works on the Sqlite3 backend, but fails on the Pg backend with the following error message:
"syntax error at or near \",\""
Use $N as placeholders for Postgres and ? for sqlite3.
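Applied to the query above, a sketch of the Postgres form (also substituting COALESCE for IFNULL, which Postgres does not support):
INSERT INTO chapters
  (id, name, user_id, book_id, ordering, date_created, date_modified)
SELECT $1, $2, $3, $4, COALESCE(MAX(ordering), 0) + 1, $5, $6
FROM chapters
WHERE book_id = $7;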

if else statement with multiple variables

I need an if/else statement in a sqlite3 query, but I can't find the error.
If the name already exists, it should update the disabled column; if the name does not exist, insert it:
name = 'testname'
disabled = 'x'
finish = '1'
runtime = '1'
c.execute("IF EXISTS(SELECT name FROM infos WHERE name=?', (name,)) THEN UPDATE infos SET disabled=? WHERE name=?', (disabled,name) ELSE INSERT INTO infos VALUES (?, ?, ?, ?, ?);", (None, name, disabled, finish, runtime))
error:
sqlite3.OperationalError: near "IF": syntax error
I've tried this command too, but I suspect the python command is wrong:
c.execute("INSERT INTO infos VALUES (?, ?, ?, ?, ?) ON CONFLICT(?) DO UPDATE SET disabled = (?);", (None, name, disabled, finish, runtime),(name),(disabled))
If your version of SQLite is 3.24.0+ and you have defined the column name as UNIQUE (or PRIMARY KEY), you can use UPSERT, like this:
name = 'testname'
disabled = 'x'
finish = '1'
runtime = '1'
sql = """
INSERT INTO infos(name, disabled, finish, runtime) VALUES (?, ?, ?, ?)
ON CONFLICT(name) DO UPDATE
SET disabled = excluded.disabled, finish = excluded.finish, runtime = excluded.runtime"""
c.execute(sql, (name, disabled, finish, runtime))
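For reference, the UPSERT above relies on a uniqueness constraint on name; a minimal schema sketch (the column types are assumptions):
CREATE TABLE infos (
    name     TEXT PRIMARY KEY,  -- ON CONFLICT(name) requires UNIQUE or PRIMARY KEY
    disabled TEXT,
    finish   TEXT,
    runtime  TEXT
);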

Is there a way to bind parameters to a db2 dialect (ibm_db_sa) compiled query after compiling?

I am trying to compile a query using the DB2 dialect ibm_db_sa. After compiling, it emits '?' instead of a named parameter.
I have tried the same with the MSSQL and Oracle dialects; they give the expected results.
import ibm_db_sa
from sqlalchemy import bindparam, literal_column, select
from sqlalchemy import Table, MetaData, Column, Integer
from sqlalchemy.dialects import mssql, oracle

tab = Table('customers', MetaData(),
            Column('cust_id', Integer, primary_key=True))
stmt = select([tab]).where(literal_column('cust_id') == bindparam('cust_id'))

ms_sql = stmt.compile(dialect=mssql.dialect())
oracle_q = stmt.compile(dialect=oracle.dialect())
db2 = stmt.compile(dialect=ibm_db_sa.dialect())
If I print all 3 queries, the output is:
MSSQL => SELECT customers.cust_id FROM customers WHERE cust_id = :cust_id
Oracle => SELECT customers.cust_id FROM customers WHERE cust_id = :cust_id
DB2 => SELECT customers.cust_id FROM customers WHERE cust_id = ?
Is there any way to get the DB2 query to look the same as the others?
The docs that you reference have that solution:
In the case that a plain SQL string is passed, and the underlying DBAPI accepts positional bind parameters, a collection of tuples or individual values in *multiparams may be passed:
conn.execute(
    "INSERT INTO table (id, value) VALUES (?, ?)",
    (1, "v1"), (2, "v2")
)
conn.execute(
    "INSERT INTO table (id, value) VALUES (?, ?)",
    1, "v1"
)
For Db2, you just pass a comma-separated list of values, as documented in the 2nd example:
conn.execute(stmt, 1, "2nd value", storeID, whatever)

Query Parameter Format for SELECT ... IN with Cassandra using Node.js Driver

I have a Cassandra SELECT query with an IN parameter that I want to run via the Node driver, but can't figure out the syntax.
On the cqlsh console, I can run this select and get a correct result:
SELECT * FROM sourcedata WHERE company_id = 4 AND item_id in (ac943b6f-0143-0e1f-5282-2d39209f3a7a,bff421a0-c465-0434-8806-f128612b6850,877ddb6d-a164-1152-da77-1ec4c4468258);
However, when I try to run this query with an array of IDs via the Cassandra Node driver, I get various errors depending on the format. Here's what I've tried:
client.execute("SELECT * FROM sourcedata WHERE company_id = ? AND item_id in (?)", [id, item_ids], function(err, rs) { ...
The error is:
ResponseError: Invalid list literal for item_id of type uuid
With this:
client.execute("SELECT * FROM sourcedata WHERE company_id = ? AND item_id in (?)", [id, item_ids], function(err, rs) { ...
The error is:
ResponseError: line 1:72 no viable alternative at input '[' (...WHERE company_id = 4 AND [item_id] in...)
item_ids is an array of string objects, and they were acquired via a select on another Cassandra table.
This is a working app, and other queries that don't use "SELECT .. IN" work fine.
I can also make it work the "ugly" way, but would prefer not to:
client.execute("SELECT * FROM sourcedata WHERE company_id = ? AND item_id in (" + item_ids.toString() + ")", [id,], function(err, rs) { ...
You should use IN ? without parentheses, to provide a list:
const query = 'SELECT * FROM sourcedata WHERE company_id = ? AND item_id in ?';
client.execute(query, [ id, item_ids ], { prepare: true }, callback);

Inserting new entity does not look at 'SEQUENCE_GENERATOR' - demo admin

I am using SQL Server 2012 Express Edition with the Hibernate SQLServer2008Dialect to run the Admin demo, and I have some trouble with primary key generation. The initial insert statement does not use the pre-calculated values from 'SEQUENCE_GENERATOR' for the @Id field.
@Id
@GeneratedValue(generator = "StructuredContentFieldId")
@GenericGenerator(
    name = "StructuredContentFieldId",
    strategy = "org.broadleafcommerce.common.persistence.IdOverrideTableGenerator",
    parameters = {
        @Parameter(name = "segment_value", value = "StructuredContentFieldImpl"),
        @Parameter(name = "entity_name", value = "org.broadleafcommerce.cms.structure.domain.StructuredContentFieldImpl")
    }
)
@Column(name = "SC_FLD_ID")
protected Long id;
When trying to insert new Structured Content, the 'SEQUENCE_GENERATOR' table gets some values populated:
SELECT * FROM dbo.SEQUENCE_GENERATOR
ID_NAME ID_VAL
--------------------------- --------------------
SandBoxImpl 101
StructuredContentFieldImpl 101
StructuredContentImpl 101
But the new entity is saved with an id of 1 (there are already some existing rows in this table, as per the demo SQL script):
SELECT SC_ID, CONTENT_NAME, SC_TYPE_ID FROM dbo.BLC_SC
SC_ID CONTENT_NAME SC_TYPE_ID
-------------------- ------------------------------------------ --------------------
1 html test 2
100 Buy One Get One - Twice the Burn 1
[...]
156 Home Page Featured Products Title 3
The following SQL shows up in the console when inserting that row:
[artifact:mvn] Hibernate: select tbl.ID_VAL from SEQUENCE_GENERATOR tbl with (updlock, rowlock ) where tbl.ID_NAME=?
[artifact:mvn] Hibernate: update SEQUENCE_GENERATOR set ID_VAL=? where ID_VAL=? and ID_NAME=?
[artifact:mvn] Hibernate: insert into BLC_SC (ARCHIVED_FLAG, CREATED_BY, DATE_CREATED, DATE_UPDATED, UPDATED_BY, CONTENT_NAME, DELETED_FLAG, LOCALE_CODE, LOCKED_FLAG, OFFLINE_FLAG, ORIG_ITEM_ID, ORIG_SANDBOX_ID, PRIORITY, SANDBOX_ID, SC_TYPE_ID, SC_ID) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
Later on, saving some HTML content into BLC_SC_FLD isn't as lucky. The new entity also gets assigned an id of 1, which unfortunately already exists:
SELECT SC_FLD_ID, FLD_KEY, VALUE, SC_ID FROM dbo.BLC_SC_FLD
SC_FLD_ID FLD_KEY VALUE SC_ID
------------- ------------- --------------------------------------------- --------
1 imageUrl /img/banners/buy-one-get-one-home-banner.jpg 100
and of course the exception is thrown:
[artifact:mvn] Hibernate: update SEQUENCE_GENERATOR set ID_VAL=? where ID_VAL=? and ID_NAME=?
[artifact:mvn] Hibernate: insert into BLC_SC_FLD (CREATED_BY, DATE_CREATED, DATE_UPDATED, UPDATED_BY, FLD_KEY, LOB_VALUE, VALUE, SC_ID, SC_FLD_ID) values (?, ?, ?, ?, ?, ?, ?, ?, ?)
[artifact:mvn] 2014-05-06 00:58:02.191:WARN:oejs.ServletHandler:/admin/structured-content/1
[artifact:mvn] org.springframework.dao.DataIntegrityViolationException: Violation of PRIMARY KEY constraint 'PK__BLC_SC_F__8A534C1863E06FD9'. Cannot insert duplicate key in object 'dbo.BLC_SC_FLD'. The duplicate key value is (1).; SQL [n/a]; constraint [null]; nested exception is org.hibernate.exception.ConstraintViolationException: Violation of PRIMARY KEY constraint 'PK__BLC_SC_F__8A534C1863E06FD9'. Cannot insert duplicate key in object 'dbo.BLC_SC_FLD'. The duplicate key value is (1).
I am not sure where the problem is. The @GenericGenerator strategy org.broadleafcommerce.common.persistence.IdOverrideTableGenerator seems to hit 'SEQUENCE_GENERATOR' on the first insert and then increment the id from the FIELD_CACHE variable as designed.
So I actually have 2 questions:
Why does 'SEQUENCE_GENERATOR' get initial values of 101 when there are already higher ids saved in the table?
Why is the entity being saved with the value of 1? Is this MS SQL Server related?
OK, resolved :) Broadleaf has 3 persistence units, and by default they point to the same database, but only one persistence unit (blPU) imports SQL at the start of the demo.
So by doing this:
blPU.hibernate.hbm2ddl.auto=create-drop
blCMSStorage.hibernate.hbm2ddl.auto=create-drop
blSecurePU.hibernate.hbm2ddl.auto=create-drop
I caused SEQUENCE_GENERATOR to be dropped and recreated empty by the other persistence units in line.
This works fine:
blPU.hibernate.hbm2ddl.auto=create-drop
blCMSStorage.hibernate.hbm2ddl.auto=update
blSecurePU.hibernate.hbm2ddl.auto=update
Dooh!
