SQL Table has primary key but error "Diesel only supports tables with primary keys. Table errors has no primary key" [duplicate] - rust

This is my up.sql file:
CREATE TABLE users (
`id` BIGINT(20) NOT NULL AUTO_INCREMENT,
`first_name` VARCHAR(255),
`last_name` VARCHAR(255),
`user_name` VARCHAR(255) NOT NULL UNIQUE,
`email` VARCHAR(255) NOT NULL UNIQUE,
`created_at` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
`updated_at` TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`id`)
);
When I run the Diesel migration I receive the following error:
Diesel only supports tables with primary keys. Table users has no primary key
Not sure what I'm doing wrong, any help would be greatly appreciated!

I ended up creating the table manually in MySQL; however, when I tried to generate the schema I once again received the "no primary key" error message from the Diesel CLI. So I began playing around with my SQL to see if I could make it work, and eventually it did using the following:
CREATE TABLE users (
`id` BIGINT(20) NOT NULL AUTO_INCREMENT PRIMARY KEY,
`first_name` VARCHAR(255),
`last_name` VARCHAR(255),
`user_name` VARCHAR(255) NOT NULL UNIQUE,
`email` VARCHAR(255) NOT NULL UNIQUE,
`created_at` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
`updated_at` TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);
It seems that in order for Diesel to recognize a column as having a PRIMARY KEY, the primary-key attribute must be the last one specified on the column.
I'm not sure if this is a bug, a known quirk, or valid syntax. In any case it was fairly annoying to track this down. I will do some more testing and accept this answer if it seems to be a robust solution.
(edit: I also tested this with a similar SQL query where AUTO_INCREMENT came after PRIMARY KEY, and it did not work)
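For reference, the reversed ordering from that last test was presumably something like the snippet below (an assumed reconstruction, trimmed to two columns; only the attribute order on id differs), and Diesel still reported the missing primary key:
-- assumed reconstruction of the failing variant: AUTO_INCREMENT after PRIMARY KEY
CREATE TABLE users (
`id` BIGINT(20) NOT NULL PRIMARY KEY AUTO_INCREMENT,
`user_name` VARCHAR(255) NOT NULL UNIQUE
);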

Related

Spark SQL : INSERT Statement with JDBC does not support default value

I am trying to read/write data from other databases using JDBC, just following the docs: https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html
But I found that Spark SQL does not work well with DEFAULT values or AUTO_INCREMENT columns:
CREATE TEMPORARY VIEW jdbcTable
USING org.apache.spark.sql.jdbc
OPTIONS (
url "jdbc:postgresql:dbserver",
dbtable "schema.tablename",
user 'username',
password 'password'
);
INSERT INTO TABLE jdbcTable (id) VALUES (1);
Here is my DDL:
CREATE TABLE `tablename` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`age` int(11) NULL DEFAULT 0,
PRIMARY KEY (`id`) USING BTREE
)
The error:
org.apache.spark.sql.AnalysisException: unknown requires that the data to be inserted have the same number of columns as the target table: target table has 2 column(s) but the inserted data has 1 column(s), including 0 partition column(s) having constant value(s).
Is there any way to support DEFAULT values or AUTO_INCREMENT? Thanks!
I have discovered this same issue with DEFAULT columns and also with COMPUTED columns. If you are using SQL Server you can consider an AFTER INSERT trigger; otherwise you may need to calculate the id on the INSERT side.
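A minimal sketch of the insert-side workaround, assuming you generate the id yourself and supply a value for every column so the column counts match (the values here are made up):
-- id computed on the insert side, age given explicitly instead of relying on DEFAULT
INSERT INTO jdbcTable VALUES (1, 0);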

Cassandra Invalid Query: Some cluster keys are missing

I'm using Cassandra 3.0.
My table was created with this query, but when I try to insert data into the table, I get the error: 'Some cluster keys are missing: created'
Table Structure:
CREATE TABLE db.feed (
action_object_id int,
owner_id int,
created timeuuid,
action_object text,
action_object_type int,
actor text,
feed_type text,
target text,
target_type int,
verb text,
PRIMARY KEY (action_object_id, owner_id, created)
) WITH CLUSTERING ORDER BY (owner_id ASC, created ASC);
You have to provide values for all of the primary key columns: action_object_id, owner_id, and created must all be mentioned in your insert query, e.g. insert into db.feed (action_object_id, owner_id, created, ...) values (?, ?, ?, ...). And you cannot provide NULL values for primary key columns, so created cannot be null.
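A concrete sketch of a valid insert against this table (the values are made up; now() generates a timeuuid on the server side):
INSERT INTO db.feed (action_object_id, owner_id, created, verb)
VALUES (1, 42, now(), 'post');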

Materialised view error in Cassandra

I am new to Cassandra. I am trying to create a table and a materialized view, but it is not working.
My queries are:
-- all_orders
create table all_orders (
id uuid,
order_number bigint,
country text,
store_number bigint,
supplier_number bigint,
flow_type int,
planned_delivery_date timestamp,
locked boolean,
primary key ( order_number,store_number,supplier_number,planned_delivery_date ));
-- orders_by_date
CREATE MATERIALIZED VIEW orders_by_date AS
SELECT
id,
order_number,
country,
store_number,
supplier_number,
flow_type,
planned_delivery_date,
locked,
FROM all_orders
WHERE planned_delivery_date IS NOT NULL AND order_number IS NOT NULL
PRIMARY KEY ( planned_delivery_date )
WITH CLUSTERING ORDER BY (store_number,supplier_number);
I am getting an exception like this:
SyntaxException: <ErrorMessage code=2000 [Syntax error in CQL query] message="line 1:7 no viable alternative at input 'MATERIALIZED' ([CREATE] MATERIALIZED...)">
Materialized views in Cassandra solve the use case of not having to maintain additional tables by hand for querying by different partition keys, but they come with the following restrictions:
Use all base table primary keys in the materialized view as primary keys.
Optionally, add one non-PRIMARY KEY column from the base table to the materialized view's PRIMARY KEY.
Static columns are not supported as a PRIMARY KEY.
See the Cassandra documentation for more detail.
So the correct syntax for adding the materialized view in your case would be:
CREATE MATERIALIZED VIEW orders_by_date AS
SELECT id,
order_number,
country,
store_number,
supplier_number,
flow_type,
planned_delivery_date,
locked
FROM all_orders
WHERE planned_delivery_date IS NOT NULL AND order_number IS NOT NULL AND store_number IS NOT NULL AND supplier_number IS NOT NULL
PRIMARY KEY ( planned_delivery_date, store_number, supplier_number, order_number );
Here planned_delivery_date is the partition key and the rows are ordered by store_number, supplier_number, order_number (essentially the clustering columns), so there is no mandatory requirement to add a CLUSTERING ORDER BY clause here.
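Once the view exists, you can query it by the new partition key, for example (the timestamp literal here is made up):
SELECT * FROM orders_by_date
WHERE planned_delivery_date = '2017-06-01 00:00:00';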

Incorrect string value: '\xE6\x8B\x93\xE6\xB5\xB7' for column 'bookName' at row 1

When I add a row to my table book using Navicat, I get this error:
Incorrect string value: '\xE6\x8B\x93\xE6\xB5\xB7' for column 'bookName' at row 1
Why?
EDIT
I ran show create table book; which returned:
CREATE TABLE `book` (
`bookName` varchar(50) NOT NULL,
`InsertTime` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`UpdateTime` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`bookstore_Id` int(11) DEFAULT NULL,
`Id` int(11) NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`Id`),
KEY `fk_book_bookstore` (`bookstore_Id`),
CONSTRAINT `fk_book_bookstore` FOREIGN KEY (`bookstore_Id`) REFERENCES `bookstore` (`Id`) ON DELETE NO ACTION ON UPDATE NO ACTION
) ENGINE=InnoDB DEFAULT CHARSET=latin1 COMMENT='book'
You can see CHARSET=latin1 at the end of the table definition: your book table's default charset is latin1, which cannot store those characters, so you should change the charset:
alter table book default character set utf8;
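One caveat: changing the table default only affects columns created afterwards. To convert the existing bookName column as well, something like this should work (use utf8mb4 instead if you need 4-byte characters such as emoji):
alter table book convert to character set utf8;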

Non-EQ relation error Cassandra - how fix primary key?

I created a table posts. When I make this SELECT request:
return $this->db->query('SELECT * FROM "posts" WHERE "id" IN(:id) LIMIT '.$this->limit_per_page, ['id' => $id]);
I get this error:
PRIMARY KEY column "id" cannot be restricted (preceding column "post_at" is either not restricted or by a non-EQ relation)
My table dump is:
CREATE TABLE posts (
id uuid,
post_at timestamp,
user_id bigint,
name text,
category set<text>,
link varchar,
image set<varchar>,
video set<varchar>,
content map<text, text>,
private boolean,
PRIMARY KEY (user_id,post_at,id)
)
WITH CLUSTERING ORDER BY (post_at DESC);
I read some articles about primary and clustering keys, and understood that when there are multiple primary key columns, I need to restrict the preceding ones with the = operator before using IN. In my case I cannot use a single-column PRIMARY KEY. What would you advise me to change in the table structure so that the error disappears?
My dummy table structure:
CREATE TABLE posts (
id timeuuid,
post_at timestamp,
user_id bigint,
PRIMARY KEY (id,post_at,user_id)
)
WITH CLUSTERING ORDER BY (post_at DESC);
After inserting some dummy data, I ran the query select * from posts where id in (timeuuid1, timeuuid2, timeuuid3); and it worked, since id is now the partition key. I was using Cassandra 2.0 with CQL 3.0.
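Alternatively, keeping the original schema, the error should go away when every column preceding id in the primary key is restricted by =; a sketch with made-up values (uuid1, uuid2 stand in for real uuid literals):
SELECT * FROM posts
WHERE user_id = 123
AND post_at = '2017-01-01 00:00:00'
AND id IN (uuid1, uuid2)
LIMIT 10;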
