Alter column in table to auto increment in Sybase ASE 16.0

I am using a Sybase ASE 16.0 database in which I am trying to alter a column in an existing USER table so that it auto-increments every time a row is added. The column, user_id, is the primary key and NOT NULL.
I have gone through many Sybase tutorials and tried many approaches, but to no avail. Here are some of the queries I wrote to make this change:
ALTER TABLE USER (user_id smallint IDENTITY not null)
ALTER TABLE USER ALTER user_id smallint IDENTITY not null
ALTER TABLE USER MODIFY user_id smallint NOT NULL IDENTITY
ALTER TABLE USER MODIFY user_id smallint NOT NULL AUTO_INCREMENT
ALTER TABLE USER MODIFY user_id smallint NOT NULL AUTOINCREMENT
ALTER TABLE USER ALTER user_id smallint NOT NULL AUTOINCREMENT
ALTER TABLE USER user_id smallint AUTOINCREMENT
I am looking for a Sybase-compliant query that alters the user_id column so it auto-increments by 1 whenever a new record is added.

From documentation:
Adds an IDENTITY column to a table. For each existing row in the table, Adaptive Server assigns a unique, sequential column value. The IDENTITY column could be type numeric or integer, and a scale of zero. The precision determines the maximum value (10^5 - 1, or 99,999) that can be inserted into the column:
alter table sales_daily
add ord_num numeric (5,0) identity
Found in the Adaptive Server Enterprise alter table reference documentation.
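Since alter table cannot add the IDENTITY property to an existing column (which is why the attempts above fail), one approach suggested by the documented form is to rebuild the column instead. A minimal sketch, assuming the primary key constraint on user_id has already been dropped and no foreign keys reference the column:
-- drop the old column, then re-add it as IDENTITY; ASE assigns each existing
-- row a unique, sequential value, as the documentation above describes
-- (numeric(5,0) allows values up to 99,999; widen the precision as needed)
ALTER TABLE USER DROP user_id
ALTER TABLE USER ADD user_id numeric(5,0) IDENTITY
Afterwards, re-create the primary key (for example with ALTER TABLE USER ADD CONSTRAINT ... PRIMARY KEY (user_id)). Note that dropping a column makes ASE copy the table's data, so this can be slow on large tables.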

Related

SELECT with yb_hash_code() and DELETE in YugabyteDB

[Question posted by a user on YugabyteDB Community Slack]
We have the below schema in PostgreSQL (YugabyteDB 2.8.3) using YSQL:
CREATE TABLE IF NOT EXISTS public.table1
(
    customer_id uuid NOT NULL,
    item_id uuid NOT NULL,
    kind character varying(100) NOT NULL,
    details character varying(100) NOT NULL,
    created_date timestamp without time zone NOT NULL,
    modified_date timestamp without time zone NOT NULL,
    CONSTRAINT table1_pkey PRIMARY KEY (customer_id, kind, item_id)
);
CREATE UNIQUE INDEX IF NOT EXISTS unique_item_id ON table1(item_id);
CREATE UNIQUE INDEX IF NOT EXISTS unique_item ON table1(customer_id, kind) WHERE kind='NEW' OR kind='BACKUP';
CREATE TABLE IF NOT EXISTS public.item_data
(
    item_id uuid NOT NULL,
    id2 integer NOT NULL,
    create_date timestamp without time zone NOT NULL,
    modified_date timestamp without time zone NOT NULL,
    CONSTRAINT item_data_pkey PRIMARY KEY (item_id, id2)
);
Goal:
Step 1) SELECT item_ids from table1 WHERE modified_date < someDate
Step 2) DELETE FROM item_data WHERE item_id is any of the item_ids from Step 1
Currently we use the query:
SELECT item_id FROM table1 WHERE modified_date < $1
Can the SELECT query use yb_hash_code(item_id), given that table1 has a unique index on item_id, to enhance its performance?
Currently we perform:
DELETE FROM item_data x WHERE x.item_id IN (the listOfItemIds from Step 1 above).
With the given listOfItemIds, can we use yb_hash_code(item_id) to enhance performance of DELETE operation?
Yes, it should work out. Something like (note the hash is computed over table1's primary key columns, so the query runs against table1, not item_data):
SELECT item_id FROM table1 WHERE yb_hash_code(customer_id, kind, item_id) >= 0 AND yb_hash_code(customer_id, kind, item_id) <= 128 AND modified_date < x;
While you can combine the SELECT + DELETE in 1 query (like a subselect), this is probably better because it will result in smaller transactions.
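For reference, the combined form would look something like this sketch ($1 stands for the cutoff date):
DELETE FROM item_data WHERE item_id IN (SELECT item_id FROM table1 WHERE modified_date < $1);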
Also, no need to use yb_hash_code. The db should be able to find the correct rows since you’re sending the columns that are used for partitioning.
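If you do want yb_hash_code to split the work, a common pattern is to run one slice per hash range in parallel. A sketch (yb_hash_code values range from 0 to 65535; the slice boundaries below are illustrative):
SELECT item_id FROM table1
WHERE yb_hash_code(customer_id, kind, item_id) >= 0
AND yb_hash_code(customer_id, kind, item_id) < 16384
AND modified_date < $1;
-- repeat with 16384-32767, 32768-49151, and 49152-65535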

Spark SQL: INSERT statement with JDBC does not support default values

I am trying to read/write data from other databases using JDBC, following the doc at https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html.
But I found that Spark SQL does not work well with DEFAULT values or AUTO_INCREMENT:
CREATE TEMPORARY VIEW jdbcTable
USING org.apache.spark.sql.jdbc
OPTIONS (
    url "jdbc:postgresql:dbserver",
    dbtable "schema.tablename",
    user "username",
    password "password"
)
INSERT INTO TABLE jdbcTable (id) values (1)
Here is my DDL
CREATE TABLE `tablename` (
    `id` int(11) NOT NULL AUTO_INCREMENT,
    `age` int(11) NULL DEFAULT 0,
    PRIMARY KEY (`id`) USING BTREE
)
The error is:
org.apache.spark.sql.AnalysisException: unknown requires that the data to be inserted have the same number of columns as the target table: target table has 2 column(s) but the inserted data has 1 column(s), including 0 partition column(s) having constant value(s).
Is there any way to support DEFAULT values or AUTO_INCREMENT? Thanks.
I have discovered this same issue with DEFAULT columns and also COMPUTED columns. If you are using SQL Server, you can consider an AFTER INSERT trigger; otherwise you may need to calculate the id on the INSERT side.
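A sketch of that second workaround in Spark SQL: compute the values yourself and supply every column so the column counts match (the literals 1 and 0 are illustrative; 0 mirrors the column's DEFAULT, and the id has to be computed on the Spark side because the JDBC sink will not fall back to AUTO_INCREMENT):
INSERT INTO TABLE jdbcTable VALUES (1, 0)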

A materialized view in Cassandra 3.11.8 does not show the same number of rows as the base table

I have the following table in Cassandra 3.11.8
create table MyTable (
    id int,
    farm_id int,
    etc....,
    primary key (farm_id, id)
);
After loading the table with data (14,273,683 rows):
select count(*)
from MyTable
where farm_id = 1504;
count
20964
Note: there is no row in the table (MyTable) with ID null.
Then I created a materialized view as follows:
create materialized view MyView
as
select id, farm_id
from MyTable
where farm_id = 1504
and id is not null
primary key (id, farm_id);
But when checking the number of rows inside the view I got the following result:
select count(*) from MyView;
count
10297
I tried many times and the result is the same.
What is happening?
The only difference is the id is not null filter added in the view's where clause. Maybe you can check how many rows there are with id = null for the given farm_id in the original table.
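A sketch of how to compare the two sides for this partition (ALLOW FILTERING is needed on the view because farm_id is not its partition key; diff the returned ids client-side):
select id from MyTable where farm_id = 1504;
select id from MyView where farm_id = 1504 allow filtering;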

How to add a NOT NULL foreign key to a SQLite3 table in node.js

I am having some trouble adding a foreign key column to an already existing table on SQLite.
Here's my SQL code:
CREATE TABLE IF NOT EXISTS director (
    director_id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS movie (
    movie_id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT NOT NULL
);
ALTER TABLE movie ADD COLUMN director_id INTEGER NOT NULL REFERENCES director(director_id);
This gives me the following error:
Uncaught Error: Cannot add a NOT NULL column with default value NULL
However, if I remove the NOT NULL constraint, it runs fine:
ALTER TABLE movie ADD COLUMN director_id INTEGER REFERENCES director(director_id);
The following also works just fine, but I already have an existing table:
CREATE TABLE IF NOT EXISTS director (
    director_id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS movie (
    movie_id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT NOT NULL,
    director_id INTEGER NOT NULL,
    FOREIGN KEY(director_id) REFERENCES director(director_id)
);
Am I missing something, or can this not be done in SQLite?
When altering a table to add a NOT NULL column, you need to supply a DEFAULT value. Otherwise SQLite has no value to put in the existing records for the new column (since NULL is not valid).
Docs ref: https://sqlite.org/lang_altertable.html
For example:
CREATE TABLE IF NOT EXISTS movie (
    movie_id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT NOT NULL
);
ALTER TABLE movie ADD COLUMN director_id INTEGER NOT NULL DEFAULT 0 REFERENCES director(director_id);
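If a placeholder default like 0 is undesirable (it points at no real director), the usual SQLite alternative is to rebuild the table. A sketch, assuming every existing movie can be mapped to a real director_id:
BEGIN;
CREATE TABLE movie_new (
    movie_id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT NOT NULL,
    director_id INTEGER NOT NULL,
    FOREIGN KEY(director_id) REFERENCES director(director_id)
);
-- 1 is an illustrative director_id; substitute the real mapping here
INSERT INTO movie_new (movie_id, name, director_id)
    SELECT movie_id, name, 1 FROM movie;
DROP TABLE movie;
ALTER TABLE movie_new RENAME TO movie;
COMMIT;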

How to use a static column in ScyllaDB and Cassandra?

I am new to ScyllaDB and Cassandra, and I am facing some issues querying data from a table. The following is the schema I have created:
CREATE TABLE usercontacts (
    userID bigint,            -- userID
    contactID bigint,         -- contact ID (lynkApp userID)
    contactDeviceToken text,  -- device token
    modifyDate timestamp static,
    PRIMARY KEY (contactID, userID)
);
CREATE MATERIALIZED VIEW usercontacts_by_contactid
AS SELECT userID, contactID, contactDeviceToken
FROM usercontacts
WHERE contactID IS NOT NULL AND userID IS NOT NULL AND modifyDate IS NOT NULL
-- the primary key columns must be filtered with IS NOT NULL;
-- same structure as the main table
PRIMARY KEY (userID, contactID);
CREATE MATERIALIZED VIEW usercontacts_by_modifyDate
AS SELECT userID, contactID, contactDeviceToken, modifyDate
FROM usercontacts
WHERE contactID IS NOT NULL AND userID IS NOT NULL AND modifyDate IS NOT NULL
-- the primary key columns must be filtered with IS NOT NULL;
-- same structure as the main table
PRIMARY KEY (modifyDate, contactID);
I want to create materialized views for the usercontacts table: usercontacts_by_userid and usercontacts_by_modifydate.
I need the following queries to work when modifydate (timestamp) is static:
update usercontacts set modifydate = 'newdate' where contactid = 'contactid'
select * from usercontacts_by_modifydate where modifydate = 'modifydate'
delete from usercontacts where contactid = 'contactid'
It is not currently possible to create a materialized view that includes a static column, either as part of the primary key or just as a regular column.
Including a static column would require the whole base table (usercontacts) to be read whenever the static column changes, so that the view rows could be recalculated; this has a significant performance penalty.
Having the static column be the view's partition key means there would be only one entry in the view for all the rows of a partition. However, secondary indexes do work in this case, and you can use one instead.
This is valid for both Scylla and Cassandra at the moment.
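A sketch of the secondary-index alternative mentioned above (the index name and timestamp are illustrative):
-- index the static column instead of building a view on it
CREATE INDEX usercontacts_modifydate_idx ON usercontacts (modifyDate);
-- then query through the index
SELECT * FROM usercontacts WHERE modifyDate = '2018-06-01 00:00:00+0000';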
