I just need an idea about this. I have created a table with a unique column named person_name. In this table I use the Prisma v2 soft-delete method, which simply updates a deleted column to true.
But when I try to insert the same person_name after a soft delete, it fails with a "Unique constraint failed" error.
Can anybody give me an idea of how to get around this issue? My requirement is to keep person_name as a unique column, but after I delete a person_name I should be able to insert the same person_name again.
If you have a unique index on the person_name field, then even after you mark a record as soft deleted it still exists in the table, which is why you are getting the unique constraint failed error.
You actually need to create a partial unique index, where only the rows that are not soft deleted are covered by the unique index and the deleted ones are excluded.
You can do something like this:
CREATE UNIQUE INDEX soft_delete ON "user" (name) WHERE (is_deleted IS NOT TRUE);
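As far as I know, Prisma v2 cannot declare a partial index in the schema file, so the index above would go into a raw SQL migration. A minimal sketch of the resulting behaviour, with illustrative table and column names:
-- hypothetical table matching the index above
CREATE TABLE "user" (
    id serial PRIMARY KEY,
    name text NOT NULL,
    is_deleted boolean NOT NULL DEFAULT false
);
CREATE UNIQUE INDEX soft_delete ON "user" (name) WHERE (is_deleted IS NOT TRUE);
INSERT INTO "user" (name) VALUES ('alice');                -- ok
UPDATE "user" SET is_deleted = true WHERE name = 'alice';  -- soft delete
INSERT INTO "user" (name) VALUES ('alice');                -- ok again: the soft-deleted row is outside the index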
I'm trying to figure out if there is a way to ignore duplicates when doing a bulkInsert.
The problem is that when you are using UUIDs for your primary key you must include the id in your insert statement. Because of this I generate a UUID before I insert the data, so technically every other field could be a duplicate of another row except for the UUID.
I want to know if there is a way to do an insert in Sequelize in which I can check whether the row to be inserted is a duplicate based on fields that I choose.
UPDATE: I am using the postgres dialect and I have just discovered that it has an ON CONFLICT (key, ...) DO NOTHING clause.
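For reference, a minimal sketch of that clause in plain SQL (table and column names here are made up):
-- gen_random_uuid() is built in from PostgreSQL 13; older versions need pgcrypto
create table accounts (
    id uuid primary key,
    email text unique not null
);
insert into accounts (id, email) values (gen_random_uuid(), 'a@example.com');
-- a second insert with the same email is skipped instead of raising an error
insert into accounts (id, email)
values (gen_random_uuid(), 'a@example.com')
on conflict (email) do nothing;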
As for primary keys, the best way to generate them is on the DB side, by setting the primary key's default value to uuid_generate_v4() (in the case of PostgreSQL).
This (among other benefits) means you no longer have to include the primary key column in the INSERT statement.
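A minimal sketch of that setup (the accounts table is again hypothetical):
-- uuid_generate_v4() comes from the uuid-ossp extension
create extension if not exists "uuid-ossp";
create table accounts (
    id uuid primary key default uuid_generate_v4(),
    email text unique not null
);
-- the primary key can now be omitted from the insert entirely
insert into accounts (email) values ('a@example.com');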
Is there an easy way to overwrite a row that contains a unique index, rather than just failing?
Or do I need to do an update and/or a delete and then add?
It would be nice to have a setting so that, when adding a row would violate a unique index constraint, the new row replaces the existing row that matches the unique index.
My db is defined on Azure using a Core (SQL) API. Any thoughts?
Use UpsertItemAsync(). This will work: an upsert inserts the item if it does not already exist and replaces it if it does, so no separate delete or update step is needed.
I have a table in Cassandra say employee(id, email, role, name, password) with only id as my primary key.
I want to ...
1. Add another column (manager_id) with a default value in it
I know that I can add a column to the table, but there is no way I can provide a default value for that column through CQL. I also cannot update the value of manager_id later, since I would need to know the id to update a row (the partition key values are randomly generated unique values which I don't know). Is there any way I can achieve this?
2. Rename this table to all_employee.
I also know that it's not allowed to rename a table in Cassandra. So I am copying the data of the table (employee) to a CSV, copying from the CSV into the new table (all_employee), and then deleting the old table (employee). I am doing this through an automated script with CQL queries in it. The script works fine, but it will fail if it gets executed again (which I cannot prevent), since the employee table will no longer exist once it has been deleted. Essentially I am looking for an "IF EXISTS" clause for the COPY query, which is not supported in CQL. Is there any other way I can achieve this outcome?
Please note that the amount of data in the table is very small, so performance is not an issue.
For #1
I don't think Cassandra supports default column values. You need to handle that in your application: write the default value every time you insert a row.
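A minimal sketch of that approach, assuming the id column is a uuid and 'unassigned' stands in for whatever default the application picks:
-- CQL has no DEFAULT clause, so the new column is null for existing rows
ALTER TABLE employee ADD manager_id text;
-- the application supplies the default on every insert
INSERT INTO employee (id, email, role, name, password, manager_id)
VALUES (uuid(), 'a@example.com', 'engineer', 'Alice', 'secret', 'unassigned');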
For #2
You can check whether the table exists before trying to copy from it:
SELECT table_name FROM system_schema.tables WHERE keyspace_name = 'your_keyspace_name' AND table_name = 'employee';
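cqlsh itself has no conditional execution (and COPY is a cqlsh command rather than CQL), so the result of that query has to be checked by the outer script driving cqlsh. A sketch of the guarded steps, assuming the employee schema from the question with a uuid id:
-- run only when the existence check above returned a row
COPY employee TO 'employee.csv';
CREATE TABLE IF NOT EXISTS all_employee (
    id uuid PRIMARY KEY, email text, role text, name text, password text
);
COPY all_employee FROM 'employee.csv';
DROP TABLE IF EXISTS employee;  -- IF EXISTS keeps the drop idempotent on re-runs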
I want to insert multiple rows using a single insert statement in Postgres.
Here, the catch is that if the insert fails for a single row, all the other successful inserts are rolled back. Is there a way to avoid the rollback and make the query return a list of the failed rows?
Otherwise, I end up writing a loop of insert statements. I am using the Node pg module. Is there a recommended way of achieving my requirement if Postgres doesn't support this?
Edit - 1
insert into test(name, email) values ('abcd', 'abcd#a'), ('efgh', 'blah'), (null, 'abcd') ON CONFLICT DO NOTHING;
ERROR: null value in column "name" violates not-null constraint
DETAIL: Failing row contains (9, null, abcd).
After the above query, a select statement returns 0 rows. I am looking for a solution wherein the first two rows get inserted.
Sounds like the failure you're talking about is hitting some sort of unique constraint? Take a look at PostgreSQL INSERT ON CONFLICT UPDATE (upsert) use all excluded values for a question related to insert... on conflict usage.
For example, if you do INSERT INTO X VALUES (...100 rows...) ON CONFLICT DO NOTHING; any duplicates that collide with the primary key will just be ignored. The alternative to DO NOTHING is to do an UPDATE on conflict.
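For instance, a minimal upsert sketch against the test table from the question, assuming email has a unique constraint (the original table may not have one):
-- replaces the name on the existing 'blah' row instead of failing
insert into test (name, email) values ('xyz', 'blah')
on conflict (email) do update set name = excluded.name;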
EDIT to match the newly stated question: ON CONFLICT does not help with NOT NULL constraint violations. You can use a WITH clause and select only the rows without null values. Here's a sample I just tested in Postgres:
create extension if not exists pgcrypto;
create table tmp (id uuid primary key default gen_random_uuid());
with data_set_to_insert as (
    select x.id
    from (values (null), (gen_random_uuid())) x(id) -- the alias x is arbitrary
)
insert into tmp(id)
select id from data_set_to_insert
where id is not null
returning *;
Hi, I just added a new column business_sys to my table my_table:
ALTER TABLE my_table ADD business_sys set<text>;
But then I dropped this column, because I wanted to change its type:
ALTER TABLE my_table DROP business_sys;
When I tried to add the same column name back with a different type, I got the error message:
"Cannot add a collection with the name business_sys because a collection with the same name and a different type has already been used in the past"
This is the command I executed to add the new column with a different type:
ALTER TABLE my_table ADD business_sys list<text>;
What did I do wrong? I am pretty new to Cassandra. Any suggestions?
You're running into CASSANDRA-6276. The problem is that when you drop a column in Cassandra, the data in that column doesn't just disappear, and Cassandra may attempt to read that data with its new comparator type.
From the linked JIRA ticket:
Unfortunately, we can't allow dropping a component from the comparator, including dropping individual collection columns from ColumnToCollectionType.
If we do allow that, and have pre-existing data of that type, C* simply wouldn't know how to compare those...
...even if we did, and allowed [users] to create a different collection with the same name, we'd hit a different issue: the new collection's comparator would be used to compare potentially incompatible types.
The JIRA suggests that this may not be an issue in Cassandra 3.x, but I just tried it in 3.0.3 and it fails with the same error.
What did I do wrong? I am pretty new to Cassandra. Any suggestions?
Unfortunately, the only way around this one is to use a different name for your new list.
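For example (the replacement name is whatever you pick):
ALTER TABLE my_table ADD business_sys_v2 list<text>;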
EDIT: I've tried this out in Cassandra and ended up with inconsistent, missing data. The best way to proceed is to change the column name as suggested in CASSANDRA-6276. And always follow documentation guidelines :)
-WARNING-
According to this comment from CASSANDRA-6276, running the following workaround is unsafe.
Elaborating on @masum's comment: it's possible to work around the limitation by first recreating the column with a non-collection type such as an int. Afterwards, you can drop it and recreate it again using the new collection type.
From your example, assuming we have a business_sys set:
ALTER TABLE my_table ADD business_sys set<text>;
ALTER TABLE my_table DROP business_sys;
Now re-add the column as int and drop it again:
ALTER TABLE my_table ADD business_sys int;
ALTER TABLE my_table DROP business_sys;
Finally, you can re-create the column with the same name but different collection type:
ALTER TABLE my_table ADD business_sys list<text>;
Cassandra doesn't allow you to recreate a dropped collection column with the same name but a different type, but there is a workaround to fix it.
Once you have dropped the column with the SET type, you can recreate it with another non-collection "default" type such as varchar or int.
After recreating it with one of those types, you can drop the column once again and finally recreate it with the proper type.
I have illustrated this below:
ALTER TABLE my_table DROP business_sys;            -- the drop you've done
ALTER TABLE my_table ADD business_sys varchar;     -- recreating with another type
ALTER TABLE my_table DROP business_sys;            -- dropping again
ALTER TABLE my_table ADD business_sys list<text>;  -- recreating with the proper type