I am using PostgreSQL and I have a table with a unique key (memberId, propertyName). I am using onDuplicateKey, and the generated code shows that it uses ON CONFLICT, but with the id column. Is it possible to specify your own keys to check, or does jOOQ try to read the table and check that there are unique constraints? My current workaround is doing a select and then doing an update or an insert.
The ON DUPLICATE KEY syntax is derived from MySQL, where the semantics of the clause are to consider all the unique constraints (including the primary key) on the table, not just the ones you care about.
But why use this syntax in the first place when you're targeting PostgreSQL? jOOQ supports PostgreSQL's ON CONFLICT, which allows you to specify which unique key to use for the clause.
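For example, a minimal sketch against a hypothetical generated table MEMBER_PROPERTY (the names and the DSLContext ctx are assumptions, not your actual schema): targeting the (member_id, property_name) unique key rather than the primary key.

```java
// ON CONFLICT on the composite unique key, not the id primary key.
// MEMBER_PROPERTY and its fields stand in for your generated jOOQ classes.
ctx.insertInto(MEMBER_PROPERTY,
        MEMBER_PROPERTY.MEMBER_ID,
        MEMBER_PROPERTY.PROPERTY_NAME,
        MEMBER_PROPERTY.VALUE)
   .values(memberId, propertyName, value)
   .onConflict(MEMBER_PROPERTY.MEMBER_ID, MEMBER_PROPERTY.PROPERTY_NAME)
   .doUpdate()
   .set(MEMBER_PROPERTY.VALUE, value)
   .execute();
```

This renders INSERT … ON CONFLICT (member_id, property_name) DO UPDATE SET …, so PostgreSQL arbitrates on exactly the key you name, and the select-then-insert-or-update workaround becomes unnecessary.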
I am new to Rust and I am making an API using Diesel and Actix-web. I have databases in PostgreSQL and MongoDB, and I use Diesel 1.4.4 only for PostgreSQL.
First, I did a test creating a database with tables and primary keys, and everything works fine. But there are always scenarios in which a table has no primary key and only has foreign keys.
I have noticed that Diesel only supports tables with primary keys. If you want to use a table that does not have a primary key and only has foreign keys, is there a way to import it? Can you do it manually, that is, by defining it in schema.rs and in models.rs?
Tables without a primary key are not supported by Diesel, because they are bad practice from a database point of view. In almost all cases there is some combination of columns that forms a natural primary key; otherwise, adding an artificial one is preferred. If you have a table that consists only of foreign key columns, the natural primary key would be constructed out of all those columns.
That said: it is possible to use Diesel with tables without a primary key column by telling Diesel that a specific column (or combination of columns) should be treated as the primary key. In this case you need to write your table! definition for those tables manually.
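A minimal sketch of such a hand-written definition, assuming a hypothetical user_roles join table that holds only two foreign-key columns (with Diesel 1.4 you also need #[macro_use] extern crate diesel; in your crate root):

```rust
// The parenthesized list after the table name tells Diesel which
// column(s) to treat as the primary key; here the combination of
// both foreign keys acts as a composite primary key.
table! {
    user_roles (user_id, role_id) {
        user_id -> Integer,
        role_id -> Integer,
    }
}
```

Diesel never checks this against the database; it simply trusts the declaration, so operations that rely on the primary key (find, update, etc.) will use the column combination you named.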
We're looking at potentially using a single Cosmos DB collection to hold multiple document types in a multi-tenanted environment, using a tenant ID as the partition key. The path to the tenant ID may differ in each document type, and I am therefore looking at various ways of exposing the partition key to Cosmos DB to enable correct partitioning / querying.
I have noticed that the Paths property of DocumentCollection.PartitionKey is a collection and was therefore wondering whether it is possible to pass multiple paths during the creation of a document collection and what the behaviour of this might be. Ideally, I would like Cosmos to scan each of these paths and use the first value or aggregate of values as the partition key but cannot find any documentation suggesting that this is indeed the behaviour.
The MSDN documentation for this property is pretty useless, and none of the associated documentation seems to answer the question. Does anyone know about, or has anyone previously used, multiple partition key paths in a collection?
To be clear, I'm looking for links to additional documentation about, and/or direct experience of, Cosmos DB's behaviour when specifying multiple partition keys in the PartitionKey.Paths collection when creating a DocumentCollection.
This question has also been posted in the Azure Community Support forums.
Thanks, Ian
The best way to do this is to assign a generic partition key like “pk”, then assign its value based on each of your object types. You can, for example, manage this during serialization by having a different property of each class serialize to “pk”.
The reason the partition key is an array in DocumentCollection.PartitionKey is to allow us to introduce compound partition keys, where a combination of multiple properties like (“firstName”, “lastName”) forms the partition key. This is a little different from what you need.
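The asker's stack is .NET, but the serialization trick is SDK-agnostic. A minimal sketch of the same idea in Java with Jackson (class and property names are hypothetical): each document type maps its own tenant-ID location to a common "pk" property.

```java
import com.fasterxml.jackson.annotation.JsonProperty;

// Two document types whose tenant ID lives at different paths;
// both expose it under the single partition key property "pk".
class Invoice {
    public String tenantId;
    public double amount;

    @JsonProperty("pk")                 // serialized as the partition key
    public String getPk() { return tenantId; }
}

class AuditEvent {
    public Owner owner;                 // tenant ID nested one level down

    @JsonProperty("pk")
    public String getPk() { return owner.tenantId; }
}

class Owner {
    public String tenantId;
}
```

The collection is then created with the single partition key path /pk, and every document type lands in the correct partition regardless of where its tenant ID originally lived.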
Further to the above, I ended up adding a partition key property to the document container, as suggested by Aravind, and then used David Fowler's excellent QueryInterceptor NuGet package to apply an ExpressionVisitor that translates any equality expression on the specific document type's tenant ID property into an equality expression on the partition key property. This ensures that queries are performed against only the single, correct partition. Furthermore, I was able to use the ExpressionVisitor as a safety feature, in that it enforces that all queries provide a filter on tenant ID (as, obviously, tenants should never be able to see each other's documents); if none has been specified, then no records are returned (an invalid equality expression is added on the partition key property).
This has been tested and seems to be working well.
How do I set a unique property in DynamoDB using the dynamoose Node module, so that duplicate entries are eliminated?
You can create a table whose schema uses the attribute you want to keep unique as the primary key. Or, to separate business logic from your schema design, you can use a content-based key that hashes the unique property using SHA256 and uses the hash value as the partition key of your table.
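A minimal sketch of both options, assuming the dynamoose 1.x API and a hypothetical User model (attribute names are illustrative):

```typescript
import * as dynamoose from "dynamoose";
import { createHash } from "crypto";

// Option 1: the unique attribute itself is the hash (partition) key,
// so two items with the same email resolve to the same key and a
// duplicate row can never be created.
const User = dynamoose.model("User", new dynamoose.Schema({
  email: { type: String, hashKey: true },
  name: { type: String },
}));

// Option 2: a content-based key; the SHA256 digest of the unique
// property becomes the partition key, keeping business data out of
// the key itself.
const pk = createHash("sha256").update("user@example.com").digest("hex");
```

Note that saving with the same key overwrites by default; use a conditional write (e.g. Model.create, which fails if the item already exists) if you want the duplicate rejected rather than replaced.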
How do I check for existing documents with duplicate User ID before inserting a document? In the RDBMS world, I would generally have a unique constraint to ensure that there are no duplicates in the table.
In Couchbase, there is a unique constraint on what is in effect the primary key of the document: its ID (or key) must indeed be unique.
The latest versions of most SDKs (e.g. for Java it's 2.2.0) now have an exists operation that can be used to check whether a particular key is stored. Otherwise, you have operations like `insert`, which fail if the key is already stored.
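A minimal sketch with the Couchbase Java SDK 2.x (bucket name and document ID are hypothetical): relying on insert to fail atomically on a duplicate key avoids the check-then-insert race entirely.

```java
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.document.JsonDocument;
import com.couchbase.client.java.document.json.JsonObject;
import com.couchbase.client.java.error.DocumentAlreadyExistsException;

public class UniqueUserId {
    public static void main(String[] args) {
        Bucket bucket = CouchbaseCluster.create("localhost").openBucket("users");

        // Embed the user ID in the document key so the key's uniqueness
        // constraint enforces the user ID's uniqueness.
        JsonDocument doc = JsonDocument.create(
                "user::1234", JsonObject.create().put("name", "Ian"));
        try {
            bucket.insert(doc);   // throws if "user::1234" already exists
        } catch (DocumentAlreadyExistsException e) {
            // Duplicate user ID: reject or merge here.
        }
    }
}
```

exists() is fine for a quick check, but it is not atomic with a subsequent write; insert is, which makes it the safer duplicate guard.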
I have a Cassandra column family which has a row key like
2012-09-30-05-42-00:30:5856869
I need to query something like
select * from cf where key like %5856869%
Currently I am using Astyanax; is the same possible in Astyanax? If not, which implementation would support it?
LIKE queries are not supported in Cassandra. If you want to query based on part of a key, you'll want to use composite keys. But in this specific case, the 5856869 portion of the key would have to be the first part for you to do what you want. Remember that with Cassandra, you must write your data the way you expect to read it.
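A minimal CQL sketch of that remodeling (table and column names are hypothetical): splitting the ID out of the string key lets it be matched directly, at the cost of writing the data in the shape of the read.

```sql
-- The ID becomes the partition key, so it can be queried exactly;
-- the timestamp becomes a clustering column for ordering within it.
CREATE TABLE events (
    id         text,        -- the 5856869 portion of the old key
    event_time timestamp,   -- the 2012-09-30-05-42-00 portion
    payload    text,
    PRIMARY KEY (id, event_time)
);

SELECT * FROM events WHERE id = '5856869';
```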
None. You need to write the index manually; this is how you handle such things in Cassandra. Or you might try full-text search, see: Cassandra full text search like