I'm looking at the high-throughput chaincode example and have a question regarding the composite keys.
In the code the key is created as follows:
compositeIndexName := "varName~op~value~txID"
Is it possible to query by 'op' or 'value', omitting 'varName' or using some wildcard for it? Or would I need to create a different composite index key, as in the marbles_chaincode example, for each field I want to query by? The other option would be using CouchDB as the state database, which supports more complex querying.
I'm going to be saving some JSON data onto the ledger, which I'll need to query by different fields (in the marbles example, say Color or Size).
Best regards and happy holidays!
I would recommend considering CouchDB as the state database, since it provides quite comprehensive querying capabilities, which are far more expressive than what you can achieve with composite keys on top of LevelDB. This is especially useful if you are storing your documents in JSON format. Anyway, check the CouchDB query syntax for more information.
// GetQueryResult performs a "rich" query against a state database. It is
// only supported for state databases that support rich query,
// e.g. CouchDB. The query string is in the native syntax
// of the underlying state database. An iterator is returned
// which can be used to iterate (next) over the query result set.
// The query is NOT re-executed during validation phase, phantom reads are
// not detected. That is, other committed transactions may have added,
// updated, or removed keys that impact the result set, and this would not
// be detected at validation/commit time. Applications susceptible to this
// should therefore not use GetQueryResult as part of transactions that update
// ledger, and should limit use to read-only chaincode operations.
GetQueryResult(query string) (StateQueryIteratorInterface, error)
Therefore, you can use the GetQueryResult API shown above to retrieve your data with rich queries.
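For illustration, here is a minimal sketch of such a rich query issued from chaincode, written against the Node.js chaincode API (fabric-shim) rather than Go; the function name queryMarblesByColor and the color field are only examples:

import { ChaincodeStub } from 'fabric-shim';

// Query JSON documents by an arbitrary field using a CouchDB "rich" query.
// This only works when CouchDB is configured as the state database.
async function queryMarblesByColor(stub: ChaincodeStub, color: string): Promise<string> {
  // CouchDB Mango selector: match every document whose "color" field equals the argument.
  const query = JSON.stringify({ selector: { color } });
  const iterator = await stub.getQueryResult(query);

  const results: unknown[] = [];
  while (true) {
    const res = await iterator.next();
    if (res.value && res.value.value) {
      results.push(JSON.parse(res.value.value.toString()));
    }
    if (res.done) {
      await iterator.close();
      break;
    }
  }
  return JSON.stringify(results);
}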
I need to use optimistic locking for my records. Also, in our system we use mapping logic between layers: records to POJOs and vice versa. store() works when I use it on new records, but I want store() to act like an UPDATE statement on an existing record; when I convert the record back from a POJO, it behaves as if it were a new record and I get a duplicate id exception.
I know that I can save records with context.update(), but I need optimistic locking, and as far as I know it only works with UpdatableRecords (the store() method).
In the documentation I found:
"When loading records from POJOs, jOOQ will assume the record is a new record. It will hence attempt to INSERT it."
This means that I can't update existing records from a POJO with store() and should use context.update() instead, but that doesn't work with optimistic locking.
Hence, is there any way to have both at the same time: optimistic locking and working with POJOs?
UPDATE:
Thanks Lukas. I've started using record.update(), but I ran into another problem:
UserRecord userRecord = context.fetchOne(...);
UserPojo userPojo = mapper.toPojo(userRecord);
userPojo.setName("new_name");
UserRecord userRecordForSave = context.newRecord(USER, userPojo);
userRecordForSave.update();
This throws a DataChangedException.
I have optimistic locking turned on, and jOOQ's internal checkIfChanged() method validates the originals[] fields, but after mapping the POJO back to a record those original values are gone, so I get DataChangedException: Database record has been changed. What is the correct way of mapping the POJO in this case?
Updating explicitly
UpdatableRecord.store() decides whether to insert() or update(), depending on whether you created the record, or whether it was fetched from the database via jOOQ. See the Javadoc:
If this record was created by client code, an INSERT statement is executed
If this record was loaded by jOOQ and the primary key value was changed, an INSERT statement is executed (unless Settings.isUpdatablePrimaryKeys() is set). jOOQ expects that primary key values will never change due to the principle of normalisation in RDBMS. So if client code changes primary key values, this is interpreted by jOOQ as client code wanting to duplicate this record.
If this record was loaded by jOOQ, and the primary key value was not changed, an UPDATE statement is executed.
But you don't have to use store(), you can use insert() or update() directly.
Optimistic locking
There are 2 ways to implement optimistic locking in jOOQ:
Based on version or timestamp fields (see Settings.executeWithOptimisticLocking)
Based on changed values (see Settings.executeWithOptimisticLockingExcludeUnversioned)
The former works purely based on the value in the record, but in order to support the latter, you have to make sure jOOQ knows both:
The previous state of your record (Record.original())
The new state of your record (Record itself)
There's no way jOOQ can implement this functionality without knowing both states, so you can't implement your logic without keeping a hold of the previous state somehow.
But in most cases, the version based optimistic locking approach is the most desirable one.
What indexing mode / policy should I use when using cosmos db as a simple key/value store?
From https://learn.microsoft.com/en-us/azure/cosmos-db/index-policy :
None: Indexing is disabled on the container. This is commonly used when a container is used as a pure key-value store without the need for secondary indexes.
Is this because the property used as partition key is indexed even when indexMode is set to “none”? I would expect to need to turn indexing on but specify just the partition key’s path as the only included path.
If it matters, I’m planning to use the SQL API.
EDIT:
Here's the information I was missing to understand this:
The item must have an id property; otherwise Cosmos DB will assign one. https://learn.microsoft.com/en-us/azure/cosmos-db/account-databases-containers-items#properties-of-an-item
Since I'm using Azure Data Factory to load the items, I can tell ADF to duplicate the column that has the value I want to use as my id into a new column called id: https://learn.microsoft.com/en-us/azure/data-factory/copy-activity-overview#add-additional-columns-during-copy
I need to use ReadItemAsync, or better yet, ReadItemStreamAsync since it doesn't deserialize the response, to get the item without using a query.
https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.cosmos.container.readitemasync?view=azure-dotnet
https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.cosmos.container.readitemstreamasync?view=azure-dotnet
When you set indexingMode to "none", the only way to efficiently retrieve a document is by id (e.g. ReadDocumentAsync() or read_item()). This is akin to a key/value store, since you wouldn't be performing queries against other properties; you'd be specifically looking up a document by some known id, and returning the entire document. Cost-wise, this would be ~1RU for a 1K document, just like point-reads with an indexed collection.
You could still run queries, but without indexes, you'll see unusually-high RU cost.
You would still specify the partition key's value with your point-reads, as you'd normally do.
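The question mentions the .NET SDK's ReadItemAsync; as an illustration, here is roughly what the same point read looks like with the JavaScript/TypeScript SDK (@azure/cosmos). The endpoint, key, database, and container names are placeholders:

import { CosmosClient } from '@azure/cosmos';

const client = new CosmosClient({ endpoint: 'https://myaccount.documents.azure.com', key: '<key>' });
const container = client.database('mydb').container('mycontainer');

async function getById(id: string, partitionKeyValue: string) {
  // Point read: addressed directly by id + partition key, so no query engine
  // (and no secondary index) is involved.
  const { resource, requestCharge } = await container.item(id, partitionKeyValue).read();
  console.log(`charge: ${requestCharge} RU`); // roughly 1 RU for a 1 KB item
  return resource;
}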
I have a backend API built with Express. I've implemented logging with winston and morgan.
My next requirement is to record a user's activity: the timestamp, the user, and the content they fetched or changed, into a MySQL database. I've searched the web and found this, but since there is no answer yet, I've come to ask here.
My Thought:
I can add another query that INSERTs all the information mentioned above right before I respond to the client, in my route handlers. But I'm curious whether there is a more elegant way to achieve it.
Select the approach that best suits your system from the following cases.
Decide whether your activity log should be persistent or in memory, based on your use case. Let's assume it is persistent and the DB is MySQL.
If your data is already in the DB, there is no point in storing all of it again; you can store just the primary keys/ids of the rows on which you performed CRUD. You can store them as foreign keys if the operations performed are always fixed, or as serialised JSON in the activity table.
For instance, the structure can look like the row below, where activity_data is a serialised JSON value.
ID | activity_name | activity_data | start_date | end_date |
If gathering the data again just to store the activity before sending the response is a struggle, you can consider adding activity functions to the database abstraction layer or wrapper module you (presumably) created for MySQL.
For instance:
try {
  await query(`SELECT * FROM products`);
  // performActivity(insertion)
} catch (err) {
  // performErrorActivity(insertion)
}
Here we need to consider a minor performance trade-off, as we perform an insertion at each step.
If we want to do it all at once, we need to maintain a collection that accumulates references to all activity, in something like request.activityPayload or perhaps a cache, and perform the insertion at the end.
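As a rough sketch of this "collect, then insert once" idea with Express and mysql2 (the activityPayload name, the activity table layout, and the auth middleware setting req.user are assumptions, not part of the original code):

import express from 'express';
import mysql from 'mysql2/promise';

const pool = mysql.createPool({ host: 'localhost', user: 'app', database: 'app' });
const app = express();

// Collect activity entries on the request and flush them once the response has been sent.
app.use((req: any, res, next) => {
  req.activityPayload = [];
  res.on('finish', async () => {
    if (req.activityPayload.length === 0) return;
    const rows = req.activityPayload.map((a: any) => [
      req.user?.id ?? null,        // assumes some auth middleware sets req.user
      a.name,
      JSON.stringify(a.data),
      new Date(),
    ]);
    try {
      await pool.query(
        'INSERT INTO activity (user_id, activity_name, activity_data, created_at) VALUES ?',
        [rows]
      );
    } catch (err) {
      console.error('failed to record activity', err); // never fail the request over logging
    }
  });
  next();
});

// Route handlers only push entries instead of running an extra INSERT inline.
app.get('/products', async (req: any, res) => {
  const [products] = await pool.query('SELECT * FROM products');
  req.activityPayload.push({ name: 'list_products', data: { count: (products as any[]).length } });
  res.json(products);
});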
If you are thinking of adding a dedicated data source for activity, a non-relational DB is highly recommended for storing/dumping such data (MongoDB, in my opinion). It doesn't enforce a schema structure the way a relational DB does, and you can gain performance benefits compared to MySQL specifically for storing activity.
I would like to update a document in a way that involves reading another collection and making complex modifications, so the update operators in findAndModify() cannot serve my purpose.
Here's what I have:
Collection.findById(id, function (err, doc) {
// read from other collection, validation
// modify fields in doc according to user input
// (with decent amount of logic)
doc.save(function (err, doc) {
if (err) {
return res.json(500, { message: err });
}
return res.json(200, doc);
});
});
My worry is that this flow might cause conflicts if multiple clients happen to modify the same document.
It is said here that:
Operations on a single document are always atomic with MongoDB databases
I'm a bit confused about what "Operations" means here.
Does this mean that findById() will acquire a lock until doc goes out of scope (after the response is sent), so there wouldn't be conflicts? (I don't think so.)
If not, how should I modify my code to support multiple clients, knowing that they will all modify the same collection?
Will Mongoose report a conflict if one occurs?
How should I handle a possible conflict? Is it possible to manually lock the collection?
I've seen suggestions to use Mongoose's versionKey (or a timestamp) and retry on stale documents,
or not to use MongoDB at all...
Thanks.
EDIT
Thanks @jibsales for the pointer. I now use Mongoose's versionKey (a timestamp would also work) to avoid committing conflicting updates.
aaronheckmann — Mongoose v3 part 1 :: Versioning
See this sample code:
https://gist.github.com/anonymous/9dc837b1ef2831c97fe8
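The linked article explains how the __v version key is bumped. As a rough sketch of the "retry on stale document" idea, assuming a recent Mongoose version with the optimisticConcurrency schema option enabled (so that save() rejects with a VersionError when another client has modified the document in between), something like this could work; the Thing model and the mutate callback are placeholders:

import mongoose from 'mongoose';

// Hypothetical model; optimisticConcurrency makes save() check __v on every update.
const Thing = mongoose.model(
  'Thing',
  new mongoose.Schema({ name: String, count: Number }, { optimisticConcurrency: true })
);

async function updateWithRetry(id: string, mutate: (doc: any) => void, retries = 3) {
  for (let attempt = 0; attempt < retries; attempt++) {
    const doc = await Thing.findById(id);   // fresh read on every attempt
    if (!doc) throw new Error('Document not found');
    mutate(doc);                            // the complex, multi-collection logic goes here
    try {
      return await doc.save();              // fails if another client bumped __v in the meantime
    } catch (err) {
      if (err instanceof mongoose.Error.VersionError) continue; // stale document: retry
      throw err;
    }
  }
  throw new Error('Too many concurrent modifications, giving up');
}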
Operations refers to reads/writes. Bear in mind that MongoDB is not an ACID-compliant data layer, and if you need true ACID compliance, you're better off picking another technology. That said, you can achieve atomicity and isolation via the Two Phase Commit technique outlined in this article in the MongoDB docs. This is no small undertaking, so be prepared for some heavy lifting, as you'll need to work with the native driver instead of Mongoose. Again, my ultimate suggestion is not to drink the NoSQL koolaid if you need transaction support, which it sounds like you do.
When MongoDB receives a request to update a document, it will lock the database until it has completed the operation. Any other requests that MongoDB receives will wait until the locking operation has completed and the database is unlocked. This lock/wait behavior is automatic, so there aren't any conflicts to handle. You can find a lot more information about this behavior in the Concurrency section of the FAQ.
See jibsales' answer for links to MongoDB's recommended technique for doing multi-document transactions.
There are a couple of NoSQL databases that do full ACID transactions, which would make your life a lot easier. FoundationDB is one such database. Data is stored as Key-Value but it supports multiple data models through layers.
Full disclosure: I'm an engineer at FoundationDB.
In my case, my mistake was trying to query a dynamic (non-unique) field with the upsert option. This guide helped me: How to solve error E11000 duplicate
According to that guide, you're probably making one of two mistakes:
Upserting a document with findOneAndUpdate() while the query matches a non-unique field.
Inserting many new documents in one go (insertMany) but not using "ordered: false".
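For illustration, here is a rough sketch of both fixes with Mongoose; the User model and its unique email field are hypothetical:

import mongoose from 'mongoose';

// Hypothetical model with a unique index on email.
const User = mongoose.model(
  'User',
  new mongoose.Schema({ email: { type: String, unique: true }, name: String })
);

async function examples() {
  // 1. Upsert by the *unique* field, so the upsert matches the existing document
  //    instead of trying to insert a duplicate key.
  await User.findOneAndUpdate(
    { email: 'a@example.com' },
    { $set: { name: 'Alice' } },
    { upsert: true, new: true }
  );

  // 2. Bulk insert with ordered: false, so the valid documents are inserted and
  //    the duplicates are reported instead of aborting at the first E11000 error.
  await User.insertMany(
    [{ email: 'a@example.com', name: 'Alice' }, { email: 'b@example.com', name: 'Bob' }],
    { ordered: false }
  );
}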
So I've been trying to wrap my head around this one for weeks, but I just can't seem to figure it out. MongoDB isn't equipped to deal with rollbacks as we typically understand them (i.e. a client adds information to the database, like a username, but quits in the middle of the registration process, and now the DB is left with some "hanging" information that isn't associated with anything). How does MongoDB handle that? Or if no one can answer that question, maybe they can point me to a source/example that can? Thanks.
MongoDB does not support transactions; you can't perform atomic multi-statement transactions to ensure consistency. You can only perform an atomic operation on a single document at a time. When dealing with NoSQL databases you need to validate your data as much as you can, since they seldom complain about anything. There are some workarounds or patterns to achieve SQL-like transactions. For example, in your case, you can store the user's information in a temporary collection, check the data's validity, and store it in the user's collection afterwards.
This should be straightforward, but things get more complicated when we deal with multiple documents. In this case, you need to create a designated collection for transactions. For instance:
transaction collection:
{
  id: ...,
  state: "new_transaction",
  value1: <values from document_1 before updating document_1>,
  value2: <values from document_2 before updating document_2>
}
// update document 1
// update document 2
Ooohh!! Something went wrong while updating document 1 or 2? No worries, we can still restore the old values from the transaction collection.
This pattern is known as compensation; it mimics the transactional behavior of SQL.
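For illustration, a rough sketch of this compensation pattern with the Node.js MongoDB driver could look like the following; the documents and transactions collections, the balance field, and the $inc updates are placeholders:

import { MongoClient, ObjectId } from 'mongodb';

async function updateTwoDocuments(client: MongoClient, id1: ObjectId, id2: ObjectId) {
  const db = client.db('app');
  const docs = db.collection('documents');
  const transactions = db.collection('transactions');

  // 1. Snapshot the current state of both documents into a transaction record.
  const before1 = await docs.findOne({ _id: id1 });
  const before2 = await docs.findOne({ _id: id2 });
  const { insertedId: txId } = await transactions.insertOne({
    state: 'new_transaction',
    value1: before1,
    value2: before2,
  });

  try {
    // 2. Apply the updates one document at a time (each single-document update is atomic on its own).
    await docs.updateOne({ _id: id1 }, { $inc: { balance: -10 } });
    await docs.updateOne({ _id: id2 }, { $inc: { balance: 10 } });
    await transactions.updateOne({ _id: txId }, { $set: { state: 'done' } });
  } catch (err) {
    // 3. Compensation: restore the snapshots taken before the updates.
    if (before1) await docs.replaceOne({ _id: id1 }, before1);
    if (before2) await docs.replaceOne({ _id: id2 }, before2);
    await transactions.updateOne({ _id: txId }, { $set: { state: 'rolled_back' } });
    throw err;
  }
}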