How does the state database (CouchDB) store data if putState(key, value) overrides the existing value? Since we are able to fetch the history using GetHistoryForKey(), does this mean the old value still exists in the state DB?
Only the latest value of a key is stored in the CouchDB state database.
The full history of keys and values is stored in the blockchain data structure itself. When GetHistoryForKey() is called, an index is consulted that identifies all the transactions that have updated the key, and those transactions are then read from the blockchain data structure to return the history of keys and values.
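As a toy model of what this answer describes — a sketch for intuition, not Fabric's actual code — the history index maps each key to the locations of the transactions that wrote it, and those locations are then looked up in the block store:

```go
package main

import "fmt"

// txLocation points at a transaction inside the block store.
// This is a simplified model of Fabric's history index, not its real schema.
type txLocation struct {
	blockNum, txNum uint64
}

// historyIndex maps a state key to every transaction that wrote it.
type historyIndex map[string][]txLocation

// blockStore maps a transaction location to the value that write recorded.
type blockStore map[txLocation]string

// getHistoryForKey mimics GetHistoryForKey: consult the index,
// then fetch each write from the block store in commit order.
func getHistoryForKey(idx historyIndex, blocks blockStore, key string) []string {
	var history []string
	for _, loc := range idx[key] {
		history = append(history, blocks[loc])
	}
	return history
}

func main() {
	idx := historyIndex{"asset1": {{1, 0}, {4, 2}, {9, 1}}}
	blocks := blockStore{
		{1, 0}: "v1", {4, 2}: "v2", {9, 1}: "v3",
	}
	fmt.Println(getHistoryForKey(idx, blocks, "asset1")) // [v1 v2 v3]
}
```

Note that the state database still holds only "v3"; the older values come from the blocks themselves.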
What indexing mode / policy should I use when using cosmos db as a simple key/value store?
From https://learn.microsoft.com/en-us/azure/cosmos-db/index-policy :
None: Indexing is disabled on the container. This is commonly used when a container is used as a pure key-value store without the need for secondary indexes.
Is this because the property used as the partition key is indexed even when indexingMode is set to "none"? I would expect to need to turn indexing on but specify just the partition key's path as the only included path.
If it matters, I’m planning to use the SQL API.
EDIT:
Here's the information I was missing:
The item must have an id property, otherwise cosmos db will assign one. https://learn.microsoft.com/en-us/azure/cosmos-db/account-databases-containers-items#properties-of-an-item
Since I'm using Azure Data Factory to load the items, I can tell ADF to duplicate the column that has the value I want to use as my id into a new column called id: https://learn.microsoft.com/en-us/azure/data-factory/copy-activity-overview#add-additional-columns-during-copy
I need to use ReadItemAsync, or better yet, ReadItemStreamAsync since it doesn't deserialize the response, to get the item without using a query.
https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.cosmos.container.readitemasync?view=azure-dotnet
https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.cosmos.container.readitemstreamasync?view=azure-dotnet
When you set indexingMode to "none", the only way to efficiently retrieve a document is by id (e.g. ReadDocumentAsync() or read_item()). This is akin to a key/value store, since you wouldn't be performing queries against other properties; you'd be specifically looking up a document by some known id, and returning the entire document. Cost-wise, this would be ~1RU for a 1K document, just like point-reads with an indexed collection.
You could still run queries, but without indexes, you'll see unusually-high RU cost.
You would still specify the partition key's value with your point-reads, as you'd normally do.
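For reference, the indexing policy described in the linked docs, with indexing disabled, is simply:

```json
{
  "indexingMode": "none"
}
```

Point reads still work at full speed with this policy because they bypass the index entirely: the document is located directly by its id and partition key.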
I am looking for the details of how Hyperledger Fabric gets the transaction history for a specific asset, including which data structures and algorithms are used. In theory, the transactions for a specific asset are stored in different blocks, not in sequence. That is, a linear search over all blocks in the blockchain might be required to get all transactions for the asset. Is that right? Is there any documentation or code about this that you would recommend reading? Thank you.
History is not fetched from the state database.
1. Unlike state data, which can be stored in either LevelDB or CouchDB, history data is stored in a separate datastore, and that datastore is always LevelDB.
2. The history datastore must be enabled on each peer.
3. That can be done in core.yaml by setting
ledger -> history -> enableHistoryDatabase to true,
or by passing the environment variable
CORE_LEDGER_HISTORY_ENABLEHISTORYDATABASE=true
to the container for which you want to enable the history database.
How history for the same key is stored in the history database:
1. Since LevelDB is a key-value store, it cannot hold duplicate keys, so some unique data (in practice the block and transaction numbers) is appended to the key before the entry is written.
2. When querying the history database, the GetHistoryForKey API uses a range scan, something like this:
leveldb::Iterator* it = db->NewIterator(leveldb::ReadOptions());
// Range scan: seek to the key's prefix and stop once keys no longer match it.
for (it->Seek("theKey~");
     it->Valid() && it->key().starts_with("theKey~");
     it->Next()) {
  ...
}
delete it;  // LevelDB iterators must be deleted when no longer needed
As you can see, each stored key is the original key plus a unique suffix, so a prefix scan returns all the entries related to that key.
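The suffix scheme above can be sketched in Go. The "~" delimiter here follows the answer's notation; Fabric's real encoding uses a 0x00 separator and varint-encoded block/transaction numbers so that numeric order matches byte order (decimal strings, as used here, would mis-order once numbers exceed one digit):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// makeHistoryKey appends the block and transaction numbers to the state key
// so every write gets a unique LevelDB key.
func makeHistoryKey(key string, blockNum, txNum uint64) string {
	return fmt.Sprintf("%s~%d~%d", key, blockNum, txNum)
}

// scanHistory emulates the range scan in the answer: collect every stored
// entry whose key begins with "<key>~".
func scanHistory(db map[string]string, key string) []string {
	prefix := key + "~"
	var keys []string
	for k := range db {
		if strings.HasPrefix(k, prefix) {
			keys = append(keys, k)
		}
	}
	sort.Strings(keys) // a LevelDB iterator yields keys in sorted order
	var values []string
	for _, k := range keys {
		values = append(values, db[k])
	}
	return values
}

func main() {
	db := map[string]string{
		makeHistoryKey("asset1", 1, 0): "v1",
		makeHistoryKey("asset1", 3, 2): "v2",
		makeHistoryKey("asset2", 2, 0): "other",
	}
	fmt.Println(scanHistory(db, "asset1")) // [v1 v2]
}
```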
Hope this helps.
I need to update a document changing the value of the element being used as the partition key. The documentation says that a document is uniquely identified by its id and partition key.
So, if I change the partition key will this always create new document?
Or, will it only create a new document if it is placed on another partition?
If a new document is always created, then I think the safest way to update is:
Create new document.
If successful, delete old document.
Failure to delete will result in duplicate data but at least the data is not lost.
If a new document is not always created, how can I identify the cases where a new document was created so that I can delete the old one? I don't want to delete anything without having the new one created first since there is no transactional way to do this.
Regards All.
Trying to update the partition key value will simply fail.
Trying to upsert the partition key value will create a new document with the same id in a different logical partition.
What the process should be is:
Keep the old document in memory
Delete the old document
Create the new document
If the latter fails, recreate the old document.
Cosmos DB doesn't support cross-partition transactions, so there is no way to do this atomically; you can't use a stored procedure either, as they only run against a single logical partition.
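The steps above can be sketched against a hypothetical in-memory stand-in for the container. The store type and method names here are invented for illustration; a real implementation would call the Cosmos SDK's delete-item and create-item operations instead:

```go
package main

import (
	"errors"
	"fmt"
)

// doc is a minimal stand-in for a Cosmos DB item.
type doc struct {
	ID, PartitionKey string
	Body             string
}

// store is a hypothetical in-memory container, keyed by (id, partition key).
type store map[[2]string]doc

func (s store) delete(id, pk string) error {
	k := [2]string{id, pk}
	if _, ok := s[k]; !ok {
		return errors.New("not found")
	}
	delete(s, k)
	return nil
}

func (s store) create(d doc) error {
	k := [2]string{d.ID, d.PartitionKey}
	if _, ok := s[k]; ok {
		return errors.New("conflict")
	}
	s[k] = d
	return nil
}

// changePartitionKey follows the answer's steps: keep the old document in
// memory, delete it, create the new one, and recreate the old one if the
// create fails.
func changePartitionKey(s store, old doc, newPK string) error {
	if err := s.delete(old.ID, old.PartitionKey); err != nil {
		return err
	}
	updated := old
	updated.PartitionKey = newPK
	if err := s.create(updated); err != nil {
		// Best-effort rollback: put the old document back.
		s.create(old)
		return err
	}
	return nil
}

func main() {
	s := store{}
	s.create(doc{ID: "1", PartitionKey: "east", Body: "hello"})
	if err := changePartitionKey(s, s[[2]string{"1", "east"}], "west"); err != nil {
		fmt.Println("failed:", err)
	}
	_, oldExists := s[[2]string{"1", "east"}]
	_, newExists := s[[2]string{"1", "west"}]
	fmt.Println(oldExists, newExists) // false true
}
```

Note the window between the delete and the create: a crash at that point loses the document unless the caller has persisted it elsewhere, which is why the answer stresses keeping the old document in memory first.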
I'm looking at the high-throughput chaincode example and have a question regarding composite keys.
In the code the key is created as follows
compositeIndexName := "varName~op~value~txID"
Is it possible to query by 'op' or 'value' while omitting or using some wildcard for 'varName'? Or would I need to create a different composite-key index, as in the marbles_chaincode example, for each ID I want to query by? The other option is to use CouchDB as the state database, since it supports more complex queries.
I'm going to be saving some JSON data on the ledger, which I'll need to query by different keys (in the marbles example, say, Color or Size).
Best regards and happy holidays!
I would recommend considering CouchDB as the state database, since it provides quite comprehensive querying capabilities, which are far more expressive than what you can achieve with composite keys on top of LevelDB. This is especially useful if you are storing your documents in JSON format. In any case, check the CouchDB query syntax for more information.
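For example, assuming JSON documents with the marbles-style fields mentioned in the question (the field names are illustrative), a CouchDB rich-query (Mango) selector could look like:

```json
{
  "selector": {
    "color": "blue",
    "size": { "$gt": 5 }
  }
}
```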
// GetQueryResult performs a "rich" query against a state database. It is
// only supported for state databases that support rich query,
// e.g.CouchDB. The query string is in the native syntax
// of the underlying state database. An iterator is returned
// which can be used to iterate (next) over the query result set.
// The query is NOT re-executed during validation phase, phantom reads are
// not detected. That is, other committed transactions may have added,
// updated, or removed keys that impact the result set, and this would not
// be detected at validation/commit time. Applications susceptible to this
// should therefore not use GetQueryResult as part of transactions that update
// ledger, and should limit use to read-only chaincode operations.
GetQueryResult(query string) (StateQueryIteratorInterface, error)
Therefore, you can use this API to retrieve your data.
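Regarding the original question about composite keys: a composite key is a single string, and partial-key queries are prefix scans, which is why you can filter by 'varName' but not by 'op' alone. A simplified sketch of the idea (the real shim joins the parts with the U+0000 rune, written here as "\x00"; the matching helper is invented for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// createCompositeKey mimics the shim's CreateCompositeKey in spirit: the
// object type and attributes are joined into one string with a delimiter.
func createCompositeKey(objectType string, attrs []string) string {
	return objectType + "\x00" + strings.Join(attrs, "\x00") + "\x00"
}

// matchesPartialKey reports whether a stored key would be returned by a
// partial composite-key query supplying only the leading attributes.
func matchesPartialKey(storedKey, objectType string, leadingAttrs []string) bool {
	prefix := objectType + "\x00"
	if len(leadingAttrs) > 0 {
		prefix += strings.Join(leadingAttrs, "\x00") + "\x00"
	}
	return strings.HasPrefix(storedKey, prefix)
}

func main() {
	key := createCompositeKey("varName~op~value~txID", []string{"myVar", "+", "10", "tx1"})
	// A partial query with the leading attribute works...
	fmt.Println(matchesPartialKey(key, "varName~op~value~txID", []string{"myVar"})) // true
	// ...but you cannot match on "op" without supplying "varName" first.
	fmt.Println(matchesPartialKey(key, "varName~op~value~txID", []string{"+"})) // false
}
```

Because the attributes are flattened left to right into one key, querying by a later attribute alone would require a full scan — hence the separate per-attribute indexes in the marbles example, or a move to CouchDB rich queries.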
I have a scenario where I am holding personal data. For privacy reasons I need to ensure that I am not holding onto the personal data for too long. However, when a document is deleted a tombstone record is still kept on disk. After I delete a document, can I be sure that the personal data is completely destroyed?
Question: What information is stored in the 'tombstone' record?
The tombstone record only contains the following fields:
_deleted (boolean flag)
_id
_rev
Source: CouchDB document API
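So a deleted document is reduced to something like this (the _id and _rev values here are illustrative):

```json
{
  "_id": "user-profile-42",
  "_rev": "2-e91bb817...",
  "_deleted": true
}
```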
However, if you have stored sensitive data in the _id field, you may need to consider:
Purging the record (Single node CouchDB only)
Contacting Cloudant Support (Cloudant only)
Finally, because we are talking about deleting records, be careful of workloads that generate a high ratio of deleted:active documents as this is considered an anti-pattern.