Is there any possibility to search an asset with partial id - node.js

In hyperledger-fabric node js sdk.
Is there any possibility to search an asset with a partial id?
For example, my id is 'abc123'.
Can I search with 'bc12', 'abc', or '123' and get the matching results?

Using stub.GetStateByRange(startKey, endKey) it is possible to retrieve results on a partial key, if the key has a specific form.
For example, the following keys could be used successfully with a range query in the chaincode to retrieve a list of results matching the key abc123:
a
ab
abc
abc1
abc12
abc123
However, a key that does not share the same initial characters will not work, e.g. bc12 or 123.
The function documentation below gives a good idea of how GetStateByRange can be used.
// GetStateByRange returns a range iterator over a set of keys in the
// ledger. The iterator can be used to iterate over all keys
// between the startKey (inclusive) and endKey (exclusive).
// However, if the number of keys between startKey and endKey is greater than the
// totalQueryLimit (defined in core.yaml), this iterator cannot be used
// to fetch all keys (results will be capped by the totalQueryLimit).
// The keys are returned by the iterator in lexical order. Note
// that startKey and endKey can be empty string, which implies unbounded range
// query on start or end.
// Call Close() on the returned StateQueryIteratorInterface object when done.
// The query is re-executed during validation phase to ensure result set
// has not changed since transaction endorsement (phantom reads detected).
GetStateByRange(startKey, endKey string) (StateQueryIteratorInterface, error)
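For Node.js chaincode, a prefix scan along these lines could look like the sketch below. It assumes the fabric-shim stub API, where getStateByRange() resolves to an iterator whose next() yields { value, done }; the getByPrefix name and the '\ufff0' sentinel are illustrative choices, not part of the SDK.

// Sketch only: prefix scan over the world state using the fabric-shim iterator.
async function getByPrefix(stub, prefix) {
    // '\ufff0' is a high code point used so the range covers every key
    // that starts with the given prefix.
    const iterator = await stub.getStateByRange(prefix, prefix + '\ufff0');
    const results = [];
    while (true) {
        const res = await iterator.next();
        if (res.value && res.value.value) {
            results.push({
                key: res.value.key,
                value: res.value.value.toString('utf8')
            });
        }
        if (res.done) {
            await iterator.close();
            break;
        }
    }
    return results;
}
// e.g. getByPrefix(stub, 'abc') would match abc123, but getByPrefix(stub, 'bc12') would not.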

The answer by Clyde is the correct one to your question.
But, if you intend to perform complex queries in your code and you are in a position to refactor your data modelling, you can store the information you need to filter on in a field inside your model (instead of, or in addition to, the ID itself) and perform rich queries against that field.
To do this, you must enable CouchDB as the state database in your peers, if you haven't done so already. Then you can query the database and perform rich queries against your model fields.
Of course, this is not a direct answer to your question, but it may fit your use case better if you are in a position to make this kind of change.
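As a rough sketch of what such a rich query could look like in Node.js chaincode (assuming CouchDB as the state database and the fabric-shim getQueryResult() API; docType and searchTerm are made-up model fields):

// Sketch only: Mango selector query against a model field via CouchDB.
async function queryByField(stub, fieldValue) {
    const query = {
        selector: {
            docType: 'asset',        // hypothetical model field
            searchTerm: fieldValue   // hypothetical field you filter on
        }
    };
    const iterator = await stub.getQueryResult(JSON.stringify(query));
    const results = [];
    while (true) {
        const res = await iterator.next();
        if (res.value && res.value.value) {
            results.push(JSON.parse(res.value.value.toString('utf8')));
        }
        if (res.done) {
            await iterator.close();
            break;
        }
    }
    return results;
}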

Related

query values in leveldb database in nodejs

How could I query a LevelDB database by values in Node.js?
I have key-value pairs in my LevelDB database, and I want to search for certain values and retrieve the matching key-value pairs. I am a beginner with LevelDB.
{
  "key1": {
    "data1": "value1",
    "data2": "value2",
    "data3": "value3"
  }
}
So, I want to query for the key-value pair where data3 == value3.
In LevelDB, we can only get data on the basis of the key.
Is there any other alternative?
LevelDB is a low-level database abstraction: you can only query by exact key match or by a key range (such as a prefix).
You cannot query by value without some kind of duplication.
The pattern I adopted in my graph-database project is to follow the EAV (Entity-Attribute-Value) model with a secondary "table" that stores the index.
In Python's plyvel you can emulate a "table" using prefixed databases, or see how FoundationDB does it in its Subspace implementation. Basically, every key-value pair of a given "table" or "space" is prefixed with a particular byte sequence; that is all.
The first table looks like the following:
(Entity, Attribute) → (Value)
Where Entity is a (random) identifier, Attribute is the byte representation of the field name, and, last but not least, Value is the serialized bytes of the value associated with Attribute for the given Entity.
The schema is designed this way so that you can quickly fetch all Attributes and Values for a given Entity using a prefix range query.
The index table uses the following schema:
(Attribute, Value) → Entity
That is, it is a shuffled version of the first table.
This is done so that you can quickly fetch the Entities that match a particular Attribute == Value, which is what you are looking for.
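A minimal Node.js sketch of that layout, assuming the classic level package API and simple '!'-separated string keys (the eav!/idx! prefixes are just illustrative):

// Sketch only: EAV storage plus an (Attribute, Value) -> Entity index.
const level = require('level');
const db = level('./mydb');

async function putAttribute(entity, attribute, value) {
    await db.put('eav!' + entity + '!' + attribute, value);            // (Entity, Attribute) -> Value
    await db.put('idx!' + attribute + '!' + value + '!' + entity, ''); // (Attribute, Value) -> Entity
}

function findByAttribute(attribute, value) {
    return new Promise((resolve, reject) => {
        const prefix = 'idx!' + attribute + '!' + value + '!';
        const entities = [];
        db.createReadStream({ gte: prefix, lte: prefix + '\xff' })
            .on('data', (pair) => entities.push(pair.key.split('!')[3]))
            .on('error', reject)
            .on('end', () => resolve(entities));
    });
}

// e.g. await putAttribute('key1', 'data3', 'value3');
//      await findByAttribute('data3', 'value3'); // -> ['key1']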
There are alternative implementations of what you are looking for. Look up my answers about LevelDB and key-value stores, e.g. Expressing multiple columns in berkeley db in python?
Good luck!

Create a Couchbase Document without Specifying an ID

Is it possible to insert a new document into a Couchbase bucket without specifying the document's ID? I would like to use Couchbase's Java SDK to create a document and have Couchbase determine the document's UUID, with Groovy code similar to the following:
import com.couchbase.client.java.CouchbaseCluster
import com.couchbase.client.java.Cluster
import com.couchbase.client.java.Bucket
import com.couchbase.client.java.document.JsonDocument
import com.couchbase.client.java.document.json.JsonObject
// Connect to localhost
CouchbaseCluster myCluster = CouchbaseCluster.create()
// Connect to a specific bucket
Bucket myBucket = myCluster.openBucket("default")
// Build the document
JsonObject person = JsonObject.empty()
    .put("firstname", "Stephen")
    .put("lastname", "Curry")
    .put("twitterHandle", "#StephenCurry30")
    .put("title", "First Unanimous NBA MVP")
// Create the document
JsonDocument stored = myBucket.upsert(JsonDocument.create(person));
No, Couchbase documents have to have a key; that's the whole point of a key-value store, after all. However, if you don't care what the key is (for example, because you retrieve documents through queries rather than by key), you can just use a UUID or any other unique value when creating the document.
It seems there is no way to have Couchbase generate the document IDs for me. At the suggestion of another developer, I am using UUID.randomUUID() to generate the document IDs in my application. The approach is working well for me so far.
Reference: https://forums.couchbase.com/t/create-a-couchbase-document-without-specifying-an-id/8243/4
As you already found out, generating a UUID is one approach.
If you want to generate a more meaningful ID, for instance a "foo" prefix followed by a sequence number, you can make use of atomic counters in Couchbase.
The atomic counter is a document that contains a long, on which the SDK relies to guarantee a unique, incremented value each time you call bucket.counter("counterKey", 1, 2). This code would take the value of the counter document "counterKey", increment it by 1 atomically and return the incremented value. If the counter doesn't exist, it is created with the initial value 2, which is the value returned.
This is not automatic, but a Couchbase way of creating sequences / IDs.
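For illustration only, here is roughly how both approaches (a client-generated UUID and a counter-backed sequence) might look; this sketch assumes the Couchbase Node.js SDK 2.x-style bucket API rather than the Java SDK shown above, and Node.js 14.17+ for crypto.randomUUID().

// Sketch only: assumed Couchbase Node.js SDK 2.x API (openBucket, upsert, counter).
const couchbase = require('couchbase');
const crypto = require('crypto');

const cluster = new couchbase.Cluster('couchbase://localhost');
const bucket = cluster.openBucket('default');

const person = { firstname: 'Stephen', lastname: 'Curry', twitterHandle: '#StephenCurry30' };

// Option 1: generate the key yourself with a UUID.
bucket.upsert(crypto.randomUUID(), person, (err) => {
    if (err) throw err;
});

// Option 2: use an atomic counter to build a sequential key such as "foo::42".
bucket.counter('counterKey', 1, { initial: 2 }, (err, res) => {
    if (err) throw err;
    bucket.upsert('foo::' + res.value, person, (err2) => {
        if (err2) throw err2;
    });
});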

Search Documents from two collections in MarkLogic

In MarkLogic, I want to search across two collections by joining the id element of a doc from collection1 to the id element of a doc from collection2. When they match, I need the resulting documents from both collections.
I have the code below, but it is very slow. How can I use cts:search or search:search to achieve the same thing?
for $i in collection('demographic')/individual,
$j in collection('membership')/membership[enrolleIndividualId/id/text() = $i/individual/id/text()])
return {$i,$j}
Update:
I should note that your sample is not valid XQuery: return element root { $i, $j } would be valid. Also, you should not use the /text() node selector, as its behavior can be counterintuitive. You can compare elements directly in an XPath predicate ([enrolleIndividualId/id eq $i/individual/id]). Use /fn:string() in place of /text() if you need the contents of an element as a string. I'd also recommend using the atomic equality operator eq in place of the sequence equality operator = when directly comparing individual elements.
Original Answer:
There are several approaches to implementing joins in MarkLogic, but I would first question your data model. From the names of the elements in your sample query, it looks like you are using a relational model (individuals have memberships). MarkLogic is a document database, and it's optimized for denormalized documents. You will be much better served to process your data and generate new individual documents that each contain the relevant membership data.
That being said, here's how you could join your documents:
First, you will need range indices to write performant joins. If the id element from your sample query is not unique to individuals, you will need path range indices on enrolledIndividualId/id and individual/id, otherwise, a simple element range index on id will do.
The most common join pattern in MarkLogic uses a "shotgun-OR" query; first retrieving values from the lexicon backing a range index, and then constructing an or-query from those values to retrieve the relevant documents. This won't work directly in your case, as you want to retrieve both sides of the join. You can either run a search for each pair of documents, or run a single search for one side, and then an additional document read for each document.
pairs:
for $value in cts:values(cts:path-reference("individual/id"))
return
  cts:search(/,
    cts:or-query((
      cts:and-query((
        cts:collection-query("demographic"),
        cts:path-range-query("individual/id", "=", $value))),
      cts:and-query((
        cts:collection-query("membership"),
        cts:path-range-query("enrolledIndividualId/id", "=", $value))))),
    "unfiltered")
shotgun-OR plus iteration:
for $doc in
  cts:search(/,
    cts:and-query((
      cts:collection-query("demographic"),
      cts:path-range-query("individual/id", "=",
        cts:values(cts:path-reference("individual/id"))))),
    "unfiltered")
return
  cts:search(/,
    cts:and-query((
      cts:collection-query("membership"),
      cts:path-range-query("enrolledIndividualId/id", "=", $doc/individual/id))),
    "unfiltered")
As you can see, each approach requires I/O proportionate to the number of docs/values you want to join. If you only needed the shotgun-OR (ie, a query for documents based on criteria from other documents), you would only need to make two requests, the initial cts:values() call to retrieve values from a lexicon, and the cts:search() call using a query built from those values.
Note: the cts:query objects used in these examples could be used in conjunction with the Search API by means of the search:resolve() function.
Given your apparent data model, you will be much better served by processing your data into individual, de-normalized documents.
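As a rough illustration (the element names other than individual/id and membership are invented), a denormalized individual document could embed its membership data directly:

<individual>
  <id>12345</id>
  <name>...</name>
  <memberships>
    <membership>
      <plan>...</plan>
      <status>...</status>
    </membership>
  </memberships>
</individual>

With that shape, a single cts:search over the demographic collection returns everything you need, with no join.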

Couchdb filter using reduce functions/linked documents

Considering:
doc profile
{
  "_id": "1",
  "name": "john",
  "likes": ["2222", "1111"]
}
doc likes
{
  "_id": "2222",
  "value": "true"
}
{
  "_id": "1111",
  "value": "false"
}
I have a filter in my Xamarin app to get the profile, and it works well, but I need to include the "children" (linked) docs... I can do this with a view using include_docs=true, but I want CouchDB to do the filtering so I can use replication.
Also, it would be possible to accomplish the same result if I could use a reduce function to filter the data, but I can't make the filter use the reduce function. So, any ideas?
the expected result would be:
doc profile
{
  "_id": "1",
  "name": "john",
  "likes": [
    { "_id": "2222", "value": "true" },
    { "_id": "1111", "value": "false" }
  ]
}
Thanks!
I can do this with a view setting include_docs=true but I want couchdb to filter so I can use replication
You might already know this, but you can use CouchDB views as filters for replication.
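Roughly, that means defining a map function in a design document that emits only the documents you want, and then pointing the replication at it with the special _view filter (the design-document and view names here are made up):

// Sketch only: a view used as a replication filter.
// _design/filters
{
  "_id": "_design/filters",
  "views": {
    "profiles": {
      "map": "function (doc) { if (doc.name && doc.likes) { emit(doc._id, null); } }"
    }
  }
}

// Replication document (e.g. written to /_replicator):
{
  "source": "http://localhost:5984/sourcedb",
  "target": "http://localhost:5984/targetdb",
  "filter": "_view",
  "query_params": { "view": "filters/profiles" }
}

Note that, as with any filter, this only selects which documents get replicated; it cannot reshape them or pull in the linked like documents.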
Also, it would be possible to accomplish the same result if I could use a reduce function to filter data
The reduce function is for "reducing" the values that are returned by the map function. The map function returns a key and a value like so:
emit(key,value)
The reduce function only gets the keys and the values that are returned from the map function. For example, if you call a view with
?key=abc
and it returns results like
[
  {
    "_id": ...,
    "type": "abc"
  },
  {
    "_id": ...,
    "type": "abc"
  },
  ...
]
You already have all the documents filtered by the key "abc". The reduce function will get as inputs the keys, the values, and a rereduce parameter. If you use the reduce function as a post-map processing step to further filter the results from the view, there will be two problems:
1. There is no way to pass a parameter to a reduce function. The keys that you specify will only be used by the map function and then passed as they are to reduce.
2. It is not a good idea anyway. With reduce you want to return a small value that aggregates the results you get from a view. Taking the above example, if you return, say, an integer as the value from the map function (in emit(key,value), suppose that the value is an integer), the reduce function may return a sum or aggregate of those values. But trying to return a modified document is not what the reduce function is for. From the docs:
"A reduce function must reduce the input values to a smaller output value. If you are building a composite return structure in your reduce, or only transforming the values field, rather than summarizing it, you might be misusing this feature. "
List functions might be better suited to what you are trying to do. If you want to process the results of the view query before returning them, they are the way to go.
In a list function you get the set of results returned by the view query. You can even pass additional parameters if you'd like to apply complex filters to them. But you won't be able to use list functions for replication.
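For example, a list function that returns each row's document as a JSON array might look like this sketch (assuming the view is queried with include_docs=true; the names are illustrative):

// Sketch only: list function that re-serializes each view row's document.
// Queried as /db/_design/app/_list/with_docs/<viewname>?include_docs=true
function (head, req) {
  start({ headers: { 'Content-Type': 'application/json' } });
  var row, first = true;
  send('[');
  while ((row = getRow())) {
    if (!first) { send(','); }
    first = false;
    send(toJSON(row.doc || row.value));
  }
  send(']');
}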
Finally, replication works at the document level. Documents have a _rev field that is used by the replicator process to check which version the document is in before replication is performed. So you won't be able to replicate the results returned by a view; only the documents themselves will be replicated.

CouchDB emit with lookup key that is array, such that order of array elements are ignored

When indexing a couchdb view, you can emit an array as the key such as:
emit(["one", "two", "three"], doc);
I appreciate the fact that when searching the view, the order is important, but sometimes I would like the view to ignore it. I have thought of a couple of options.
1. By convention, just emit the contents in alphabetical order, and ensure that lookups use the same convention.
2. Somehow hash in a manner that disregards the order, and emit/search based on that hash. (This is fairly easy, if you simply hash each one individually, "sum" the hashes, then mod.)
Note: I'm sure this may be covered somewhere in the authoritative guide, but I was unsuccessful in finding it.
It looks like the correct approach is to settle on a conventional ordering for the keys, emit them in that order, and be sure to query with the same ordering enforced. Otherwise we would need to emit all n! permutations of the keys (which gets bad quickly once n is greater than 3).
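A minimal sketch of that convention, assuming the unordered values live in a hypothetical doc.tags array:

// Sketch only: emit a canonical (sorted) copy of the array so that
// ["two", "one", "three"] and ["one", "three", "two"] index under the same key.
function (doc) {
  if (doc.tags) {
    emit(doc.tags.slice().sort(), null);
  }
}
// Look up with the same convention, i.e. sort the key client-side before querying,
// e.g. ?key=["one","three","two"] for the values one, two, three.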
CouchDB will always maintain the array keys in order. Have you considered emitting all sequence variations as part of the view? Something along the lines of:
function (doc) {
  // Hypothetical helper: returns an array containing every permutation
  // of the given key array.
  function computeAllKeyVariations(fromKey) {
    // returns array of key arrays
  }
  var allKeys = computeAllKeyVariations(startingKey); // startingKey: the doc's key array
  allKeys.forEach(function (k) {
    emit(k, doc); // or emit(k, null)
  });
}
Side note: you also have the option to use emit(['one','two','three'], null) instead of emitting the document. This avoids having CouchDB store the full document in the view index (more than once). To get the same results as before, just use &include_docs=true.
