GridGain requirements for a cache key whose value is to be used with SQL queries

I am trying to execute a GridCacheQuery created with createSqlQuery() on a cache for which I am using a custom key type. The problem is that this always gives me back an empty collection, even when there are matching results.
If I repeat the test with a non-custom key type, such as String or Integer, I get back the results I would expect.
If I repeat the test using a createSqlFieldsQuery() query, again with the custom key, I also get the results I would expect!
Is this behaviour expected? I have tested this with 6.5.0-p1, and my custom key type overrides hashCode() and equals(), and even implements Comparable, for what it's worth.


Is there any possibility to search an asset with partial id

In the Hyperledger Fabric Node.js SDK, is there any way to search for an asset by partial ID?
For example, if my ID is 'abc123', I want to be able to search with 'bc12', 'abc', or '123' and get the matching results.
Using stub.GetStateByRange(startKey, endKey) it is possible to retrieve results on a partial key, if the key has a specific form.
For example, the following keys could be used successfully with a range query in the chaincode to retrieve a list of results matching the key abc123:
a
ab
abc
abc1
abc12
abc123
However, a key without the same initial characters will not work, e.g. bc12 or 123.
The function documentation below gives a good idea of how GetStateByRange can be used:
// GetStateByRange returns a range iterator over a set of keys in the
// ledger. The iterator can be used to iterate over all keys
// between the startKey (inclusive) and endKey (exclusive).
// However, if the number of keys between startKey and endKey is greater than the
// totalQueryLimit (defined in core.yaml), this iterator cannot be used
// to fetch all keys (results will be capped by the totalQueryLimit).
// The keys are returned by the iterator in lexical order. Note
// that startKey and endKey can be empty string, which implies unbounded range
// query on start or end.
// Call Close() on the returned StateQueryIteratorInterface object when done.
// The query is re-executed during validation phase to ensure result set
// has not changed since transaction endorsement (phantom reads detected).
GetStateByRange(startKey, endKey string) (StateQueryIteratorInterface, error)
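On the Node side of the question, the same technique looks roughly like the sketch below (assuming the fabric-shim 1.4+ iterator API; queryByPrefix is a hypothetical helper name). The endKey appends a high code point so the range covers every key that begins with the prefix:
async function queryByPrefix(stub, prefix) {
  // '\uffff' sorts after any realistic key character, so the range
  // [prefix, prefix + '\uffff') spans all keys starting with prefix.
  const iterator = await stub.getStateByRange(prefix, prefix + '\uffff');
  const results = [];
  while (true) {
    const res = await iterator.next();
    if (res.value && res.value.value) {
      results.push({
        key: res.value.key,
        value: res.value.value.toString('utf8'),
      });
    }
    if (res.done) {
      await iterator.close();
      return results;
    }
  }
}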
The answer by Clyde is the correct one to your question.
But if you intend to perform complex queries in your code, and you are in a position to refactor your data modelling, you could put the information you need to filter on in a field inside your model (instead of, or in addition to, the ID itself) and perform rich queries against that field.
To do this, you must enable CouchDB as the state database on your peers, if you haven't done so yet. Then you can query the database and perform rich queries against your model fields.
Of course, this is not the answer to your question, but it may fit your use case better if you are in a position to make this kind of change.
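As an illustration, here is a minimal Node.js chaincode sketch of such a rich query (assuming fabric-shim with CouchDB as the state database; the assetId field and the queryByField helper are hypothetical names):
async function queryByField(stub, fragment) {
  // CouchDB Mango selector; $regex matches the fragment anywhere in the
  // assetId field, which is what a partial-id search needs.
  const query = { selector: { assetId: { $regex: fragment } } };
  const iterator = await stub.getQueryResult(JSON.stringify(query));
  const results = [];
  while (true) {
    const res = await iterator.next();
    if (res.value && res.value.value) {
      results.push(res.value.value.toString('utf8'));
    }
    if (res.done) {
      await iterator.close();
      return results;
    }
  }
}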

web2py SQLFORM can't check unique=True when used with requires=IS_LENGTH()

SQLFORM doesn't show an error message when the submitted data duplicates an existing value; the form is accepted and then an error appears.
P.S. My goal is to create a field that contains a 13-digit number that is not the same as any other.
If I delete requires=IS_LENGTH(maxsize=13, minsize=13), the SQLFORM works fine, but with that method I can't check whether the string is exactly 13 characters long.
db.define_table('person',
    Field('h_id_card', unique=True, requires=IS_LENGTH(maxsize=13, minsize=13))
)

def add():
    form = SQLFORM(db.person).process()
    return locals()
I expected SQLFORM to show an error message instead of accepting the input.
From the book:
Notice that requires=... is enforced at the level of forms, required=True is enforced at the level of the DAL (insert), while notnull, unique and ondelete are enforced at the level of the database. While they sometimes may seem redundant, it is important to maintain the distinction when programming with the DAL.
Because unique=True translates to a UNIQUE constraint in the SQL, when an insert/update violates the uniqueness constraint you simply get an error from the database, which raises an exception in the database driver and ultimately an exception in your app code if you don't catch it.
If you instead want to enable form validation for the uniqueness requirement, you should use the IS_NOT_IN_DB validator:
Field('h_id_card',
      requires=[IS_LENGTH(maxsize=13, minsize=13),
                IS_NOT_IN_DB(db, 'person.h_id_card')])

Insert now() using Cassandra's Java Object-mapping API

What is the equivalent of:
INSERT INTO table (myColumn) VALUES (now())
using the Cassandra object-mapping api?
The @Computed annotation doesn't look like it would work, unfortunately.
You can also set the value on your object to a type 1 UUID. The JRE doesn't have a standard function for generating one, but you can use the Java driver's UUIDs utility (UUIDs.timeBased()), JUG, cassandra-all, or even write one yourself. This is a little different because you are setting the time at object creation, as opposed to the coordinator setting the time when it receives the request, but with an ORM's abstractions you tend to lose some control.
Alternatively, nothing prevents you from issuing CQL statements while still using the object-mapping API. You could even add a query method to your object to do it, i.e.:
@Query("UPDATE table SET myColumn = now() WHERE ...")
public ResultSet setNow();

Key-value-like store for most specific URI

Is there a data structure/model for storing a value at an arbitrary URI-based key and then, if the lookup comes back null, backing down to a less specific path/domain? i.e.
SET example.com "hello"
SET a.example.com/foo "world"
GET example.com => "hello"
GET example.com/foo => "hello"
GET a.example.com/foo/bar => "world"
Value is simply a serialized JSON object; I don't need to do any list operations on it.
Currently, I'm using node.js/restify backed by redis (although I am open to other datastores). I realize I could have a flat key-value store, and loop through all subpaths/domains, but that feels inefficient with a dozen potentially empty calls to the datastore.
You can do a binary search over prefix depth on a failed key lookup to find the most specific matching URI. For example, say you have a depth-8 URI: start by checking for an exact match. If that fails, check the depth-4 prefix; if that also fails, check depth 2, otherwise check depth 6. Assume in this case that the depth-4 and depth-6 lookups succeed: next you do a depth-7 lookup, and if it succeeds you return the depth-7 value, else you return the depth-6 value.
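As a rough sketch of that idea (assuming a node-redis v4 client; candidates and getMostSpecific are hypothetical helpers), with the caveat that the binary search is only guaranteed correct when the stored prefixes of a key form an unbroken chain, otherwise a most-to-least-specific linear scan is the safe fallback:
// Build candidate keys from least to most specific. For simplicity this
// peels path segments only; subdomains would be peeled right-to-left
// in the same way.
function candidates(uri) {
  const [host, ...segments] = uri.split('/');
  const keys = [host];
  let key = host;
  for (const seg of segments) {
    key += '/' + seg;
    keys.push(key);
  }
  return keys;
}

// Binary search for the deepest candidate with a value: O(log n) GETs
// instead of one GET per depth level.
async function getMostSpecific(client, uri) {
  const keys = candidates(uri);
  let lo = 0, hi = keys.length - 1, best = null;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    const value = await client.get(keys[mid]);
    if (value !== null) { best = value; lo = mid + 1; }
    else { hi = mid - 1; }
  }
  return best;
}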
As an alternative, via Google I found a trie implementation for leveldb that might do the trick, but in general there don't appear to be very many database trie implementations.

node-postgres: how to prepare a statement without executing the query?

I want to create a "prepared statement" in postgres using the node-postgres module. I want to create it without binding it to parameters because the binding will take place in a loop.
In the documentation I read:
query(object config, optional function callback) : Query
If text and name are provided within the config, the query will result in the creation of a prepared statement.
I tried
client.query({"name":"mystatement", "text":"select id from mytable where id=$1"});
but when I pass only the text and name keys in the config object, I get an exception whose (translated) message is: binding 0 parameters but the prepared statement expects 1.
Is there something I am missing? How do you create/prepare a statement without binding it to specific values, in order to avoid re-preparing the statement at every step of a loop?
I just found an answer to this issue from the author of node-postgres.
With node-postgres the first time you issue a named query it is
parsed, bound, and executed all at once. Every subsequent query issued
on the same connection with the same name will automatically skip the
"parse" step and only rebind and execute the already planned query.
Currently node-postgres does not support a way to create a named,
prepared query and not execute the query. This feature is supported
within libpq and the client/server protocol (used by the pure
javascript bindings), but I've not directly exposed it in the API. I
thought it would add complexity to the API without any real benefit.
Since named statements are bound to the client in which they are
created, if the client is disconnected and reconnected or a different
client is returned from the client pool, the named statement will no
longer work (it requires a re-parsing).
You can use pg-prepared for that:
var prep = require('pg-prepared')

// First, prepare the statement without binding parameters
var item = prep('select id from mytable where id=${id}')

// Then execute the query, binding parameters inside the loop
for (var i of [1, 2, 3]) {
  client.query(item({id: i}), function(err, result) { /* ... */ })
}
Update: Reading your question again, here's what I believe you need to do: you need to pass a "values" array as well.
Just to clarify: where you would normally "prepare" your query, just build the config object without the values array. Then, where you would normally "execute" the query, set the values array on the object and pass it to query(). The first time around, the driver will do the actual prepare for you; on the remaining iterations it will simply bind and execute.
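In other words, something like this minimal sketch (assuming the classic node-postgres callback API and an already-connected client):
// Same name and text every time: the first call parses, binds and
// executes; later calls on this connection skip the parse step.
var statement = {
  name: 'mystatement',
  text: 'select id from mytable where id=$1'
};

[1, 2, 3].forEach(function (id) {
  var config = Object.assign({}, statement, { values: [id] });
  client.query(config, function (err, result) {
    if (err) throw err;
    console.log(result.rows);
  });
});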
