What is the equivalent of:
INSERT INTO table (myColumn) VALUES (now())
using the Cassandra object-mapping API?
The @Computed annotation doesn't look like it would work, unfortunately.
You can also set the value on your object to a type 1 (time-based) UUID. The JRE doesn't have a standard function for generating one, but you can use the Java driver's util class, JUG, cassandra-all, or even write one yourself. This is a little different because you're setting the time at object creation rather than having the coordinator set it when it receives the request, but with an ORM's abstractions you tend to lose some control.
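For example, with the driver's util class (a minimal sketch assuming the DataStax Java driver 3.x; MyEntity, setMyColumn, and mapper are hypothetical stand-ins for your mapped class, its timeuuid field, and its Mapper):

import com.datastax.driver.core.utils.UUIDs;

// Generate a type 1 (time-based) UUID client-side and set it on the entity
// before saving; the timestamp reflects creation time, not the coordinator's
// receive time. MyEntity/setMyColumn/mapper are hypothetical names.
MyEntity entity = new MyEntity();
entity.setMyColumn(UUIDs.timeBased());
mapper.save(entity);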
Alternatively, nothing prevents you from issuing CQL statements while still using the object-mapping API, perhaps even adding a query method to your object to do it, e.g.:
#Query("UPDATE table SET myColumn = now() WHERE ....")
public ResultSet setNow()
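With the 3.x mapper, such a query method would live on an @Accessor interface rather than on the entity itself. A minimal sketch (the interface name and the id parameter, standing in for the elided WHERE clause, are hypothetical):

import com.datastax.driver.core.ResultSet;
import com.datastax.driver.mapping.annotations.Accessor;
import com.datastax.driver.mapping.annotations.Param;
import com.datastax.driver.mapping.annotations.Query;

// Sketch assuming the DataStax Java driver 3.x object mapper; obtain an
// instance via MappingManager.createAccessor(MyTableAccessor.class).
@Accessor
public interface MyTableAccessor {
    @Query("UPDATE table SET myColumn = now() WHERE id = :id")
    ResultSet setNow(@Param("id") long id);
}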
I'm trying to run my service with Micronaut and Cassandra (currently version 3.11.10) and store a column that is a set of tuples into Cassandra.
example code:
QueryBuilder
    .insertInto(table)
    .value("column", QueryBuilder.literal(items.map { it.toTuple() }.toSet()))
The toTuple() method is just an extension method that converts the items into Term objects.
When I do so, I receive the following error:
Internal Server Error: Could not inline literal of type java.util.Collections$SingletonSet. This happens because the driver doesn't know how to map it to a CQL type. Try passing a TypeCodec or CodecRegistry to literal().
I've checked multiple sources online but couldn't find a simple way to store a set of tuples in the database without implementing a custom TypeCodec. Since I'm surely not the first person to hit this, I'm probably doing something completely wrong, but I couldn't find any documentation on the correct way to do it.
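For what it's worth, the error message's own suggestion can be followed by building TupleValue instances (which carry their CQL type) and handing the session's codec registry to literal(). An unverified sketch in Java against driver 4.x, where pairType, the item getters, and session (the CqlSession) are all assumptions:

import java.util.Set;
import java.util.stream.Collectors;
import com.datastax.oss.driver.api.core.cql.SimpleStatement;
import com.datastax.oss.driver.api.core.data.TupleValue;
import com.datastax.oss.driver.api.core.type.DataTypes;
import com.datastax.oss.driver.api.core.type.TupleType;
import com.datastax.oss.driver.api.core.type.codec.registry.CodecRegistry;
import com.datastax.oss.driver.api.querybuilder.QueryBuilder;

// Unverified sketch: build TupleValues so the registry can resolve a CQL type,
// then pass the registry explicitly, as the error message suggests.
TupleType pairType = DataTypes.tupleOf(DataTypes.TEXT, DataTypes.INT); // hypothetical element type
Set<TupleValue> tuples = items.stream()
        .map(i -> pairType.newValue(i.getName(), i.getCount()))        // hypothetical fields
        .collect(Collectors.toSet());
CodecRegistry registry = session.getContext().getCodecRegistry();
SimpleStatement stmt = QueryBuilder.insertInto(table)
        .value("column", QueryBuilder.literal(tuples, registry))
        .build();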
I am planning to use the UPDATE ... USING TIMESTAMP ... statement to make sure that I do not overwrite fresh data with stale data, while avoiding having to do at least LOCAL_QUORUM writes.
Here is my table structure.
Table: DocumentStore
DocumentID (primary key, bigint)
Document (text)
Version (int)
If the service receives 2 write requests with Version=1 and Version=2, regardless of the order of arrival, the business requirement is that we end up with Version=2 in the database.
Can I use the following CQL Statement?
Update DocumentStore using <versionValue>
SET Document=<documentValue>,
Version=<versionValue>
where DocumentID=<documentIDValue>;
Has anybody used something like this? If so was the behavior as expected?
Yes, this is a known technique, although it should be:
UPDATE "DocumentStore" USING TIMESTAMP <versionValue>
SET "Document" = <documentValue>,
"Version" = <versionValue>
WHERE "DocumentID" = <documentIDValue>;
You missed the TIMESTAMP keyword, and since you are using case-sensitive names, you should also enclose them in double quotes.
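As a quick illustration of the last-write-wins behavior (hypothetical values):

UPDATE "DocumentStore" USING TIMESTAMP 2 SET "Document" = 'v2', "Version" = 2 WHERE "DocumentID" = 1;
-- a stale write arriving later, with a lower timestamp, is effectively ignored:
UPDATE "DocumentStore" USING TIMESTAMP 1 SET "Document" = 'v1', "Version" = 1 WHERE "DocumentID" = 1;
-- SELECT "Version" FROM "DocumentStore" WHERE "DocumentID" = 1; still returns 2,
-- because each cell keeps the value with the highest write timestamp.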
The DataStax documentation says that to page through all data, the following CQL query is useful:
SELECT * FROM test WHERE token(k) > token(42);
Is it possible to build this query using the QueryBuilder? It provides a token method, but that seems to work only on column names, not on values.
Ideally, the value (in the example: 42) is of type Object, just like in the eq/gte/lte functions.
Try using automatic paging with the .setFetchSize method; the driver pages through the result set under the hood:
Automatic paging was introduced in Cassandra 2.0. It allows the developer to iterate over an entire ResultSet without having to care about its size: extra rows are fetched as the client code iterates over the results, while old ones are dropped. The number of rows to fetch at a time can be parameterized at query time. In the Java driver this looks like:
Statement stmt = new SimpleStatement("SELECT * FROM images");
stmt.setFetchSize(100);
ResultSet rs = session.execute(stmt);
Source: http://www.datastax.com/dev/blog/client-side-improvements-in-cassandra-2-0
QueryBuilder.fcall("token", value)

can solve the problem!
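For example, combined with token() on the column side (a sketch assuming the DataStax Java driver 3.x QueryBuilder):

import com.datastax.driver.core.Statement;
import com.datastax.driver.core.querybuilder.QueryBuilder;

// token() renders the column side, fcall() wraps the bind value; this builds
// SELECT * FROM test WHERE token(k) > token(42).
Statement stmt = QueryBuilder.select().all()
        .from("test")
        .where(QueryBuilder.gt(QueryBuilder.token("k"), QueryBuilder.fcall("token", 42)));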
Given the following code:
listView.ItemsSource =
App.azureClient.GetTable<SomeTable>().ToIncrementalLoadingCollection();
We get incremental loading without further changes.
But what if we modify the read.js server-side script, e.g. to use mssql to query another table instead? What happens to the incremental loading? I'm assuming it breaks; if so, what's needed to support it again?
And what if the query used the untyped version instead, e.g.
App.azureClient.GetTable("SomeTable").ReadAsync(...)
Could incremental loading be somehow supported in this case, or must it be done "by hand" somehow?
Bonus points for insights on how Azure Mobile Services implements incremental loading between the server and the client.
The incremental loading collection works by sending the $top and $skip query parameters (those are also sent when you query the table using the .Take and .Skip methods). So if you want to modify the read script to do something other than the default behavior, while still being able to use that table with an incremental loading collection, you need to take those values into account.
To do that, you can ask for the query components, which will contain the values, as shown below:
function read(query, user, request) {
    var queryComponents = query.getComponents();
    console.log('query components: ', queryComponents); // useful to see all information
    var top = queryComponents.take;
    var skip = queryComponents.skip;
    // do whatever you want with those values, then call request.respond(...)
}
The way it's implemented at the client is by using a class which implements the ISupportIncrementalLoading interface. You can see it (and the full source code for the client SDKs) in the GitHub repository, or more specifically the MobileServiceIncrementalLoadingCollection class (the method is added as an extension in the MobileServiceIncrementalLoadingCollectionExtensions class).
And the untyped table does not have that method - as you can see in the extension class, it's only added to the typed version of the table.
I want to create a "prepared statement" in postgres using the node-postgres module. I want to create it without binding it to parameters because the binding will take place in a loop.
In the documentation I read:
query(object config, optional function callback) : Query
If _text_ and _name_ are provided within the config, the query will result in the creation of a prepared statement.
I tried
client.query({"name":"mystatement", "text":"select id from mytable where id=$1"});
but when I try passing only the text and name keys in the config object, I get an exception:
(translated) binding 0 parameters but the prepared statement expects 1
Is there something I am missing? How do you create/prepare a statement without binding it to specific values, in order to avoid re-preparing the statement on every iteration of a loop?
I just found an answer on this issue by the author of node-postgres.
With node-postgres the first time you issue a named query it is
parsed, bound, and executed all at once. Every subsequent query issued
on the same connection with the same name will automatically skip the
"parse" step and only rebind and execute the already planned query.
Currently node-postgres does not support a way to create a named,
prepared query and not execute the query. This feature is supported
within libpq and the client/server protocol (used by the pure
javascript bindings), but I've not directly exposed it in the API. I
thought it would add complexity to the API without any real benefit.
Since named statements are bound to the client in which they are
created, if the client is disconnected and reconnected or a different
client is returned from the client pool, the named statement will no
longer work (it requires a re-parsing).
You can use pg-prepared for that:
var prep = require('pg-prepared')

// first, prepare the statement without binding parameters
var item = prep('select id from mytable where id=${id}')

// then bind parameters and execute the query in the loop
for (var id of [1, 2, 3]) {
  client.query(item({id: id}), function (err, result) { /* ... */ })
}
Update: Reading your question again, here's what I believe you need to do: you need to pass a "values" array as well.
Just to clarify: where you would normally "prepare" your query, just build the config object you will pass to query(), without the values array. Then where you would normally "execute" the query, set the values array on the object and pass it to query(). The first time around, the driver does the actual prepare for you; for the rest of the iterations it simply binds and executes.
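In other words, with plain node-postgres the loop would look something like this (a sketch reusing the statement name and query text from the question):

// build the config once, without values; node-postgres prepares the named
// statement on the first query() call and only binds/executes on later ones
var text = 'select id from mytable where id=$1'

for (var id of [1, 2, 3]) {
  client.query({name: 'mystatement', text: text, values: [id]}, function (err, result) { /* ... */ })
}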