I just started working with DynamoDB, so please forgive me if the following is a dumb mistake.
I have a model with a hashKey and a rangeKey. Let's name these as HASH and RANGE respectively.
A global secondary index, GlobalIndex, is added to the model as well.
Now I want to get the list of records by rangeKey alone. I don't want to use the scan operation, since it hurts performance, but I have been unable to achieve this with the query operation.
I'm trying to achieve something like this with dynogels.
Any kind of help would be really appreciated.
Thanks.
Dynogels: 9.0.0
Node: 6.10.3
It was a mistake on my end: the index was defined with the same hash key and range key as the table, so it could not be queried by rangeKey alone.
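For anyone else hitting this, here is a minimal dynogels sketch (model and attribute names are placeholders) of what I believe the fix looks like: the GSI's hash key must be the table's range key, otherwise the index cannot be queried by RANGE alone.

const dynogels = require('dynogels');
const Joi = require('joi');

const Model = dynogels.define('Model', {
  hashKey: 'HASH',
  rangeKey: 'RANGE',
  schema: {
    HASH: Joi.string(),
    RANGE: Joi.string()
  },
  indexes: [
    // The index's hash key is the table's *range* key, so the index
    // can be queried with only a RANGE value.
    { hashKey: 'RANGE', name: 'GlobalIndex', type: 'global' }
  ]
});

// Query by RANGE through the index instead of scanning:
Model.query('some-range-value')
  .usingIndex('GlobalIndex')
  .exec(function (err, data) {
    // data.Items holds the matching records
  });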
The GitHub issue thread for the same: https://github.com/clarkie/dynogels/issues/137
I use jOOQ to generate objects against a local database, but when running "for real" later in production, the actual databases will have different names. To remedy this I use the <outputSchemaToDefault>true</outputSchemaToDefault> config option (Maven).
At the same time, we have multiple databases (schemas), and are using a connection pool to the server like "jdbc:mysql://localhost:3306/" (without specifying a database here).
How do I tell jooq which database to use when running queries?
I have tried all the config I can think of:
Settings settings = new Settings()
    .withRenderSchema(true)   // true/false seems to make no difference.
    .withRenderCatalog(true)  // true/false seems to make no difference.
    .withRenderMapping(new RenderMapping()
        .withDefaultSchema("my_database") // Seems to have no effect.
        // The above 3 configs always give me an error saying "no database selected".
        // Adding this gives me 'my_database.my_table' does not exist - while it actually does.
        .withSchemata(new MappedSchema()
            .withInputExpression(Pattern.compile(".*"))
            .withOutput("my_database")));
I have also tried keeping the database/schema name, i.e. not configuring outputSchemaToDefault, and adding the MappedSchema code above. But that gives me errors like "'my_databasemy_database.my_table' does not exist", which is correct, since that table doesn't exist. I have no clue why that code produces the database/schema name twice.
Edit:
When jOOQ tells me that the db.table does not exist, if I put a breakpoint in a good place, grab the SQL from jOOQ, and run exactly that against my database, it works. But jOOQ fails to run it.
Also, I'm using jOOQ version 3.15.3.
I solved it. Instead of using .withInputExpression(Pattern.compile(".*")), it seems to work with .withInput("").
I'm still not sure why it works, or whether this is the "correct" way of solving it. But at least it is a way forward.
As for getting the name twice with the pattern: presumably the input expression is applied with replaceAll semantics, and ".*" matches both the whole schema name and the empty string at the end of it, so the output is emitted twice.
I am trying to run a parameterised query using the npm module @google-cloud/bigquery.
Something like this:
SELECT * FROM myTable WHERE id IN (@ids);
I have no idea how BigQuery expects the ids parameter to be formatted.
My options.params looks something like this:
{ ids: '"1234", "4567"'}
But I don't get any results back. I know there are results; I can see them in BigQuery, and if I remove the parameter and just inject the string, it works just fine.
It seems pretty easy, but I can't figure out why it doesn't work. Is anyone willing to help me out?
Thank you in advance
Of course I found the solution as soon as I posted the question...
Thanks to this thread! You need to do some gymnastics...
So provided that the parameter is a string like:
'1234,5678'
We need to do:
WHERE id IN UNNEST(REGEXP_EXTRACT_ALL(@ids, "[0-9a-zA-Z]+"))
REGEXP_EXTRACT_ALL - returns an array.
UNNEST - flattens the array for the IN clause, as stated in the link above.
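For completeness, here is a sketch of the full call with @google-cloud/bigquery (the client construction may differ slightly by version; table and parameter names as in the question):

const BigQuery = require('@google-cloud/bigquery');
const bigquery = BigQuery(); // newer versions: new BigQuery()

const options = {
  query: 'SELECT * FROM myTable ' +
         'WHERE id IN UNNEST(REGEXP_EXTRACT_ALL(@ids, "[0-9a-zA-Z]+"))',
  params: { ids: '1234,5678' }, // one comma-separated string, as above
  useLegacySql: false           // named @params only work in standard SQL
};

bigquery.query(options, function (err, rows) {
  if (err) throw err;
  console.log(rows);
});

Note that BigQuery also supports array parameters, so passing params: { ids: ['1234', '4567'] } and writing WHERE id IN UNNEST(@ids) should work too, and avoids the regex gymnastics.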
I basically have the same problem as in Composite key in Cassandra with Pig. The only difference is that I try to query for part of the composite key within Pig's where_clause.
The data structure is similar to the one in the issue mentioned above; I'll copy some code/context here to save you reading that issue.
We have a CQL table that looks something like this:
CREATE table data (
occurday text,
seqnumber int,
occurtimems bigint,
unique bigint,
fields map<text, text>,
primary key ((occurday, seqnumber), occurtimems, unique)
)
Instead of querying for both seqnumber and occurday (as in the previously mentioned issue), I try to query for just one of the keys.
When I execute this query as part of a LOAD from within Pig, however, things don't work.
-- Need to URL encode the query
data = LOAD 'cql://ks/data?where_clause=occurday%3D%272013-10-01%27' USING CqlStorage();
gives
java.lang.RuntimeException
at org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader$RowIterator.executeQuery(CqlPagingRecordReader.java:665)
at org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader$RowIterator.<init>(CqlPagingRecordReader.java:301)
at org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader.initialize(CqlPagingRecordReader.java:167)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.initialize(PigRecordReader.java:181)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:522)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
Caused by: InvalidRequestException(why:occurday cannot be restricted by more than one relation if it includes an Equal)
at org.apache.cassandra.thrift.Cassandra$prepare_cql3_query_result$prepare_cql3_query_resultStandardScheme.read(Cassandra.java:51017)
at org.apache.cassandra.thrift.Cassandra$prepare_cql3_query_result$prepare_cql3_query_resultStandardScheme.read(Cassandra.java:50994)
at org.apache.cassandra.thrift.Cassandra$prepare_cql3_query_result.read(Cassandra.java:50933)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at org.apache.cassandra.thrift.Cassandra$Client.recv_prepare_cql3_query(Cassandra.java:1756)
at org.apache.cassandra.thrift.Cassandra$Client.prepare_cql3_query(Cassandra.java:1742)
at org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader$RowIterator.prepareQuery(CqlPagingRecordReader.java:605)
at org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader$RowIterator.executeQuery(CqlPagingRecordReader.java:635)
... 7 more
Basically my question is, what am I doing wrong or what don't I understand?
As I understand from "CqlPagingRecordReader used when partition key is explicitly stated",
I should be able to query with just part of the partition key?
Also, while reading
"Add CqlRecordReader to take advantage of native CQL pagination",
I get the impression this should be possible, but I am swimming around with (in my opinion) no clear direction on how to accomplish this.
Any help is very very welcome at this point.
Regards,
Lennart Weijl
PS. I am running on Cassandra 2.0.9 with Pig 0.13.0.
According to CASSANDRA-6311, I believe you need to apply the 6331-v2-2.0-branch.txt patch, recompile pig, and then update your LOAD statement to:
data = LOAD 'cql://ks/data?where_clause=occurday%3D%272013-10-01%27' USING CqlInputFormat();
The key change is USING CqlInputFormat(), which triggers the use of the new CqlRecordReader that was released in Cassandra 2.0.7.
Edit: Note that the exception is thrown from CqlPagingRecordReader, which means you're still using the old record reader.
I'm new to neo4j. I'm reading the documentation and a small sample app based on the node module (neo4j), but I don't see a way to get a node or nodes based on a parameter.
Something like:
var node = db.find({user: "WillSmith@iam.com", password: "5#^632g23^##23"});
Can anyone explain it to me, or point me to a good resource explaining it? :)
As far as I know, if you want to find nodes by property, you should search an index, and you have to create this index yourself.
In this file I can see functions for working with indexes; unfortunately, I've implemented this only in Java, so I can't help with the exact implementation. Hope it helps =)
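For what it's worth, a sketch in JavaScript (assuming the Thingdom node-neo4j client against a Neo4j 2.x server, with the property names from the question): instead of an explicit index lookup, you can send a parameterised Cypher query.

var neo4j = require('neo4j');
var db = new neo4j.GraphDatabase('http://localhost:7474');

// Find a node by property values via Cypher parameters.
db.query(
  'MATCH (user { user: {user}, password: {password} }) RETURN user',
  { user: 'WillSmith@iam.com', password: '5#^632g23^##23' },
  function (err, results) {
    if (err) throw err;
    var node = results.length ? results[0]['user'] : null;
    console.log(node);
  }
);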
I'm trying to write a query that will paginate through all rows in a column family using the Astyanax client and RowSliceQuery.
keyspace.prepareQuery(COLUMN_FAMILY).getKeyRange(null, null, null, null, 100);
I've done this successfully using Hector, where the first call is made with null start and end keys. After retrieving the first page, I use the last key from the result to query for the second page, and so on. This is the code for the first page using Hector:
HFactory.createRangeSlicesQuery(keyspace,
        LongSerializer.get(), new CompositeSerializer(),
        BytesArraySerializer.get())
    .setColumnFamily(COLUMN_FAMILY)
    .setRange(null, null, false, 100)
    .setRowCount(100);
Now, when I try to do this with Astyanax, I get errors about null and non-null keys and tokens, and I'm not sure what tokens do in this query. I am able to use allRows(), but I would like to do this with a key range query, as it gives me more flexibility.
Does anybody have an example of a key range query using Astyanax? I cannot find one either in the "getting started" documentation or anywhere else on the net.
Thanks!
Anton
What you are referring to is the getRowRange method:
keyspace.prepareQuery(CF_STANDARD1)
.getRowRange(startKey, endKey, startToken, endToken, count)
Note, however, that this works only when the ByteOrderedPartitioner is used. Since Cassandra uses the Murmur3Partitioner by default, this will usually not work. Using an index instead is recommended. Astyanax also provides the reverse index search recipe, which takes advantage of a second column family that stores your keys as columns, allowing efficient range searches on the original data.
Check this sample code; I hope it helps you with the paging:
IndexQuery<String, String> query = keyspace
    .prepareQuery(CF_STANDARD1)
    .searchWithIndex()
    .setRowLimit(10)
    .autoPaginateRows(true)
    .addExpression()
    .whereColumn("Index2").equals().value(42);
Best,