How to read all 1000 rows from a Cassandra CF with Astyanax - cassandra

We have this one CF that only has about 1000 rows. Is there a way with astyanax to read all 1000 rows? Does thrift even support that?
thanks,
Dean

You can read all rows with the Thrift call get_range_slices. Note that it returns rows in token order, not key order, so it's fine for reading all the rows but not for doing range queries across row keys.
You can use it in Astyanax with getAllRows(). Here is some sample code (adapted from the docs at https://github.com/Netflix/astyanax/wiki/Reading-Data#iterate-all-rows-in-a-column-family):
// CF_STANDARD1 is your ColumnFamily definition (prepareQuery takes the
// ColumnFamily object, not the name as a String), e.g.:
// ColumnFamily<String, String> CF_STANDARD1 = new ColumnFamily<String, String>(
//         "ColumnFamilyName", StringSerializer.get(), StringSerializer.get());
Rows<String, String> rows;
try {
    rows = keyspace.prepareQuery(CF_STANDARD1)
            .getAllRows()
            .setBlockSize(10)
            .withColumnRange(new RangeBuilder().setMaxSize(10).build())
            .setExceptionCallback(new ExceptionCallback() {
                @Override
                public boolean onException(ConnectionException e) {
                    // Pause briefly; returning true retries and continues the iteration
                    try {
                        Thread.sleep(1000);
                    } catch (InterruptedException e1) {
                    }
                    return true;
                }
            })
            .execute().getResult();
} catch (ConnectionException e) {
}

// This will never throw an exception
for (Row<String, String> row : rows) {
    LOG.info("ROW: " + row.getKey() + " " + row.getColumns().size());
}
This will return the first 10 columns of each row, fetching rows in blocks of 10. Adjust the value passed to RangeBuilder().setMaxSize() to get more (or fewer) columns per row.
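If you need every column of a wide row rather than only the first page, Astyanax can also paginate the columns within a single row. Here is a minimal sketch following the same wiki's column-pagination pattern; CF_STANDARD1 is the same ColumnFamily definition as above, and the row key is a placeholder:
RowQuery<String, String> query = keyspace.prepareQuery(CF_STANDARD1)
        .getKey("someRowKey")
        .autoPaginate(true)
        .withColumnRange(new RangeBuilder().setMaxSize(10).build());

// Each execute() returns the next page of up to 10 columns;
// an empty result means the row is exhausted.
ColumnList<String> columns;
while (!(columns = query.execute().getResult()).isEmpty()) {
    for (Column<String> column : columns) {
        LOG.info("COLUMN: " + column.getName());
    }
}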

Related

Cassandra 3.6.0 bug: StackOverflowError thrown by HashedWheelTimer on Connection.release

When running some inserts and updates on a Cassandra database via the Java driver version 3.6.0, I get the following StackOverflowError. I am showing just the top of it here; the last 10 frames repeat endlessly.
There is no mention of any line of my code in it, so I don't know which specific operation triggered it.
2018-09-03 00:19:58,294 WARN {cluster1-timeouter-0} [c.d.s.n.u.HashedWheelTimer] : An exception was thrown by TimerTask.
java.lang.StackOverflowError: null
at java.util.regex.Pattern$Branch.match(Pattern.java:4604)
at java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)
at java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)
at java.util.regex.Pattern$Curly.match0(Pattern.java:4279)
at java.util.regex.Pattern$Curly.match(Pattern.java:4234)
at java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)
at java.util.regex.Pattern$Branch.match(Pattern.java:4604)
at java.util.regex.Pattern$Branch.match(Pattern.java:4602)
at java.util.regex.Pattern$BmpCharProperty.match(Pattern.java:3798)
at java.util.regex.Pattern$Start.match(Pattern.java:3461)
at java.util.regex.Matcher.search(Matcher.java:1248)
at java.util.regex.Matcher.find(Matcher.java:664)
at java.util.Formatter.parse(Formatter.java:2549)
at java.util.Formatter.format(Formatter.java:2501)
at java.util.Formatter.format(Formatter.java:2455)
at java.lang.String.format(String.java:2940)
at com.datastax.driver.core.exceptions.BusyConnectionException.<init>(BusyConnectionException.java:29)
at com.datastax.driver.core.Connection$ResponseHandler.<init>(Connection.java:1538)
at com.datastax.driver.core.Connection.write(Connection.java:711)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.write(RequestHandler.java:451)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.access$1600(RequestHandler.java:307)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution$1.onSuccess(RequestHandler.java:397)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution$1.onSuccess(RequestHandler.java:384)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1355)
at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:398)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1024)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:866)
at com.google.common.util.concurrent.AbstractFuture.set(AbstractFuture.java:689)
at com.google.common.util.concurrent.SettableFuture.set(SettableFuture.java:48)
at com.datastax.driver.core.HostConnectionPool$PendingBorrow.set(HostConnectionPool.java:755)
at com.datastax.driver.core.HostConnectionPool.dequeue(HostConnectionPool.java:407)
at com.datastax.driver.core.HostConnectionPool.returnConnection(HostConnectionPool.java:366)
at com.datastax.driver.core.Connection.release(Connection.java:810)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution$1.onSuccess(RequestHandler.java:407)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution$1.onSuccess(RequestHandler.java:384)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1355)
at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:398)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1024)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:866)
at com.google.common.util.concurrent.AbstractFuture.set(AbstractFuture.java:689)
at com.google.common.util.concurrent.SettableFuture.set(SettableFuture.java:48)
at com.datastax.driver.core.HostConnectionPool$PendingBorrow.set(HostConnectionPool.java:755)
at com.datastax.driver.core.HostConnectionPool.dequeue(HostConnectionPool.java:407)
at com.datastax.driver.core.HostConnectionPool.returnConnection(HostConnectionPool.java:366)
at com.datastax.driver.core.Connection.release(Connection.java:810)
I do not use any UDTs.
Here is the keyspace and table creation code:
session.execute(session.prepare(
        "CREATE KEYSPACE IF NOT EXISTS myspace WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'dc1': '3'} AND DURABLE_WRITES = true;").bind());
session.execute(session.prepare(
        "CREATE TABLE IF NOT EXISTS myspace.tasks (myId TEXT PRIMARY KEY, pointer BIGINT)").bind());
session.execute(session.prepare(
        "CREATE TABLE IF NOT EXISTS myspace.counters (key TEXT PRIMARY KEY, cnt COUNTER)").bind());
This is the prepared statement that I use:
PreparedStatement quickSearchTasksInsert = session.prepare("INSERT INTO myspace.tasks (myId, pointer) VALUES (:oid,:loc)");
The code that reproduces the issue does the following:
1. Runs the writeTask() method about 10,000 times with different values, such as the following example rows, which are selected from a SQL database:
05043FA57ECEAABC3E096B281A55356B, 1678192046
5DE661E77D19C157C31EB7309494EA89, 3959390363
85D6211384E6E190299093E501169625, 3146521416
0327817F8BD59039069C13D581E8EBBE, 2907072247
D913FA0F306D6516D8DF87EB0CB1EE9B, 2507147331
DC946B409CD1E59F560A0ED75559CB16, 2810148057
2A24B1DC71D395938BA77C6CA822A5F7, 1182061065
F70705303980DA40D125CC3497174A5D, 1735385855
2. Runs the setLocNum() method with some Long number.
3. Loops back to (1) above.
public void writeTask(String myId, long pointer) {
    try {
        session.executeAsync(quickSearchTasksInsert.bind().setString("oid", myId).setLong("loc", pointer));
        incrementCounter("tasks_count", 1);
    } catch (OperationTimedOutException | NoHostAvailableException e) {
        // some error handling omitted from post
    }
}

public synchronized void setLocNum(long num) {
    setCounter("loc_num", num);
}

public void incrementCounter(String key, long incVal) {
    try {
        session.executeAsync(
                "UPDATE myspace.counters SET cnt = cnt + " + incVal + " WHERE key = '" + key.toLowerCase() + "'");
    } catch (OperationTimedOutException | NoHostAvailableException e) {
        // some error handling omitted from post
    }
}

public void decrementCounter(String key, long decVal) {
    try {
        session.executeAsync(
                "UPDATE myspace.counters SET cnt = cnt - " + decVal + " WHERE key = '" + key.toLowerCase() + "'");
    } catch (OperationTimedOutException | NoHostAvailableException e) {
        // some error handling omitted from post
    }
}

public synchronized void setCounter(String key, long newVal) {
    try {
        Long prevCounterValue = countersCache.get(key);
        long oldCounter = prevCounterValue == null ? readCounter(key) : prevCounterValue.longValue();
        decrementCounter(key, oldCounter);
        incrementCounter(key, newVal);
        countersCache.put(key, newVal);
    } catch (OperationTimedOutException | NoHostAvailableException e) {
        // some error handling omitted from post
    }
}
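As an aside that is not part of the original post: the counter updates above splice values into the CQL string, while the insert already uses a prepared statement. A minimal sketch of the same pattern applied to the counter update (the statement and method names here are illustrative):
// Prepared once, alongside quickSearchTasksInsert.
PreparedStatement counterIncrement = session.prepare(
        "UPDATE myspace.counters SET cnt = cnt + :inc WHERE key = :key");

public void incrementCounterPrepared(String key, long incVal) {
    // Bind by name instead of concatenating values into the CQL text.
    session.executeAsync(counterIncrement.bind()
            .setLong("inc", incVal)
            .setString("key", key.toLowerCase()));
}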

Azure Batch Insert: Bad Request Error

I am getting the below error while trying to insert multiple entities into Azure Table storage:
com.microsoft.azure.storage.table.TableServiceException: Bad Request
at com.microsoft.azure.storage.table.TableBatchOperation$1.postProcessResponse(TableBatchOperation.java:525)
at com.microsoft.azure.storage.table.TableBatchOperation$1.postProcessResponse(TableBatchOperation.java:433)
at com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:146)
Below is the Java code for batch insert:
public BatchInsertResponse batchInsert(BatchInsertRequest request) {
    BatchInsertResponse response = new BatchInsertResponse();
    String erpName = request.getErpName();
    HashMap<String, List<TableEntity>> tableNameToEntityMap = request.getTableNameToEntityMap();
    HashMap<String, List<TableEntity>> errorMap = new HashMap<String, List<TableEntity>>();
    HashMap<String, List<TableEntity>> successMap = new HashMap<String, List<TableEntity>>();
    CloudTable cloudTable = null;
    for (Map.Entry<String, List<TableEntity>> entry : tableNameToEntityMap.entrySet()) {
        try {
            cloudTable = azureStorage.getTable(entry.getKey());
        } catch (Exception e) {
            e.printStackTrace();
        }
        // Define a batch operation.
        TableBatchOperation batchOperation = new TableBatchOperation();
        List<TableEntity> value = entry.getValue();
        for (int i = 0; i < value.size(); i++) {
            TableEntity entity = value.get(i);
            batchOperation.insertOrReplace(entity);
            if (i != 0 && i % batchSize == 0) {
                try {
                    cloudTable.execute(batchOperation);
                    batchOperation.clear();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
        try {
            cloudTable.execute(batchOperation);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
    return response;
}
The above code works fine if I assign batchSize a value of 10, but if I assign 100 or 1000 it throws the Bad Request error.
Please help me to resolve this error. I am using Spring Boot and the Azure Storage Java SDK version 4.3.0.
As Aravind mentioned, a 400 error usually means there's something wrong with your data. Per the Azure Table service documentation, an entity batch transaction will fail if one or more of the following conditions are not met:
All entities subject to operations as part of the transaction must have the same PartitionKey value.
An entity can appear only once in the transaction, and only one operation may be performed against it.
The transaction can include at most 100 entities, and its total payload may be no more than 4 MB in size.
All entities are subject to the limitations described in Understanding the Table Service Data Model.
Please check your entities against these four rules and ensure that you're not violating any of them.
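Two concrete things stand out in the code above: with batchSize = 100 the inner loop flushes only after adding the entity at index 100, so the first batch holds 101 entities and still breaks the 100-entity limit; and nothing groups the entities by PartitionKey. Below is a minimal sketch of a compliant flush loop against the same SDK (4.3.0); the helper name and MAX_BATCH_SIZE constant are illustrative:
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import com.microsoft.azure.storage.StorageException;
import com.microsoft.azure.storage.table.CloudTable;
import com.microsoft.azure.storage.table.TableBatchOperation;
import com.microsoft.azure.storage.table.TableEntity;

public class BatchHelper {
    // Hard service limits: at most 100 entities (and 4 MB) per batch, one PartitionKey.
    private static final int MAX_BATCH_SIZE = 100;

    static void insertInBatches(CloudTable cloudTable, List<TableEntity> entities)
            throws StorageException {
        // Group by PartitionKey, since a batch must target a single partition.
        Map<String, List<TableEntity>> byPartition = new HashMap<String, List<TableEntity>>();
        for (TableEntity e : entities) {
            List<TableEntity> group = byPartition.get(e.getPartitionKey());
            if (group == null) {
                group = new ArrayList<TableEntity>();
                byPartition.put(e.getPartitionKey(), group);
            }
            group.add(e);
        }
        // Flush each partition's entities in chunks of at most 100.
        for (List<TableEntity> group : byPartition.values()) {
            TableBatchOperation batch = new TableBatchOperation();
            for (TableEntity e : group) {
                batch.insertOrReplace(e);
                if (batch.size() == MAX_BATCH_SIZE) {
                    cloudTable.execute(batch);
                    batch.clear();
                }
            }
            if (!batch.isEmpty()) {
                cloudTable.execute(batch); // flush the remainder
            }
        }
    }
}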

Apache-thrift call to cassandra inserts junk/null values in the "key"

cassandra-thrift-1.1.2.jar
Problem code:
ColumnOrSuperColumn cosc = null;
org.apache.cassandra.thrift.Column c = new org.apache.cassandra.thrift.Column();
c.setName("full_name".getBytes("UTF-8"));
c.setValue("Test name".getBytes("UTF-8"));
c.setTimestamp(System.currentTimeMillis());

// insert data
// long timestamp = System.currentTimeMillis();
try {
    client.set_keyspace("CClient");
    bb = ByteBuffer.allocate(10);
    client.insert(bb.putInt(1),
            new ColumnParent("users"),
            c,
            ConsistencyLevel.QUORUM);
    bb.putInt(2);
    // cp (a ColumnPath) is defined elsewhere in the poster's code
    cosc = client.get(bb, cp, ConsistencyLevel.QUORUM);
} catch (TimedOutException toe) {
    System.out.println(toe.getMessage());
} catch (org.apache.cassandra.thrift.UnavailableException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
} catch (Exception e) {
    e.printStackTrace();
} finally {
    System.out.println(new String(cosc.getColumn().getName()) + "-" + new String(cosc.getColumn().getValue()));
}
The code shown above inserts some junk or null key into the database, and I don't understand why.
See how it looks on the CLI:
RowKey:
=> (column=full_name, value=Test name, timestamp=1345743985973)
Any help in this is greatly appreciated.
Thanks.
You're creating a row whose key is raw bytes.
In the Cassandra CLI you'll probably see the row key if you list the rows as bytes.
E.g. in the cassandra-cli type:
assume users keys as bytes;
list users;
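A side observation that is not part of the original answer: after bb.putInt(1) the buffer's position is 4 and its limit is 10, and Thrift serializes the bytes from position to limit, so the key actually written is the six zero padding bytes rather than the int you wrote, which matches the empty-looking RowKey above. A minimal sketch of the usual ByteBuffer fix:
// Allocate exactly the bytes the key needs, write it, then flip() so the
// position returns to 0 and Thrift reads the 4 bytes just written.
ByteBuffer key = ByteBuffer.allocate(4);
key.putInt(1);
key.flip(); // position = 0, limit = 4

client.insert(key, new ColumnParent("users"), c, ConsistencyLevel.QUORUM);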

adding an image component to a Table cell by overriding `createCell`

I am using LWUIT and showing data with a Table, say, flight information.
Instead of writing the airlines as text I would like to replace them with icons.
So I need to override the protected Component createCell(Object value, final int row, final int column, boolean editable) method of Table.
This is how I've implemented it:
Initializing
imgAln[i] = null;
try {
    imgAln[i] = Image.createImage(strPathToImage[i]);
    // e.g. /uta.png, /somonair.png and so on
    lAln[i] = new Label(imgAln[i]);
} catch (IOException e) { }
Creating Table object
Table table = new Table(model) {
    protected Component createCell(Object value, final int row,
            final int column, boolean editable) {
        final Component c = super.createCell(value, row, column, editable);
        if (column == 6) {
            return lAln[value]; // it does not work here
        }
    }
};
I need help adding an Image to a table cell. Is there any example? Links are welcome!
The problem in your createCell(...) implementation is that it does not return super.createCell(...) when the column is not 6. Also, your array of labels (lAln) may not be properly created. Try my implementation below, but make sure you store the appropriate image name in the table model's column 0.
This should solve it:
TableModel model = new DefaultTableModel(
        new String[]{"Uneditable", "Editable", "CheckBox", "Multiline"},
        new Object[][]{
            {"/animations.png", "", new Boolean(false), "Multi-line text\nright here"},
            {"/buttons.png", "", new Boolean(true), "Further text that\nspans lines"},
            {"/dialogs.png", "", new Boolean(true), "No span"},
            {"/fonts.png", "", new Boolean(false), "Spanning\nFor\nEvery\nWord"},
        });

Table table = new Table(model) {
    protected Component createCell(Object value, final int row,
            final int column, boolean editable) {
        if (row != -1 && column == 0) {
            try {
                // In my case column 0 stores the resource path names
                return new Label(Image.createImage((String) value));
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        }
        return super.createCell(value, row, column, editable);
    }
};
NOTE: If you see the names instead of images in column 0, it means the image path is incorrect; fix it to see the images.
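To put the table on screen you would then add it to a Form as usual; a minimal sketch (the form title is arbitrary):
Form form = new Form("Flights");
form.addComponent(table);
form.show();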
Did you manage to have a look at TableLayoutDemo.java in the LWUITDemo project? If I remember correctly, it comes bundled in the download package LWUIT1.5.zip (or you can always google it).
Let me know if you need more specific help.

Limitation in Cassandra-0.8.1 when using batch mutation

I found some exceptions from Cassandra when I do batch mutations; the message said "already has modifications in this mutation", but the info given shows two different operations.
I use a super column with counters in this case. It's like:
Key: MD5 of URLs, UTF-8
SuperColumnName: date, UTF-8
ColumnName: counter name, a random number from 1 to 200
ColumnValue: 1L
public void SuperCounterMutation(ArrayList<String> urlList) {
    LinkedList<HCounterSuperColumn<String, String>> counterSuperColumns;
    for (String line : urlList) {
        String[] ele = StringUtils.split(StringUtils.strip(line), ':');
        String key = ele[0];
        String SuperColumnName = ele[1];
        LinkedList<HCounterColumn<String>> ColumnList = new LinkedList<HCounterColumn<String>>();
        for (int i = 2; i < ele.length; ++i) {
            ColumnList.add(HFactory.createCounterColumn(ele[i], 1L, ser));
        }
        mutator.addCounter(key, ColumnFamilyName, HFactory.createCounterSuperColumn(SuperColumnName, ColumnList, ser, ser));
        ++count;
        if (count >= BUF_MAX_NUM) {
            try {
                mutator.execute();
            } catch (Exception e) {
                e.printStackTrace();
            }
            mutator = HFactory.createMutator(keyspace, ser);
            count = 0;
        }
    }
    return;
}
The error info from the Cassandra log showed that the duplicated operations share only the same key; the SuperColumnNames are not the same, and the counter-name sets sometimes intersect and sometimes don't.
I'm using Cassandra 0.8.1 with Hector 0.8.0-rc2.
Can anyone tell me the reason for this problem? Thanks in advance!
Error info from cassandra log showed that the duplicated operations have the same key
Bingo. You'll need to combine operations from the same key into a single mutation.
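A minimal sketch of that combining step, reusing the question's own variables (mutator, ser, ColumnFamilyName, urlList are from the post) and its line format key:SuperColumnName:counter:counter:...; the grouping map itself is illustrative:
// Group all counter columns that share the same (key, SuperColumnName) pair
// so each pair is added to the mutator exactly once per mutation.
Map<String, Map<String, LinkedList<HCounterColumn<String>>>> grouped =
        new HashMap<String, Map<String, LinkedList<HCounterColumn<String>>>>();
for (String line : urlList) {
    String[] ele = StringUtils.split(StringUtils.strip(line), ':');
    Map<String, LinkedList<HCounterColumn<String>>> byScol = grouped.get(ele[0]);
    if (byScol == null) {
        byScol = new HashMap<String, LinkedList<HCounterColumn<String>>>();
        grouped.put(ele[0], byScol);
    }
    LinkedList<HCounterColumn<String>> cols = byScol.get(ele[1]);
    if (cols == null) {
        cols = new LinkedList<HCounterColumn<String>>();
        byScol.put(ele[1], cols);
    }
    for (int i = 2; i < ele.length; ++i) {
        cols.add(HFactory.createCounterColumn(ele[i], 1L, ser));
    }
}
// One addCounter per (key, SuperColumnName) pair, then a single execute.
for (Map.Entry<String, Map<String, LinkedList<HCounterColumn<String>>>> e : grouped.entrySet()) {
    for (Map.Entry<String, LinkedList<HCounterColumn<String>>> sc : e.getValue().entrySet()) {
        mutator.addCounter(e.getKey(), ColumnFamilyName,
                HFactory.createCounterSuperColumn(sc.getKey(), sc.getValue(), ser, ser));
    }
}
mutator.execute();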
