How to prevent Azure Table injection?

Is there a general way to prevent Azure Table storage injection?
If the query contains a user-entered string, for example his name, then it is possible to do some injection like: jan' or PartitionKey eq 'kees.
This will end up retrieving both the object jan and the object with PartitionKey kees.
One option is URL encoding. In this case ' and " are encoded, and the injection above is no longer possible.
Is this the best option, or are there better ones?
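For illustration, a minimal sketch (not from the original post) of the URL-encoding idea, assuming java.net.URLEncoder is acceptable as the encoder:
    // Sketch: URL-encode user input so the single quote can no longer
    // terminate the filter string.
    import java.io.UnsupportedEncodingException;
    import java.net.URLEncoder;

    public class EncodeExample {
        public static void main(String[] args) throws UnsupportedEncodingException {
            String userInput = "jan' or PartitionKey eq 'kees";
            String safe = URLEncoder.encode(userInput, "UTF-8");
            System.out.println(safe); // jan%27+or+PartitionKey+eq+%27kees
        }
    }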

In my experience, there are two general ways to prevent Azure Table storage injection.
One is to replace the ' character with another string such as ;, ", or its URL-encoded form. This is your option.
The other is to store the table key in an encoded format (such as Base64) instead of as plain content.
Here is my test Java program:
import org.apache.commons.codec.binary.Base64;

import com.microsoft.azure.storage.CloudStorageAccount;
import com.microsoft.azure.storage.table.CloudTable;
import com.microsoft.azure.storage.table.CloudTableClient;
import com.microsoft.azure.storage.table.TableOperation;
import com.microsoft.azure.storage.table.TableQuery;
import com.microsoft.azure.storage.table.TableQuery.QueryComparisons;

public class TableInjectTest {

    private static final String storageConnectString = "DefaultEndpointsProtocol=http;" + "AccountName=<ACCOUNT_NAME>;"
            + "AccountKey=<ACCOUNT_KEY>";

    public static void reproduce(String query) {
        try {
            CloudStorageAccount storageAccount = CloudStorageAccount.parse(storageConnectString);
            CloudTableClient tableClient = storageAccount.createCloudTableClient();
            // Create the table if it does not exist.
            String tableName = "people";
            CloudTable cloudTable = new CloudTable(tableName, tableClient);
            final String PARTITION_KEY = "PartitionKey";
            String partitionFilter = TableQuery.generateFilterCondition(PARTITION_KEY, QueryComparisons.EQUAL, query);
            System.out.println(partitionFilter);
            TableQuery<CustomerEntity> rangeQuery = TableQuery.from(CustomerEntity.class).where(partitionFilter);
            for (CustomerEntity entity : cloudTable.execute(rangeQuery)) {
                System.out.println(entity.getPartitionKey() + " " + entity.getRowKey() + "\t" + entity.getEmail() + "\t"
                        + entity.getPhoneNumber());
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    /*
     * The first way: replace ' with another symbol string.
     */
    public static String preventByReplace(String query, String symbol) {
        return query.replaceAll("'", symbol);
    }

    public static void addEntityByBase64PartitionKey() {
        try {
            CloudStorageAccount storageAccount = CloudStorageAccount.parse(storageConnectString);
            CloudTableClient tableClient = storageAccount.createCloudTableClient();
            // Create the table if it does not exist.
            String tableName = "people";
            CloudTable cloudTable = new CloudTable(tableName, tableClient);
            String partitionKey = Base64.encodeBase64String("Smith".getBytes());
            CustomerEntity customer = new CustomerEntity(partitionKey, "Will");
            customer.setEmail("will-smith@contoso.com");
            customer.setPhoneNumber("400800600");
            TableOperation insertCustomer = TableOperation.insertOrReplace(customer);
            cloudTable.execute(insertCustomer);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    // The second way: store the PartitionKey in an encoded format such as Base64.
    public static String preventByEncodeBase64(String query) {
        return Base64.encodeBase64String(query.getBytes());
    }

    public static void main(String[] args) {
        String queryNormal = "Smith";
        reproduce(queryNormal);
        /*
         * Output as follows:
         * PartitionKey eq 'Smith'
         * Smith Ben Ben@contoso.com 425-555-0102
         * Smith Denise Denise@contoso.com 425-555-0103
         * Smith Jeff Jeff@contoso.com 425-555-0105
         */
        String queryInjection = "Smith' or PartitionKey lt 'Z";
        reproduce(queryInjection);
        /*
         * Output as follows:
         * PartitionKey eq 'Smith' or PartitionKey lt 'Z'
         * Webber Peter Peter@contoso.com 425-555-0101 <= This is my information
         * Smith Ben Ben@contoso.com 425-555-0102
         * Smith Denise Denise@contoso.com 425-555-0103
         * Smith Jeff Jeff@contoso.com 425-555-0105
         */
        reproduce(preventByReplace(queryNormal, "\"")); // Same result as queryNormal
        reproduce(preventByReplace(queryInjection, "\"")); // No results, because the filter becomes: PartitionKey eq 'Smith" or PartitionKey lt "Z'
        reproduce(preventByReplace(queryNormal, "&")); // Same result as queryNormal
        reproduce(preventByReplace(queryInjection, "&")); // No results, because the filter becomes: PartitionKey eq 'Smith& or PartitionKey lt &Z'
        /*
         * The second prevention way.
         */
        addEntityByBase64PartitionKey(); // Will Smith
        reproduce(preventByEncodeBase64(queryNormal));
        /*
         * Output as follows:
         * PartitionKey eq 'U21pdGg='
         * U21pdGg= Will will-smith@contoso.com 400800600 <= The Base64 string can be decoded back to "Smith"
         */
        reproduce(preventByEncodeBase64(queryInjection)); // No results
        /*
         * Output as follows:
         * PartitionKey eq 'U21pdGgnIG9yIFBhcnRpdGlvbktleSBsdCAnWg=='
         */
    }
}
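For completeness, a hedged sketch (not part of the original program) of decoding a Base64-encoded PartitionKey back to its original value for display, using the same commons-codec Base64 class imported above:
    // Hypothetical helper: decode the stored PartitionKey for display purposes.
    public static String decodePartitionKey(String encodedKey) {
        return new String(Base64.decodeBase64(encodedKey)); // "U21pdGg=" -> "Smith"
    }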
I think the best option is to choose a suitable prevention approach based on your application scenario.
Any concerns, please feel free to let me know.

Related

Cannot insert into Cassandra table, getting SyntaxError

I have an assignment where I have to build a Cassandra database. I have connected Cassandra with IntelliJ, I'm writing in Java, and the output is shown in the command line.
My keyspace farm_db contains a couple of tables into which I would like to insert data. I would like to insert data with two columns and a list, all in one row, into the table 'farmers'. This is a part of my database so far:
cqlsh:farm_db> use farm_db;
cqlsh:farm_db> Describe tables;
farmers foods_dairy_eggs foods_meat
foods_bread_cookies foods_fruit_vegetables
cqlsh:farm_db> select * from farmers;
farmer_id | delivery | the_farmer
-----------+----------+------------
This is what I'm trying to do:
[Picture of what I'm trying to do][1]
I need to insert the collection types 'list' and 'map' into 'farmers', but after a couple of failed attempts with that I tried using HashMap and ArrayList instead. I think this could work, but I seem to have an error in my syntax and I have no idea what the problem is:
Exception in thread "main" com.datastax.driver.core.exceptions.SyntaxError: line 1:31 mismatched input 'int' expecting ')' (INSERT INTO farmers (farmer_id [int]...)
Am I missing something or am I doing something wrong?
This is my code:
public class FarmersClass {
    public static String serverIP = "127.0.0.1";
    public static String keyspace = "";

    // Create db
    public void crateDatabase(String databaseName) {
        Cluster cluster = Cluster.builder()
                .addContactPoints(serverIP)
                .build();
        keyspace = databaseName;
        Session session = cluster.connect();
        String create_db_query = "CREATE KEYSPACE farm_db WITH replication "
                + "= {'class':'SimpleStrategy', 'replication_factor':1};";
        session.execute(create_db_query);
    }

    // Create table
    public void createFarmersTable() {
        Cluster cluster = Cluster.builder()
                .addContactPoints(serverIP)
                .build();
        Session session = cluster.connect("farm_db");
        String create_farm_table_query = "CREATE TABLE farmers(farmer_id int PRIMARY KEY, the_farmer Map <text, text>, delivery list<text>); ";
        session.execute(create_farm_table_query);
    }

    // Insert data in table 'farmers'.
    public void insertFarmers(int id, HashMap<String, String> the_farmer, ArrayList<String> delivery) {
        Cluster cluster = Cluster.builder()
                .addContactPoints(serverIP)
                .build();
        Session session = cluster.connect("farm_db");
        String insert_query = "INSERT INTO farmers (farmer_id int PRIMARY KEY, the_farmer, delivery) values (" + id + "," + the_farmer + "," + delivery + ");";
        System.out.println(insert_query);
        session.execute(insert_query);
    }
}
public static void main(String[] args) {
    FarmersClass farmersClass = new FarmersClass();
    // FarmersClass.crateDatabase("farm_db");
    // FarmersClass.createFarmersTable();

    // Collection type map
    HashMap<String, String> the_farmer = new HashMap<>();
    the_farmer.put("Name", "Ana Petersen ");
    the_farmer.put("Farmhouse", "The great farmhouse");
    the_farmer.put("Foods", "Fruits & Vegetables");

    // Collection type list
    ArrayList<String> delivery = new ArrayList<String>();
    String delivery_1 = "Village 1";
    String delivery_2 = "Village 2";
    delivery.add(delivery_1);
    delivery.add(delivery_2);

    FarmersClass.insertFarmers(1, the_farmer, delivery);
}
The problem is the syntax of your CQL INSERT query:
String insert_query = "INSERT INTO farmers (farmer_id int PRIMARY KEY, the_farmer, delivery) "
        + "values (" + id + "," + the_farmer + "," + delivery + ");";
You've incorrectly added int PRIMARY KEY in the list of columns.
The correct format is:
INSERT INTO table_name (pk, col2, col3) VALUES ( ... )
For details and examples, see CQL INSERT. Cheers!
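As a hedged addition (not part of the original answer): even with the column list fixed, concatenating a Java HashMap or ArrayList into the statement will not produce valid CQL collection literals. A minimal sketch of a prepared statement, using the DataStax driver Session already present in the question, which lets the driver encode the map and list:
    // Sketch: bind the Java collections instead of concatenating them into CQL.
    // Requires: import com.datastax.driver.core.PreparedStatement;
    String insert_query = "INSERT INTO farmers (farmer_id, the_farmer, delivery) VALUES (?, ?, ?);";
    PreparedStatement prepared = session.prepare(insert_query);
    session.execute(prepared.bind(id, the_farmer, delivery));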

SQL Parser Visitor + Metabase + Presto

I'm facing what seems to be a fairly easy problem, but I'm not able to get my head around it to find a suitable solution.
Problem:
I need to append the schema to my SQL statement in a "weird" way (with the schema in double quotes):
FROM "SCHEMA".tableB tableB
LEFT JOIN "SCHEMA".tableC tableC
Context
Basically, we are hosting and exposing a Metabase tool that connects to and performs queries on our Hive database using Presto SQL.
Metabase allows the customer to write SQL statements, and some customers just don't type the schema in their statements. Today we throw an error for those queries, but I could easily retrieve the schema value from the Authorization header, since in our multi-tenant product the schema is the tenant id where the user is logged in; with that information in hand, I could append it to the customer's SQL statement and avoid the error.
Imagine that the customer typed the following statement:
SELECT tableA.*
     , (tableA.valorfaturado + tableA.valorcortado) valorpedido
  FROM (SELECT from_unixtime(tableB.datacorte / 1000) datacorte
             , COALESCE((tableB.quantidadecortada * tableC.preco), 0) valorcortado
             , COALESCE((tableB.quantidade * tableC.preco), 0) valorfaturado
             , tableB.quantidadecortada
          FROM tableB tableB
          LEFT JOIN tableC tableC
            ON tableC.numeropedido = tableB.numeropedido
           AND tableC.codigoproduto = tableB.codigoproduto
           AND tableC.codigofilial = tableB.codigofilial
          LEFT JOIN tableD tableD
            ON tableD.numero = tableB.numeropedido
         WHERE (CASE
                  WHEN COALESCE(tableB.codigofilial, '') = '' THEN
                    tableD.codigofilial
                  ELSE
                    tableB.codigofilial
                END) = '10'
           AND from_unixtime(tableB.datacorte / 1000) BETWEEN from_iso8601_timestamp('2020-07-01T03:00:00.000Z') AND from_iso8601_timestamp('2020-08-01T02:59:59.999Z')) tableA
 ORDER BY datacorte
I should convert this into (adding the "SCHEMA"):
SELECT tableA.*
     , (tableA.valorfaturado + tableA.valorcortado) valorpedido
  FROM (SELECT from_unixtime(tableB.datacorte / 1000) datacorte
             , COALESCE((tableB.quantidadecortada * tableC.preco), 0) valorcortado
             , COALESCE((tableB.quantidade * tableC.preco), 0) valorfaturado
             , tableB.quantidadecortada
          FROM "SCHEMA".tableB tableB
          LEFT JOIN "SCHEMA".tableC tableC
            ON tableC.numeropedido = tableB.numeropedido
           AND tableC.codigoproduto = tableB.codigoproduto
           AND tableC.codigofilial = tableB.codigofilial
          LEFT JOIN "SCHEMA".tableD tableD
            ON tableD.numero = tableB.numeropedido
         WHERE (CASE
                  WHEN COALESCE(tableB.codigofilial, '') = '' THEN
                    tableD.codigofilial
                  ELSE
                    tableB.codigofilial
                END) = '10'
           AND from_unixtime(tableB.datacorte / 1000) BETWEEN from_iso8601_timestamp('2020-07-01T03:00:00.000Z') AND from_iso8601_timestamp('2020-08-01T02:59:59.999Z')) tableA
 ORDER BY datacorte
I'm still trying to find a solution that uses only presto-parser and a Visitor + Instrumentation approach.
Also, I know about JSQLParser and I tried it, but I always come back to trying to find a "plain" solution, worried that JSQLParser will not be able to support all the Presto/Hive queries, which are a little different from standard SQL.
I created a little project on GitHub with a test case to validate it:
https://github.com/genyherrera/prestosqlerror
But for those who don't want to clone a repository, here are the classes and dependencies:
import java.util.Optional;

import com.facebook.presto.sql.SqlFormatter;
import com.facebook.presto.sql.parser.ParsingOptions;
import com.facebook.presto.sql.parser.SqlParser;

public class SchemaAwareQueryAdapter {

    // Inspired by
    // https://github.com/prestodb/presto/tree/master/presto-parser/src/test/java/com/facebook/presto/sql/parser
    private static final SqlParser SQL_PARSER = new SqlParser();

    public String rewriteSql(String sqlStatement, String schemaId) {
        com.facebook.presto.sql.tree.Statement statement = SQL_PARSER.createStatement(sqlStatement, ParsingOptions.builder().build());
        SchemaAwareQueryVisitor visitor = new SchemaAwareQueryVisitor(schemaId);
        statement.accept(visitor, null);
        return SqlFormatter.formatSql(statement, Optional.empty());
    }
}
import java.lang.reflect.Field;
import java.util.List;

import com.facebook.presto.sql.tree.DefaultTraversalVisitor;
import com.facebook.presto.sql.tree.QualifiedName;
import com.facebook.presto.sql.tree.Table;

public class SchemaAwareQueryVisitor extends DefaultTraversalVisitor<Void, Void> {

    private String schemaId;

    public SchemaAwareQueryVisitor(String schemaId) {
        super();
        this.schemaId = schemaId;
    }

    /**
     * The customer can type:
     * [table name]
     * [schema].[table name]
     * [catalog].[schema].[table name]
     */
    @Override
    protected Void visitTable(Table node, Void context) {
        List<String> parts = node.getName().getParts();
        // [table name] -> the only case we need to modify, so check parts.size() == 1
        if (parts.size() == 1) {
            try {
                Field privateStringField = Table.class.getDeclaredField("name");
                privateStringField.setAccessible(true);
                QualifiedName qualifiedName = QualifiedName.of("\"" + schemaId + "\"", node.getName().getParts().get(0));
                privateStringField.set(node, qualifiedName);
            } catch (NoSuchFieldException | SecurityException | IllegalArgumentException | IllegalAccessException e) {
                throw new SecurityException("Unable to execute query");
            }
        }
        return null;
    }
}
import static org.testng.Assert.assertEquals;

import org.gherrera.prestosqlparser.SchemaAwareQueryAdapter;
import org.testng.annotations.Test;

public class SchemaAwareTest {

    private static final String schemaId = "SCHEMA";
    private SchemaAwareQueryAdapter adapter = new SchemaAwareQueryAdapter();

    @Test
    public void testAppendSchemaA() {
        String sql = "select * from tableA";
        String bound = adapter.rewriteSql(sql, schemaId);
        assertEqualsFormattingStripped(bound,
                "select * from \"SCHEMA\".tableA");
    }

    private void assertEqualsFormattingStripped(String sql1, String sql2) {
        assertEquals(sql1.replace("\n", " ").toLowerCase().replace("\r", " ").replaceAll(" +", " ").trim(),
                sql2.replace("\n", " ").toLowerCase().replace("\r", " ").replaceAll(" +", " ").trim());
    }
}
<dependencies>
    <dependency>
        <groupId>com.facebook.presto</groupId>
        <artifactId>presto-parser</artifactId>
        <version>0.229</version>
    </dependency>
    <dependency>
        <groupId>org.testng</groupId>
        <artifactId>testng</artifactId>
        <version>6.10</version>
        <scope>test</scope>
    </dependency>
</dependencies>
PS: I was able to add the schema without the double quotes, but then I ran into the error: identifiers must not start with a digit; surround the identifier with double quotes. Basically this error comes from the SqlParser$PostProcessor.exitDigitIdentifier(...) method.
Thanks
I was able to find a solution for my case; either way, I will share my finding on the Presto Slack to see if this is expected behavior.
So, if you want to append your schema with double quotes, you will need to create your own Visitor class and override the method visitTable, and when you qualify the name of your table with the schema (here's the trick), pass the schema as UPPERCASE, so it will not match the regex pattern in the SqlFormatter class's formatName method and it will add the double quotes:
import java.lang.reflect.Field;

import com.facebook.presto.sql.tree.DefaultTraversalVisitor;
import com.facebook.presto.sql.tree.QualifiedName;
import com.facebook.presto.sql.tree.Table;

public class SchemaAwareQueryVisitor extends DefaultTraversalVisitor<Void, Void> {

    private String schemaId;

    public SchemaAwareQueryVisitor(String schemaId) {
        super();
        this.schemaId = schemaId;
    }

    @Override
    protected Void visitTable(Table node, Void context) {
        try {
            Field privateStringField = Table.class.getDeclaredField("name");
            privateStringField.setAccessible(true);
            QualifiedName qualifiedName = QualifiedName.of(schemaId, node.getName().getParts().get(0));
            privateStringField.set(node, qualifiedName);
        } catch (NoSuchFieldException
                | SecurityException
                | IllegalArgumentException
                | IllegalAccessException e) {
            throw new SecurityException("Unable to execute query");
        }
        return null;
    }
}
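A hedged usage sketch, reusing the SchemaAwareQueryAdapter from earlier in the question with this visitor; the expected output matches the test case above:
    // The schema is passed in UPPERCASE so SqlFormatter's formatName will quote it.
    SchemaAwareQueryAdapter adapter = new SchemaAwareQueryAdapter();
    String rewritten = adapter.rewriteSql("select * from tableA", "SCHEMA");
    // rewritten -> select * from "SCHEMA".tableA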

Programmatically create a new Person having an Internal Login

I am trying to programmatically create Users with Internal Accounts as part of a testing system. The following code cannot create an InternalLogin because there is no password hash set at object creation time.
Can a person + internal account be created using metadata_* functions?
data _null_;
  length uri_Person uri_PW uri_IL $256;
  call missing (of uri:);

  rc = metadata_getnobj ("Person?@Name='testbot02'", 1, uri_Person); msg=sysmsg();
  put 'NOTE: Get Person, ' rc= uri_Person= / msg;

  if rc = -4 then do;
    rc = metadata_newobj ('Person', uri_Person, 'testbot02'); msg=sysmsg();
    put 'NOTE: New Person, ' rc= uri_Person= / msg;
  end;

  rc = metadata_setattr (uri_Person, 'DisplayName', 'Test Bot #2'); msg=sysmsg();
  put 'NOTE: SetAttr, ' rc= / msg;

  rc = metadata_newobj ('InternalLogin', uri_IL, 'autobot - IL', 'Foundation', uri_Person, 'InternalLoginInfo'); msg=sysmsg();
  put 'NOTE: New InternalLogin, ' rc= / msg;
run;
Logs
NOTE: Get Person, rc=-4 uri_Person=
NOTE: New Person, rc=0 uri_Person=OMSOBJ:Person\A5OJU4RB.AP0000SX
NOTE: SetAttr, rc=0
NOTE: New InternalLogin, rc=-2
ERROR: AddMetadata of InternalLogin is missing required property PasswordHash.
NOTE: DATA statement used (Total process time):
real time 0.02 seconds
cpu time 0.01 seconds
The management console and Metabrowse were used to find which objects might need to be created.
[Metabrowse screenshot]
I ended up using Java to create the new users.
An ISecurity_1_1 instance from MakeISecurityConnection() on the metadata connection was used to SetInternalPassword.
Java ended up being a lot clearer, coding-wise.
package sandbox.sas.metadata;

import com.sas.iom.SASIOMDefs.GenericError;
import com.sas.metadata.remote.*;
import com.sas.meta.SASOMI.*;
import java.rmi.RemoteException;
import java.util.List;

/**
 *
 * @author Richard
 */
public class Main {

    public static final String TESTGROUPNAME = "Automated Test Users";

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        String metaserver = "################";
        String metaport = "8561";
        String omrUser = "sasadm@saspw";
        String omrPass = "##########";

        try {
            MdFactoryImpl metadata = new MdFactoryImpl(false);
            metadata.makeOMRConnection(metaserver, metaport, omrUser, omrPass);
            MdOMIUtil util = metadata.getOMIUtil();
            ISecurity_1_1 security = metadata.getConnection().MakeISecurityConnection();
            MdObjectStore workunit = metadata.createObjectStore();
            String foundationID = util.getFoundationReposID();
            String foundationShortID = foundationID.substring(foundationID.indexOf(".") + 1);

            // Create group for test bot accounts
            List identityGroups = util.getMetadataObjects(foundationID, MetadataObjects.IDENTITYGROUP, MdOMIUtil.OMI_XMLSELECT,
                    String.format("<XMLSelect search=\"IdentityGroup[@Name='%s']\"/>", TESTGROUPNAME));
            IdentityGroup identityGroup;
            if (identityGroups.isEmpty()) {
                identityGroup = (IdentityGroup) metadata.createComplexMetadataObject(workunit, TESTGROUPNAME, MetadataObjects.IDENTITYGROUP, foundationShortID);
                identityGroup.setDisplayName(TESTGROUPNAME);
                identityGroup.setDesc("Group for Automated Test Users performing concurrent Stored Process execution");
                identityGroup.updateMetadataAll();
            }
            identityGroups = util.getMetadataObjectsSubset(workunit, foundationID, MetadataObjects.IDENTITYGROUP, MdOMIUtil.OMI_XMLSELECT,
                    String.format("<XMLSelect search=\"IdentityGroup[@Name='%s']\"/>", TESTGROUPNAME));
            identityGroup = (IdentityGroup) identityGroups.get(0);

            // Create (or update) test bot accounts
            for (int index = 1; index <= 25; index++) {
                String token = String.format("%02d", index);
                String personName = String.format("testbot%s", token);
                String password = String.format("testbot%s%s", token, token); // simple dynamically generated password for user (for testing scripts only)
                String criteria = String.format("Person[@Name='%s']", personName);
                String xmlSelect = String.format("<XMLSelect search=\"%s\"/>", criteria);
                List persons = util.getMetadataObjectsSubset(workunit, foundationID, MetadataObjects.PERSON, MdOMIUtil.OMI_XMLSELECT | MdOMIUtil.OMI_GET_METADATA | MdOMIUtil.OMI_ALL_SIMPLE, xmlSelect);
                Person person;
                if (persons.size() == 1) {
                    person = (Person) persons.get(0);
                    System.out.println(String.format("Have %s %s", person.getName(), person.getDisplayName()));
                } else {
                    person = (Person) metadata.createComplexMetadataObject(workunit, personName, MetadataObjects.PERSON, foundationShortID);
                    person.setDisplayName(String.format("Test Bot #%s", token));
                    person.setDesc("Internal account for testing purposes");
                    System.out.println(String.format("Make %s, %s (%s)", person.getName(), person.getDisplayName(), person.getDesc()));
                    person.updateMetadataAll();
                }
                security.SetInternalPassword(personName, password);
                AssociationList personGroups = person.getIdentityGroups();
                personGroups.add(identityGroup);
                person.setIdentityGroups(personGroups);
                person.updateMetadataAll();
            }

            workunit.dispose();
            metadata.closeOMRConnection();
            metadata.dispose();
        } catch (GenericError | MdException | RemoteException e) {
            System.out.println(e.getLocalizedMessage());
            System.exit(1);
        }
    }
}

Spring data Cassandra 2.0 Select BLOB column returns incorrect ByteBuffer data

Context: Spring data cassandra official 1.0.2.RELEASE from Maven Central repo, CQL3, cassandra 2.0, datastax driver 2.0.4
Background: The cassandra blob data type is mapped to a Java ByteBuffer.
The sample code below demonstrates that you won't retrieve the correct bytes using a select next to an equivalent insert. The data actually retrieved is prefixed by numerous garbage bytes that look like a serialization of the entire row.
An older post relating to Cassandra 1.2 suggested that we may have to start at ByteBuffer.arrayOffset() for length ByteBuffer.remaining(), but the arrayOffset value is actually 0.
I discovered a spring-data-cassandra 2.0.0.SNAPSHOT, but the CassandraOperations API is much different, and its package name too: org.springdata... versus org.springframework...
Help in fixing this would be much appreciated.
In the meantime it looks like I have to Base64 encode/decode my binary data to/from a text data type column.
--- here is the simple table CQL meta data I use -------------
CREATE TABLE person (
    id text,
    age int,
    name text,
    pict blob,
    PRIMARY KEY (id)
);
--- follows the simple data object mapped to a CQL table ---
package org.spring.cassandra.example;

import java.nio.ByteBuffer;

import org.springframework.data.cassandra.mapping.PrimaryKey;
import org.springframework.data.cassandra.mapping.Table;

@Table
public class Person {

    @PrimaryKey
    private String id;

    private int age;
    private String name;
    private ByteBuffer pict;

    public Person(String id, int age, String name, ByteBuffer pict) {
        this.id = id; this.name = name; this.age = age; this.pict = pict;
    }

    public String getId() { return id; }
    public String getName() { return name; }
    public int getAge() { return age; }
    public ByteBuffer getPict() { return pict; }
}
--- and the plain java application code that simply inserts and retrieves a person object --
package org.spring.cassandra.example;

import java.nio.ByteBuffer;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import org.springframework.data.cassandra.core.CassandraOperations;

import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.querybuilder.QueryBuilder;
import com.datastax.driver.core.querybuilder.Select;

public class CassandraApp {

    private static final Logger logger = LoggerFactory
            .getLogger(CassandraApp.class);

    public static String hexDump(ByteBuffer bb) {
        char[] hexArray = "0123456789ABCDEF".toCharArray();
        bb.rewind();
        char[] hexChars = new char[bb.limit() * 2];
        for (int j = 0; j < bb.limit(); j++) {
            int v = bb.get() & 0xFF;
            hexChars[j * 2] = hexArray[v >>> 4];
            hexChars[j * 2 + 1] = hexArray[v & 0x0F];
        }
        bb.rewind();
        return new String(hexChars);
    }

    public static void main(String[] args) {
        ApplicationContext applicationContext = new ClassPathXmlApplicationContext("app-context.xml");
        try {
            CassandraOperations cassandraOps = applicationContext.getBean(
                    "cassandraTemplate", CassandraOperations.class);
            cassandraOps.truncate("person");

            // prepare data
            byte[] ba = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x11, 0x22, 0x33, 0x44, 0x55, (byte) 0xAA, (byte) 0xCC, (byte) 0xFF };
            ByteBuffer myPict = ByteBuffer.wrap(ba);
            String myId = "1234567890";
            String myName = "mickey";
            int myAge = 50;

            logger.info("We try id=" + myId + ", name=" + myName + ", age=" + myAge + ", pict=" + hexDump(myPict));
            cassandraOps.insert(new Person(myId, myAge, myName, myPict));

            Select s = QueryBuilder.select("id", "name", "age", "pict").from("person");
            s.where(QueryBuilder.eq("id", myId));
            ResultSet rs = cassandraOps.query(s);
            Row r = rs.one();
            logger.info("We got id=" + r.getString(0) + ", name=" + r.getString(1) + ", age=" + r.getInt(2) + ", pict=" + hexDump(r.getBytes(3)));
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
--- assuming you have configured a simple Spring project for cassandra, as explained at http://projects.spring.io/spring-data-cassandra/ ---
The actual execution yields:
[main] INFO org.spring.cassandra.example.CassandraApp - We try id=1234567890, name=mickey, age=50, pict= 0001020304051122334455AACCFF
[main] INFO org.spring.cassandra.example.CassandraApp - We got id=1234567890, name=mickey, age=50, pict=8200000800000073000000020000000100000004000A6D796B657973706163650006706572736F6E00026964000D00046E616D65000D000361676500090004706963740003000000010000000A31323334353637383930000000066D69636B657900000004000000320000000E 0001020304051122334455AACCFF
although the insert looks correct in the database itself, as seen from the cqlsh command line:
cqlsh:mykeyspace> select * from person;
id | age | name | pict
------------+-----+--------+--------------------------------
1234567890 | 50 | mickey | 0x0001020304051122334455aaccff
(1 rows)
I had exactly the same problem but fortunately found a solution.
The problem is that ByteBuffer use is confusing. Try doing something like:

    ByteBuffer bb = resultSet.one().getBytes("column_name");
    byte[] data = new byte[bb.remaining()];
    bb.get(data);

Thanks to Sylvain for this suggestion here: http://grokbase.com/t/cassandra/user/134brvqzd3/blobs-in-cql
Have a look at the Bytes class of the DataStax Java Driver; it provides what you need to encode/decode your data.
In this post I wrote a usage example; a short sketch follows.
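A minimal hedged sketch of that approach, assuming the com.datastax.driver.core.utils.Bytes class from the driver version used in the question:
    // Sketch: extract the blob bytes via the driver's Bytes utility instead of
    // reading the ByteBuffer's backing array directly ("row" is the Row from above).
    ByteBuffer bb = row.getBytes("pict");
    byte[] data = Bytes.getArray(bb);    // copies only the buffer's remaining bytes
    String hex = Bytes.toHexString(bb);  // e.g. "0x0001020304051122334455aaccff"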
HTH,
Carlo

Is it possible to do paging with JoinSqlBuilder?

I have a pretty normal join that I create via JoinSqlBuilder
var joinSqlBuilder = new JoinSqlBuilder<ProductWithManufacturer, Product>()
    .Join<Product, Manufacturer>(sourceColumn: p => p.ManufacturerId,
        destinationColumn: mf => mf.Id,
        sourceTableColumnSelection: p => new { ProductId = p.Id, ProductName = p.Name },
        destinationTableColumnSelection: m => new { ManufacturerId = m.Id, ManufacturerName = m.Name });
Of course, the join created by this could potentially return a lot of rows, so I want to use paging - preferably on the server side. However, I cannot find anything in JoinSqlBuilder that would let me do this. Am I missing something, or does JoinSqlBuilder not support this (yet)?
If you aren't using MS SQL Server I think the following will work.
var sql = joinSqlBuilder.ToSql();
var data = this.Select<ProductWithManufacturer>(
    q => q.Select(sql)
          .Limit(skip, rows)
);
If you are working with MS SQL Server, it will most likely blow up on you. I am working to merge a more elegant solution similar to this into JoinSqlBuilder. The following is a quick and dirty method to accomplish what you want.
I created the following extension class:
using System;
using System.Linq.Expressions;
using System.Reflection;

public static class Extension
{
    private static string ToSqlWithPaging<TResult, TTarget>(
        this JoinSqlBuilder<TResult, TTarget> bldr,
        string orderColumnName,
        int limit,
        int skip)
    {
        var sql = bldr.ToSql();
        return string.Format(@"
SELECT * FROM (
    SELECT ROW_NUMBER() OVER (ORDER BY [{0}]) As RowNum, *
    FROM (
        {1}
    ) as InnerResult
) as RowConstrainedResult
WHERE RowNum > {2} AND RowNum <= {3}
", orderColumnName, sql, skip, skip + limit);
    }

    public static string ToSqlWithPaging<TResult, TTarget>(
        this JoinSqlBuilder<TResult, TTarget> bldr,
        Expression<Func<TResult, object>> orderSelector,
        int limit,
        int skip)
    {
        var member = orderSelector.Body as MemberExpression;
        if (member == null)
            throw new ArgumentException(
                "TResult selector refers to a non member."
            );
        var propInfo = member.Member as PropertyInfo;
        if (propInfo == null)
            throw new ArgumentException(
                "TResult selector refers to a field, it must be a property."
            );
        var orderSelectorName = propInfo.Name;
        return ToSqlWithPaging(bldr, orderSelectorName, limit, skip);
    }
}
It is applied as follows:
List<Entity> GetAllEntities(int limit, int skip)
{
    var bldr = GetJoinSqlBuilderFor<Entity>();
    var sql = bldr.ToSqlWithPaging(
        entity => entity.Id,
        limit,
        skip);
    return this.Db.Select<Entity>(sql);
}
