I want to read the schema of a keyspace in Cassandra.
I know that in cassandra-cli we can execute the following command to get the schema:
show schema keyspace1;
But I want to read the schema from a remote machine using Java.
How can I solve this?
I solved this using the Thrift client:
KsDef keyspaceDefinition = _client.describe_keyspace(_keyspace);
List<CfDef> columnDefinition = keyspaceDefinition.getCf_defs();
Here the keyspace definition contains the whole schema, so from that KsDef we can read whatever we want. In my case I want the metadata, so I read the column metadata from the column definitions above, as shown below.
for (int i = 0; i < columnDefinition.size(); i++) {
    List<ColumnDef> columnMetadata = columnDefinition.get(i).getColumn_metadata();
    for (int j = 0; j < columnMetadata.size(); j++) {
        columnfamilyNames.add(columnDefinition.get(i).getName());
        columnNames.add(new String(columnMetadata.get(j).getName()));
        validationClasses.add(columnMetadata.get(j).getValidation_class());
    }
}
Here columnfamilyNames, columnNames, and validationClasses are ArrayLists.
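A note on the new String(...) call above: Thrift hands column names back as raw bytes, so decoding with an explicit charset avoids platform-default surprises. A standalone sketch of that decoding step (pure JDK, no Thrift dependency; the buffer contents here are fabricated for illustration):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class ColumnNameDecode {
    // Decode a Thrift-style binary column name into a String using UTF-8
    // explicitly, rather than new String(bytes) with the platform default.
    static String decode(ByteBuffer raw) {
        byte[] bytes = new byte[raw.remaining()];
        raw.duplicate().get(bytes); // duplicate() leaves the caller's buffer position untouched
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        ByteBuffer name = ByteBuffer.wrap("user_email".getBytes(StandardCharsets.UTF_8));
        System.out.println(decode(name)); // user_email
    }
}
```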
I updated these lines of code to support spring-data-cassandra-2.0.7.RELEASE:
CassandraOperations cOps = new CassandraTemplate(session);
From:
Insert insertStatement = (Insert)statement;
CqlTemplate.addWriteOptions(insertStatement, queryWriteOptions);
cOps.execute(insertStatement);
To:
Insert insertStatement = (Insert) statement;
insertStatement = QueryOptionsUtil.addWriteOptions(insertStatement, queryWriteOptions);
cOps.insert(insertStatement);
The above changes throw the error below:
Caused by: org.springframework.dao.InvalidDataAccessApiUsageException: Unknown type [interface com.datastax.driver.core.policies.RetryPolicy] for property [retryPolicy] in entity [com.datastax.driver.core.querybuilder.Insert]; only primitive types and Collections or Maps of primitive types are allowed
at org.springframework.data.cassandra.core.mapping.BasicCassandraPersistentProperty.getDataType(BasicCassandraPersistentProperty.java:170)
at org.springframework.data.cassandra.core.mapping.CassandraMappingContext.lambda$null$10(CassandraMappingContext.java:552)
at java.util.Optional.orElseGet(Optional.java:267)
at org.springframework.data.cassandra.core.mapping.CassandraMappingContext.lambda$getDataTypeWithUserTypeFactory$11(CassandraMappingContext.java:542)
at java.util.Optional.orElseGet(Optional.java:267)
at org.springframework.data.cassandra.core.mapping.CassandraMappingContext.getDataTypeWithUserTypeFactory(CassandraMappingContext.java:527)
at org.springframework.data.cassandra.core.mapping.CassandraMappingContext.getDataType(CassandraMappingContext.java:486)
at org.springframework.data.cassandra.core.convert.MappingCassandraConverter.getPropertyTargetType(MappingCassandraConverter.java:689)
at org.springframework.data.cassandra.core.convert.MappingCassandraConverter.lambda$getTargetType$0(MappingCassandraConverter.java:682)
at java.util.Optional.orElseGet(Optional.java:267)
at org.springframework.data.cassandra.core.convert.MappingCassandraConverter.getTargetType(MappingCassandraConverter.java:670)
at org.springframework.data.cassandra.core.convert.MappingCassandraConverter.getWriteValue(MappingCassandraConverter.java:711)
at org.springframework.data.cassandra.core.convert.MappingCassandraConverter.writeInsertFromWrapper(MappingCassandraConverter.java:403)
at org.springframework.data.cassandra.core.convert.MappingCassandraConverter.writeInsertFromObject(MappingCassandraConverter.java:360)
at org.springframework.data.cassandra.core.convert.MappingCassandraConverter.write(MappingCassandraConverter.java:345)
at org.springframework.data.cassandra.core.convert.MappingCassandraConverter.write(MappingCassandraConverter.java:320)
at org.springframework.data.cassandra.core.QueryUtils.createInsertQuery(QueryUtils.java:78)
at org.springframework.data.cassandra.core.CassandraTemplate.insert(CassandraTemplate.java:442)
at org.springframework.data.cassandra.core.CassandraTemplate.insert(CassandraTemplate.java:430)
The query passed as input is of type com.datastax.driver.core.querybuilder.Insert and contains:
INSERT INTO person (name,id,age) VALUES ('name01','123',23) USING TIMESTAMP 1528922717378000 AND TTL 60;
The QueryOptions passed contain a RetryPolicy and a consistency level.
Based on the documentation I followed, the above changes do not work. Can anyone let me know what is wrong here?
I'm using spring-data-cassandra 2.0.7.RELEASE with Cassandra driver 3.5.0.
I was able to make it work with the change below:
cOps.getCqlOperations().execute(insertStatement);
How can I check whether the consistency level was actually applied?
For me, this works:
batchOps.insert(ImmutableSet.of(entity), insertOptions);
I am using SchemaCrawler to get the metadata for MySQL 5.7 tables using the following code:
final Connection connection = ...;
final DatabaseSpecificOverrideOptions databaseSpecificOverrideOptions =
    SchemaCrawlerUtility.matchDatabaseSpecificOverrideOptions(connection);
final SchemaCrawler schemaCrawler = new SchemaCrawler(connection, databaseSpecificOverrideOptions);
final SchemaCrawlerOptions options = new SchemaCrawlerOptions();
options.setSchemaInfoLevel(SchemaInfoLevelBuilder.maximum());
options.setTableInclusionRule(new IncludeAll());
options.setColumnInclusionRule(new IncludeAll());
final Catalog catalog = schemaCrawler.crawl(options);
final Collection<Table> tables = catalog.getTables();
for (final Table table : tables) {
    logger.info("Table comment: {}", table.getRemarks());
}
This is the test table:
create table testtable (
id bigint comment 'key level comment',
name varchar(32) comment 'column level comment'
) comment='table level comment';
I can get the column-level comments, but I never can get the table-level comment.
Is there anything I misconfigured?
Thanks!
This is an annoyance with the MySQL JDBC driver. You need to set useInformationSchema=true in your JDBC connection URL when creating a connection. For more information, please take a look at the Stack Overflow question, Retrieve mysql table comment using DatabaseMetaData.
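For reference, a minimal sketch of building such a URL; the host, port, and database name here are placeholders, not values from the question:

```java
public class MySqlUrl {
    // Append useInformationSchema=true so the MySQL JDBC driver reads
    // table remarks (comments) from INFORMATION_SCHEMA instead of
    // returning empty remarks.
    static String withInformationSchema(String host, int port, String database) {
        return "jdbc:mysql://" + host + ":" + port + "/" + database
                + "?useInformationSchema=true";
    }

    public static void main(String[] args) {
        System.out.println(withInformationSchema("localhost", 3306, "mydb"));
        // jdbc:mysql://localhost:3306/mydb?useInformationSchema=true
    }
}
```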
Sualeh Fatehi, SchemaCrawler
If I use this command in the CQL shell, I get the names of all the tables in that keyspace:
DESCRIBE TABLES;
I want to retrieve the same data using a ResultSet. Below is my Java code:
String query = "DESCRIBE TABLES;";
ResultSet rs = session.execute(query);
for (Row row : rs) {
    System.out.println(row);
}
The session and cluster were initialized earlier as:
Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
Session session = cluster.connect("keyspace_name");
Alternatively, I would like to know the Java code to retrieve the table names in a keyspace.
The schema for the system tables changes quite a bit between versions. It is best to rely on the driver's Metadata, which has version-specific parsing built in. With the Java driver:
Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
Collection<TableMetadata> tables = cluster.getMetadata()
        .getKeyspace("keyspace_name")
        .getTables(); // TableMetadata has the name in getName(), along with lots of other info

// To convert to a list of the names:
List<String> tableNames = tables.stream()
        .map(tm -> tm.getName())
        .collect(Collectors.toList());
Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
Metadata metadata = cluster.getMetadata();
Iterator<TableMetadata> tm = metadata.getKeyspace("Your Keyspace").getTables().iterator();
while (tm.hasNext()) {
    TableMetadata t = tm.next();
    System.out.println(t.getName());
}
The above code will give you the table names in the passed keyspace, irrespective of the Cassandra version used.
Can anyone please help me with the query below?
I have an RDD with 5 columns, and I want to join it with a table in Cassandra.
I know there is a way to do that using "joinWithCassandraTable".
I saw the following syntax for it somewhere:
Syntax:
RDD.joinWithCassandraTable(KEYSPACE, tablename, SomeColumns("cola","colb"))
.on(SomeColumns("colc"))
Can anyone please send me the correct syntax?
I would also like to know where to specify the column name of the table that is the key to join on.
joinWithCassandraTable works by pulling from C* only the partitions whose keys match your RDD entries, so it only works on partition keys.
The documentation is here
https://github.com/datastax/spark-cassandra-connector/blob/master/doc/2_loading.md#using-joinwithcassandratable
and API Doc is here
http://datastax.github.io/spark-cassandra-connector/ApiDocs/1.6.0-M2/spark-cassandra-connector/#com.datastax.spark.connector.RDDFunctions
The jWCT table method can be used without the fluent API by specifying all the arguments in the method:
def joinWithCassandraTable[R](
    keyspaceName: String,
    tableName: String,
    selectedColumns: ColumnSelector = AllColumns,
    joinColumns: ColumnSelector = PartitionKeyColumns)
But the fluent API can also be used:
joinWithCassandraTable[R](keyspace, tableName).select(AllColumns).on(PartitionKeyColumns)
These two calls are equivalent.
Your example
RDD.joinWithCassandraTable(KEYSPACE, tablename, SomeColumns("cola","colb")).on(SomeColumns("colc"))
uses the objects from RDD to join against colc of tablename and returns only cola and colb as join results.
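The partition-key semantics can be sketched without Spark or Cassandra at all: conceptually, each left-side entry's partition key is looked up in the table, and entries with no matching partition simply drop out. A plain-Java model of that behavior, with Maps standing in for the RDD and the C* table (all names and data here are made up for illustration):

```java
import java.util.*;

public class PartitionKeyJoin {
    // Conceptual model of joinWithCassandraTable: for each left-side key,
    // fetch the matching partition from the "table" (a Map standing in
    // for Cassandra). Keys with no matching partition are dropped.
    static <K, V> Map<K, V> joinByPartitionKey(Collection<K> keys, Map<K, V> table) {
        Map<K, V> joined = new LinkedHashMap<>();
        for (K key : keys) {
            if (table.containsKey(key)) {
                joined.put(key, table.get(key));
            }
        }
        return joined;
    }

    public static void main(String[] args) {
        Map<String, String> table = new HashMap<>();
        table.put("e1", "Alice");
        table.put("e2", "Bob");
        // Only e1 matches; e3 has no partition in the table and is dropped.
        System.out.println(joinByPartitionKey(Arrays.asList("e1", "e3"), table));
    }
}
```

This is why the join column must be (part of) the partition key: the real connector turns each RDD entry into a direct partition lookup rather than a full-table scan.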
Use the following syntax for the join in Cassandra:
joinedData = rdd.joinWithCassandraTable(keyspace, table).on(SomeColumns(partitionKeyName)).select(columnNames)
It will look something like this:
joinedData = rdd.joinWithCassandraTable(keyspace, table).on(SomeColumns("emp_id")).select("emp_name", "emp_city")
I'm following the Slick documentation that can be found at the following location:
http://slick.typesafe.com/doc/3.0.0/gettingstarted.html
There, I'm looking at the "Populating the Database" section. I can't find the schema method defined on TableQuery, so I'm not able to populate my H2 database with initial values!
Is there something wrong with the documentation? It is confusing the hell out of me! Please help!
Here is how to do it:
val h2DbConfig = Map(
  "default.driver" -> "slick.driver.H2Driver$",
  "default.db.driver" -> "org.h2.Driver",
  "default.db.url" -> "jdbc:h2:yourDbName;DATABASE_TO_UPPER=false;DB_CLOSE_DELAY=-1"
)
// parseMap expects a java.util.Map, so convert the Scala Map first
// (import scala.collection.JavaConverters._)
ConfigFactory.parseMap(h2DbConfig.asJava) // gives you a Typesafe Config
Once you have the typesafe config object containing the h2 database, you create the tables as below:
private def h2SchemaSetUp = {
  val schema = slick.dbio.DBIO.seq(
    (Table1.tbl1.schema ++ Table2.tbl2.schema).create
  )
  Await.result(db.run(schema), 5.seconds)
}
You then insert values into the created schema as per Slick's documentation.