How can I describe a table in a Cassandra database? - cql

$describe = new Cassandra\SimpleStatement(<<<EOD
describe keyspace.tablename
EOD
);
$session->execute($describe);
I used the above code but it is not working.
How can I fetch the field names and their data types from a Cassandra table?

Refer to the CQL documentation. DESCRIBE expects a table/schema/keyspace:
describe table keyspace.tablename
It's also a cqlsh command, not an actual CQL command. To get this information, query the system tables. Try
select * from system.schema_columns;
or, for more recent versions,
select * from system_schema.columns;
If you are using the PHP driver, you may want to check out http://datastax.github.io/php-driver/features/#schema-metadata
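If the driver's schema metadata feature is not an option, the same lookup works from any driver by querying the system tables directly. Here is a minimal sketch using the DataStax Python driver (cassandra-driver), assuming Cassandra 3.x (system_schema) and placeholder contact point, keyspace, and table names:

# Sketch: read column names and types from system_schema.columns
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])   # placeholder contact point
session = cluster.connect()

rows = session.execute(
    "SELECT column_name, type FROM system_schema.columns "
    "WHERE keyspace_name = %s AND table_name = %s",
    ('my_keyspace', 'my_table'))   # placeholder names
for row in rows:
    print(row.column_name, row.type)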

Try desc table keyspace.tablename;

Related

Cassandra bind variables produce error: no viable alternative at input '?'

I'm using Cassandra. I am trying to update the gc_grace value using a bind variable:
ALTER table keyspace.table_name with gc_grace_seconds = ? ;
I got the following error:
no viable alternative at input '?'
How can I solve this?
As far as I can see from the source code (maybe I'm wrong), ALTER TABLE doesn't support bind variables, so you can't use them for this command (or for any DDL command); you need to execute the statement with a specific value.
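For example, here is a minimal sketch with the DataStax Python driver (keyspace and table names are placeholders): because DDL statements accept no bind markers, the value is formatted into the statement text itself.

from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])   # placeholder contact point
session = cluster.connect()

gc_grace = 3600  # value produced/validated by your own code
# DDL takes no bind markers, so interpolate the integer directly
session.execute(
    "ALTER TABLE my_keyspace.my_table WITH gc_grace_seconds = %d" % gc_grace)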
It looks like you're trying to bind parameters programmatically to set GC grace on a table. It isn't possible to do that using the Cassandra drivers.
Binding only works for data statements; for a schema change like this, run the statement with a literal value, for example in cqlsh:
cqlsh> ALTER TABLE community.maptbl WITH gc_grace_seconds = 3600;
It doesn't make sense to do it in your app and it is not recommended. Cheers!

Spark SQL version of EXEC()

Does anyone know of a way in Spark SQL to execute a string variable like the following?
INSERT TableA (Col1,Col2) SELECT Col1,Col2 FROM TableB
I understand that I can obviously write this statement directly. However, I am using a workflow engine where my Insert/Select statement is in a String variable. If not, I assume I should use spark-submit. I was looking for other options.
I'm not sure what environment you're in. If this is a Spark application or a Spark shell, you always provide queries as strings:
val query = "INSERT TableA (Col1,Col2) SELECT Col1,Col2 FROM TableB"
sqlContext.sql(query)
(See http://spark.apache.org/docs/latest/sql-programming-guide.html#running-sql-queries-programmatically.)
Spark SQL also supports Hive queries:
insert overwrite table usautomobiles select * from sourcedata
Go through this link.
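For completeness, the same pattern works from PySpark with the newer SparkSession API. This is a sketch with the placeholder table names from the question, and the INSERT adapted to Spark SQL's INSERT INTO syntax:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("run-sql-string").getOrCreate()

# The statement is just a string, so it can come from a workflow engine variable
query = "INSERT INTO TableA SELECT Col1, Col2 FROM TableB"
spark.sql(query)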

memsql does not support temporary table or table variable?

Tried to create a temp table in MemSQL:
Create temporary table ppl_in_grp as
select pid from h_groupings where dt= '2014-10-05' and location = 'Seattle'
Got this error: Feature 'TEMPORARY tables' is not supported by MemSQL.
Is there any equivalence I can use instead? Thanks!
Temp tables are definitely on the roadmap. For now, with MemSQL 4, you can create a regular table and clean it up at the end of your session, or use subqueries.
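A sketch of that workaround, assuming you connect over the MySQL protocol with pymysql (connection details are placeholders and the pid column type is a guess; the query comes from the question):

import pymysql

conn = pymysql.connect(host='127.0.0.1', user='root', password='', database='mydb')
try:
    with conn.cursor() as cur:
        # Regular table standing in for a temp table
        cur.execute("CREATE TABLE IF NOT EXISTS ppl_in_grp (pid BIGINT)")
        cur.execute(
            "INSERT INTO ppl_in_grp "
            "SELECT pid FROM h_groupings "
            "WHERE dt = '2014-10-05' AND location = 'Seattle'")
        conn.commit()
        # ... use ppl_in_grp as you would a temp table ...
finally:
    with conn.cursor() as cur:
        cur.execute("DROP TABLE IF EXISTS ppl_in_grp")  # clean up at session end
    conn.close()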

Syntax error at position 7: unexpected "*" for `Select * FROM mytable;`

I am writing because I have a problem with Cassandra. After importing the data from Pentaho as shown here:
http://wiki.pentaho.com/display/BAD/Write+Data+To+Cassandra
when I try to execute the query
Select * FROM mytable;
Cassandra gives me an error message:
Syntax error at position 7: unexpected "*" for Select * FROM mytable;.
and doesn't show the results of the query. Why? What does that error mean?
The steps that I take are the following:
start the cassandra-cli utility;
use the keyspace added from Pentaho (use tpc_h);
select to show the added data (Select * FROM mytable;).
The cassandra-cli does not support any CQL version. It has its own syntax, which you can find on DataStax's website.
Just for clarity, in CQL, to select everything from a table (aka column family) called mytable stored in a keyspace called myks, you would use:
SELECT * FROM myks.mytable;
The equivalent in cassandra-cli would roughly* be:
USE myks;
LIST mytable;
* In the CLI you are limited to selecting the first 100 rows. If this is a problem, you can use the LIMIT clause to specify how many rows you want:
LIST mytable limit 10000;
As for this:
in Cassandra I have read that it isn't possible to make joins as in SQL; isn't there a shortcut to get around this disadvantage?
There is a reason why joins don't exist in Cassandra: it's the same reason that C* isn't ACID compliant. It sacrifices that functionality for its performance and scalability, so it's not a disadvantage; you just need to re-think your model if you need joins. Also take a look at this question / answer.

How do I delete all data in a Cassandra column family?

I'm looking for a way to delete all of the rows from a given column family in Cassandra.
This is the equivalent of TRUNCATE TABLE in SQL.
You can use the truncate thrift call, or the TRUNCATE <table> command in CQL.
http://www.datastax.com/docs/1.0/references/cql/TRUNCATE
You can also do this via Cassandra CQL.
$ cqlsh
Connected to Test Cluster at localhost:9160.
[cqlsh 4.1.1 | Cassandra 2.0.6 | CQL spec 3.1.1 | Thrift protocol 19.39.0]
Use HELP for help.
cqlsh> TRUNCATE my_keyspace.my_column_family;
It's very simple in Astyanax. Just a single-line statement:
/* keyspace variable is Keyspace Type */
keyspace.truncateColumnFamily(ColumnFamilyName);
If you are using Hector it is easy as well:
cluster.truncate("our keyspace name here", "your column family name here");
If you are using cqlsh, then you can do it in either of 2 ways:
use keyspace; and then truncate column_family;
truncate keyspace.column_family;
If you want to use the DataStax Java driver, you can look at:
http://www.datastax.com/drivers/java/1.0/com/datastax/driver/core/querybuilder/QueryBuilder.html
or
http://www.datastax.com/drivers/java/2.0/com/datastax/driver/core/querybuilder/Truncate.html
depending on your version.
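Similarly, any driver can execute the TRUNCATE statement directly. A minimal sketch with the DataStax Python driver (contact point and names are placeholders):

from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])   # placeholder contact point
session = cluster.connect()
session.execute("TRUNCATE my_keyspace.my_column_family")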
If you are working on a cluster setup, TRUNCATE can only be used when all the nodes of the cluster are UP.
By using TRUNCATE we lose the data (and we may not be sure how important that data is).
So a safer way, as well as a trick to delete data, is to use the COPY command:
1) Back up the data using the cqlsh COPY command:
copy tablename to 'path'
2) Duplicate the file using the Linux cp command:
cp 'src path' 'dst path'
3) Edit the duplicate file in the dst path and delete all lines except the first line.
Save the file.
4) Use the cqlsh COPY command to import:
copy tablename from 'dst path'
