This is the first time I'm using Cassandra, so please excuse me if my question is naive :)
I have downloaded and extracted Cassandra 1.2.4
and run it using /usr/local/apache-cassandra-1.2.4/bin/cassandra -f
Now I connect to it:
root@Alaa:/usr/local/apache-cassandra-1.2.4# ./bin/cassandra-cli
Connected to: "Test Cluster" on 127.0.0.1/9160
Welcome to Cassandra CLI version 1.2.4
Type 'help;' or '?' for help.
Type 'quit;' or 'exit;' to quit.
[default@unknown] show cluster name
...
and those three dots stay there forever! Any idea what is wrong?
You need to terminate the command with a ;, otherwise the shell has no way of telling that you're "done" entering a query/command:
show cluster name;
^---
That's why the help;, quit;, and exit; examples printed as part of the startup all include a ;...
[Question posted by a user on YugabyteDB Community Slack]
Just exploring YugabyteDB for a large application and trying to get a private cluster set up. On the first node I can start yb-master with the following command:
/root/yugabyte-2.11.1.0/bin/yb-master --replication_factor=3 --fs_data_dirs "/data"
I tried to set these in a flagfile as such:
/root/yugabyte-2.11.1.0/bin/yb-master --flagfile=/yugabyte/config/ybmaster.conf
My file:
cat /yugabyte/config/ybmaster.conf
{
"fs_data_dirs": "/data",
"replication_factor": 3
}
When I run it, I get:
Invalid argument (yb/util/init.cc:96): Cannot initialize logging. Flag fs_data_dirs (a comma-separated list of data directories) must contain at least one data directory.
Any ideas on what is wrong with my flagfile? Thank you all for your help :)
Your flag file doesn't look right; it needs to be in this format, one flag per line:
--flag=value
--flag=value
and not in JSON as you have it.
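Applied to the flags from the original command, the flag file would look like this (gflags format, one --flag=value per line):

```
--fs_data_dirs=/data
--replication_factor=3
```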
My development instance of Accumulo became quite messy with a lot of tables created for testing.
I would like to bulk delete a large number of tables.
Is there a way to do it other than deleting the entire instance?
BTW - If it's of any relevance, this instance is just a single machine "cluster".
In the Accumulo shell, you can specify a regular expression for table names to delete by using the -p option of the deletetable command.
I would have commented on the original answer, but I lack the reputation (first contribution right here).
It would have been helpful to provide a legal regex example.
The Accumulo shell can only escape certain characters. In particular, it will not escape brackets []. If you want to remove every table starting with the string "mytable", the otherwise-legal regex commands produce the following warning/error:
user@instance> deletetable -p mytable[.]*
2016-02-18 10:21:04,704 [shell.Shell] WARN : No tables found that match your criteria
user@instance> deletetable -p mytable[\w]*
2016-02-18 10:21:49,041 [shell.Shell] ERROR: org.apache.accumulo.core.util.BadArgumentException: can only escape single quotes, double quotes, the space character, the backslash, and hex input near index 19
deletetable -p mytable[\w]*
A working shell command would be:
user@instance> deletetable -p mytable.*
There is not currently (as of version 1.7.0) a way to bulk delete many tables in a single call.
Table deletion is actually done in an asynchronous way. The client submits a request to delete the table, and that table will be deleted at some point in the near future. The problem is that after the call to delete the table is performed, the client then waits until the table is deleted. This blocking is entirely artificial and unnecessary, but unfortunately that's how it currently works.
Because each individual table deletion appears to block, a simple loop over the table names to delete them serially is not going to finish quickly. Instead, you should use a thread pool, and issue delete table requests in parallel.
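As a minimal sketch of that pattern, here it is in Python, with delete_table as a hypothetical stand-in for whatever client call actually issues the blocking delete (e.g. Accumulo's tableOperations().delete() in the Java API):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def delete_table(name):
    # Hypothetical stand-in for the real client call, e.g. Accumulo's
    # connector.tableOperations().delete(name); each such call blocks
    # until that one table is actually gone.
    return name

tables = ["mytable1", "mytable2", "mytable3"]

# Submit all the blocking delete calls at once so they run in parallel,
# instead of waiting for each table's deletion before starting the next.
with ThreadPoolExecutor(max_workers=8) as pool:
    futures = [pool.submit(delete_table, t) for t in tables]
    deleted = [f.result() for f in as_completed(futures)]

print(sorted(deleted))
```

With a real client call in place of the stub, total wall time is roughly that of the slowest single deletion rather than the sum of all of them.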
A bulk delete-table command would be very useful, though. Accumulo is an open source project, so a feature request on its issue tracker would be most welcome, and any contribution to implement it even more so.
I'm trying to load a TSV file into HBase running in HDInsight in the Microsoft Azure cloud, using the recommended approach of connecting through Remote Desktop and running on the command line. I'm trying to load the file t1.tsv (with two tab-separated columns) from HDFS into the HBase table t1:
C:\apps\dist\hbase-0.98.0.2.1.5.0-2057-hadoop2\bin>hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=HBASE_ROW_KEY,num t1 t1.tsv
and get:
ERROR: One or more columns in addition to the row key and timestamp(optional) are required
Usage: importtsv -Dimporttsv.columns=a,b,c
Reversing the order of the specified columns to num,HBASE_ROW_KEY:
C:\apps\dist\hbase-0.98.0.2.1.5.0-2057-hadoop2\bin>hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=num,HBASE_ROW_KEY t1 t1.tsv
I get:
ERROR: Must specify exactly one column as HBASE_ROW_KEY
Usage: importtsv -Dimporttsv.columns=a,b,c
This tells me that either the comma separator in the column list is not recognized or the column name is incorrect. I also tried specifying the column with a qualifier, as num:v, and as 'num'; nothing helps.
Any ideas what could be wrong here? Thanks.
>hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns="HBASE_ROW_KEY,d:c1,d:c2" testtable /example/inputfile.txt
This works for me. I think there are some differences between the Linux and Windows terminals: on Windows you need to add quotation marks to make clear this is a single value string, otherwise it might not be parsed correctly.
I'm looking for a way to delete all of the rows from a given column family in cassandra.
This is the equivalent of TRUNCATE TABLE in SQL.
You can use the truncate thrift call, or the TRUNCATE <table> command in CQL.
http://www.datastax.com/docs/1.0/references/cql/TRUNCATE
You can also do this via Cassandra CQL.
$ cqlsh
Connected to Test Cluster at localhost:9160.
[cqlsh 4.1.1 | Cassandra 2.0.6 | CQL spec 3.1.1 | Thrift protocol 19.39.0]
Use HELP for help.
cqlsh> TRUNCATE my_keyspace.my_column_family;
It's very simple in Astyanax; it's just a single-line statement:
/* keyspace variable is Keyspace Type */
keyspace.truncateColumnFamily(ColumnFamilyName);
If you are using Hector it is easy as well:
cluster.truncate("our keyspace name here", "your column family name here");
If you are using cqlsh, you can do it in either of two ways:
use keyspace; and then truncate column_family;
truncate keyspace.column_family;
If you want to use the DataStax Java driver, you can look at:
http://www.datastax.com/drivers/java/1.0/com/datastax/driver/core/querybuilder/QueryBuilder.html
or
http://www.datastax.com/drivers/java/2.0/com/datastax/driver/core/querybuilder/Truncate.html
depending on your version.
Note that if you are working with a cluster setup, truncate can only be used when all the nodes of the cluster are UP.
By using truncate, we discard the data outright (and we may not be sure how important that data is). So a safer way, as well as a trick, to delete the data is to use the COPY command:
1) Back up the data using the cqlsh COPY command:
copy tablename to 'path'
2) Duplicate the file using the Linux cp command:
cp 'src path' 'dst path'
3) Edit the duplicate file in the destination path, deleting all lines except the first (the header), and save the file.
4) Use the cqlsh COPY command to import it back:
copy tablename from 'dst path'
When I run sp_helpdb dbname in Sybase Adaptive Server Enterprise, it returns only the following columns:
name,db_size,owner,dbid,created,status
And it's not returning the following columns:
device_fragments,size,usage,created,free kbytes
Why is this happening?
Both sets are returned; however, where they are displayed depends on which tool you're using to run the query. If you're using SQL Advantage or ASEISQL, you need to look in both the results and the messages windows to get the full answer. If you're using the command-line ISQL, everything is returned together.
It's because some of the results are returned from a select, and some from print messages.
print "Print hello"
select "Select hello"
Try running the above and you'll hopefully find where each different output is displayed in your tool.
If you're using SQL Advantage, check its options screen, where you can change how your results are returned. The "Display Print Messages with Results" option might help in this case.