I'm a newbie to Cassandra. I tried to insert a log file into Cassandra using Spoon, the GUI tool from Pentaho Data Integration. The file has the following fields: timestamp, process_id, IP, child_id, dev_id, filter_level_id, uh, URL. The log file did not have a header, so I typed these column names into the file and loaded it into Cassandra using the tool. But when I tried to create an index on the IP field using CQL3, I got the following error:
No column definition found for column ip
Am I doing something wrong? How can this be resolved?
Thanks in advance for any help.
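For reference, a minimal sketch of the statements involved, using hypothetical names (logdata.logs for the keyspace and table, ip_idx for the index):

    -- Hypothetical names throughout; adjust to the real schema.
    -- If the table was loaded without a CQL column definition for ip,
    -- declaring the column first may be what is missing (an assumption
    -- based on the error text, not confirmed above):
    ALTER TABLE logdata.logs ADD ip text;

    -- The index creation should then succeed:
    CREATE INDEX ip_idx ON logdata.logs (ip);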
I want to push data into an already existing HBase table that has a single column family and no records.
I am using shc-core:1.1.1-2.1-s_2.11 on a Windows machine, with HBase 1.2.6 installed and Scala 2.11.8.
When I first tried to push data, I got the following error: org.apache.spark.sql.execution.datasources.hbase.InvalidRegionNumberException: Number of regions specified for new table must be greater than 3.
Following the advice in https://github.com/hortonworks-spark/shc/issues/249#issue-318285217, I added HBaseTableCatalog.newTable -> "5" to my options.
It still failed, but now with: java.lang.IllegalArgumentException: Can not create a Path from a null string.
Following https://github.com/hortonworks-spark/shc/issues/151#issuecomment-313800739, I added "tableCoder":"PrimitiveType" to my catalog.
I am still facing the same error.
Others have asked for clarification about this issue (https://github.com/hortonworks-spark/shc/issues/249#issuecomment-463528032), and it is a known issue that apparently has been fixed (https://github.com/hortonworks-spark/shc/issues/155#issuecomment-315236736), yet I do not know what to try next.
Is there a solution for this?
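A minimal sketch of how the two changes mentioned above fit together, assuming a hypothetical table table1 with column family cf1 and an existing DataFrame df (an illustration of the attempted configuration, not a confirmed fix):

    import org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog

    // Hypothetical catalog: table1/cf1/col1 are placeholders.
    // "tableCoder":"PrimitiveType" is the addition from shc issue #151.
    val catalog = """{
      "table":{"namespace":"default", "name":"table1", "tableCoder":"PrimitiveType"},
      "rowkey":"key",
      "columns":{
        "col0":{"cf":"rowkey", "col":"key", "type":"string"},
        "col1":{"cf":"cf1", "col":"col1", "type":"string"}
      }
    }"""

    // df is an existing DataFrame whose schema matches the catalog.
    df.write
      .options(Map(
        HBaseTableCatalog.tableCatalog -> catalog,
        HBaseTableCatalog.newTable -> "5"  // region-count fix from shc issue #249
      ))
      .format("org.apache.spark.sql.execution.datasources.hbase")
      .save()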
We tried adding a new column to an existing table in Cassandra. It ended up throwing the exception "org.apache.cassandra.exceptions.ConfigurationException: Column family ID mismatch".
When we ran a DESCRIBE command, the new column showed up.
When we tried to insert data, it threw an exception saying that the newly added column does not exist.
We tried to recreate the table by dropping it; the table was dropped, but recreating it failed with an error saying the table already exists.
It seems like a schema synchronization issue in Cassandra.
I want this issue to be resolved without any need to restart the Cassandra nodes.
Can someone suggest the right approach to resolve this?
Thanks.
Update: a rolling restart of the cluster resolved this issue. Thanks.
Flushing memtables (nodetool flush) should resolve the issue.
Flushing does not require restarting Cassandra, whereas draining does.
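For reference, a sketch of the two commands (keyspace and table names are placeholders; run on each affected node):

    # Flush memtables to SSTables; the node keeps serving requests
    nodetool flush <keyspace> <table>

    # Drain also flushes, but stops the node from accepting writes,
    # so it is normally followed by a restart
    nodetool drain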
See:
Column family ID mismatch during ALTER TABLE
I have installed Cassandra 2.2.12 locally on my Windows machine. I exported the database from the live server into a '.sql' file using the RazorSQL GUI tool. I don't have server access for live, only database access. When I try to import the '.sql' file into my local Cassandra setup using RazorSQL, it gives me the error: Invalid STRING constant '8ca25030-89ab-11e7-addb-70a0656e5127' for "id" of type timeuuid.
I also tried the COPY FROM command, and it returns the same error. Please see the attached screenshot for more detail on the error.
Could anybody please help?
You should not put any quotes around the value, because it then gets interpreted as a string instead of a UUID; hence the error message.
See also: Inserting a hard-coded UUID via CQLsh (Cassandra)
I think you have two solutions:
edit your export file and remove the single quotes from the timeuuid values in the inserts (see the sketch below), or
rerun the export as CSV and run the COPY command in cqlsh; in that case, the CSV file will not have quotes.
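To illustrate the first option, a minimal sketch assuming a hypothetical table users (id timeuuid PRIMARY KEY, name text):

    -- Fails: the quoted value is a STRING constant, not a timeuuid
    INSERT INTO users (id, name) VALUES ('8ca25030-89ab-11e7-addb-70a0656e5127', 'bob');

    -- Works: uuid/timeuuid literals are written without quotes
    INSERT INTO users (id, name) VALUES (8ca25030-89ab-11e7-addb-70a0656e5127, 'bob');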
I have a counter column family in Cassandra. When I try to view the data from CQL, I get an error even though there is data in the column family.
SELECT * from userstats;
Generates the following error:
'int' object has no attribute 'replace'
I can confirm that the data is in the column family and is working properly, since I can view it with the DataStax OpsCenter data explorer.
It sounds like you're using an older version of cqlsh. Upgrading it (just copying the bin/cqlsh file from the Cassandra 1.1 branch head, along with everything under the pylib directory, into place) ought to solve this.
If it doesn't, running cqlsh with --debug would help a lot in diagnosing the problem.
Does anybody know how to force Sybase bcp to export column headers along with the data?
The utility documentation curiously ignores this important feature; perhaps I am missing something.
Any help will be greatly appreciated.
It looks like this older post could give you some answers.
export table to file with column headers (column names) using the bcp utility and SQL Server 2008
Unfortunately, bcp doesn't support a command-line flag to add the headers.