I have a column family testtable with data. Can I somehow export the data as INSERT statements?
desc testtable will give me the code to create it, but how can I export the data? Thanks.
If you just want to export from one table to another, you can use the CQL COPY command:
COPY keyspace.table1(column1, column2) TO 'temp.csv';
COPY keyspace.table2(column1, column2) FROM 'temp.csv';
This will copy all of the data from keyspace.table1 to keyspace.table2.
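If you also want column names written to and read from the CSV, COPY accepts options such as HEADER; a minimal sketch, assuming the same keyspace and tables as above:
COPY keyspace.table1 (column1, column2) TO 'temp.csv' WITH HEADER = TRUE;
COPY keyspace.table2 (column1, column2) FROM 'temp.csv' WITH HEADER = TRUE;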
I am new to ADF. I have a pipeline which deletes all rows where any of the attributes are null. Schema: { Name, Value, Key }
I tried using a data flow with an Alter Row transformation and set both source and sink to the same table, but it always appends to the table instead of overwriting it, which creates duplicate rows, and the rows I want to delete still remain. Is there a way to overwrite the table?
Assuming that your table is a SQL table: I tried to overwrite the source table after deleting the specific null values. It deleted the records, but I still got duplicate records even after exploring various methods.
So, as an alternative, you can try the methods below to achieve your requirement:
By creating a new table and deleting the old table:
This is my sample source table, named mytable.
Alter Row transformation
Give a new table in the sink and, in Settings -> Post SQL scripts, give the drop command to delete the source table. Now your sink table is your required table.
drop table [dbo].[mytable]
Result table (named newtable) and the old table.
Source table deleted.
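If you would rather keep the original table name, the post SQL script could also rename the new table after dropping the old one. A minimal sketch, assuming an Azure SQL / SQL Server sink and the mytable/newtable names used above:
drop table [dbo].[mytable];
exec sp_rename 'dbo.newtable', 'mytable';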
Deleting the null values from the source table using a script activity:
Use a script activity to delete the rows with null values from the source table (a sample statement is sketched below).
Source table after execution.
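For reference, the statement run by the script activity could look something like this; a sketch that assumes a table named mytable with the Name, Value, Key columns from the question:
DELETE FROM [dbo].[mytable]
WHERE [Name] IS NULL OR [Value] IS NULL OR [Key] IS NULL;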
Is there any INSERT OVERWRITE mode in PostgreSQL, like the one below?
INSERT OVERWRITE INTO TABLE table2 select * FROM table1;
PostgreSQL has the TRUNCATE command to wipe the contents of a table but keep the table itself. You would have to use two statements:
TRUNCATE table2;
INSERT INTO table2 SELECT * FROM table1;
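If other sessions must never see the table in a half-loaded state, one option (my addition, not something the answer above requires) is to run both statements in a single transaction, since TRUNCATE is transactional in PostgreSQL:
BEGIN;
TRUNCATE table2;
INSERT INTO table2 SELECT * FROM table1;
COMMIT;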
If you want to INSERT new records and UPDATE existing records, you can use the ON CONFLICT clause of the INSERT statement:
INSERT INTO table2 (id, name)
SELECT id, name FROM table1
ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name;
You need to have a primary key or UNIQUE constraint on the conflict target to perform the conflict check. Full details can be found in the INSERT statement documentation: https://www.postgresql.org/docs/current/sql-insert.html
You can also choose to DO NOTHING on conflict, which has the advantage of protecting against inserting duplicate records.
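A minimal sketch of that variant, reusing the same hypothetical table1/table2 and the id conflict target from above:
INSERT INTO table2 (id, name)
SELECT id, name FROM table1
ON CONFLICT (id) DO NOTHING;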
I have deployed a 9 node cluster on Google Cloud.
I created a table and loaded the data. Now I want to change the table name.
Is there any way I can change the table name in Cassandra?
Thanks
You can't rename a table in Cassandra.
You have to drop the table and create it again.
You can use ALTER TABLE to manipulate the table metadata. Do this to change the datatype of a column, add new columns, drop existing columns, and change table properties. The command returns no results.
Start the command with the keywords ALTER TABLE, followed by the table name, followed by the instruction: ALTER, ADD, DROP, RENAME, or WITH. See the following sections for the information each instruction requires.
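For example (a sketch with made-up keyspace, table and column names), adding and then dropping a regular column looks like this:
ALTER TABLE mykeyspace.users ADD middle_name text;
ALTER TABLE mykeyspace.users DROP middle_name;
Note that RENAME applies to columns (primary key columns, specifically), not to the table itself.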
If you need the data, you can back up and restore it using the COPY command in cqlsh.
To back up data:
COPY old_table_name TO 'data.csv';
To restore data:
COPY new_table_name FROM 'data.csv';
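Putting that together, a rename workflow could look like the sketch below; the table names and the id/name columns are placeholders, and the new table must be created with the same schema as the old one:
COPY old_table_name TO 'data.csv';
CREATE TABLE new_table_name (id uuid PRIMARY KEY, name text);
COPY new_table_name FROM 'data.csv';
DROP TABLE old_table_name;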
I created a keyspace in my Cassandra cluster but found that the definition of its "comparator" is wrong, so I have to create a new keyspace and migrate the data. Is there any tool to do the data migration, or do I have to write a program with a Thrift client that reads all the data from the old keyspace and writes it to the new keyspace? Any suggestions or code snippets are welcome!
This is a common question, and I think it has been asked here before.
You can use the COPY command in C*.
You will find more details here http://www.datastax.com/dev/blog/ways-to-move-data-tofrom-datastax-enterprise-and-cassandra
You can do it using the COPY command in cqlsh. With COPY you can save table data to a .csv file and load it back into a table from a .csv file. However, the better approach is to write a program that reads from one table and writes to another, because importing from CSV may fail if the table contains collection column types like list<text>, map<text, text>, or set<text>.
E.g.:
To copy table data from a table to a .csv file:
COPY keyspace1.table1 (column1, column2) TO 'path/to/file/keyspace1_table1.csv';
To copy CSV data from a file into a table:
COPY keyspace2.table1 (column1, column2) FROM 'path/to/file/keyspace1_table1.csv';
Refer to the Cassandra migration tool.
I know the statement in Oracle which copies the structure and the data:
create table mytable1 as select * from mytable;
But how do I achieve the same in Sybase ASE?
It is possible using SELECT INTO!
In Sybase ASE 16 the syntax for copying the data and structure is
SELECT field1, field2 INTO NewTable FROM OldTable
If you want to copy only the structure, use this (the WHERE 1=0 predicate matches no rows, so only the column definitions are copied):
SELECT field1, field2 INTO NewTable FROM OldTable WHERE 1=0