Renaming a column in MemSQL - singlestore

In MemSQL, how do I rename a column?
I have tried the commands below, but they do not work.
ALTER TABLE supp_quote_detail RENAME COLUMN user_support_comments TO user_abc_support_comments ;
ALTER TABLE supp_quote_detail RENAME COLUMN user_support_comments TO user_abc_support_comments text;

ALTER TABLE quote_detail CHANGE user_quote_support_comments user_cfg_support_comments ;
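A hedged sketch of the syntax SingleStore (MemSQL) documents for renaming a column: CHANGE takes only the old and the new column name, with no column type (unlike MySQL, where CHANGE also expects the full column definition). Assuming a reasonably recent MemSQL/SingleStore version, something like this should work:
-- CHANGE here only renames the column; no type is given
ALTER TABLE supp_quote_detail CHANGE user_support_comments user_abc_support_comments;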

Related

In BigQuery, how can I convert a Struct of Struct of String to columns?

So, in the table there are 3 columns as per the image; the 3rd one is a Record (Struct) containing 2 structs, old and new. Inside those structs there are columns and values.
I can access each final column like this: change.old.name. But I want to convert them into normal columns and create another table with that.
I tried UNNEST, but it doesn't work since it's not an array.
Data structure image
UPDATE:
Finally got it sorted. Select all of the nested fields, give each one an alias of how you want it named (for example, replace the dots with underscores), then create a table from that query.
create table abc
as
select
ID
,Created_on
,Change.old.add as Change_old_add
,Change.old.name as Change_old_name
,Change.old.count_people as Change_old_count_people
,Change.new.add as Change_new_add
,Change.new.name as Change_new_name
,Change.new.count_people as Change_new_count_people
FROM `project.Table`

How to get the column names after a table is extracted from a PDF file using Camelot? I'm new to this

Briefly, these are my steps:
tables = camelot.read_pdf(doc_file)
tables[0].df
I am using tables[0].df.columns to get the column names from the extracted table, but it does not give the column names.
Tables extracted by Camelot have no alphabetic column names.
tables[0].df.columns returns, for example, for a three-column table:
RangeIndex(start=0, stop=3, step=1)
Instead, you can try to read the first row and get a list from it: tables[0].df.iloc[0].tolist().
The output could be:
['column1', 'column2', 'column3']

How to write to an Excel file with different column names from a DataFrame?

Like in SQL:
Select Name as EmployeeName,
Age as EmployeeAge
From tableA
How can I write to an Excel file with different column names? Something like:
df.to_excel(writer, columns=['Date' as TimeStamp,'Id' as DeliveryId],sheet_name='sales')
I think it is not possible; you need to rename before DataFrame.to_excel:
d = {'Date':'TimeStamp','Id':'DeliveryId'}
df.rename(columns=d).to_excel(writer, sheet_name='sales')
If there are multiple columns and you want to specify which ones to write, use the columns parameter:
columns : sequence or list of str, optional
Columns to write.
d = {'Date':'TimeStamp','Id':'DeliveryId'}
df.rename(columns=d).to_excel(writer, columns=["TimeStamp","DeliveryId"], sheet_name='sales')

Is it possible to delete all rows for a specific column at once in cassandra?

This is not a question asking how to delete all data for a table.
That's what TRUNCATE does.
I want to delete the value of a specific column for all rows at once in Cassandra.
I can delete it for only one row at a time using DELETE or UPDATE, like the following:
DELETE column_name FROM table_name WHERE primary_key = value_of_primary_key;
or
UPDATE table_name SET column_name = null WHERE primary_key = value_of_primary_key;
However, what I want to do is something like the following:
TRUNCATE key_name.table_name.column_name;
The above TRUNCATE code doesn't work since it doesn't take column_name, but I hope you understand what I want to do.
Is it possible to delete all rows for a specific column at once?
If so, how can I do that?
If not, what is the best workaround for this?
Try dropping the column and adding it again.
Example:
ALTER TABLE table_name DROP column_name;
ALTER TABLE table_name ADD column_name text;

Copying a few of the columns of a CSV file into a table gives an error

I have data in a spreadsheet saved as district.csv and need to import only a few columns from it into a Postgres table named hc_court_master:
copy hc_court_master (district_cd, category,category1,shrt_nm,court_name)
from 'D:\district.csv'
with (format csv)
I am getting this error:
ERROR: extra data after last expected column
SQL state: 22P04
Context: COPY hc_court_master, line 1: "court_code,court_name,short_c_name,category,dist_no,dist_name,category1"
There are two problems:
You cannot COPY just a few columns from a CSV, and you cannot change their order.
Just use the HEADER keyword in the WITH clause to skip the header line of the CSV file:
copy hc_court_master (court_code, court_name, short_c_name,
category, dist_no, dist_name, category1)
from 'D:\district.csv'
with (format csv, HEADER)
If you do not want all the fields in your table, you can:
ALTER TABLE ... DROP COLUMN after importing (a short sketch follows below)
Import into a temp table and after that run INSERT INTO ... SELECT ...
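A minimal sketch of the drop-after-import route, assuming a hypothetical staging table court_import that mirrors every CSV column so the whole file loads; all names here are illustrative:
-- court_import contains all seven CSV columns, so the plain COPY succeeds
copy court_import from 'D:\district.csv' with (format csv, HEADER);
-- afterwards drop the columns you do not actually need
ALTER TABLE court_import DROP COLUMN dist_no;
ALTER TABLE court_import DROP COLUMN short_c_name;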
As mentioned here, you cannot load selected fields from a CSV, only the whole data. So:
begin; -- Start transaction
do $$
begin
create temp table t( -- Create table to load data from CSV
court_code text, -- There are whole list of columns ...
court_name text,
short_c_name text,
category text,
dist_no text,
dist_name text,
category1 text) on commit drop;
copy t from '/home/temp/a.csv' with (
format CSV,
header -- To ignore first line
);
end $$;
insert into hc_court_master (district_cd, category,category1,shrt_nm,court_name)
select <appropriate columns> from t;
commit;
copy hc_court_master (district_cd, category,category1,shrt_nm,court_name)
from 'D:\district.csv'
with (format csv)
I think the SQL above will work only if your CSV contains only the required columns (district_cd, category, category1, shrt_nm, court_name).
You can delete the extra columns from the CSV and then try to upload again.
Please refer to the link below:
Copy a few of the columns of a csv file into a table
