Alter external table in SQL Azure

I am trying to alter an external table column name to a new name.
I followed this post:
ALTER EXTERNAL TABLE RemoteCustomerTable RENAME [OldName] column TO [Name]
Error:
Incorrect syntax near the keyword 'TABLE'.
Can an external table be altered?
Any help would be great.
Update:
As I don't see any official docs on altering an external table, I dropped the external table and re-created it using this post:
DROP EXTERNAL TABLE RemoteCustomerTable;

ALTER EXTERNAL TABLE is not a supported operation in Azure SQL Database. As you pointed out, DROP and then CREATE is the way to go.
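For reference, a minimal sketch of the drop-and-recreate approach for an elastic-query external table; the column list and the data source name MyElasticDataSource are assumptions, not taken from the question:
DROP EXTERNAL TABLE RemoteCustomerTable;
-- Re-create with the new column name; MyElasticDataSource is a hypothetical
-- external data source that must already exist in the database.
CREATE EXTERNAL TABLE RemoteCustomerTable (
    [CustomerId] int NOT NULL,
    [Name] varchar(100) NOT NULL  -- formerly [OldName]
)
WITH (DATA_SOURCE = MyElasticDataSource);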

Related

Greenplum-Spark Connector generates too many external tables

I'm programming with the Greenplum-Spark Connector, and I find that every time I use Spark to read table data, an external table is created in Greenplum and is not deleted after the data is read. When I query the same table again, another external table is generated. Can anybody tell me whether these external tables can be reused later? Will they be cleaned up automatically?
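In the meantime, a hedged sketch for listing external tables via the Greenplum catalog and dropping a leftover one by hand; the table name below is a placeholder, not from the original post:
-- List external tables in the current database (Greenplum catalog views).
SELECT c.relname
FROM pg_class c
JOIN pg_exttable e ON e.reloid = c.oid;
-- Drop a leftover external table by name (placeholder name).
DROP EXTERNAL TABLE some_leftover_ext_table;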

Importing Schema Issues

I have a source table in SAP HANA with almost 80 columns; the destination table on Azure DW (per the mapping document) has 60 columns, mapped to source columns with different names. When I try to create a source dataset, I select the HANA linked service created by my TL, but it does not show any table option under the linked service to select and import schemas. Why is that? It throws a gateway timeout error.
PS: The linked service was created by my manager and I don't know any credentials for it.
The SAP HANA connector only supports a query, not a table name.
Create a pipeline first and drag a copy activity into the pipeline. Reference your dataset in the copy activity.
Then you will see a Browse table button. You can construct your query there.
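For example, a hedged sketch of the kind of query you could construct there; the schema and table names are placeholders, not from the original post:
-- Placeholder schema/table; HANA identifiers are typically double-quoted.
SELECT "CUSTOMER_ID", "CUSTOMER_NAME"
FROM "MYSCHEMA"."CUSTOMERS"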
When you want to edit the JSON, use the edit button there rather than the one in the activity node.
I solved it. The issues were that the source had column concatenation, the destination table didn't have the correct concatenation logic, and a few columns were not present in the source. I did explicit mapping and made it a one-to-one relation from source to destination in Azure. It works now. Thank you all!

Datastax rename table

I have deployed a 9-node cluster on Google Cloud.
I created a table and loaded the data. Now I want to change the table name.
Is there any way I can change the table name in Cassandra?
Thanks
You can't rename a table.
You have to drop the table and create it again.
You can use ALTER TABLE to manipulate the table metadata: change the datatype of a column, add new columns, drop existing columns, and change table properties. The command returns no results.
Start the command with the keywords ALTER TABLE, followed by the table name, followed by the instruction: ALTER, ADD, DROP, RENAME, or WITH. See the following sections for the information each instruction requires.
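A few hedged CQL one-liners illustrating those instructions; the table and column names are placeholders:
-- Add a new column.
ALTER TABLE users ADD middle_name text;
-- Drop an existing column.
ALTER TABLE users DROP middle_name;
-- RENAME applies to primary key columns only.
ALTER TABLE users RENAME user_id TO id;
-- Change a table property.
ALTER TABLE users WITH comment = 'user profiles';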
If you need the data, you can back it up and restore it using the COPY command in cqlsh.
To back up data:
COPY old_table_name TO 'data.csv'
To restore data:
COPY new_table_name FROM 'data.csv'
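Putting it together, a hedged end-to-end sketch of the rename workaround in cqlsh; the keyspace, table names, and schema are placeholders:
-- The new table must have the same schema as the old one (placeholder schema).
CREATE TABLE ks.new_table (id int PRIMARY KEY, name text);
COPY ks.old_table TO 'data.csv';
COPY ks.new_table FROM 'data.csv';
DROP TABLE ks.old_table;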

Create a volatile table in Teradata

I have a SharePoint list which I have linked to in MS Access.
The information in this table needs to be compared to information in our data warehouse based on keys both sets of data have.
I want to be able to create a query which will upload the ishare data into our data warehouse under my login, run the comparison, and then export the details to Excel somewhere. MS Access seems to be the way to go here.
I have managed to link the ishare list (with difficulties due to the attachment fields) and then create a local table based on it.
I have managed to create the temp table in my volatile space.
How do I append the newly created table that I created from the list into my temporary space?
I am using Access 2010 and SharePoint 2007.
Thank you for your time.
If you can avoid using Access I'd recommend it, since it adds an extra step for what you are trying to do. You can easily manipulate or mesh data within the Teradata session and export the results.
You can run the following types of queries using the standard Teradata SQL Assistant:
CREATE VOLATILE TABLE NewTable (
    column1 DEC(18,0),
    column2 DEC(18,0)
)
PRIMARY INDEX (column1)
ON COMMIT PRESERVE ROWS;
Change your assistant to Import Mode (File -> Import Data), then run:
INSERT INTO NewTable VALUES (?,?)
Browse for your file; this example would be a comma-delimited file with two numeric columns, column one being the index.
You can now query or join this table to any information in the uploaded database.
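For instance, a hedged sketch of joining the volatile table to a warehouse table; warehouse_table and key_column are placeholder names:
SELECT w.*
FROM warehouse_table w
JOIN NewTable n
  ON n.column1 = w.key_column;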
When you are finished you can drop with:
DROP TABLE NewTable
You can export results using File->Export Data as well.
If this is something you plan on running frequently, there are many ways to easily do these types of imports and exports. The Python module pandas has simple functionality for reading a query directly into DataFrame objects and dropping those objects into Excel through the pandas.io.sql.read_frame() and .to_excel() functions.

Adding columns to a Sybase table with the unique auto_identity index option

I've inherited a Sybase database that has the 'unique auto_identity index' option enabled on it. As part of an upgrade process I need to add a few extra columns to the tables in this database, e.g.:
alter table mytable add <newcol> float default -1 not null
When I try to do this I get the following error:
Column names in each table must be unique, column name SYB_IDENTITY_COL in table #syb__altab....... is specified more than once
Is it possible to add columns to a table with this property enabled?
Update 1:
I created the following test that replicates the problem:
use master
sp_dboption 'esmdb', 'unique auto_identity index', true
use esmdb
create table test_unique_ids (test_col char)
alter table test_unique_ids add new_col float default -1 not null
The alter table command here produces the error. (I have tried this on ASE 15/Solaris and 15.5/Windows.)
Update 2:
This is a bug in the Sybase dbisql interface, which the client tools Sybase Central and Interactive SQL use to access the database, and it appears to affect only tables with the 'unique auto_identity index' option enabled.
To work around the problem, use a different SQL client (via JDBC, for example) to connect to the database, or use isql on the command line.
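For example, a minimal sketch of the isql route; the server name, login, and table are placeholders, not from the original post:
isql -S MYSERVER -U mylogin
1> alter table mytable add newcol float default -1 not null
2> go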
It should be no problem to ALTER TABLE with such columns; the error message indicates the problem lies elsewhere. I would need to see the CREATE TABLE DDL.
Even if we can't ALTER TABLE, which we will try first, there are several workarounds.
Responses
Hah! Internal Sybase error. Open a Tech Support case.
Workaround (a sketch follows below):
1. Make sure you get the exact DDL (sp_help). Note the IDENTITY columns and indices.
2. Create a staging table, exactly the same, using the DDL from (1). Exclude the indices.
3. INSERT new_table SELECT old_table. If the table is large, break it into batches of 1000 rows per batch.
4. Now create the indices.
If the table is very large, AND time is an issue, then use bcp. You need to research that first; I am happy to answer questions afterwards.
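A hedged sketch of steps (2)-(4); the table mytable, its columns, and the index are placeholder assumptions:
-- (2) Staging table: same columns as mytable plus the new one (placeholder DDL).
create table mytable_staging (
    existing_col char(10) not null,
    newcol float default -1 not null
)
-- (3) Copy the existing rows, supplying the default for the new column.
insert mytable_staging
select existing_col, -1 from mytable
-- (4) Recreate the indices, then swap the tables.
create index staging_idx on mytable_staging (existing_col)
drop table mytable
exec sp_rename 'mytable_staging', 'mytable'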
When I ran your sample code I first got the error:
The 'select into' database option is not enabled for database 'mydb'. ALTER TABLE with data copy cannot be done. Set the 'select into' database option and re-run
This is no doubt because the data within your table needs copying out, since the new column is NOT NULL. This will use tempdb, I think, and the error message you've posted refers to a temp table. Is it possible that this dboption has been accidentally enabled for tempdb?
It's a bit of a shot in the dark, as I only have 12.5 to test on here, and it works for me. Or it could be a bug.
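If the option does turn out to be missing, a hedged sketch for checking and enabling it; 'esmdb' is the database from the repro above, and I'm assuming sp_dboption will accept 'select into' as an abbreviation of the full option name:
-- Check which options are currently set.
exec sp_helpdb 'esmdb'
-- Enable the option (sp_dboption is run from master).
use master
exec sp_dboption 'esmdb', 'select into', true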
