MemSQL does not support temporary tables or table variables?

I tried to create a temp table in MemSQL:
Create temporary table ppl_in_grp as
select pid from h_groupings where dt = '2014-10-05' and location = 'Seattle';
I got this error: Feature 'TEMPORARY tables' is not supported by MemSQL.
Is there any equivalent I can use instead? Thanks!

Temp tables are definitely on the roadmap. For now, with MemSQL 4, you can create a regular table and clean it up at the end of your session, or use subqueries.
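A minimal sketch of that workaround, reusing the names from the question (assuming your MemSQL version supports plain CREATE TABLE ... AS SELECT, as MemSQL 4 does for regular tables):
CREATE TABLE ppl_in_grp AS
SELECT pid FROM h_groupings
WHERE dt = '2014-10-05' AND location = 'Seattle';
-- ... run the queries that needed the temp table ...
DROP TABLE ppl_in_grp;  -- clean up at the end of the session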

Databricks auto merge schema

Does anyone know how to resolve this error?
I have put the following before my merge, but it seems to not like it.
%sql set spark.databricks.delta.schema.autoMerge.enabled = true
I added this because my notebook was failing on schema changes to a Delta Lake table: I have an additional column on one of the tables I am loading into, and I thought Databricks was able to auto-merge schema changes.
The code works fine in my environment; I'm using Databricks Runtime 10.4.
TL;DR: add a semicolon to the end of the separate SQL statements:
set spark.databricks.delta.schema.autoMerge.enabled = true;
The error is actually a more generic SQL error; the IllegalArgumentException is a clue - though not a very helpful one :)
I was able to reproduce your error:
set spark.databricks.delta.schema.autoMerge.enabled = true
INSERT INTO records SELECT * FROM students
gives: Error in SQL statement: IllegalArgumentException: spark.databricks.delta.schema.autoMerge.enabled should be boolean, but was true
and was able to fix it by adding a ; to the end of the first line:
set spark.databricks.delta.schema.autoMerge.enabled = true;
INSERT INTO records SELECT * FROM students
succeeds.
Alternatively, you could run the SET in a different cell.
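For context, a hedged sketch of the kind of statement the setting unblocks; the table and column names here are hypothetical, not from the question:
SET spark.databricks.delta.schema.autoMerge.enabled = true;
-- The source has a column the target lacks; with autoMerge enabled,
-- MERGE adds the new column to the target schema instead of failing
MERGE INTO records t
USING students s
ON t.id = s.id
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *;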

Databricks Drop Rollback

Is there a way to undo a DROP TABLE statement in Databricks? I know for DELETE there is the time travel/restore option, but I am specifically asking about DROP. Please help.
DROP TABLE removes data only when you have a managed table, i.e. one created without an explicit location. To prevent the data from being dropped, create the table as unmanaged: even if you drop the table, only the table definition is removed, not the data, so you can always re-create the table from the data (this is not limited to Delta; you can use other formats as well).
For SQL, specify the path to the data using LOCATION:
CREATE TABLE name
USING delta
LOCATION '<path-to-data>'
When using the APIs (Scala/Python/R/Java), provide the path option:
df.write.format("delta") \
.option("path", "path-to-data") \
.saveAsTable("table-name")
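If an unmanaged table has already been dropped, only its definition is gone; a minimal sketch of recovering it is to point a new definition at the same (placeholder) path:
-- Only the metadata was removed by DROP TABLE; re-attach the data files
CREATE TABLE table_name
USING delta
LOCATION '<path-to-data>';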

AnalysisException: Operation not allowed: `CREATE TABLE LIKE` is not supported for Delta tables;

create table if not exists map_table like position_map_view;
While using this, it gives me an "operation not allowed" error.
As pointed out in the documentation, you need to use CREATE TABLE AS; just use LIMIT 0 in the SELECT:
create table map_table as select * from position_map_view limit 0;
I didn't find an easy way of getting CREATE TABLE LIKE to work, but I have a workaround. On Databricks Runtime you should be able to use SHALLOW CLONE to do something similar:
%sql
CREATE OR REPLACE TABLE $new_database.$new_table
SHALLOW CLONE $original_database.$original_table;
You'll need to replace the $-prefixed placeholders manually.
Notes:
This has the added side effect of preserving the table content, in case you need it.
Ironically, creating an empty table is much harder and involves manipulating the SHOW CREATE TABLE output with custom code; one workaround is sketched below.
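One hedged alternative, building on the SHALLOW CLONE above: clone the table and then delete its rows, which keeps the schema but drops the content (database and table names are placeholders):
CREATE OR REPLACE TABLE new_db.new_table
SHALLOW CLONE orig_db.orig_table;
-- Empty the clone; the schema and table properties remain
DELETE FROM new_db.new_table;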

Copying data in and out of Snowflake via Azure Blob Storage

I'm trying to copy into blob storage and then copy out of blob storage. The copy into works:
copy into 'azure://my_blob_url.blob.core.windows.net/some_folder/MyTable'
from (select *
from MyTable
where condition = 'true')
credentials = (azure_sas_token = 'my_token');
But the copy out fails:
copy into MyTable
from 'azure://my_blob_url.blob.core.windows.net/some_folder/MyTable'
credentials = (azure_sas_token = 'my_token');
the error is:
SQL Compilation error: Function 'EXTRACT' not supported within a COPY.
Weirdly enough, it worked once and hasn't worked since. I'm at a loss; nothing I search for turns up details on this error.
I know there's an approach I could take using stages, but I don't want to for a bunch of reasons, and even when I try with stages the same error presents itself.
Edit:
The cluster key definition is:
cluster by (idLocal, year(_ts), month(_ts), substring(idGlobal, 0, 1));
where idLocal and idGlobal are VARCHARs and _ts is a TIMESTAMP_TZ.
I think I've seen this before with a cluster key on the table (which I don't think is supported with COPY INTO): the EXTRACT function shown in the error is likely coming from the year(_ts) and month(_ts) expressions in the CLUSTER BY.
This is a bit of a hunch, but assuming this isn't occurring for all your tables, hopefully it leads to investigation of the table configuration, and perhaps that helps.
Alex, can you try a different function in the cluster key on your target table, like date_trunc('day', _ts)?
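If the cluster key is indeed the culprit, a hedged, untested sketch of redefining it so that no EXTRACT is involved (ALTER TABLE ... CLUSTER BY replaces the existing key):
-- Swap the year()/month() expressions, which compile to EXTRACT, for date_trunc
ALTER TABLE MyTable
CLUSTER BY (idLocal, date_trunc('day', _ts), substring(idGlobal, 0, 1));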
Thanks,
Chris

How can I describe a table in a Cassandra database?

$describe = new Cassandra\SimpleStatement(<<<EOD
describe keyspace.tablename
EOD
);
$session->execute($describe);
I used the above code, but it is not working.
How can I fetch field names and their data types from a Cassandra table?
Refer to the CQL documentation. DESCRIBE expects a table/schema/keyspace:
describe table keyspace.tablename
It's also a cqlsh command, not an actual CQL command, so it cannot be executed through the driver. To get this information, query the system tables. Try:
select * from system.schema_columns;
or, for more recent versions:
select * from system_schema.columns;
If you're using the PHP driver, you may want to check out http://datastax.github.io/php-driver/features/#schema-metadata
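For a single table, a minimal sketch filtering that system table (runs in cqlsh or through the driver; the keyspace and table names are placeholders):
-- system_schema.columns is keyed by keyspace_name and table_name
SELECT column_name, type
FROM system_schema.columns
WHERE keyspace_name = 'my_keyspace'
  AND table_name = 'my_table';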
Try desc table keyspace.tablename;
