Slick - This DBMS allows only a single AutoInc column to be returned from an INSERT

In Slick, some examples throw this exception:
slick.SlickException: This DBMS allows only a single column to be returned from an INSERT, and that column must be an AutoInc column.
Which DBMSs support this feature?

Postgres and Oracle support returning multiple columns.
MySQL does not support it, but Slick probably emulates it for auto-increment ids with the last_insert_id() call.
Not sure if any others support it.
(They should add this to the documentation.)
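For reference, this is the kind of statement that works natively on PostgreSQL, where RETURNING may list several columns (table and column names here are made up for illustration):

INSERT INTO users (name) VALUES ('alice')
RETURNING id, name, created_at;

On a DBMS without a RETURNING clause, only the generated AutoInc key can come back, which is what the exception is telling you.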

Related

Saving an existing item in table from dataFrame

I have dataframes in which a few rows already exist in the DB. I want to update a few columns of those existing rows. How can we do that?
I see we have SaveModes:
append and overwrite, which might serve the purpose, but there is a limitation in both cases.
With append, I am getting a primary key error, as this option tries to create a new row in the DB.
With overwrite, I will lose the values of the unchanged attributes in the tuple.
Can someone please suggest how I can update a few attributes (column values) of a row (tuple)?
This can be handled at the MySQL level; the concept is known as upsert.
Case when the primary key is new: the SQL will insert into the MySQL DB as a new row.
Case when the primary key already exists: you can use
INSERT ... ON DUPLICATE KEY UPDATE
which will update the existing row with the new entries/changes.
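A minimal sketch of the upsert (table and column names are made up):

INSERT INTO users (id, name, age)
VALUES (1, 'alice', 30)
ON DUPLICATE KEY UPDATE name = VALUES(name), age = VALUES(age);

If id 1 is new, a row is inserted; if it already exists, only name and age are rewritten and the other columns keep their values.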
The ideal way to handle such a use case is to insert your data into a staging table in your MySQL DB first, and put a trigger on that table to load the data into the original table. The trigger fires automatically when Spark writes into the staging table, as sketched below.
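A rough sketch of that staging pattern (staging_table, original_table, and the columns are all hypothetical):

CREATE TRIGGER trg_upsert_from_staging
AFTER INSERT ON staging_table
FOR EACH ROW
    INSERT INTO original_table (id, col_a, col_b)
    VALUES (NEW.id, NEW.col_a, NEW.col_b)
    ON DUPLICATE KEY UPDATE col_a = NEW.col_a, col_b = NEW.col_b;

Spark then only ever appends to staging_table, which sidesteps the primary key errors.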
In Spark, dataframes are immutable, so you cannot change a value in place. One way would be to read the complete table, make the modification, and write back the complete table in overwrite mode. This will take time.
If your modifications are always scoped to a particular group, say by user id or by date, then you can write the data partitioned on that column using partitionBy(). You can then read that partition using .filter(), do the modifications, and overwrite only that partition using insertInto() (available from pyspark 2.3.0).
Refer to this answer for other pyspark versions: Overwrite specific partitions in spark dataframe write method

Update existing rows, while altering Cassandra table

I have a table in Cassandra, and I need to add a field with default data.
Is there a way to add a default value to already existing rows, without updating all the data manually?
ALTER TABLE data ADD some_bool bool; // Make it false for all existing records.
(Docs: ALTER TABLE Does not update existing rows)
You have to take care of that at application level when you retrieve the rows. Cassandra will return data to the client as NULL, so everything depends on the driver and language you use. Check the driver's documentation to find out if the returned values are null or real values. They usually have an isNull method to perform such checks.
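If you do decide to backfill instead, note that Cassandra only updates rows by primary key, so there is no single statement that touches all rows; you would issue one update per key (the key column here is hypothetical):

UPDATE data SET some_bool = false WHERE id = 42;

which is exactly why handling the NULL at the application level is usually the cheaper option.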

full table join in powerpivot

In powerpivot, Related(Othertable[field]) retrieves the associated column from a related table.
I would like to import ALL such columns, doing the equivalent of a join.
Is it possible to do this?
Nicolas,
the smartest thing to do from my perspective is to merge your queries into one so that you can keep your original tables.
I would suggest using the new PowerQuery Merge functionality, which is very easy and works reliably (and also supports loading data directly into your PowerPivot data model).
Or you can write your own custom query in PowerPivot - if you use an MSSQL (or any other) database as your source, you can actually use a JOIN directly in the PowerPivot window with the Table Import Wizard, which makes things a bit easier.
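For the custom-query route, something along these lines pasted into the Table Import Wizard does the join on the database side (table and key names are made up):

SELECT o.*, c.CustomerName, c.Country
FROM Orders o
JOIN Customers c ON c.CustomerID = o.CustomerID;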
So the answer is: keep your original data tables intact, and create a new one that will be merging them together just for the purpose of your desired report.
Hope this helps.

Inserting data into TEXT type column in Informix

How do I insert data into a column with the type TEXT in Informix via SQL? If there are two other columns that I also want to insert/update - is the only way to save it in a file and LOAD it?
Or if I want to do it via SQL statements - can you give the syntax?
See my question: Consistent method of inserting TEXT column to Informix database using JDBC and ODBC
It is easy with JDBC and PreparedStatement. ODBC works a little differently, but it is able to insert a string with a simple SQL INSERT (without preparing).
The LOAD command works, and you can also use ESQL/C to do it (it is mentioned in this answer, which you may have already found).
About doing it with a simple INSERT:
You can use the VALUES clause to insert a value, but the only value that you can give that column is NULL. However, you can use the SELECT form of the INSERT statement to copy a TEXT or BYTE value from another table.
You can see the docs for the TEXT data type here.
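A quick sketch of both forms (table and column names are made up):

-- VALUES can only put NULL into the TEXT column:
INSERT INTO articles (id, title, body) VALUES (1, 'intro', NULL);

-- INSERT ... SELECT can copy a TEXT value from another table:
INSERT INTO articles (id, title, body)
    SELECT id, title, body FROM staging_articles;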

Adding columns to a sybase table with unique auto_identity index option

I've inherited a Sybase database that has the 'unique auto_identity index' option enabled on it. As part of an upgrade process I need to add a few extra columns to the tables in this database, e.g.
alter table mytable add <newcol> float default -1 not null
When I try to do this I get the following error:
Column names in each table must be unique, column name SYB_IDENTITY_COL in table #syb__altab....... is specified more than once
Is it possible to add columns to a table with this property enabled?
Update 1:
I created the following test that replicates the problem:
use master
sp_dboption 'esmdb', 'unique auto_identity index', true
use esmdb
create table test_unique_ids (test_col char)
alter table test_unique_ids add new_col float default -1 not null
The alter table command here produces the error. (Have tried this on ASE 15/Solaris and 15.5/Windows)
Update 2:
This is a bug in the Sybase dbisql interface, which the client tools Sybase Central and Interactive SQL use to access the database, and it only appears to affect tables with the 'unique auto_identity index' option enabled.
To work around the problem use a different SQL client (via JDBC for example) to connect to the database or use isql on the command line.
There should be no problem with ALTER TABLE on such columns; the error message indicates the problem is something else. I would need to see the CREATE TABLE DDL.
Even if we can't ALTER TABLE, which we will try first, there are several workarounds.
Responses
Hah! Internal Sybase error. Open a TechSupport case.
Workaround:
1. Make sure you get the exact DDL: sp_help. Note the IDENTITY columns and indices.
2. Create a staging table, exactly the same. Use the DDL from (1). Exclude the indices.
3. INSERT new_table SELECT old_table. If the table is large, break it into batches of 1000 rows per batch.
4. Now create the indices.
If the table is very large, AND time is an issue, then use bcp. You need to research that first; I am happy to answer questions afterwards.
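A rough sketch of steps (2)-(4) against the test table from Update 1 (names are illustrative; the real DDL should come from sp_help on the actual table):

create table test_unique_ids_new (
    test_col char,
    new_col float default -1 not null
)
insert test_unique_ids_new (test_col, new_col)
select test_col, -1 from test_unique_ids
-- then swap the tables and recreate the indices
exec sp_rename 'test_unique_ids', 'test_unique_ids_old'
exec sp_rename 'test_unique_ids_new', 'test_unique_ids'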
When I ran your sample code I first got the error:
The 'select into' database option is not enabled for database 'mydb'. ALTER TABLE with data copy cannot be done. Set the 'select into' database option and re-run
This is no doubt because the data within your table needs to be copied out, since the new column is NOT NULL. This will use tempdb, I think, and the error message you've posted refers to a temp table. Is it possible that this dboption has been accidentally enabled on tempdb?
It's a bit of a shot in the dark, as I only have 12.5 to test on here, and it works for me. Or it could be a bug.
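If you hit the 'select into' error above, enabling the option looks like this ('mydb' is a placeholder; sp_dboption accepts any unique fragment of the option name, and the option change should be followed by a checkpoint in that database):

use master
sp_dboption 'mydb', 'select into', true
use mydb
checkpoint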
