SQLite: update every 'null' string record in database

I'm not sure whether this was my mistake while importing the CSV into my database or whether it was in the CSV file itself, but almost every record that should be null now has a 'null' string value in it.
The problematic columns have integer or float datatypes; they are not strings at all...
Is there a reasonably quick way to update the entire database and substitute each 'null' record with a real null?
If not the entire database, a single table would also work, since I only have 3 tables (but it would be nicer to learn anyway).
I'm trying to do something like:
SELECT ALL 'null' values in DATABASE and REPLACE with NULL
Note: the question is for SQLite. I know there are solutions for SQL Server and other SQL dialects, but I couldn't find any for SQLite.

If your SQLite version is 3.8.5 or higher, you can replace the 'null' strings with real NULL values as follows:
UPDATE
table_name
SET
column_name = NULL
WHERE
column_name = 'null';
But the problem: you have to do this for each column in each table of your database.
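If you don't want to type those statements by hand for every column, here is a sketch that generates them from the schema itself, assuming SQLite 3.16.0 or newer (where pragma_table_info() can be used as a table-valued function); the fix_nulls.sql file name is just a placeholder:
sqlite> .mode list
sqlite> .output fix_nulls.sql
sqlite> SELECT 'UPDATE ' || m.name || ' SET ' || p.name ||
   ...>        ' = NULL WHERE ' || p.name || ' = ''null'';'
   ...> FROM sqlite_master AS m, pragma_table_info(m.name) AS p
   ...> WHERE m.type = 'table';
sqlite> .output stdout
sqlite> .read fix_nulls.sql
Each row of that SELECT is one UPDATE statement, so writing the output to a file and reading it back applies the fix to every column of every table in one go.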
REFERENCES
sqlite-replace-function (www.sqlitetutorial.net)
replace-update-all-rows-sqlite (dba.stackexchange.com)
Edit: Lengthy Approach
In this approach, we dump the data out of the SQLite file and use a code editor to find and replace.
Step 1
Dump all the data out of the table, then delete the original rows.
sqlite> .mode insert
sqlite> .output filename.sql
sqlite> select * from table_name;
sqlite> delete from table_name;
Step 2
Find and replace the data in this filename.sql:
Replace each quoted 'null' value with an unquoted NULL, and replace the placeholder table name table with your actual table_name.
Step 3
Import all the data back into the table:
sqlite> .read filename.sql
After this, close your terminal/cmd, open the SQLite database again, and check the table.
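For illustration only (readings is a made-up table name, not from the question), a dumped row in filename.sql would change roughly like this after the find-and-replace:
INSERT INTO table VALUES(1,'null','null');
-- becomes, after replacing 'null' with NULL and table with the real table name:
INSERT INTO readings VALUES(1,NULL,NULL);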

Related

sqlite insert function duplicate id error

I am trying to copy a DBF database into a SQLite database.
I have the following code:
import dataset
from dbfread import DBF  # assuming DBF comes from the dbfread package

db = dataset.connect('sqlite:///:memory:')
table = db['table1']
for record in DBF(settings['xbasefile']):
    table.insert(record)
This loads the records but fails to insert with a datatype mismatch on the ID column, because the rows coming in have a format like:
ID: text
field1: text
The call
table = db['table1']
seems to assume an integer id for the table. Is there any way to get this to do an insert with the text id that is in the table?
I ended up using the dbf2sqlite utility, which automatically creates the table with the correct columns from the DBF file.
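An alternative sketch, in case someone wants to stay with dataset: as far as I know, dataset only invents its default integer id primary key when it has to create the table itself; if the table already exists it reflects the existing schema. So pre-creating the table with a TEXT primary key (for example via db.query() on the same connection) should let the inserts go through. The column names here are placeholders for whatever is in the DBF:
CREATE TABLE table1 (
    ID TEXT PRIMARY KEY,   -- text id matching the DBF, so no integer id gets invented
    field1 TEXT
);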

Postgres sql Insert overwrite mode

Is there any INSERT OVERWRITE mode in PostgreSQL, like the one below?
INSERT OVERWRITE INTO TABLE table2 SELECT * FROM table1;
PostgreSQL has the TRUNCATE command to wipe the contents of a table but keep the table itself. You would have to use two statements:
TRUNCATE table2;
INSERT INTO table2 SELECT * FROM table1;
If you want to INSERT new records and UPDATE existing records, you can use the ON CONFLICT clause of the INSERT statement:
INSERT INTO table2 (id, name)
SELECT id, name FROM table1
ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name;
You need a primary key or UNIQUE constraint on the conflict column(s) to perform the conflict check. Full details can be found in the INSERT statement documentation: https://www.postgresql.org/docs/current/sql-insert.html
You can also choose to DO NOTHING on conflict which has the advantage of protecting against inserting duplicate records.
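A minimal sketch of that variant, under the same assumption that id carries the primary key or unique constraint:
INSERT INTO table2 (id, name)
SELECT id, name FROM table1
ON CONFLICT (id) DO NOTHING;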

Spark SQL query issue - SQL with Subquery doesn't seem to retrieve records

I have a Spark SQL query like:
SELECT * FROM xTable a WHERE EXISTS (filter subquery) AND a.date IN (SELECT MAX(b.date) FROM xTable b)
Under certain circumstances (when a filter table is not provided), my filter subquery should simply do a SELECT 1.
Whenever I run this in Impala it returns records; in Hive it complains that only one subquery expression is allowed. However, when I run it as Spark SQL on Spark 2.4, it returns an empty DataFrame. Any idea why? What am I doing wrong?
OK, I think I found the reason. It is not related to the query. It seems to be an issue with how the table was created from a CSV file in Hive.
When you select the source (the path to the CSV file in HDFS) and then, under format, check the 'Has Header' checkbox, the table seems to be created OK.
Then, when I execute the following in Hive or Impala:
SELECT MAX(date) FROM xTable
I get the max date back (where the date column is a string).
However, when I run the same query via Spark SQL, I get the result 'date' (the same text as the column header), which suggests the header row is being read as data.
If I remove the header from the CSV file, import it, and then manually create the headers and types, I do not face this issue.
Seems like some form of bug, or maybe a user error on my end.
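For reference, a minimal sketch of declaring the header in Spark SQL itself, so the header row is not treated as data (the path is a placeholder, not from the question):
CREATE TABLE xTable
USING csv
OPTIONS (
  path '/path/to/file.csv',   -- placeholder HDFS path
  header 'true',              -- treat the first row as column names, not data
  inferSchema 'true'          -- let Spark guess column types instead of all strings
);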

Inserting data into TEXT type column in Informix

How do I insert data into a column with the type TEXT in Informix via SQL? If there are two other columns that I also want to insert/update, is the only way to save it in a file and LOAD it?
Or, if I want to do it via SQL statements, can you give the syntax?
See my question: Consistent method of inserting TEXT column to Informix database using JDBC and ODBC
It is easy with JDBC and PreparedStatement. ODBC works a little differently, but it is able to insert a string with a simple SQL INSERT (without preparing).
The LOAD command works, and you can also use ESQL/C to do it (it is mentioned in this answer, which you might have already found).
About doing it with a simple INSERT:
You can use the VALUES clause to insert a value, but the only value that you can give that column is NULL. However, you can use the SELECT form of the INSERT statement to copy a TEXT or BYTE value from another table.
You can see the docs for the TEXT data type here.
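A short sketch of those two SQL-only options, using made-up table and column names (notes and old_notes) rather than anything from the question:
-- a literal value for a TEXT column in a VALUES clause can only be NULL
INSERT INTO notes (id, body) VALUES (1, NULL);
-- but copying an existing TEXT value from another table works
INSERT INTO notes (id, body)
    SELECT id, body FROM old_notes WHERE id = 2;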

Adding columns to a sybase table with unique auto_identity index option

I've inherited a Sybase database that has the 'unique auto_identity index' option enabled on it. As part of an upgrade process I need to add a few extra columns to the tables in this database, e.g.
alter table mytable add <newcol> float default -1 not null
When I try to do this I get the follow error:
Column names in each table must be unique, column name SYB_IDENTITY_COL in table #syb__altab....... is specified more than once
Is it possible to add columns to a table with this property enabled?
Update 1:
I created the following test that replicates the problem:
use master
sp_dboption 'esmdb', 'unique auto_identity index', true
use esmdb
create table test_unique_ids (test_col char)
alter table test_unique_ids add new_col float default -1 not null
The alter table command here produces the error. (Have tried this on ASE 15/Solaris and 15.5/Windows)
Update 2:
This is a bug in the Sybase dbisql interface (which the client tools Sybase Central and Interactive SQL use to access the database), and it only appears to affect tables with the 'unique auto_identity index' option enabled.
To work around the problem use a different SQL client (via JDBC for example) to connect to the database or use isql on the command line.
It should be no problem to ALTER TABLE with such columns; the error message indicates the problem is something else. I would need to see the CREATE TABLE DDL.
Even if we can't ALTER TABLE (which we will try first), there are several workarounds.
Responses
Hah! Internal Sybase error. Open a TechSupport case.
Workaround:
1. Make sure you get the exact DDL: sp_help. Note the IDENTITY columns and indices.
2. Create a staging table, exactly the same. Use the DDL from (1). Exclude the indices.
3. INSERT new_table SELECT old_table. If the table is large, break it into batches of 1000 rows per batch.
4. Now create the indices.
If the table is very large and time is an issue, then use bcp. You need to research that first; I am happy to answer questions afterwards.
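A rough sketch of that workaround against the test table from Update 1 (the staging table name and the final rename are illustrative only; in practice use the exact DDL reported by sp_help):
create table test_unique_ids_new (test_col char, new_col float default -1 not null)
insert test_unique_ids_new (test_col, new_col)
select test_col, -1 from test_unique_ids
-- after verifying the copy:
drop table test_unique_ids
exec sp_rename 'test_unique_ids_new', 'test_unique_ids'
-- then recreate any indices noted from sp_help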
When I ran your sample code, I first got the error:
The 'select into' database option is not enabled for database 'mydb'. ALTER TABLE with data copy cannot be done. Set the 'select into' database option and re-run
This is no doubt because the data within your table needs to be copied out, since the new column is NOT NULL. I think this uses tempdb, and the error message you've posted refers to a temp table. Is it possible that this dboption has been accidentally enabled for tempdb?
It's a bit of a shot in the dark, as I only have 12.5 to test on here, and it works for me. Or it could be a bug.
