I have a table named 'Test' that contains the columns:
id int
name nvarchar
Deleted bit
First I installed SubSonic in my project using SQL Server 2000. When I tried to delete some rows, it threw a null reference exception on this line:
repo.Update(); //object reference not set to...
I then configured SubSonic again with SQL Server 2008, and now deleting rows works fine, even when I change the connection string back to SQL 2000.
But my current problem is this: when I try to delete rows that do not exist, using a lambda expression like the one below, it throws a new exception:
Test.Delete(a=>a.id==100); //it doesn't exist!
//...
repo.Update();
//ExecuteNonQuery: CommandText property has not been initialized
I am using VS2008, SQL Server 2008 & 2000, and SubSonic 3.0.0.4.
I am trying to copy a dbf database into a sqlite database. I have the following code:
import dataset
from dbfread import DBF  # assuming DBF comes from the dbfread package

db = dataset.connect('sqlite:///:memory:')
table = db['table1']
for record in DBF(settings['xbasefile']):
    table.insert(record)
This loads the records but fails on insert with a datatype mismatch on the ID column, because each incoming row has a format like:
ID:text
field1:Text
The call
table = db['table1']
seems to assume an int id for the table. Is there any way to get this to insert using the text id that is already in the table?
Ended up using the dbf2sqlite utility, which automatically creates the table with the correct columns from the dbf.
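For reference, the dataset library can also be told up front to create the table with a text primary key, so it never assumes an integer id. A minimal sketch, assuming dbfread provides DBF and using a hypothetical example.dbf path:

import dataset
from dbfread import DBF

db = dataset.connect('sqlite:///:memory:')

# Create the table explicitly with a text primary key instead of letting
# dataset auto-create an integer id column on the first insert.
table = db.create_table('table1', primary_id='ID', primary_type=db.types.text)

for record in DBF('example.dbf'):  # hypothetical path
    table.insert(record)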
I'm new to SSIS. I'm trying to extract data from an Excel file into a Postgres database.
I have tried a small example file that contains only one string column "column1" with 3 lines:
ab
ac
abb
And I have created one table with one column in Postgres.
I have created a task with an Excel source and an ODBC destination. The connection worked well and I could see the data, but when I execute the task I get empty strings in the database.
I don't know what the problem is; can anyone help?
PS: I'm using Visual Studio 2019 and Postgres 9.4 with pgAdmin 3, and I imported the Excel file as 97-2003.
I used an OLE DB connection instead of an ODBC connection and it worked.
I am using SQLAlchemy and the pandas method to_sql(if_exists="replace") to insert data into a table in PostgreSQL.
Now, if the table doesn't exist, then everything works ok.
But if the table already exists (I know it does, since I can see it in pgAdmin), I get the following error: sqlalchemy.exc.NoSuchTableError:
SQLAlchemy is trying to drop this table to replace it with the new one, but it can't find the table for some reason?
Here is a snippet of the code:
from sqlalchemy import create_engine

ENGINE = create_engine(...)

with ENGINE.begin() as con:
    df.to_sql("table_name", con=con, index=False, if_exists="replace")
This throws the above error. I have tried specifying the schema, but that throws the same error too.
Why isn't SQLAlchemy finding the table even though it's there?
EDIT: If I remove the if_exists="replace", I then get the error: ValueError: Table tablename already exists.
I too faced this issue and figured out that the table existed with the following DDL:
create table table_name();
Delete that table and try again; it will work.
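If you would rather drop that leftover zero-column table from code instead of pgAdmin, here is a minimal sketch (the connection URL and the DataFrame are placeholders, not from the question):

import pandas as pd
from sqlalchemy import create_engine, text

ENGINE = create_engine("postgresql://user:pass@localhost/dbname")  # placeholder URL
df = pd.DataFrame({"col1": [1, 2, 3]})  # stand-in for the real data

# Drop the empty leftover table so to_sql can recreate it cleanly.
with ENGINE.begin() as con:
    con.execute(text("DROP TABLE IF EXISTS table_name"))

df.to_sql("table_name", con=ENGINE, index=False, if_exists="replace")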
I'm not sure whether this was my mistake while importing the CSV into my database or whether it was in the CSV file itself, but almost every record that should be null now has the string 'null' in it.
The problematic columns have integer or float datatypes; they are not strings at all...
Is there a reasonably quick way to update the entire database and substitute a real null for each 'null' record?
If not the entire database, a single table would also work, since I only have 3 tables (but it would be nicer to learn how for the whole database anyway).
I'm trying to do something like:
SELECT ALL 'NULL' values in DATABASE and REPLACE with =NULL
***The question is about SQLite. I know there are solutions for SQL Server and other SQL dialects, but I couldn't find any for SQLite.
You can turn each 'NULL' string into a real NULL with a plain UPDATE; note that REPLACE(column_name, 'NULL', '') would leave an empty string behind rather than a real NULL:
UPDATE table_name
SET column_name = NULL
WHERE column_name = 'NULL';
But the problem remains: you would have to do this for each column in each table of your database. A small script can automate it, as sketched below.
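A minimal sketch using Python's built-in sqlite3 module, assuming a placeholder database file name and that the stray values are spelled exactly 'NULL' (use 'null' if that matches your data):

import sqlite3

con = sqlite3.connect("mydb.sqlite")  # placeholder file name
cur = con.cursor()

# Walk every user table and every column, turning the literal
# string 'NULL' into a real NULL.
tables = [row[0] for row in cur.execute(
    "SELECT name FROM sqlite_master "
    "WHERE type = 'table' AND name NOT LIKE 'sqlite_%'")]
for table in tables:
    columns = [row[1] for row in cur.execute(f'PRAGMA table_info("{table}")')]
    for column in columns:
        cur.execute(f'UPDATE "{table}" SET "{column}" = NULL '
                    f"WHERE \"{column}\" = 'NULL'")
con.commit()
con.close()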
REFERENCES
sqlite-replace-function (www.sqlitetutorial.net)
replace-update-all-rows-sqlite (dba.stackexchange.com)
Edit: Lengthy Approach
In this approach, we take the data out of the sqlite file and use a code editor to find and replace.
Step 1
Take all the data out of the database, then delete the original data:
sqlite3> .mode insert
sqlite3> .output filename.sql
sqlite3> select * from table_name;
sqlite3> delete from table_name;
Step 2
Find and Replace data from this filename.sql
Replace the string NULL with nothing, and also replace the default table name table with table_name.
Step 3
Import all the data back into the table:
sqlite3> .read filename.sql
After this, close your terminal/cmd, open the sqlite database again, and check the table.
I've inherited a Sybase database that has the 'unique auto_identity index' option enabled on it. As part of an upgrade process I need to add a few extra columns to the tables in this database, e.g.:
alter table mytable add <newcol> float default -1 not null
When I try to do this I get the following error:
Column names in each table must be unique, column name SYB_IDENTITY_COL in table #syb__altab....... is specified more than once
Is it possible to add columns to a table with this property enabled?
Update 1:
I created the following test that replicates the problem:
use master
sp_dboption 'esmdb', 'unique auto_identity index', true
use esmdb
create table test_unique_ids (test_col char)
alter table test_unique_ids add new_col float default -1 not null
The alter table command here produces the error. (I have tried this on ASE 15/Solaris and 15.5/Windows.)
Update 2:
This is a bug in the Sybase dbisql interface, which the client tools Sybase Central and Interactive SQL use to access the database; it only appears to affect tables with the 'unique auto_identity index' option enabled.
To work around the problem, use a different SQL client (via JDBC, for example) to connect to the database, or use isql on the command line.
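For illustration, a minimal Python sketch of running the ALTER through a non-dbisql client; pyodbc, the DSN string, and the column name are placeholders rather than anything confirmed by this thread:

import pyodbc

# Placeholder DSN/credentials; any connection path that bypasses dbisql should do.
con = pyodbc.connect("DSN=sybase_ase;UID=user;PWD=password", autocommit=True)
cur = con.cursor()
cur.execute("alter table mytable add newcol float default -1 not null")
con.close()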
It should be no problem to ALTER TABLE with such columns; the error message indicates the problem lies elsewhere. I would need to see the CREATE TABLE DDL.
Even if we can't ALTER TABLE, which we will try first, there are several workarounds.
Responses
Hah! Internal Sybase error. Open a TechSupport case.
Workaround (a sketch of the batched copy follows below):
1. Make sure you get the exact DDL (sp_help). Note the IDENTITY columns and indices.
2. Create a staging table, exactly the same, using the DDL from (1), but exclude the indices.
3. INSERT new_table SELECT old_table. If the table is large, break it into batches of 1000 rows per batch.
4. Now create the indices.
If the table is very large AND time is an issue, then use bcp. You need to research that first; I am happy to answer questions afterwards.
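For step 3, a minimal sketch of the batched copy, again via pyodbc; the DSN, the id key used in the NOT EXISTS check, and the table names are assumptions for illustration, not a tested recipe:

import pyodbc

con = pyodbc.connect("DSN=sybase_ase;UID=user;PWD=password", autocommit=True)
cur = con.cursor()

# ASE honours set rowcount for insert...select, so each pass moves at most
# 1000 rows; the NOT EXISTS keeps passes from re-copying rows.
cur.execute("set rowcount 1000")
while True:
    cur.execute("insert into new_table "
                "select * from old_table o "
                "where not exists (select 1 from new_table n where n.id = o.id)")
    remaining = cur.execute(
        "select count(*) from old_table o "
        "where not exists (select 1 from new_table n where n.id = o.id)"
    ).fetchone()[0]
    if remaining == 0:
        break
cur.execute("set rowcount 0")  # reset the session limit
con.close()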
When I ran your sample code, I first got this error:
The 'select into' database option is not enabled for database 'mydb'. ALTER TABLE with data copy cannot be done. Set the 'select into' database option and re-run
This is no doubt because the data in your table needs to be copied out, since the new column is NOT NULL. I think this uses tempdb, and the error message you've posted refers to a temp table. Is it possible that this dboption has been accidentally enabled for tempdb?
It's a bit of a shot in the dark, as I only have 12.5 to test on here, and it works for me. Or it could be a bug.