Disabling OIDs in AWS Redshift - Excel

I'm running into an ODBC issue with OIDs from Redshift.
I have to build dynamic reports in Excel using an ODBC connection, and it says it can't find the Oid column.
I'm waiting to see if the DBA can change the default settings, but none of the PostgreSQL commands for disabling OIDs work in Redshift. Suggestions? Please, no comments on Excel as a reporting tool; it's all I've got at the moment.
I've tried the following to no avail:
CREATE TABLE (
...
) WITHOUT OIDS;
CREATE TABLE (
...
) WITH ( OIDS = FALSE );
ALTER TABLE [tablename] SET WITHOUT OIDS;

We had the exact same problem and we fixed it by changing the way Excel sets up the connection to AWS Redshift. Instead of creating the connection with From Microsoft Query, we did it with From Data Connection Wizard.
By the way, this also lets you see the connection string (which by default includes FAKEOIDINDEX=0;SHOWOIDCOLUMN=0).
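For reference, those switches can also be set directly in a DSN-less connection string from any ODBC client. Below is a minimal pyodbc sketch; it assumes the PostgreSQL ODBC driver (psqlODBC) is what sits underneath, and the driver name, cluster endpoint, database and credentials are all placeholders:

import pyodbc

# Driver name, host, database and credentials are placeholders.
# FAKEOIDINDEX=0 and SHOWOIDCOLUMN=0 keep the driver from exposing a fake OID column.
conn = pyodbc.connect(
    "Driver={PostgreSQL Unicode};"
    "Server=mycluster.abc123.us-east-1.redshift.amazonaws.com;Port=5439;"
    "Database=mydb;Uid=myuser;Pwd=mypassword;"
    "FAKEOIDINDEX=0;SHOWOIDCOLUMN=0;"
)
cursor = conn.cursor()
cursor.execute("SELECT * FROM my_table LIMIT 10")
rows = cursor.fetchall()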
PS: There is more information available on what OIDs are and why Excel (Microsoft Query) uses them to handle rows (an OID is a kind of unique row ID). But if you write your own SQL queries, Excel should not bother you with OIDs.
Hope that helps.

Related

SqlState 24000, Invalid cursor state

I am trying to help one of our developers resolve an Azure SQL DB error. He has attempted to run a script, connecting using sqlcmd and (I presume) ODBC. It seems that no matter what he does, he receives the error message "SqlState 24000, Invalid cursor state".
His script consists of roughly 80 "INSERT INTO ... WHERE NOT EXISTS (sub-select)" statements. Some of the sub-selects return zero records.
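(For illustration only, with made-up table and column names, each statement in the script has roughly the shape below; when the sub-select returns nothing, the INSERT legitimately affects zero rows. Shown via pyodbc rather than sqlcmd purely to keep the example self-contained:)

import pyodbc

# Server, database, credentials and table names are placeholders.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 13 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=mydb;"
    "UID=myuser;PWD=mypassword;Encrypt=yes;",
    autocommit=True,
)
cursor = conn.cursor()
cursor.execute("""
INSERT INTO dbo.target_table (id, name)
SELECT s.id, s.name
FROM dbo.source_table AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.target_table AS t WHERE t.id = s.id);
""")
print(cursor.rowcount)  # can legitimately be 0 when the sub-select returns nothing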
I read this post, which is admittedly almost a year old now. The short version seems to be "this is a known Azure SQL DB bug".
sqlcmd on Azure SQL Data Warehouse - SqlState 24000, Invalid cursor state after INSERT statement
I know for certain my developer has been able to run these statements previously. Is that just the nature of a bug - sometimes it occurs and sometimes it doesn't? Does he need to use a different ODBC driver? Any other suggestions?
Please make sure you are using ODBC driver 13.1 or later. You can download it from here.
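A quick way to confirm which driver and version a given connection actually resolves to is to ask the ODBC layer itself. A minimal pyodbc sketch, with placeholder server, database and credentials:

import pyodbc

# Placeholders below; substitute your Azure SQL server and credentials.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 13 for SQL Server};"
    "SERVER=yourserver.database.windows.net;DATABASE=yourdb;"
    "UID=youruser;PWD=yourpassword;Encrypt=yes;"
)

# SQLGetInfo reports the driver the connection ended up using.
print(conn.getinfo(pyodbc.SQL_DRIVER_NAME))  # e.g. msodbcsql13.dll
print(conn.getinfo(pyodbc.SQL_DRIVER_VER))   # should read 13.01.xxxx or later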

VBA odbc connection accessing a single library only

I have successfully connected to an AS400 server. But whenever I execute an SQL statement such as
select * from nosd0
it doesn't work, because nosd0 is in lib1/fil1(nosd0);
it gives an error saying nosd0 is not in lib2.
When I execute the query in STRSQL on the AS400 it works fine.
I tried creating an alias and it's malfunctioning. Please, I really need some help on this one.
The alias is working; I was accessing the wrong file.
OK, I figured out the problem. This will also help all those who want to connect to their AS400 iSeries using VBA. ;)
My problem above is that when I run my query on the box, it accesses lib1/nosd0, while in VBA I was trying to get lib2/fil1(nosd0), which is a description of the table nosd0 itself. The simple solution is to query
select * from lib1.nosd0
Furthermore, when connecting to an AS400 iSeries using ODBC, there is a parameter called DBQ; see:
Connection String Parameters
My final connection string would be:
ConnectString = "Driver={ISeries Access ODBC Driver};System=" & DCServer(I) & ";Uid=--;Pwd=--;NAM=0;DBQ=lib1,*ALL;"

pyodbc fetchall() returns no results when a column returned by the query contains too much data

Setup: I am using Python 3.3 on a Windows 2012 client.
I have a select query running using pyodbc which is not returning any results via fetchall(). I know the query works fine, because I can take it out and run it from Microsoft SQL Server Management Studio without any issues.
I can also remove one column from the select list and the query will return results. For the database row in question, this column contains a large amount of XML data (more than 10,000 characters), so it seems as though there is some buffer overflow issue causing fetchall() to fail, though it doesn't throw any exceptions. I have tried googling around and have seen rumors of a config option to raise the buffer size, but I haven't been able to nail down exactly how to do it, or what a workaround would be.
Is there a configuration option that I can use, or any alternative to pyodbc?
Disclaimer: I have only been using Python for about two weeks now, so I am still quite the noob; though I have made every attempt to research my problems thoroughly, this one has proven to be elusive.
On a side note, I tried using odbc instead of pyodbc, but the same query throws this oddball error, which Google isn't helping me solve either:
[ERROR] An exception while executing the Select query: [][Negative size passed to PyBytes_FromStringAndSize]
It seems this issue was resolved by changing my SQL connection string
FROM:
DRIVER={SQL Server Native Client 11.0}
TO:
DRIVER={SQL Server}
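In pyodbc terms the fix amounts to swapping the DRIVER keyword in the connection string. A minimal sketch, with placeholder server, database, table and column names:

import pyodbc

# Was: "DRIVER={SQL Server Native Client 11.0}". Switching to the older
# "SQL Server" driver is what made fetchall() return the row with the large XML value.
conn = pyodbc.connect(
    "DRIVER={SQL Server};"
    "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes;"
)
cursor = conn.cursor()
cursor.execute("SELECT id, big_xml_column FROM my_table WHERE id = ?", 42)
rows = cursor.fetchall()  # previously came back empty once the XML exceeded ~10,000 characters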

How to get Excel to reliably execute sp_executesql from a query table on a worksheet?

In MS Excel, if you create a QueryTable with Microsoft Query, and your SQL query cannot be visually presented by Microsoft Query, then you are not allowed to provide parameters for that query. Which is a shame, so there is this awesome technique that allows parameters anyway:
{CALL sp_executesql (N'select top (@a) * from mytable', N'@a int', ?)}
You provide the query in the ODBC CALL form and it works with parameters.
Unless it does not.
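(For reproducing the behaviour outside Excel, the same ODBC CALL escape can be issued from any ODBC client. A minimal pyodbc sketch, with an illustrative DSN and table name; the ? is bound by the driver as the extra argument that sp_executesql maps onto @a:)

import pyodbc

conn = pyodbc.connect("DSN=MySqlServerDsn;Trusted_Connection=yes;")
cursor = conn.cursor()
cursor.execute(
    "{CALL sp_executesql (N'select top (@a) * from mytable', N'@a int', ?)}",
    10,
)
rows = cursor.fetchall()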
While on some computers it works flawlessly, on other computers Excel throws an error when trying to refresh the query table:
For SQL Native Client 10: Invalid parameter number
For SQL Native Client 11: Procedure or function sp_executesql has too many arguments specified.
With a profiler I can see Excel (actually, the native client when poked by Excel) is doing this before actually executing sp_executesql:
exec sp_describe_undeclared_parameters N' EXEC sp_executesql N''<actual query>;'',N''<declared parameters>'',@P1 '
Here @P1 is the parameter placeholder that is supposed to go to sp_executesql later, and that is where sp_describe_undeclared_parameters fails. It does not expect any custom parameters for sp_executesql, only the two intrinsic ones, @stmt and @params. If I manually remove the ,@P1 bit from the query, it executes fine in all cases.
So that is the problem: on some computers the above auto-generated sp_describe_undeclared_parameters works with the unnecessary/wrong ,@P1 bit, and on some it fails.
We need to make it work on all computers.
Weird things to consider:
I fail to see anything common among the computers that don't have the problem. Bitness and the Windows version do not seem to matter.
I fail to manually execute the said query with the ,@P1 bit attached: whatever tool I use, I get the "too many arguments" error, and yet Excel is able to execute it without a problem when it feels like it. I can see with the profiler that this is the exact query that hits the server. Maybe it has something to do with a very peculiar combination of connection settings, but they appear to be the same on all computers (the data source is an ODBC system data source using SQL Server Native Client 11, and all parameters are the same on all tabs across the computers).

Odd Oracle + .net behaviour when comparing types

My workplace has a .NET application supplied to us by a postal service; it connects to an Oracle database running on the same machine and is responsible for registering, storing and printing shipping labels.
Seeing as the database host etc. is configurable, we asked the company if the application could be used over the network (simply copying it over to another machine resulted in "literal does not match format string" errors); all we were told is "it isn't possible". Not wanting to take no for an answer, I poked around the exe with Reflector.
Together with Oracle's v$sqlarea view, I pinpointed the errors to a few date comparison functions, but I have no idea why the application was working in the first place on the original machine.
The original application uses queries similar to
SELECT * FROM shipping WHERE date = '2011/03/28' --error
easily fixed with something like
SELECT * FROM shipping WHERE to_char(date, 'yyyy/mm/dd') = '2011/03/28'
Why does the original application work without throwing any errors? The incorrect query pops up in the v$sqlarea view when the application is used on the original host; if I copy the query and run it manually with anything else, it throws the error, and if I run the application on any other machine it throws the error too. Is there some setting in Oracle that is modifying queries on the fly, but only for queries originating from the local machine, while storing the original query in v$sqlarea?
This sounds like a regional settings difference between the two client machines. Formatting of dates depends on the culture used to convert the date to a string in .NET, and unless the application specifies a culture, it uses the settings of the user currently running the application. This is obviously a problem if the database engine is expecting dates in a certain format. The problem is far less likely to arise with parametrized queries, where the date parameters are passed separately from the query text and as a date datatype instead of a string.
If you work with dates, you must avoid String.Format-based query generation. Use parametrized selects and set those values through parameters.
// Bind the date as a DATE parameter instead of embedding a formatted string in the SQL text.
OracleCommand cmd = new OracleCommand("SELECT * FROM shipping WHERE date = :dateParam", connection);
var param = cmd.Parameters.Add("dateParam", OracleDbType.Date);
param.Value = DateTime.Now;
It worked because the date format happened to match the regional date/time settings on the developer machine and the expectations of the target database.
In other words, the issue comes down to an incorrect date/time format in the value you are providing.
This could be because of regional settings on the server. Please check that the new server is configured for the same locale (EN-GB, EN-US, or whatever the original server is configured to use).
