Exception while retrieving schema from Informix database using Db2Client DB2Connection.GetSchema("Tables") - entity-framework-5

I am getting this error when Entity Framework tries to retrieve the schema from an Informix database:
IBM.Data.DB2.DB2Exception (0x80004005): ERROR [IX000] [IBM][IDS/UNIX64] SQL0969N
There is no message text corresponding to SQL error "-23103" in the message file on this workstation. The error was returned from module "IFX11500" with original tokens "". SQLSTATE=IX000
Please help.

You probably have a LOCALE misconfiguration between the client and the server, or a problem with the client installation.
Confirm the locale used by the database and set CLIENT_LOCALE to a compatible value.
-23103 Code-set conversion function failed due to an illegal sequence or invalid value.
Illegal or invalid characters occur in the character string. The program could not execute the code-set conversion on the characters that this string contains. Reexamine the input string for illegal or invalid characters and re-execute the program.
If you have an alternative way to connect to the database, run this SQL:
select * from sysmaster:sysdbslocale
You will get information like this:
dbs_dbsname   dbs_collate
sysmaster     en_US.819
sysha         en_US.819
sysuser       en_US.819
onpload       en_US.819
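For example, given the en_US.819 (ISO 8859-1) collation shown above, a compatible client locale can be set before the connection is opened. The sketch below only illustrates where CLIENT_LOCALE and DB_LOCALE fit, using a hypothetical Informix ODBC DSN from Python rather than from Entity Framework; the DSN name and the exact locale strings are assumptions to adapt to your client installation:

import os
import pyodbc

# The database collation above is en_US.819 (code set 819 = ISO 8859-1), so the
# client locale must use a compatible code set. These names are assumptions;
# use whatever locale names your Informix client actually ships with.
os.environ["CLIENT_LOCALE"] = "en_US.8859-1"
os.environ["DB_LOCALE"] = "en_US.8859-1"

# "ids_dsn" is a hypothetical ODBC data source pointing at the same Informix server.
conn = pyodbc.connect("DSN=ids_dsn;UID=informix;PWD=secret")
for row in conn.cursor().execute("select dbs_dbsname, dbs_collate from sysmaster:sysdbslocale"):
    print(row)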
These links may help you: dbaccess info, database locale.

This problem was temporarily solved by recycling the database service.
I am still trying to find the root cause, as this may not be the solution every time.
Thanks,
Phani
The above problem was resolved after installing a patch. The problem does not exist in newer versions of Informix.

Related

Push in existing local table failure (windows): InvalidRegionNumberException then IllegalArgumentException

I want to push data into an already existing table, single column family, no records.
I am using shc-core:1.1.1-2.1-s_2.11 on a Windows machine. I have HBase 1.2.6 installed and use Scala 2.11.8.
When I try to push data, I first get the following error: org.apache.spark.sql.execution.datasources.hbase.InvalidRegionNumberException: Number of regions specified for new table must be greater than 3.
After following the advice of this link https://github.com/hortonworks-spark/shc/issues/249#issue-318285217, I added: HBaseTableCatalog.newTable -> "5" to my options.
It still failed but with: java.lang.IllegalArgumentException: Can not create a Path from a null string.
Following this link: https://github.com/hortonworks-spark/shc/issues/151#issuecomment-313800739, I added "tableCoder":"PrimitiveType" to my catalog.
Still facing the same error.
I saw people asking for clarification about this issue (https://github.com/hortonworks-spark/shc/issues/249#issuecomment-463528032).
It is a known issue and apparently it has been fixed (https://github.com/hortonworks-spark/shc/issues/155#issuecomment-315236736).
I do not know what to do next.
Is there a solution for this?
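For reference, here is roughly how the catalog and write options described above fit together. This is only a sketch of the setup in question, not a fix: the table, column family and column names are placeholders, it assumes shc-core is on the Spark classpath, and "newtable" is the string option key behind HBaseTableCatalog.newTable:

from pyspark.sql import SparkSession

# Hypothetical catalog; "tableCoder":"PrimitiveType" sits inside the "table" object,
# as suggested in shc issue 151.
catalog = """{
  "table":{"namespace":"default", "name":"my_table", "tableCoder":"PrimitiveType"},
  "rowkey":"key",
  "columns":{
    "key":{"cf":"rowkey", "col":"key", "type":"string"},
    "value":{"cf":"cf1", "col":"value", "type":"string"}
  }
}"""

spark = SparkSession.builder.appName("shc-write-sketch").getOrCreate()
df = spark.createDataFrame([("row1", "v1")], ["key", "value"])

# newtable="5" is the region count added on the advice of issue 249.
(df.write
   .options(catalog=catalog, newtable="5")
   .format("org.apache.spark.sql.execution.datasources.hbase")
   .save())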

SqlState 24000, Invalid cursor state

I am trying to help one of our developers resolve an Azure SQL DB error. He has attempted to run a script, connecting using sqlcmd and (I presume) ODBC. It seems that no matter what he does, he receives the error message "SqlState 24000, Invalid cursor state".
His script consists of roughly 80 "insert into table where not exists sub-select" statements. Some of the sub-selects return zero records.
I read this post which is admittedly almost a year old now. The short version seems to be "this is a known Azure SQL DB bug".
sqlcmd on Azure SQL Data Warehouse - SqlState 24000, Invalid cursor state after INSERT statement
I know for certain my developer has been able to run these statements previously. Is that just the nature of a bug - sometimes it occurs and sometimes it doesn't? Does he need to use a different ODBC driver? Any other suggestions?
Please make sure you are using ODBC driver 13.1 or later. You can download it from here.
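If it helps to reproduce the pattern from code, the sketch below shows the same kind of statement run through pyodbc with a recent driver; "ODBC Driver 17 for SQL Server" satisfies the 13.1-or-later advice, and the server, database, table and column names are placeholders:

import pyodbc

# Hypothetical Azure SQL DB connection using a recent ODBC driver.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=mydb;UID=myuser;PWD=mypassword"
)
cur = conn.cursor()

# One of the ~80 "insert where not exists sub-select" statements described above;
# some of these legitimately insert zero rows.
cur.execute("""
    INSERT INTO dbo.target_table (id, name)
    SELECT s.id, s.name
    FROM dbo.source_table AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.target_table AS t WHERE t.id = s.id)
""")
conn.commit()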

"appcmd start site" command fails with message "The object identifier does not represent a valid object."

When I run
C:\Windows\System32\inetsrv\appcmd.exe start site /site.name:"Some_site_name"
on Windows Server 2008 R2, it fails with message
ERROR ( hresult:800710d8, message:Command execution failed.
The object identifier does not represent a valid object.
)
... although the site exists.
I forgot to check whether a binding is present! A site without bindings cannot be started, and the utility thinks that "The object identifier does not represent a valid object." is a good way to remind me of this.
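A quick way to check for that case is appcmd's list verb, which prints the site's bindings along with its state (exact output format varies by IIS version):
C:\Windows\System32\inetsrv\appcmd.exe list site "Some_site_name"
If the bindings field in the resulting SITE line is empty, you are in the missing-binding situation described above.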
When I encountered this error it was because I had a typo in Advanced Settings>Enabled Protocols.
Instead of specifying "http,NET.TCP" I had "http.NET.TCP" (note the comma/period difference) and that caused this error too.
It took me ages to spot it, so I thought I'd publish the solution just in case there's another, equally shortsighted, developer out there, scratching their head.

How to resolve the "Error occurred while loading translation library" Linux ODBC connection issue?

After installing unixODBC and the Netezza drivers on a Linux client and configuring ~/.odbcinst.ini and ~/.odbc.ini data sources according to the documentation, attempting to connect to a Netezza PureData warehouse via some tools may yield an error similar to:
(Error) ('HY000', '[HY000] [unixODBC]Error occurred while loading translation library (45) (SQLDriverConnect)')
For example, this was output by the Python SQLAlchemy library via a DBAPI connection on a RHEL7 box (though it has been reported from other distributions and other tools).
Does anyone know the detail of what is happening and how to properly resolve it?
Extra info: I had a similar issue with the exact same error message.
It turns out I had /etc/odbcinst.ini set up with UnicodeTranslationOption=utf8 for the [NetezzaSQL] driver.
A wrong (32-bit) driver was also being used.
Fixed with /etc/odbcinst.ini:
[NetezzaSQL]
Driver = /opt/netezza/lib64/libnzodbc.so
DebugLogging = true
LogPath = /tmp
Trace = 0
TraceAutoStop = 0
TraceFile = /tmp/trace.log
UsageCount = 1
One way to work around the issue is to add the following lines to the specific data-source section of your ~/.odbc.ini file:
TranslationDLL=
TranslationName=
TranslationOption=
I don't know what other implications doing that may have (for example on non-English error messages or using unusual character encodings).
Set UnicodeTranslationOption to UTF16 instead of UTF8 in your odbc.ini. Make sure that setting is present both in the odbc.ini recognized by 'odbcinst -j' and in a spare copy in your home directory called just 'odbc.ini'; the latest Netezza libraries seem to read the configuration from 'odbc.ini' rather than the hidden '.odbc.ini'.
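Concretely, the change is a single line, for example in the [NetezzaSQL] section shown above (depending on how your setup is configured, the same key may instead live in the data-source entry of odbc.ini):
UnicodeTranslationOption = UTF16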
You can also enable the Netezza ODBC debug log to see the detailed error. In odbcinst.ini, set
DebugLogging = true

pyodbc fetchall() returns no results when a column returned by the query contains too much data

Setup: I am using Python 3.3 on a Windows 2012 client.
I have a select query running via pyodbc which is not returning any results from fetchall(). I know the query works fine because I can take it out and run it from Microsoft SQL Management Studio without any issues.
I can also remove one column from the select list and the query will return results. For the database row in question, this column contains a large amount of XML data (> 10,000 characters), so it seems as though there is some buffer overflow issue causing fetchall() to fail, though it doesn't throw any exceptions. I have tried googling around and I have seen rumors of a config option to raise the buffer size, but I haven't been able to nail down exactly how to do it, or what a workaround would be.
Is there a configuration option that I can use, or any alternative to pyodbc?
Disclaimer: I have only been using Python for about two weeks now, so I am still quite the noob; though I have made every attempt to research my problems thoroughly, this one has proven elusive.
On a side note, I tried using odbc instead of pyodbc, but the same query throws this oddball error which Google isn't helping me solve either:
[ERROR] An exception while executing the Select query: [][Negative size passed to PyBytes_FromStringAndSize]
It seems this issue was resolved by changing my SQL connection string
FROM:
DRIVER={SQL Server Native Client 11.0}
TO:
DRIVER={SQL Server}
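As a sketch of that change (server, database, table and column names are placeholders), the only difference is the DRIVER keyword in the connection string:

import pyodbc

# Switching from "SQL Server Native Client 11.0" to the generic "SQL Server"
# driver is what resolved the silent fetchall() failure described above.
conn = pyodbc.connect(
    "DRIVER={SQL Server};"  # was: DRIVER={SQL Server Native Client 11.0}
    "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes"
)
cur = conn.cursor()
cur.execute("SELECT id, big_xml_column FROM dbo.some_table")  # hypothetical table with a >10,000 character XML column
rows = cur.fetchall()
print(len(rows))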
