Rocket Software SQL ODBC, Retrieves Wrong Field Names - u2

I installed Rocket Software's ODBC driver to access a UniData database through SQL Server 2008. The idea is to write SQL procedures for populating SQL tables, but the problem I am running into is that the field names come back wrong. For example, Select * from MyDb_Members returns field names like Member{Name and Phone{number, whereas on the UniData side these fields are named Member Name and Phone Number.
Do you know if there is a way to run SQL queries with those field names without getting SQL query errors? It looks like SQL Server does not like that naming convention:
Select Member{Name from MyDb_Members
Error near '{'
Thanks for your help

Try formatting the query using the NATIVE keyword. I don't use UniData, but in UniVerse this works well. I have a lot of columns that contain periods, and those are illegal column names in standard SQL.
{ NATIVE "Select * from MyDb_Members" }
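In case you are hitting the U2 ODBC driver directly (for example from Python with pyodbc), the escaped form can be passed through as-is. A minimal sketch, assuming a hypothetical DSN named UNIDATA_DSN and placeholder credentials:
import pyodbc

# DSN name and credentials below are placeholders for the U2 ODBC data source
con = pyodbc.connect("DSN=UNIDATA_DSN;UID=myuser;PWD=mypassword")
cur = con.cursor()

# the { NATIVE "..." } escape hands the statement to the server unparsed (per the answer above),
# so field names containing spaces or braces never reach the ODBC SQL parser
cur.execute('{ NATIVE "SELECT * FROM MyDb_Members" }')
for row in cur.fetchall():
    print(row)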

Related

Can't receive text from PostgreSQL into Excel via ODBC due to character encoding problem (UTF-8 vs Win1250)

My base scenario is that I want to make an Excel report with data from a PostgreSQL DB.
I get the data via ODBC, making a simple linked table with Power Query.
For DSN I choose (None), then I write the connection string and the SQL statement. Generally it works fine, but with one column it doesn't. I receive the following error message:
ODBC: ERROR [22P05] ERROR: character with byte sequence 0xc2 0xb2 in encoding "UTF8" has no equivalent in encoding "WIN1250";Error while executing the query
So that much is clear: the source is in UTF-8 and contains characters that have no equivalent in Win1250.
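For reference, a quick Python check confirms what the byte sequence in the error message is (0xc2 0xb2 is the UTF-8 encoding of U+00B2, superscript two):
# decode the offending UTF-8 byte sequence from the error message
print(bytes([0xc2, 0xb2]).decode("utf-8"))  # prints '²'

# encoding that character to Windows-1250 fails, which is exactly what the ODBC driver reports
"\u00b2".encode("cp1250")  # raises UnicodeEncodeError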
What I am looking for is a general solution, either on the DB or the Excel side.
The SQL statement used is a simple SELECT * FROM [view], so I can use any replacement or conversion, anything that lets me handle it with transformations on the column. I can replace the view with a function if that is better.
But it would be better if you can suggest an Excel-side solution.
There is one criterion for that: a scenario where I first fetch the data as text, convert it to Win1250 and then import it into Excel won't fit. I need something that connects to the Excel file itself, so that if I move it to another PC it still works without any further modification.
Thanks for all the help!

Why does PySide6.QtSql.QSqlTableModel not see one of the existing MS Access tables?

There are N tables in the DB with the following data types:
Numeric, Long Text, Date and Time, bigint, Boolean.
All of them open, except one.
I'm opening a database
from PySide6.QtSql import QSqlDatabase, QSqlTableModel

db = QSqlDatabase("QODBC")
db.setDatabaseName(r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:\Users\...\file.accdb")
db.open(username, password)
I output the tables contained in the db
db.tables()
Output:
["messages", "table1", "table2", ..., "tableN"]
And I'm trying to open the "messages" table
model = QSqlTableModel(db=db)
model.setTable("messages")
model.select()
Output:
False
Then I checked which other tables are not opening
for i in db.tables():
    model.setTable(i)
    if model.select() == False:
        print(i)
Output:
"messages"
This means that the problem is only in this table.
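A variant of the loop above that also prints the driver's error text may help narrow it down; a sketch, using the same db and model objects as before:
for i in db.tables():
    model.setTable(i)
    if not model.select():
        # lastError() carries the ODBC driver's own message for the failing table
        print(i, model.lastError().text())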
But the table opens fine directly in MS Access.
I have already tried opening it through the loop above; the name is found in db.tables(), but QSqlTableModel does not see the 'messages' table specifically.
I tried switching MS Access to the 2016 version, thinking that perhaps some data type from MS Access 2019 conflicts with the old driver. It didn't help.
I was thinking of downloading a newer driver, but I didn't find one. I tried digging into the registry... I didn't find anything either.
Please help
So, I figured out the problem by poking around. I was initially right about the driver conflicting with the bigint data type, but a few additional steps were missing.
Apparently, when you set the bigint data type and save, Access warns you that older versions may not support the database because of it; if you save anyway, it seems to raise the minimum supported version automatically.
Splitting the data helped, but before that you need to change the bigint columns to another data type: Database Tools -> Move Data -> Access Database.

Malformed SQL Statement: Expected token 'USING' but found Identifier with value 't' instead

I am trying to merge into a SQL database using the following code in Databricks with pyspark:
query = """
MERGE INTO deltadf t
USING df s
ON s.SLAId_Id = t.SLAId_Id
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *
"""
driver_manager = spark._sc._gateway.jvm.java.sql.DriverManager
con = driver_manager.getConnection(url)
stmt = con.createStatement()
stmt.executeUpdate(query)
stmt.close()
But I'm getting the following error:
SQLException: Malformed SQL Statement: Expected token 'USING' but found Identifier with value 't' instead at position 25.
Any thoughts on where this might be going wrong?
I don't know why you're getting this exact error. However, I believe there are a number of issues with what you are trying to do.
Running the query via JDBC makes it execute in SQL Server only. Constructs like WHEN MATCHED THEN UPDATE SET * / WHEN NOT MATCHED THEN INSERT * will not work there. Databricks accepts them, but for SQL Server you need to explicitly list the columns to update and the values to insert (reference).
Also, do you actually have tables named deltadf and df in SQL Server? I suppose df is a DataFrame or temporary view on the Databricks side; that will not work because, as said, this query executes in SQL Server only. If you want to upload data from a DataFrame, use df.write.format("jdbc").save (reference).
See this Fiddle - if deltadf and df are tables, running this query in SQL Server (any version) will only complain about Incorrect syntax near '*'.
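As a hedged sketch of that approach (the staging table dbo.SLAStaging and column SomeColumn are placeholders, and the url is assumed to carry credentials): write the DataFrame to SQL Server over JDBC first, then run the MERGE with explicit column lists through the same JDBC connection:
# 1) land the DataFrame in SQL Server as a staging table (name is hypothetical)
df.write \
    .format("jdbc") \
    .option("url", url) \
    .option("dbtable", "dbo.SLAStaging") \
    .mode("overwrite") \
    .save()

# 2) merge with explicit column lists, as SQL Server requires
merge_sql = """
MERGE INTO deltadf AS t
USING dbo.SLAStaging AS s
    ON s.SLAId_Id = t.SLAId_Id
WHEN MATCHED THEN
    UPDATE SET t.SomeColumn = s.SomeColumn
WHEN NOT MATCHED THEN
    INSERT (SLAId_Id, SomeColumn) VALUES (s.SLAId_Id, s.SomeColumn);
"""
stmt = con.createStatement()
stmt.executeUpdate(merge_sql)
stmt.close()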
SQLException: Malformed SQL Statement: Expected token 'USING' but found Identifier with value 't' instead at position 25.
You will get this error if you miss updating a specific field or get a specific piece of syntax wrong.
I performed the merge operation and it works fine for me without errors. Please follow the references below.
Reference:
https://www.youtube.com/watch?v=i5oM2bUyH0o
https://docs.databricks.com/delta/delta-update.html#upsert-into-a-table-using-merge
https://www.sqlshack.com/sql-server-merge-statement-overview-and-examples/

Moved from SQL Server to Snowflake, finding case-sensitive collation issue

In Snowflake, searches are case-sensitive, while in SQL Server they used to be case-insensitive. I changed the database-level collation with the command below:
ALTER DATABASE IF EXISTS powerdb SET COLLATION = 'en-ci'
but it did not help. Is there any other way to achieve case-insensitivity?
There are a number of ways, really.
One of them is using ILIKE for your string comparisons: https://docs.snowflake.net/manuals/sql-reference/functions/ilike.html
Another is setting up collation at the column level: https://docs.snowflake.net/manuals/sql-reference/collation.html - but please note that not all string functions are supported on collated columns.
You can also use the COLLATE function (also described in the link above), or set it at the database or account level with the DEFAULT_DDL_COLLATION = 'en-ci' parameter.
Everything depends on what you want to achieve, really.
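To illustrate the options above, a rough sketch using the Snowflake Python connector; the connection parameters and the table/column names (members, last_name) are placeholders:
import snowflake.connector

# placeholder connection parameters
con = snowflake.connector.connect(
    account="my_account", user="my_user", password="my_password",
    warehouse="my_wh", database="powerdb", schema="public",
)
cur = con.cursor()

# per-query case-insensitive matching with ILIKE
cur.execute("SELECT * FROM members WHERE last_name ILIKE 'smith%'")

# declaring the collation on the column itself when creating a table
cur.execute("CREATE TABLE members_ci (last_name STRING COLLATE 'en-ci')")

# applying collation ad hoc in a single comparison
cur.execute("SELECT * FROM members WHERE COLLATE(last_name, 'en-ci') = 'Smith'")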

Can we run complex multi-line SQL queries using Blueprism?

I am new to SQL stuff in Blueprism. I am able to configure the SQL object and execute simple queries, but I am having trouble trying to run complex multi-line SQL queries.
When I try to execute the query below in Blueprism, I get an error message saying "Incorrect syntax near Database2":
"select top 10 * from [Database1].[dbo].[Table1]
join [Database1].[dbo].[Table2] on [Database1].[dbo].[Table2].Fieldname1=[Database1].[dbo].[Table1].Fieldname2
join [Database2].[dbo].[Table1] on [Database2].[dbo].[Table1].Fieldname1=[Database1].[dbo].[Table2].Fieldname2"
Can somebody please help me with what is wrong in the above query?
I found the answer myself: there should not be any additional whitespace characters in the query; the entire query should be on one continuous line. The beauty of Blueprism is that it can execute any level of complex query without constraints, but you need to adjust the syntax accordingly. Always specify table and field names in the format [databasename].[dbo].[tablename].[fieldname].
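For example, the query from the question written as one continuous line:
"select top 10 * from [Database1].[dbo].[Table1] join [Database1].[dbo].[Table2] on [Database1].[dbo].[Table2].Fieldname1=[Database1].[dbo].[Table1].Fieldname2 join [Database2].[dbo].[Table1] on [Database2].[dbo].[Table1].Fieldname1=[Database1].[dbo].[Table2].Fieldname2"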
