Impossible to process multiple tables with an ODBC connection in SSAS Tabular 2017 - Azure

I'm currently building a cube in SSAS Tabular at compatibility level 1400 (on an Azure workspace server), and here is my problem. I have an ODBC connection to source my cube, and I have to use a connection string and a SQL query for each table I need (the connection string is always the same and the SQL query is always different).
When I have my first table (and only that one table), I can Build, Process and Deploy without any problem. But when I add a new table, I can't process anymore. I get this kind of message for both tables: Failed to save modifications to the server. Error returned: 'Column' column does not exist in the rowset.
I think the problem comes from the connection string, which is the same for every table. I end up with only one data source because I have only one connection string for all the tables. In my opinion that might be the cause of my problem, but I'm not sure. Any ideas?
I hope I made myself clear.
Thanks a lot.

I found the solution to my problem. It was not related to my data source but to the table properties of each table.
Indeed, the table properties contained only the connection string and not the SQL query. I had to replace it with the correct M (Power Query) expression. It's still a bit strange, because I had to run the same "Get Data" steps in Power BI to get the right M query and then copy and paste it into the table properties in SSAS. There should be a way to do this automatically, I guess, but I didn't find how.

Related

Truncate tables on Databricks

I'm working with two environments in Azure: Databricks and SQL Database. I have a function that generates a dataframe which is used to overwrite a table stored in the SQL Database. I'm having problems because df.write.jdbc(mode = 'overwrite') only drops the table and, I'm guessing, my user doesn't have the right permissions to create it again (I've already looked at the DML and DDL permissions I need for that). In short, my function only drops the table without recreating it.
We discussed what the problem could be and concluded that maybe the best thing I can do is truncate the table and re-add the new data there. I'm trying to find out how to truncate the table; I tried these two approaches but I can't find more information about them:
df.write.jdbc()
&
spark.read.jdbc()
Can you help me with this? The overwrite doesn't work (maybe I don't have adequate permissions) and I can't figure out how to truncate that table over JDBC.
It's in the Spark documentation - you need to add the truncate option when writing:
df.write.mode("overwrite").option("truncate", "true")....save()
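Spelled out a bit more, a full write might look like the sketch below (PySpark; the JDBC URL, table name and credentials are placeholders, not values from the question):

# Sketch: overwrite the Azure SQL table via JDBC, but TRUNCATE it instead of
# dropping and recreating it, so no DDL permission is needed.
# All connection values are placeholders.
jdbc_url = "jdbc:sqlserver://<server>.database.windows.net:1433;database=<db>"
(df.write
    .format("jdbc")
    .option("url", jdbc_url)
    .option("dbtable", "dbo.my_table")
    .option("user", "<user>")
    .option("password", "<password>")
    .option("truncate", "true")   # keep the table definition, just TRUNCATE before inserting
    .mode("overwrite")
    .save())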
Also, if you have a lot of data, then maybe it's better to use Microsoft's Spark connector for SQL Server - it has some performance optimizations that should allow it to write faster.
You can create a stored procedure for truncating or dropping the table in SQL Server and call that stored procedure from Databricks over an ODBC connection.
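For example, a rough sketch with pyodbc (the driver name, connection string and the procedure dbo.usp_truncate_my_table are assumptions for illustration, not from the original post):

# Sketch: call a SQL Server stored procedure that truncates the target table,
# then append the new rows from Spark so the existing schema stays untouched.
import pyodbc

conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=<server>.database.windows.net;DATABASE=<db>;"
    "UID=<user>;PWD=<password>"
)
with pyodbc.connect(conn_str, autocommit=True) as conn:
    conn.cursor().execute("EXEC dbo.usp_truncate_my_table")

# The table is now empty; append instead of overwrite so no DROP/CREATE happens.
jdbc_url = "jdbc:sqlserver://<server>.database.windows.net:1433;database=<db>"
df.write.jdbc(url=jdbc_url, table="dbo.my_table", mode="append",
              properties={"user": "<user>", "password": "<password>"})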

How to bind a SharePoint list to a SQL DB table

Is there a way to bind a SharePoint list to SQL DB tables to fetch updates dynamically? Example: I have a SharePoint list with 2 columns and an Azure SQL DB table with 2 columns. I would like to bind them together so that when an update happens in a DB column, the corresponding SharePoint list column data is updated.
I have tried writing a Spring Boot job to do this, but it is a lot of code to maintain. Also, we would need to manage the real-time sync on our own.
I am expecting there might be some out-of-the-box connector in Microsoft Flow, Azure Logic Apps, or some other automation that will help me automate this.
I would suggest you check BCS (Business Connectivity Services) so your DB data can sync with a SharePoint external list.
https://learn.microsoft.com/en-us/sharepoint/make-external-list
Another thread with a demo:
https://www.c-sharpcorner.com/article/integrate-azure-sql-db-with-sharepoint-online-as-an-external-list-using-business/
There is a SQL Server connector; I suppose this is what you want. You could use the trigger "When an item is created" or "When an item is modified" to get the details of the SQL updates.
The output would look like the screenshot below.
For more information you could refer to this doc: SQL Server. Note: there are some known limitations when invoking triggers, for example:
A ROWVERSION column is required for OnUpdatedItems
An IDENTITY column is required for OnNewItems
After the trigger fires, you could use the table details to update the SharePoint list.
Hope this helps.

Azure Data Factory copy data is slow

Source database: PostgreSQL hosted on Azure VM D16s_v3
Destination database: SQL Server developer edition hosted on Azure VM D4s_v3
Source database is around 1TB in size
Destination database is empty with existing schema identical to source database
Throughput is only 1 MB/s. Nothing helps (I've selected the max DIU). SQL Server doesn't have any keys or indexes at this point.
Batch size is 10000
I got nailed by something similar when using ADF to copy data from an on-premises Oracle source to an Azure SQL Database sink. The exact same job performed via SSIS was something like 5 times faster. We began to suspect that something was amiss with data types, because the problem disappeared if we cast all of our high-precision Oracle NUMBER columns to lower precision, or to something like integer.
It got so bad that we opened a case with Microsoft about it, and our worst fears were confirmed.
The Azure Data Factory runtime decimal type has a maximum precision of 28. If a decimal/numeric value from the source has a higher precision, ADF will first cast it to a string. The performance of the string casting code is abysmal.
Check whether your source has any high-precision numeric data, or, if you have not explicitly defined a schema, see whether you're perhaps accidentally using strings.
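If the source is PostgreSQL, one way to check is to query information_schema for numeric columns whose declared precision exceeds 28 (a sketch using psycopg2; host, database and credentials are placeholders):

# Sketch: list numeric columns in the PostgreSQL source whose declared precision
# exceeds 28 (or is unbounded), the point at which ADF falls back to string casting.
import psycopg2

conn = psycopg2.connect(host="<postgres-vm>", dbname="<sourcedb>",
                        user="<reader>", password="<password>")
query = """
    SELECT table_schema, table_name, column_name, numeric_precision
    FROM information_schema.columns
    WHERE data_type = 'numeric'
      AND (numeric_precision IS NULL OR numeric_precision > 28)
    ORDER BY table_schema, table_name;
"""
with conn, conn.cursor() as cur:
    cur.execute(query)
    for schema, table, column, precision in cur.fetchall():
        print(f"{schema}.{table}.{column}: precision={precision}")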
Increase the batch size to 1000000.
If you are using the table name option, then you should have that table selected in the dataset dropdown. If you are extracting using a SQL query, then go to the dataset connection, click Edit, and remove the table name.
I had hit the same issue. If you select the query option and also provide a table name in the dataset, you are confusing Azure Data Factory by making it ambiguous which option it should use.

SharePoint source and OLE DB destination - data filtering

I pulled data from SharePoint into a SQL database through an SSIS package.
I need to schedule this to run every 10 minutes; everything is good so far.
But every time I run the package, we get duplicate records.
I need to pull only new and updated items into SQL.
I have applied the composite primary key option at the destination, but it's not working.
Please help me.
Without knowing much about the details of what you are doing, two things come to mind.
Put constraints on your database so that duplicates aren't allowed. Better a foreign key violation or constraint error than duplicate data arriving.
If you are utilizing an Execute SQL Task, try using a MERGE statement.

An error in RS when I try to aggregate a field

I have Cognos on Windows Server connecting to Postgres via the PostgreSQL ODBC driver.
I created a report in RS. Whenever I try adding a numeric field to the report, I get this error:
RQP-DEF-0177 An error occurred while performing operation 'sqlPrepareWithOptions' status='-9'.
UDA-SQL-0107 A general exception has occurred during the operation "prepare".
No query has been executed with that handle
If I change the field's Aggregate Function to 'None', everything works fine.
Any ideas, anyone?
Do all measures fail aggregation? What specific data types are there? Are they correctly seen from within Framework Manager as these data types?
I would generate the SQL from the report without the aggregation, edit it to add the aggregation, and run it directly against the database to rule out any data overflow or similar issue. [You will not be able to generate the SQL with the aggregation in place.]
If it's not that, my next guess would be the driver itself. What version of Cognos? What OS? 32- or 64-bit for each? What version of Postgres?
