Foreign Key Reference Error for Azure Data Sync

My Data Sync group contains two tables, say A and B, where some columns of A reference table B. B is the master table.
When a sync occurs, table A is processed ahead of table B, so I get a foreign key reference error.
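For illustration, a minimal sketch of this kind of schema (the names are made up):

-- B is the master (referenced) table
CREATE TABLE B (
    Id INT PRIMARY KEY,
    Name NVARCHAR(100)
);

-- A references B, so A's rows cannot be applied before their parent rows exist in B
CREATE TABLE A (
    Id INT PRIMARY KEY,
    BId INT NOT NULL,
    CONSTRAINT FK_A_B FOREIGN KEY (BId) REFERENCES B (Id)
);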
Does Azure Data Sync process tables in alphabetical order?
Is there any solution for this?
Thanks in advance.

As @JuneT suggested, you might want to post this question in the SQL Azure Forum (http://aka.ms/sqlazureforum). When you do, they have asked that you specifically include the following information in Data Sync related questions:
Server id: Your server id
Region: Where your Data Sync Server is located/created
Time / Date: Date and time when you encountered the issue
Tracing Id: The tracing id you see in the message in UI

Related

Unable to get scalar value of a query on cosmos db in azure data factory

I am trying to get the count of all records present in Cosmos DB in a Lookup activity of Azure Data Factory. I need this value to compare against the outputs of other activities.
The query I used is SELECT VALUE count(1) from c
When I try to preview the data after inserting this query I get an error saying
One or more errors occurred. Unable to cast object of type
'Newtonsoft.Json.Linq.JValue' to type 'Newtonsoft.Json.Linq.JObject'
(Screenshot of my Lookup activity settings omitted.)
Could someone help me resolve this error? And if this is a limitation of Azure Data Factory, how can I get the count of all the rows of the Cosmos DB documents some other way inside Azure Data Factory?
I reproduced your issue on my side exactly.
I think the count result can't be mapped as a normal JSON object. As a workaround, you could use an Azure Function activity (inside the Azure Function method you can use the SDK to execute any SQL you want) to output your desired result, e.g. {"number":10}, then chain the Azure Function activity with the other activities in ADF.
Here is the contradiction right now:
The query outputs a scalar array, not something else like a JSON object or even a JSON string.
However, the ADF Lookup activity only accepts a JObject, not a JValue. I can't use any built-in convert function here, because the query still has to be valid Cosmos DB SQL syntax. I already submitted a ticket to the MS support team, but got no luck with this limitation.
I also tried select count(1) as num from c, which works in the Cosmos DB portal. But it still has a limitation, because the query crosses partitions.
So all I can do here is try to explain the root cause of the issue; I can't change the product's behaviour.
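To make the shape difference concrete (the second form works in the portal but, as noted above, still hits the cross-partition limitation):

-- returns a bare scalar array, e.g. [ 42 ], which ADF parses as a JValue and rejects
SELECT VALUE COUNT(1) FROM c

-- returns an object array, e.g. [ { "num": 42 } ], i.e. the JObject shape Lookup expects
SELECT COUNT(1) AS num FROM c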
Two rough ideas:
1. Try a non-partitioned collection and execute the SQL above to produce JSON output.
2. If the count is not large, query the columns from the db and loop over the result with a ForEach activity.
You can use:
select top 1 column from c order by column desc

ADF copy data activity - check for duplicate records before inserting into SQL db

I have a very simple ADF pipeline to copy data from local mongoDB (self-hosted integration environment) to Azure SQL database.
My pipeline is able to copy the data from MongoDB and insert it into the SQL db.
Currently, the pipeline inserts duplicate data if it is run multiple times.
I have made the _id column unique in the SQL database, and now running the pipeline throws an error because the SQL constraint won't let it insert duplicate records.
How do I check for duplicate _id before inserting into SQL db?
Should I use a pre-copy script or a stored procedure?
Some guidance on where to add the extra steps would be helpful. Thanks.
Azure Data Factory Data Flow can help you achieve that:
You can follow these steps:
Add two sources: the Cosmos DB table (source1) and the SQL database table (source2).
Use a Join transformation to get all the data from the two tables (left join/full join/right join) on Cosmos table.id = SQL table.id.
Use an Alter Row expression to filter out duplicate _id values; if a row is not a duplicate, insert it.
Then map the non-duplicate rows to the sink SQL database table.
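For intuition, here is the same keep-only-new-rows logic in plain T-SQL (table and column names are made up):

-- insert only the source rows whose _id is not already in the target
INSERT INTO dbo.target (_id, col1, col2)
SELECT s._id, s.col1, s.col2
FROM dbo.source AS s
LEFT JOIN dbo.target AS t
    ON t._id = s._id
WHERE t._id IS NULL;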
Hope this helps.
You should implement your SQL logic to eliminate the duplicates in the pre-copy script.
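For example, a minimal sketch assuming each run reloads the table in full (for incremental loads, the staging-table approaches below are safer):

-- pre-copy script on the SQL sink: empty the target so re-runs cannot create duplicates
TRUNCATE TABLE dbo.sensingresults;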
I ended up with a solution using a stored procedure, which looks like a lot less work as far as this requirement is concerned.
I have followed this article:
https://www.cathrinewilhelmsen.net/2019/12/16/copy-sql-server-data-azure-data-factory/
I created a table type and used it in the stored procedure to check for duplicates.
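For completeness, the table type might look something like this (the column list mirrors the INSERT below; the data types are my assumptions):

CREATE TYPE dbo.targetSensingResults AS TABLE
(
    _id NVARCHAR(50) NOT NULL,
    sensorNumber INT,
    applicationType NVARCHAR(50),
    place NVARCHAR(100),
    spaceType NVARCHAR(50),
    floorCode NVARCHAR(20),
    zoneCountNumber INT,
    presenceStatus NVARCHAR(20),
    sensingTime DATETIME2,
    createdAt DATETIME2,
    updatedAt DATETIME2,
    _v INT
);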
My sproc is very simple, as shown below:
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[spInsertIntoDb]
    (@sresults dbo.targetSensingResults READONLY)
AS
BEGIN
    -- insert only those rows whose _id is not already present in the target
    MERGE dbo.sensingresults AS target
    USING @sresults AS source
    ON (target._id = source._id)
    WHEN NOT MATCHED THEN
        INSERT (_id, sensorNumber, applicationType, place, spaceType, floorCode, zoneCountNumber, presenceStatus, sensingTime, createdAt, updatedAt, _v)
        VALUES (source._id, source.sensorNumber, source.applicationType, source.place, source.spaceType, source.floorCode,
                source.zoneCountNumber, source.presenceStatus, source.sensingTime, source.createdAt, source.updatedAt, source._v);
END
I think using a stored proc should do for now, and it will also help in the future if I need to do more transformations.
Please let me know if using a sproc in this case has any potential risk in the future.
To remove the duplicates you can use the pre-copy script. Alternatively, you can store the incremental or new data in a temp table using a copy activity, use a stored procedure to delete from the main table only those ids that are present in the temp table, insert the temp table's data into the main table after the deletion, and then drop the temp table.
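A rough sketch of such a stored procedure (table and column names are made up):

CREATE PROCEDURE dbo.spMergeFromTemp
AS
BEGIN
    -- delete main-table rows that are about to be replaced by the new batch
    DELETE m
    FROM dbo.mainTable AS m
    INNER JOIN dbo.tempTable AS t ON t._id = m._id;

    -- insert the new batch, then drop the staging table
    INSERT INTO dbo.mainTable (_id, col1, col2)
    SELECT _id, col1, col2 FROM dbo.tempTable;

    DROP TABLE dbo.tempTable;
END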

Multiple member MDX query returns error (permission to access the specified member)

I want to mention that I'm new to SSAS and MDX.
For the past several days I've been wrestling with an Excel-generated query that errors out.
The problem is that Excel generates a query when trying to read data from an online cube data source, and its failure prevents other reads of that cube. The query is executed against an Azure cube; I managed to profile it and capture the following query:
with set __XLUniqueNames as {[Stores].[Chain].[Chain].&[SuperBrugsen], [Stores].[Chain].[Chain].&[Salling], [Stores].[Chain].[Chain].&[SuperBrugsen] }
set __XLDrilledUp as
Generate(__XLUniqueNames,
{ IIF([Stores].[Chain].currentmember.LEVEL_NUMBER <= 2147483647,
[Stores].[Chain].currentmember,
Ancestor([Stores].[Chain].currentmember,
[Stores].[Chain].currentmember.LEVEL_NUMBER - 2147483647)) } )
member [Measures].__XLPath as
Generate(
Ascendants([Stores].[Chain].currentmember),
[Stores].[Chain].currentmember.unique_name,
"__XLPSEP")
select { [Measures].__XLPath } on 0,
__XLDrilledUp on 1
from [SomeCube]
cell properties value
Each time the query contains more than one member (an existing member from that dimension), it errors out with this message:
"Either you do not have permission to access the specified member or the specified member does not exist.".
What I have tried:
First, I tried to identify a pattern in the member combinations that error out, with no luck. It seems that for certain members I get the error and for others I don't. For a single member, duplicate members, or combinations of members that don't exist in the cube, it doesn't error.
Second, I did try the query on a different cube (on-premise SSAS) and I didn't get the error.
Third, by modifying the connection string, I tried to make Excel ignore the missing members by setting the "MDXMissingMemberMode" flag to Ignore, in the hope it would work. It didn't.
Fourth, I tried to dissect the query to see which clause was giving the error. With my limited knowledge of MDX, I suspect that "currentmember" with its "LEVEL_NUMBER" property is at fault. My guess is that it fails to get the current member for the next member in the set.
Fifth, the last and longest: by accident I discovered that in SSMS you can execute a query in an MDX session (right-click the cube -> New Query) or open the cube in browse mode (right-click the cube -> Browse), which brings up a UI similar to the MDX query window.
Now here comes the surprise: in this browse "mode" my query executes successfully every time. Intrigued by this, I profiled the request to see what was different. The difference was some additional XML structure, a list of properties. Seeing this, I figured I could manipulate my connection string from Excel to send some of these properties and make it work, but in the end it didn't.
Additional properties that worked:
<PropertyList xmlns="urn:schemas-microsoft-com:xml-analysis">
<Catalog>SomeCatalog</Catalog>
<ShowHiddenCubes>true</ShowHiddenCubes>
<SspropInitAppName>Microsoft SQL Server Management Studio</SspropInitAppName>
<Timeout>3600</Timeout>
<LocaleIdentifier>1033</LocaleIdentifier>
<ClientProcessID>24400</ClientProcessID>
<DataSourceInfo/>
<Format>Tabular</Format>
<Content>Schema</Content>
<DbpropMsmdFlattened2>true</DbpropMsmdFlattened2>
<ReturnCellProperties>true</ReturnCellProperties>
<DbpropMsmdActivityID>2309dfa2-3607-41b2-9446-8ece2f5ababa</DbpropMsmdActivityID>
<DbpropMsmdCurrentActivityID>2309dfa2-3607-41b2-9446-8ece2f5ababa</DbpropMsmdCurrentActivityID>
<DbpropMsmdRequestID>d3dbd079-5ca7-496c-ab55-afea71889238</DbpropMsmdRequestID>
</PropertyList>
Additional properties that didn't work:
<PropertyList xmlns="urn:schemas-microsoft-com:xml-analysis">
<Catalog>SomeCatalog</Catalog>
<SspropInitAppName>Microsoft SQL Server Management Studio - Query</SspropInitAppName>
<LocaleIdentifier>1033</LocaleIdentifier>
<ClientProcessID>24400</ClientProcessID>
<DataSourceInfo/>
<Format>Native</Format>
<AxisFormat>TupleFormat</AxisFormat>
<Content>SchemaData</Content>
<Timeout>0</Timeout>
<DbpropMsmdActivityID>e5e75ad6-8fca-4f25-abba-047f86198602</DbpropMsmdActivityID>
<DbpropMsmdCurrentActivityID>e5e75ad6-8fca-4f25-abba-047f86198602</DbpropMsmdCurrentActivityID>
<DbpropMsmdRequestID>8901787f-15a7-48a0-86eb-18ff0b92bdc4</DbpropMsmdRequestID>
</PropertyList>
Excel additional properties:
<PropertyList xmlns="urn:schemas-microsoft-com:xml-analysis" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<Catalog>SomeCatalog</Catalog>
<Timeout>0</Timeout>
<Format>Native</Format>
<DbpropMsmdFlattened2>false</DbpropMsmdFlattened2>
<SafetyOptions>2</SafetyOptions>
<Dialect>MDX</Dialect>
<MdxMissingMemberMode>Error</MdxMissingMemberMode>
<DbpropMsmdOptimizeResponse>9</DbpropMsmdOptimizeResponse>
<DbpropMsmdActivityID>9D69640F-553A-4970-BD4E-7234F1CD928C</DbpropMsmdActivityID>
<DbpropMsmdRequestID>B5E10FF0-EF2F-409E-83BF-CD2DBA20C2BE</DbpropMsmdRequestID>
<LocaleIdentifier>1030</LocaleIdentifier>
<DbpropMsmdMDXCompatibility>1</DbpropMsmdMDXCompatibility>
</PropertyList>
Result of a working single-member MDX query:
SuperBrugsen [Stores].[Chain].[Chain].&[SuperBrugsen]__XLPSEP[Stores].[Chain].[All]
This is all the info I could gather about my problem. My next step is to go to Microsoft for help, but I don't want to do that just yet due to the costs.
Can someone please help me out? Any ideas or suggestions are most welcome, because I've run out of ideas.
It seems that the problem solved itself. Most likely there was an update that fixed this issue. See the Azure updates page: https://azure.microsoft.com/en-us/updates/?product=analysis-services&status=nowavailable

No data from views sys.resource_usage and sys.resource_stats in Azure

When I run the following query in Azure, I get no records and the message "Query succeeded: Affected rows: 0."
1) SELECT * FROM sys.resource_usage where database_name='DB_NAME';
When I run the following query in Azure, I get this error: "Failed to execute query. Error: Invalid object name 'sys.resource_stats'."
2) SELECT * FROM sys.resource_stats where database_name='DB_NAME';
Please help me solve this issue, as I'm trying to track daily database usage, i.e. the amount of data stored in the DB.
I suppose the issue is with your current db context. What does SELECT DB_NAME() return? Is that a user database? As per BOL, 'This view is available to all user roles with permissions to connect to the virtual master database,' so you have to connect to the master database before querying both sys.resource_usage and sys.resource_stats.
As Lin mentioned, you have to connect to the master database to see output from sys.resource_stats.
For DB-specific DTU usage, you can use sys.dm_db_resource_stats, which stores data for every 15 seconds.
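For example (the first query must run while connected to master, the second while connected to the user database itself):

-- in master: one row per 5-minute interval, per database
SELECT * FROM sys.resource_stats WHERE database_name = 'DB_NAME' ORDER BY start_time DESC;

-- in the user database: one row per 15-second interval
SELECT TOP 20 * FROM sys.dm_db_resource_stats ORDER BY end_time DESC;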
Your question also says you're trying to track daily database usage, i.e. the amount of data stored in the DB.
You can use the DMV below to track that:
SELECT o.name, SUM(p.reserved_page_count) * 8.0 / 1024 AS size_MB -- pages are 8 KB, so this yields MB
FROM sys.dm_db_partition_stats AS p
JOIN sys.objects AS o
    ON p.object_id = o.object_id
GROUP BY o.name;
GO

Azure SQL Data Warehouse: No catalog entry found for partition ID <id> in database <id>. The metadata is inconsistent. Run DBCC CHECKDB

I am working on moving stored procedures from an on-prem SQL Server database to an Azure SQL Data Warehouse (ASDW). Throughout the process I have had to work around a few missing features, which was time-consuming but not impossible. One thing I have had to do is replace CTEs followed by MERGE statements with temp tables followed by UPDATE/INSERT/DELETE statements (since CTEs cannot be followed by these statements). At the beginning of each SP I check for the temp tables and delete them if they exist.
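For illustration, the shape of that rewrite (names are made up, and exact syntax depends on the service version):

-- before: a CTE feeding a MERGE, which ASDW did not support at the time
-- WITH changed AS (SELECT id, val FROM dbo.src)
-- MERGE dbo.tgt AS t USING changed AS c ON t.id = c.id ...

-- after: materialize the former CTE body as a temp table via CTAS
IF OBJECT_ID('tempdb..#changed') IS NOT NULL DROP TABLE #changed;

CREATE TABLE #changed
WITH (DISTRIBUTION = ROUND_ROBIN)
AS SELECT id, val FROM dbo.src;

-- then run separate UPDATE / INSERT / DELETE statements against #changed in place of the MERGE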
Today, I created another stored procedure in the ASDW without any temp tables (no updates/inserts/deletes so I left the CTE's in there), it "compiled", and I was able to run it without issue (returned an empty result set, as there is no data yet). I created another SP after this, and when I went to execute it, I got the following error:
...No catalog entry found for partition ID (id) in database 26. The metadata is inconsistent. Run DBCC CHECKDB to check for a metadata corruption...
I then went back to the first SP that I mentioned, and it gave me the same error, even though it had previously run without flaw.
I tried running DBCC CHECKDB as instructed but alas, it is not supported/doesn't work.
I dug around a lot, and what I ended up doing was scaling my database from 100 DWUs to 500 DWUs. I am at 0.16% of my database storage size limit, and there is barely any data anywhere (total DB size is <300MB).
Is there an explanation for this? If not, I can't in good conscience use this platform in a production environment.
Full error:
Msg 110802, Level 16, State 1, Line 1
110802;An internal DMS error occurred that caused this operation to fail.
Details: Exception: Microsoft.SqlServer.DataWarehouse.DataMovement.Workers.DmsSqlNativeException,
Message: SqlNativeBufferReader.Run, error in OdbcExecuteQuery: SqlState: 42000, NativeError: 608, 'Error calling: SQLExecDirect(this->GetHstmt(), (SQLWCHAR *)statementText, SQL_NTS), SQL return code: -1 | SQL Error Info: SrvrMsgState: 1, SrvrSeverity: 16, Error <1>: ErrorMsg: [Microsoft][ODBC Driver 11 for SQL Server][SQL Server]No catalog entry found for partition ID 72057594047758336 in database 36. The metadata is inconsistent. Run DBCC CHECKDB to check for a metadata corruption. | Error calling: pReadConn->ExecuteQuery(statementText, bufferFormat) | state: FFFF, number: 134148, active connections: 100', Connection String: Driver={pdwodbc};APP=TypeC01-DmsNativeReader:DB196\mpdwsvc (2504)- ODBC;Trusted_Connection=yes;AutoTranslate=no;Server=\\.\pipe\DB.196-bb5f9dd884cf\sql\query
I'm sorry to hear about your experience with Azure SQL Data Warehouse. I believe this is a defect related to BIT data type handling for NOT NULL columns. Can you confirm that you have a BIT NOT NULL column (e.g., CREATE TABLE t1 (IsTrue BIT NOT NULL);)?
If so, a fix has been coded and is in testing for release. To mitigate this now, you can either switch to a TINYINT or remove the NOT NULL setting for the column.
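Sketched out, the two mitigations look like this:

-- option 1: switch the column to TINYINT
CREATE TABLE t1 (IsTrue TINYINT NOT NULL);

-- option 2: keep BIT but drop the NOT NULL constraint
CREATE TABLE t1 (IsTrue BIT NULL);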
