I'm using the Azure Data Factory Copy Data activity to retrieve data from NetSuite. It's all working fine for most of the tables except one large table which has around 800+ columns. Below are the errors I'm facing in different places.
Unable to run the Copy Data activity - it fails with the error "Failed to retrieve data". (I don't think it's a timeout issue, as I set the timeout to 5 minutes and it fails within 3 minutes without even connecting to the source.)
Within Copy Data - unable to import the schema; it fails with the error "Failed to retrieve data".
Preview Data in the dataset - it fails with the error "The operation has timed out".
Is there any limitation in Data Factory on the number of columns the source can have? Please let me know.
Thanks,
Praveen Sreeram
Related
I am trying to extract data from SAP Solution Manager using the SAP Table connector in ADF.
It works for some tables but throws the error below for a few tables.
Could someone please help me with any lead on how to resolve it?
My objective is to extract the data from SAP Solution Manager using ADF.
Failure happened on 'Source' side.
ErrorCode=UserErrorRfcFunctionInvokeFailed,
'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,
Message=Failed to invoke function /SAPDS/RFC_READ_TABLE2 with error:
SAP.Middleware.Connector.RfcAbapRuntimeException,
message: Error with ASSIGN ... CASTING in program /SAPDS/SAPLRS_BASIS .
SAP.Middleware.Connector.RfcConnection.ThrowRfcErrorMsg()
at SAP.Middleware.Connector.RfcConnection.ReadBytes(Byte* buffer, Int32 count)
at SAP.Middleware.Connector.RfcConnection.ReadRfcIDBegin(Int32& length)
at SAP.Middleware.Connector.RfcConnection.ReadUpTo(RFCGET readState, RfcFunction function, RFCID toRid)
at SAP.Middleware.Connector.RfcConnection.RfcReceive(RfcFunction function)
...
...
...
I am trying to import data from a CSV file into a Dynamics 365 Account table. As I need to do some transformations, I am using a dataflow rather than a basic copy activity.
I was having difficulty getting a dataflow to write to a multi-lookup field, so I tried a copy activity with the exact same source, sink, and mappings to see if that worked. I was able to import the data successfully with the copy activity. I'm confused as to why the dataflow does not work with the same source, sink, and mappings. Below are screenshots of the various elements I set up and configured. I would appreciate any suggestions to get the dataflow working.
I'm using a cut-down version of what will ultimately be my source CSV file, just so I can concentrate on getting the write to the lookup field working.
[Screenshots: Source CSV file, Copy Activity Source, Copy Activity Sink, Dynamics 365 Sink, Dataflow Source, Dataflow Sink, Copy Activity Mapping, Dataflow Mapping, Copy Activity Success, Dataflow Failure]
Dataflow Error
Details
{"StatusCode":"DFExecutorUserError","Message":"Job failed due to reason: DF-REST_001 - Rest - Error response received from the server (url:https://##############v9.0/accounts,request body: Some({"accountid":"8b0257ea-de19-4aaa-9945-############","name":"A User","ownerid":"7d64133b-daa8-eb11-9442-############","ownerid#EntityReference":"systemuser"}), request method: POST, status code: 400), response body: Some({"error":{"code":"0x0","message":"An error occurred while validating input parameters: Microsoft.OData.ODataException: A 'PrimitiveValue' node with non-null value was found when trying to read the value of the property 'ownerid'; however, a 'StartArray' node, a 'StartObject' node, or a 'PrimitiveValue' node with null value was expected.\r\n at Microsoft.OData.JsonLight.ODataJsonLightPropertyAndValueDeserializer.ValidateExpandedNestedResourceInfoPropertyValue(IJsonReader jsonReader, Nullable1 isCollection, String propertyName, IEdmTypeReference typeReference)\r\n at Microsoft.OData.JsonLight.ODataJsonLightResourceDeserializ","Details":"com.microsoft.dataflow.Issues: DF-REST_001 - Rest - Error response received from the server (url:https://dev-gc.crm11.dynamics.com/api/data/v9.0/accounts,request body: Some({"accountid":"8b0257ea-de19-4aaa-9945-############","name":"A User","ownerid":"7d64133b-daa8-eb11-9442-############","ownerid#EntityReference":"systemuser"}), request method: POST, status code: 400), response body: Some({"error":{"code":"0x0","message":"An error occurred while validating input parameters: Microsoft.OData.ODataException: A 'PrimitiveValue' node with non-null value was found when trying to read the value of the property 'ownerid'; however, a 'StartArray' node, a 'StartObject' node, or a 'PrimitiveValue' node with null value was expected.\r\n at Microsoft.OData.JsonLight.ODataJsonLightPropertyAndValueDeserializer.ValidateExpandedNestedResourceInfoPropertyValue(IJsonReader jsonReader, Nullable1 isCollection, String propertyName, IEdmTypeReference typeReference)\r\n at Microsoft.OData.JsonLight.ODataJsonLightResourceDeser"}
I am running into the same wall, but a temporary solution here is to sink the dataflow output to a CSV (or similar) file in ADLS and then use a Copy activity to pick up those files and upsert them into Dynamics.
Other references: https://vishalgrade.com/2020/10/01/how-to-populate-multi-lookup-attribute-in-ce-using-azure-data-factory/
I always get the same 'OutputDataConversionError.TypeConversionError', even after removing the datetime column from the output in the Synapse DW SQL pool, and I got the same error after deleting and recreating the Stream Analytics job.
The stream input is an Event Hub carrying diagnostic logs from an Azure SQL database. The input test passes.
The stream output is a table in an Azure Synapse Analytics DW SQL pool. The output test passes.
The query is:
SELECT
Records.ArrayValue.count as [count],
Records.ArrayValue.total as [total],
Records.ArrayValue.minimum as [minimum],
Records.ArrayValue.maximum as [maximum],
Records.ArrayValue.resourceId as [resourceId],
CAST(Records.ArrayValue.time AS datetime) as [time],
Records.ArrayValue.metricName as [metricName],
Records.ArrayValue.timeGrain as [timeGrain],
Records.ArrayValue.average as [average]
INTO
OrderSynapse
FROM
dbhub d
CROSS APPLY GetArrayElements(d.records) AS Records
The query passed the test run, but the stream job went into a degraded state and produced this error:
Source 'dblog' had 1 occurrences of kind 'OutputDataConversionError.TypeConversionError' between processing times '2021-11-12T05:28:08.7922407Z' and '2021-11-12T05:28:08.7922407Z'.
But even after I deleted the stream job, dropped the [time] column from the output table, removed "CAST(Records.ArrayValue.time AS datetime) as [time]," from the query, and recreated a new stream job, I still got the same error.
Part of the Activity log:
"ErrorCategory": "Diagnostic",
"ErrorCode": "DiagnosticMessage",
"Message": "First Occurred: 11/12/2021 7:39:12 AM | Resource Name: dblog | Message: Source 'dblog' had 1 occurrences of kind 'OutputDataConversionError.TypeConversionError' between processing times '2021-11-12T07:39:12.8681135Z' and '2021-11-12T07:39:12.8681135Z'. ",
"Type": "DiagnosticMessage",
Why? Is there a hidden cache I cannot clear?
It looks like a bug in the output adapter is causing this issue. While the fix rolls out, you can reorder the field list in the query to match the column order in the destination table.
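For example, if the destination table's columns were defined in the order shown below (this ordering is only an illustration; match it to your actual table definition), the field list would be rewritten to follow that order exactly:

-- Illustrative destination column order: resourceId, metricName, time, timeGrain, count, total, minimum, maximum, average
SELECT
Records.ArrayValue.resourceId as [resourceId],
Records.ArrayValue.metricName as [metricName],
CAST(Records.ArrayValue.time AS datetime) as [time],
Records.ArrayValue.timeGrain as [timeGrain],
Records.ArrayValue.count as [count],
Records.ArrayValue.total as [total],
Records.ArrayValue.minimum as [minimum],
Records.ArrayValue.maximum as [maximum],
Records.ArrayValue.average as [average]
INTO
OrderSynapse
FROM
dbhub d
CROSS APPLY GetArrayElements(d.records) AS Records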
When I run the following query in Azure, I get no records and the message "Query succeeded: Affected rows: 0."
1) SELECT * FROM sys.resource_usage where database_name='DB_NAME';
When I run the following query in Azure, I get the error "Failed to execute query. Error: Invalid object name 'sys.resource_stats'."
2) SELECT * FROM sys.resource_stats where database_name='DB_NAME';
Please help me solve this issue, as I'm trying to track daily database usage, i.e. the amount of data stored in the DB.
I suppose the issue is with your current database context. What does SELECT DB_NAME() return? Is that a user database? As per BOL, 'This view is available to all user roles with permissions to connect to the virtual master database.' You would have to connect to the master database before querying both sys.resource_usage and sys.resource_stats.
As Lin mentioned, you have to connect to the master database to see the output of sys.resource_stats.
For database-specific DTU usage, you can use sys.dm_db_resource_stats; this stores data for every 15 seconds.
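As a minimal sketch (replace 'DB_NAME' with your database name), run the first query while connected to master and the second from within the user database itself:

-- Connected to master: daily history of usage per database (retained for roughly 14 days)
SELECT database_name, start_time, end_time, storage_in_megabytes, avg_cpu_percent
FROM sys.resource_stats
WHERE database_name = 'DB_NAME'
ORDER BY start_time DESC;

-- Connected to the user database: resource usage captured every 15 seconds (retained for roughly one hour)
SELECT end_time, avg_cpu_percent, avg_data_io_percent, avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;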
Your question also says:
"I'm trying to track daily database usage, i.e. the amount of data stored in the DB."
You can use the DMV below to track that:
SELECT o.name, SUM(p.reserved_page_count) * 8.0 / 1024 AS [SizeMB]
FROM sys.dm_db_partition_stats AS p
JOIN sys.objects AS o ON p.object_id = o.object_id
GROUP BY o.name;
GO
I am working on moving stored procedures from an on-prem SQL Server database to Azure SQL Data Warehouse (ASDW). Throughout the process I have had to work around a few missing features - time-consuming but not impossible. One thing I have had to do is replace CTEs followed by MERGE statements with temp tables followed by UPDATE/INSERT/DELETE statements (since CTEs cannot be followed by these statements). At the beginning of each SP I check for the temp tables and drop them if they exist.
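Roughly, the rewrite pattern looks like this (the table and column names here are just illustrative, not my actual objects):

-- Drop the temp table if it is left over from a previous run
IF OBJECT_ID('tempdb..#StagedCustomers') IS NOT NULL
DROP TABLE #StagedCustomers;

-- Materialize what used to be the CTE into a temp table
CREATE TABLE #StagedCustomers
WITH (DISTRIBUTION = ROUND_ROBIN)
AS
SELECT CustomerKey, CustomerName
FROM dbo.StagingCustomers;

-- Replace the MERGE with separate UPDATE and INSERT statements
UPDATE dbo.DimCustomer
SET CustomerName = s.CustomerName
FROM #StagedCustomers s
WHERE dbo.DimCustomer.CustomerKey = s.CustomerKey;

INSERT INTO dbo.DimCustomer (CustomerKey, CustomerName)
SELECT s.CustomerKey, s.CustomerName
FROM #StagedCustomers s
WHERE NOT EXISTS (SELECT 1 FROM dbo.DimCustomer t WHERE t.CustomerKey = s.CustomerKey);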
Today, I created another stored procedure in the ASDW without any temp tables (no updates/inserts/deletes, so I left the CTEs in there). It "compiled", and I was able to run it without issue (it returned an empty result set, as there is no data yet). I created another SP after this, and when I went to execute it, I got the following error:
...No catalog entry found for partition ID (id) in database 26. The metadata is inconsistent. Run DBCC CHECKDB to check for a metadata corruption...
I then went back to the first SP that I mentioned, and it gave me the same error, even though it had previously run without flaw.
I tried running DBCC CHECKDB as instructed but alas, it is not supported/doesn't work.
I dug around a lot, and what I ended up doing was scaling my database from 100 DWUs to 500 DWUs. I am at 0.16% of my database storage size limit, and there is barely any data anywhere (total DB size is <300 MB).
Is there an explanation for this? If not, I can't in good conscience use this platform in a production environment.
Full error:
Msg 110802, Level 16, State 1, Line 1
110802;An internal DMS error occurred that caused this operation to fail.
Details: Exception: Microsoft.SqlServer.DataWarehouse.DataMovement.Workers.DmsSqlNativeException,
Message: SqlNativeBufferReader.Run, error in OdbcExecuteQuery: SqlState: 42000, NativeError: 608,
'Error calling: SQLExecDirect(this->GetHstmt(), (SQLWCHAR *)statementText, SQL_NTS), SQL return code: -1 | SQL Error Info:
SrvrMsgState: 1, SrvrSeverity: 16, Error <1>: ErrorMsg: [Microsoft][ODBC Driver 11 for SQL Server][SQL Server]No catalog entry found for partition ID 72057594047758336 in database 36. The metadata is inconsistent. Run DBCC CHECKDB to check for a metadata corruption. |
Error calling: pReadConn->ExecuteQuery(statementText, bufferFormat) | state: FFFF, number: 134148, active connections: 100',
Connection String: Driver={pdwodbc};APP=TypeC01-DmsNativeReader:DB196\mpdwsvc (2504)- ODBC;Trusted_Connection=yes;AutoTranslate=no;Server=\\.\pipe\DB.196-bb5f9dd884cf\sql\query
I'm sorry to hear about your experience with Azure SQL Data Warehouse. I believe this is a defect related to BIT data type handling for NOT NULL columns. Can you confirm that you have a BIT NOT NULL column (e.g., CREATE TABLE t1 (IsTrue BIT NOT NULL);)?
If so, a fix has been coded and is in testing for release. To mitigate this now, you can either switch to a TINYINT or remove the NOT NULL setting for the column.
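For example, either of these works around it until the fix ships (t1/IsTrue is just the sample table from above):

-- Option 1: use TINYINT instead of BIT
CREATE TABLE t1 (IsTrue TINYINT NOT NULL);
-- Option 2: keep BIT but drop the NOT NULL constraint
CREATE TABLE t1 (IsTrue BIT NULL);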