Reputation: 23
I am trying to transfer big tables to Azure SQL Server. While the small ones complete, the big ones fail with a timeout on the sink side; the errors are attached below. The SQL server does not have any timeout specified, yet the copies still fail.
The SQL database is 800 DTU.
How do I increase the timeout on the sink side, if that is the problem?
Isn't Data Factory supposed to keep the connection and retry if it fails?
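For context, my understanding is that retries only happen if a retry policy is set on the copy activity itself; a rough sketch of that policy block in the pipeline JSON is below (property names follow the standard ADF activity policy, the values are placeholders and not my actual settings):

{
    "name": "CopyBigTable",
    "type": "Copy",
    "policy": {
        "timeout": "7.00:00:00",
        "retry": 3,
        "retryIntervalInSeconds": 60
    }
}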
errors:
{
    "dataRead": 1372864152,
    "dataWritten": 1372864152,
    "sourcePeakConnections": 1,
    "sinkPeakConnections": 2,
    "rowsRead": 2205634,
    "rowsCopied": 2205634,
    "copyDuration": 8010,
    "throughput": 167.377,
    "errors": [
        {
            "Code": 11000,
            "Message": "Failure happened on 'Sink' side. 'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Timeouts in SQL write operation.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=System.Data.SqlClient.SqlException,Message=Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.,Source=.Net SqlClient Data Provider,SqlErrorNumber=-2,Class=11,ErrorCode=-2146232060,State=0,Errors=[{Class=11,Number=-2,State=0,Message=Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.,},],''Type=System.ComponentModel.Win32Exception,Message=The wait operation timed out,Source=,'",
            "EventType": 0,
            "Category": 5,
            "Data": {
                "FailureInitiator": "Sink"
            },
            "MsgId": null,
            "ExceptionType": null,
            "Source": null,
            "StackTrace": null,
            "InnerEventInfos": []
        }
    ],
    "effectiveIntegrationRuntime": "XXX",
    "billingReference": {
        "activityType": "DataMovement",
        "billableDuration": [
            {
                "meterType": "SelfhostedIR",
                "duration": 2.0166666666666666,
                "unit": "Hours"
            }
        ]
    },
    "usedParallelCopies": 1,
    "executionDetails": [
        {
            "source": {
                "type": "SqlServer"
            },
            "sink": {
                "type": "SqlServer"
            },
            "status": "Failed",
            "start": "2020-08-03T17:16:58.8388528Z",
            "duration": 8010,
            "usedParallelCopies": 1,
            "profile": {
                "queue": {
                    "status": "Completed",
                    "duration": 810
                },
                "preCopyScript": {
                    "status": "Completed",
                    "duration": 0
                },
                "transfer": {
                    "status": "Completed",
                    "duration": 7200,
                    "details": {
                        "readingFromSource": {
                            "type": "SqlServer",
                            "workingDuration": 7156,
                            "timeToFirstByte": 0
                        },
                        "writingToSink": {
                            "type": "SqlServer"
                        }
                    }
                }
            },
            "detailedDurations": {
                "queuingDuration": 810,
                "preCopyScriptDuration": 0,
                "timeToFirstByte": 0,
                "transferDuration": 7200
            }
        }
    ],
    "dataConsistencyVerification": {
        "VerificationResult": "NotVerified"
    },
    "durationInQueue": {
        "integrationRuntimeQueue": 810
    }
}
Upvotes: 2
Views: 7674
Reputation: 15227
I'd like to supplement with my case: I hit the same error as the OP, but the exception was raised while executing a data flow.
Following the accepted answer's direction, setting the Batch size to a limit in the sink of my data flow solved my issue.
Upvotes: 0
Reputation: 16411
Please try setting the Write batch timeout on the sink side:
Ref: Azure SQL Database as the sink
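A minimal sketch of what the sink block of the copy activity JSON could look like with that setting, assuming an Azure SQL Database sink (AzureSqlSink); the timeout and batch-size values are placeholders to tune for your table:

"sink": {
    "type": "AzureSqlSink",
    "writeBatchSize": 10000,
    "writeBatchTimeout": "01:00:00"
}

writeBatchTimeout is how long each batch insert may run before it times out, so raising it (and, if needed, lowering writeBatchSize) keeps each batch within the limit on a busy database.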
Upvotes: 5