JLo

Reputation: 3157

SQL Datediff in seconds with decimal places

I am trying to extract the difference between two SQL DateTime values in seconds, with decimal places for some performance monitoring.

I have a table, "Pagelog" which has a "created" and "end" datetime. In the past I have been able to do the following:

SELECT DATEDIFF(ms, pagelog_created, pagelog_end)/1000.00 as pl_duration FROM pagelog

However I have started getting the following error:

Msg 535, Level 16, State 0, Line 1
The datediff function resulted in an overflow. The number of dateparts separating two date/time instances is too large. Try to use datediff with a less precise datepart.

I have seen numerous responses to this error stating that I should use a less precise unit of measurement. But this hardly helps when I need to distinguish between 2.1 seconds and 2.9 seconds, because DATEDIFF(s,..,..) will return INT results and lose the accuracy I need.
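For instance (an illustration with made-up timestamps, not my real data), the whole-second version truncates while the millisecond version keeps the fraction:

-- Whole-second DATEDIFF truncates; millisecond DATEDIFF divided by 1000.0 does not
DECLARE @t1 DATETIME = '2013-01-11 10:00:00.000'
DECLARE @t2 DATETIME = '2013-01-11 10:00:02.900'

SELECT DATEDIFF(s, @t1, @t2)            -- 2
SELECT DATEDIFF(ms, @t1, @t2) / 1000.0  -- 2.900000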

I originally thought that this had been caused by a few values in my table having a huge range but running this:

SELECT DATEDIFF(s, pagelog_created, pagelog_end) FROM pagelog
ORDER BY DATEDIFF(s, pagelog_created, pagelog_end) DESC

This returns a max value of 30,837 seconds, which is about 8.5 hours, or 30,837,000 milliseconds, well within the range of a SQL INT as far as I know?

Any help would be much appreciated; as far as I can tell I have two options:

Thanks!

Upvotes: 4

Views: 8644

Answers (4)

praveen

Reputation: 1

-- Numbers the events for each TR_ID by date, then joins every row to the
-- next event in the same TR_ID and formats the gap as hours:minutes:seconds.
with cte as (
    select rownum = row_number() over (partition by T.TR_ID order by T.[date]),
           T.*
    from [dbo].[TR_Events] T
)
select cte.[date],
       nex.[date],
       convert(varchar(10), datediff(s, cte.[date], nex.[date]) / 3600) + ':' +
       convert(varchar(10), datediff(s, cte.[date], nex.[date]) % 3600 / 60) + ':' +
       convert(varchar(10), datediff(s, cte.[date], nex.[date]) % 60)
from cte
left join cte prev on prev.TR_ID = cte.TR_ID and prev.rownum = cte.rownum - 1  -- previous event (not used in the select)
left join cte nex  on nex.TR_ID  = cte.TR_ID and nex.rownum  = cte.rownum + 1  -- next event

Upvotes: 0

JLo

Reputation: 3157

The StackOverflow magic seems to have worked: despite spending hours on this problem last week, I re-read my question and have now solved it. I thought I'd update with the answer to help anyone else who has this problem.

The problem here was not that there was a large range, but that there was a negative range, which obviously results in a negative overflow. It would have been helpful if the SQL Server error were a little more descriptive, but it's not technically wrong.

So in my case, this was returning values:

SELECT * FROM pagelog
WHERE pagelog_created > pagelog_end

Either remove those rows, or exclude them from the initial result set!
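For example, a minimal sketch of the second option, reusing the original query and simply filtering out the offending rows:

-- Sketch: exclude rows where pagelog_end precedes pagelog_created,
-- so DATEDIFF(ms, ...) can no longer overflow negatively.
SELECT DATEDIFF(ms, pagelog_created, pagelog_end) / 1000.00 AS pl_duration
FROM pagelog
WHERE pagelog_created <= pagelog_end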

Thanks to Ivan G and Andriy M for your responses too.

Upvotes: 2

Ivan Golović

Reputation: 8832

You can try to avoid overflow like this:

DECLARE @dt1 DATETIME = '2013-01-01 00:00:00.000'
DECLARE @dt2 DATETIME = '2013-06-01 23:59:59.997'

SELECT  DATEDIFF(DAY, CAST(@dt1 AS DATE), CAST(@dt2 AS DATE)) * 24 * 60 * 60
SELECT  DATEDIFF(ms, CAST(@dt1 AS TIME), CAST(@dt2 AS TIME))/1000.0

SELECT DATEDIFF(DAY, CAST(@dt1 AS DATE), CAST(@dt2 AS DATE)) * 24 * 60 * 60 
        + DATEDIFF(ms, CAST(@dt1 AS TIME), CAST(@dt2 AS TIME))/1000.0 

First it computes the number of seconds in the whole days separating the DATE portions of the two DATETIMEs, then the number of seconds (with millisecond precision) between their TIME portions, and finally it adds the two together.

There won't be an overflow, because the millisecond difference between the minimum and maximum values of the TIME data type (a single day) cannot exceed the INT range.
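Applied to the table from the question (a sketch only, using the column names above), it would look something like this:

SELECT DATEDIFF(DAY, CAST(pagelog_created AS DATE), CAST(pagelog_end AS DATE)) * 24 * 60 * 60
     + DATEDIFF(ms, CAST(pagelog_created AS TIME), CAST(pagelog_end AS TIME)) / 1000.0 AS pl_duration
FROM pagelog

Note that when the end falls on a later day but at an earlier time of day, the TIME difference comes out negative, so the sum is still correct.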

Upvotes: 1

Andriy M

Reputation: 77657

You could of course do something like this:

SELECT
  DATEDIFF(ms, DATEADD(s, x.sec, pagelog_created), pagelog_end) * 0.001
  + x.sec AS pl_duration
FROM pagelog
CROSS APPLY (
  SELECT DATEDIFF(s, pagelog_created, pagelog_end)
) x (sec)
;

As you can see, first the difference in whole seconds between pagelog_created and pagelog_end is taken; then those seconds are added back to pagelog_created, the difference in milliseconds between that value and pagelog_end is calculated, and that fractional part is added to the seconds.

However, since, as per your investigation, the table doesn't seem to have rows that could cause the overflow, I'd also double check whether that particular fragment was the source of the error.
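If it helps, here's a rough check (a sketch of my own, not part of the query above) for rows whose span is big enough to overflow DATEDIFF(ms), which happens at roughly 24.8 days in either direction:

-- Rough diagnostic: flag rows whose span approaches the ~24.8-day limit
-- at which DATEDIFF(ms, ...) overflows an INT, in either direction.
SELECT *
FROM pagelog
WHERE ABS(DATEDIFF(DAY, pagelog_created, pagelog_end)) > 24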

Upvotes: 0
