Strictly, method a
is the least resource intensive:
a) select DATEADD(dd, DATEDIFF(dd, 0, getdate()), 0)
Proven less CPU intensive for the same total duration over a million rows by someone with way too much time on their hands: Most efficient way in SQL Server to get a date from date+time?
I saw a similar test elsewhere with similar results too.
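To make the behaviour concrete, here is a quick sketch of what the rounding does (the sample timestamp is arbitrary):

```sql
-- DATEDIFF counts whole days between the epoch (0 = 1900-01-01) and the value;
-- DATEADD adds that day count back onto the epoch, landing on midnight
SELECT DATEADD(dd, DATEDIFF(dd, 0, '2011-09-20 11:39:20.123'), 0)
-- returns 2011-09-20 00:00:00.000
```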
I prefer the DATEADD/DATEDIFF approach for its flexibility (see the worked example in the Jan 2012 edit below).
Edit, Oct 2011
For SQL Server 2008+, you can CAST to date, i.e. CAST(getdate() AS date). Or just use the date datatype, so there is no time to remove.
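A minimal sketch of both options:

```sql
-- SQL Server 2008+ only: the date type has no time portion to begin with
SELECT CAST(getdate() AS date)                    -- today's date, no time
SELECT CAST(CAST(getdate() AS date) AS datetime)  -- today at midnight, back in datetime
```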
Edit, Jan 2012
A worked example of how flexible this is: Need to calculate by rounded time or date figure in sql server
Edit, May 2012
Do not use this in WHERE clauses and the like without thinking: adding a function or CAST to a column invalidates index usage. See number 2 here: Common SQL Programming Mistakes
Now, that link does include an example of later SQL Server optimiser versions managing CAST to date correctly, but generally it is a bad idea ...
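As an illustration (the Orders table and OrderDate column here are hypothetical), the safe pattern is to leave the column bare and move the date maths onto the constants, giving the optimiser a sargable range:

```sql
-- Risky: wrapping the column in a function or CAST can block an index seek
-- WHERE CAST(OrderDate AS date) = '20120101'

-- Safer: half-open range on the bare column, index-friendly on all versions
SELECT COUNT(*)
FROM Orders
WHERE OrderDate >= '20120101'
  AND OrderDate <  '20120102'
```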
Edit, Sep 2018, for datetime2
DECLARE @datetime2value datetime2 = '02180912 11:45' --this is deliberately within datetime2, year 0218
DECLARE @datetime2epoch datetime2 = '19000101' -- explicit datetime2 epoch: the usual 0 shorthand is typed as datetime, which cannot hold year 0218
select DATEADD(dd, DATEDIFF(dd, @datetime2epoch, @datetime2value), @datetime2epoch)
Best Answer
SQL Server 2008 and up
In SQL Server 2008 and up, of course the fastest way is Convert(date, @date). This can be cast back to a datetime or datetime2 if necessary.
What Is Really Best In SQL Server 2005 and Older?
I've seen inconsistent claims about what's fastest for truncating the time from a date in SQL Server, and some people even said they did testing, but my experience has been different. So let's do some more stringent testing and let everyone have the script so if I make any mistakes people can correct me.
Float Conversions Are Not Accurate
First, I would stay away from converting datetime to float, because it does not convert correctly. You may get away with doing the time-removal thing accurately, but I think it's a bad idea to use it, because it implicitly communicates to developers that this is a safe operation, and it is not. Take a look:

This is not something we should be teaching people in our code or in our examples online.
Also, it is not even the fastest way!
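The demonstration referred to above is not reproduced here, but the problem can be sketched like this (exact discrepancies vary by input value):

```sql
-- Round-tripping datetime through float is lossy: float cannot exactly
-- represent the datetime type's 1/300-second ticks as a fraction of a day
DECLARE @d datetime = '2011-09-20 11:39:20.123'
SELECT
    @d AS Original,
    CONVERT(datetime, CONVERT(float, @d)) AS ViaFloat
-- for some inputs, ViaFloat differs from Original by a few milliseconds
```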
Proof – Performance Testing
If you want to perform some tests yourself to see how the different methods really do stack up, then you'll need this setup script to run the tests farther down:
Please note that this creates a 427.57 MB table in your database and will take something like 15-30 minutes to run. If your database is small and set to 10% growth, it will take longer than if you size it big enough first.
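The setup script itself is not included here; the following is a hypothetical reconstruction of its shape (the table name AllDay, the column Tm, and the doubling loop are my assumptions). One row per second for 300 days gives the 25,920,000 rows the analysis below refers to:

```sql
-- Build a ~26M-row table of sequential datetime values, one per second
CREATE TABLE AllDay (Tm datetime NOT NULL CONSTRAINT PK_AllDay PRIMARY KEY CLUSTERED)

DECLARE @start datetime, @rows int
SET @start = '20000101'          -- arbitrary starting day
INSERT AllDay VALUES (@start)
SET @rows = 1
WHILE @rows < 25920000           -- 300 days * 86,400 seconds per day
BEGIN
    -- double the table each pass by shifting the existing rows forward @rows seconds,
    -- capping at exactly 25,920,000 seconds past the start
    INSERT AllDay
    SELECT DATEADD(ss, @rows, Tm)
    FROM AllDay
    WHERE DATEADD(ss, @rows, Tm) < DATEADD(ss, 25920000, @start)
    SET @rows = (SELECT COUNT(*) FROM AllDay)
END
```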
Now for the actual performance testing script. Please note that it deliberately does not return rows to the client, as doing so is crazy expensive on 26 million rows and would hide the performance differences between the methods.
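That script is likewise not included here; one plausible shape for a single timing test (again assuming the hypothetical AllDay/Tm names) assigns the result to a variable so nothing is sent to the client:

```sql
SET STATISTICS TIME ON

DECLARE @dummy datetime

-- DateAdd/DateDiff method
SELECT @dummy = DATEADD(dd, DATEDIFF(dd, 0, Tm), 0) FROM AllDay

-- varchar round trip, style 112 (yyyymmdd)
SELECT @dummy = CONVERT(datetime, CONVERT(varchar(8), Tm, 112)) FROM AllDay

SET STATISTICS TIME OFF
```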
Performance Results
Some Rambling Analysis
Some notes about this. First of all, if just performing a GROUP BY or a comparison, there's no need to convert back to datetime. So you can save some CPU by avoiding that, unless you need the final value for display purposes. You can even GROUP BY the unconverted value and put the conversion only in the SELECT clause.

Also, see how the numeric conversions take only slightly more time to convert back to datetime, but the varchar conversion almost doubles? This reveals the portion of the CPU that is devoted to date calculation in the queries. There are parts of the CPU usage that don't involve date calculation, and this appears to be something close to 19875 ms in the above queries. Then the conversion takes some additional amount, so if there are two conversions, that amount is used up approximately twice.

More examination reveals that, compared to Convert(, 112), the Convert(, 101) query has some additional CPU expense (since it uses a longer varchar?), because the second conversion back to date doesn't cost as much as the initial conversion to varchar, but with Convert(, 112) it is closer to the same 20000 ms CPU base cost.

Here are those calculations on the CPU time that I used for the above analysis:
- round is the CPU time for a round trip back to datetime.
- single is the CPU time for a single conversion to the alternate data type (the one that has the side effect of removing the time portion).
- base is the result of subtracting from single the difference between the two invocations: single - (round - single). It's a ballpark figure that assumes the conversion to and from that data type and datetime is approximately the same in either direction. It appears this assumption is not perfect, but it is close, because the values are all close to 20000 ms with only one exception.

One more interesting thing is that the base cost is nearly equal to the single Convert(date) method (which has to be almost 0 cost, as the server can internally extract the integer day portion right out of the first four bytes of the datetime data type).

Conclusion
So what it looks like is that the single-direction varchar conversion method takes about 1.8 μs and the single-direction DateDiff method takes about 0.18 μs. I'm basing this on the most conservative "base CPU" time in my testing of 18458 ms total for 25,920,000 rows, so (23218 ms - 18458 ms) / 25,920,000 = 0.18 μs. The apparent 10x improvement seems like a lot, but it is frankly pretty small until you are dealing with hundreds of thousands of rows (617k rows = 1 second of savings).

Even given this small absolute improvement, in my opinion the DateAdd method wins, because it is the best combination of performance and clarity. The answer that requires a "magic number" of 0.50000004 is going to bite someone some day (five zeroes or six???), plus it's harder to understand.

Additional Notes
When I get some time, I'm going to change 0.50000004 to '12:00:00.003' and see how it does. It is converted to the same datetime value, and I find it much easier to remember.

For those interested, the above tests were run on a server where @@Version returns the following: