If you are looking for a one-size-fits-all, I'd suggest DECIMAL(19, 4) is a popular choice (a quick Google bears this out). I think this originates from the old VBA/Access/Jet Currency data type, being the first fixed point decimal type in the language; Decimal only came in 'version 1.0' style (i.e. not fully implemented) in VB6/VBA6/Jet 4.0.
The rule of thumb for storage of fixed point decimal values is to store at least one more decimal place than you actually require, to allow for rounding. One of the reasons for mapping the old Currency type in the front end to the DECIMAL(19, 4) type in the back end was that Currency exhibited bankers' rounding by nature, whereas DECIMAL(p, s) rounded by truncation.

An extra decimal place in storage for DECIMAL allows a custom rounding algorithm to be implemented rather than taking the vendor's default (and bankers' rounding is alarming, to say the least, for a designer expecting all values ending in .5 to round away from zero).
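To make the difference concrete, here is a minimal C# sketch (my own illustration, not from the original answer) that rounds a value stored with one extra decimal place, first with the half-to-even default and then with an explicit away-from-zero policy:

```csharp
using System;

// A value stored with one extra decimal place (scale 5) before rounding to scale 4.
decimal stored = 12.34565m;

// .NET's default midpoint rule: round half to even ("bankers' rounding").
Console.WriteLine(Math.Round(stored, 4, MidpointRounding.ToEven));        // 12.3456

// The policy many designers expect: .5 always rounds away from zero.
Console.WriteLine(Math.Round(stored, 4, MidpointRounding.AwayFromZero));  // 12.3457
```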
Yes, DECIMAL(24, 8) sounds like overkill to me. Most currencies are quoted to four or five decimal places. I know of situations where a decimal scale of 8 (or more) is required, but this is where a 'normal' monetary amount (say four decimal places) has been pro rata'd, implying the decimal precision should be reduced accordingly (also consider a floating point type in such circumstances). And no one has that much money nowadays to require a decimal precision of 24 :)
However, rather than a one-size-fits-all approach, some research may be in order. Ask your designer or domain expert about accounting rules which may be applicable: GAAP, EU, etc. I vaguely recall some EU intra-state transfers with explicit rules for rounding to five decimal places, therefore using DECIMAL(p, 6) for storage. Accountants generally seem to favour four decimal places.
PS Avoid SQL Server's MONEY data type because it has serious issues with accuracy when rounding, among other considerations such as portability. See Aaron Bertrand's blog.
Microsoft and language designers chose banker's rounding because hardware designers chose it [citation?]. It is enshrined in the Institute of Electrical and Electronics Engineers (IEEE) standards, for example. And hardware designers chose it because mathematicians prefer it. See Wikipedia; to paraphrase: The 1906 edition of Probability and Theory of Errors called this 'the computer's rule' ("computers" meaning humans who perform computations).
Decimal (an alias for the System.Decimal structure in the BCL) is designed for storing monetary values. It's a 128-bit decimal floating point type (as opposed to binary floating point) and is useful for storing "real-world" values with high decimal precision. By real-world, I specifically mean measurements that are originally made in decimal. Double is generally suitable for calculations that don't need as much accuracy when they are represented as decimal numbers.
The Decimal value type represents decimal numbers ranging from positive 79,228,162,514,264,337,593,543,950,335 to negative 79,228,162,514,264,337,593,543,950,335. The Decimal value type is appropriate for financial calculations requiring large numbers of significant integral and fractional digits and no round-off errors. The Decimal type does not eliminate the need for rounding. Rather, it minimizes errors due to rounding. For example, the following code produces a result of 0.9999999999999999999999999999 rather than 1.
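The code referred to there isn't reproduced above; a sketch of the usual one-third example in C# behaves as described:

```csharp
using System;

decimal dividend = Decimal.One;
decimal divisor = 3;

// Prints 0.9999999999999999999999999999 rather than 1.
Console.WriteLine(dividend / divisor * divisor);
```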
The answer is easy. Never floats. NEVER!
Floats were, according to IEEE 754, always binary; only the new standard IEEE 754R defined decimal formats. Many of the fractional binary parts can never equal the exact decimal representation.

Any binary number can be written as m/2^n (m, n positive integers), and any decimal number as m/(2^n * 5^n). As binary fractions lack the prime factor 5 in their denominators, all binary numbers can be exactly represented by decimals, but not vice versa. So you end up with a number either higher or lower than the given decimal number. Always.
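For instance (my own C# sketch, not part of the original answer), 0.1 has the prime factor 5 in its denominator (1/10 = 1/(2*5)), so it has no exact binary representation, while the decimal type stores it exactly:

```csharp
using System;

double d = 0.1 + 0.2;
Console.WriteLine(d == 0.3);           // False: 0.1, 0.2 and 0.3 are only binary approximations.
Console.WriteLine(d.ToString("G17"));  // 0.30000000000000004

decimal m = 0.1m + 0.2m;
Console.WriteLine(m == 0.3m);          // True: decimal represents these values exactly.
```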
Why does that matter? Rounding.
Normal rounding means 0..4 down, 5..9 up. So it does matter whether the result is 0.049999999999... or 0.0500000000... You may know that it means 5 cents, but the computer does not know that: it rounds 0.0499... down (wrong) and 0.0500... up (right). Given that the results of floating point computations always contain small error terms, the decision is pure luck. It gets hopeless if you want decimal round-to-even handling with binary numbers.
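A concrete illustration (again my own C# sketch): 2.675 stored as a binary double is actually slightly below the midpoint before any rounding rule is applied, so it rounds down no matter which policy you pick, whereas the decimal version behaves as a human would expect:

```csharp
using System;

// As a double, 2.675 is really 2.67499999999999982..., so it rounds down to 2.67.
Console.WriteLine(Math.Round(2.675, 2));   // 2.67

// As a decimal, the midpoint really is .675; half-to-even gives 2.68 here.
Console.WriteLine(Math.Round(2.675m, 2));  // 2.68
```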
Unconvinced? You insist that in your account system everything is perfectly ok? Assets and liabilities equal? Ok, then take each of the given formatted numbers of each entry, parse them and sum them with an independent decimal system!
Compare that with the formatted sum. Oops, something is wrong, isn't it?
It doesn't help against this error, because all people automatically assume that the computer sums right, and practically no one checks independently.
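A quick way to run that check in code, as a C# sketch (assuming the ledger was summed with double and the printed entries are parsed back in): re-add the formatted amounts with an independent decimal sum and compare:

```csharp
using System;
using System.Globalization;
using System.Linq;

// Ten formatted entries of ten cents each, as they might appear on a statement.
string[] entries = Enumerable.Repeat("0.10", 10).ToArray();

double floatSum = 0.0;
decimal decimalSum = 0m;

foreach (string e in entries)
{
    floatSum += double.Parse(e, CultureInfo.InvariantCulture);
    decimalSum += decimal.Parse(e, CultureInfo.InvariantCulture);
}

Console.WriteLine(decimalSum == 1.00m);       // True: the independent decimal sum is exact.
Console.WriteLine(floatSum == 1.0);           // False: binary error terms have accumulated.
Console.WriteLine(floatSum.ToString("G17"));  // 0.99999999999999989
```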