How Should I Store Currency Values in SQL Server?
See how you should store currency values in SQL Server.
On Twitter, Michael Dyrynda writes:
Never ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever store currency as a float in your database.
I completely agree with this statement. Never store values used in financial calculations as floating-point values, because a floating-point value is an approximate representation of a decimal value, stored as binary. In most cases it is inaccurate as soon as you store it. You can read more in this excellent - if a little dry - technical paper.
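As a quick illustration (a sketch of my own, not something from the tweet), adding ten cents to a FLOAT ten times does not land exactly on 1, while the same loop against a DECIMAL does:

-- Minimal sketch: accumulate 0.1 ten times in a FLOAT and in a DECIMAL.
DECLARE @f FLOAT         = 0,
        @d DECIMAL(19,4) = 0,
        @i INT           = 0;

WHILE @i < 10
BEGIN
    SET @f = @f + 0.1;  -- 0.1 has no exact binary representation, so error accumulates
    SET @d = @d + 0.1;  -- DECIMAL stores 0.1000 exactly
    SET @i += 1;
END;

SELECT CASE WHEN @f = 1.0 THEN 'equal' ELSE 'not equal' END AS float_check,   -- 'not equal'
       CASE WHEN @d = 1.0 THEN 'equal' ELSE 'not equal' END AS decimal_check; -- 'equal'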
With that out of the way, we get into an interesting discussion about the correct data type to store currency values.
I've contributed to several books about SQL Server, and I wrote recently on this website about MONEY and SMALLMONEY. I like to think I know a little bit about picking the right data type for a certain column.
So I was surprised when a friend on that Twitter thread mentioned that he stores currency values as integers (multiplying them by 100 first). Being as literal as I am, I thought he meant the INTEGER (or INT) data type, which is a four-byte data type that goes up to around 2.1 billion (for values greater than zero), but Michael reminded me that "big integers exist," so of course my friend was talking about BIGINT.
I wrote in a reply that "Decimal will have to do. I'm not going to store money as a BIGINT." Nevertheless, I found it an interesting thought experiment to write about here.
Assume that your currency has a hundred possible decimal values from 0.00 to 0.99. Now assume that you can eliminate pesky rounding errors by storing currency data in cents as a BIGINT. This requires you to multiply the currency amount by 100 when inserting it, and to divide by 100 when displaying it. That seems counterproductive to my mind, but bear with me as we investigate this further.
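To make the mechanics concrete, here is a rough sketch of the approach; the table and column names are hypothetical, not anything prescribed in the Twitter thread:

-- Hypothetical table storing amounts as whole cents in a BIGINT.
CREATE TABLE dbo.Invoice
(
    InvoiceID   INT IDENTITY(1,1) PRIMARY KEY,
    AmountCents BIGINT NOT NULL        -- 123.45 is stored as 12345
);

-- Multiply by 100 on the way in...
INSERT INTO dbo.Invoice (AmountCents)
VALUES (CAST(123.45 * 100 AS BIGINT));

-- ...and divide by 100 on the way out.
SELECT InvoiceID,
       CAST(AmountCents / 100.0 AS DECIMAL(19,2)) AS Amount
FROM dbo.Invoice;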
What problem is this meant to solve? Why avoid the DECIMAL data type, which is engineered to store decimal values exactly, at a fixed precision and scale? Is the risk of rounding errors so great that we should avoid the data type entirely, as we do with floating point? Surely DECIMAL can handle this? Most of the time we use simple sums (SUM), averages (AVG), and regular arithmetic operations like addition, subtraction, multiplication and division, and in my experience - provided I put the parentheses in the right place - I've never run into a rounding error that was significant enough (i.e. outside the tolerance of the organization calculating the amounts) to be a problem.
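For what it's worth, assuming a hypothetical dbo.Sales table with an Amount column declared as DECIMAL(19,4), those everyday calculations are as plain as you would hope:

-- Typical aggregations against a DECIMAL(19,4) column; no cents conversion needed.
SELECT CustomerID,
       SUM(Amount)        AS TotalAmount,
       AVG(Amount)        AS AverageAmount,
       SUM(Amount) * 0.15 AS EstimatedTax   -- simple arithmetic stays readable
FROM dbo.Sales
GROUP BY CustomerID;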
As I've pointed out before, BIGINT ranges from negative 9 quintillion to positive 9 quintillion (a number that is 19 digits wide). That's a lot of squillions. Even storing these values as cents, you have enough headroom to handle the ill-fated Zimbabwean dollar, which - at the time it was discontinued - was distributed in $100-trillion notes (100,000,000,000,000.00, or 17 digits wide). A BIGINT needs 8 bytes of uncompressed storage for each value.
Now DECIMAL is a different beastie. If you need the highest precision, a DECIMAL can use up to 17 bytes for each value. Generally, though, I like using DECIMAL(19,4) for currency, which needs 9 bytes and can store numbers 19 digits wide, where the last four digits are after the decimal place. Coincidentally, this has the same range of values as the MONEY data type. Maybe it's not a coincidence that I chose that then, eh?
Purists will say that the MONEY data type is woefully inaccurate when it comes to calculations, including multiplication and division. To those purists I say "pish!" Unless we're talking about Bitcoin, a currency amount usually has two decimal places. Using four decimal places means that rounding errors are less likely. In the vast majority of cases you will be doing things like summing values, calculating percentages, working out differences between two values, and so on. If you are being caught out by rounding errors, perhaps you should increase the precision of the input values for the purposes of that calculation only.
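The classic complaint is that an intermediate MONEY result is limited to four decimal places before the next operation happens, so chained division and multiplication can drift. If that ever bites you, casting the inputs to a higher-precision DECIMAL just for the duration of the calculation is usually enough. A hedged sketch, with made-up values:

DECLARE @amount MONEY = 100,
        @rate   MONEY = 339,
        @qty    MONEY = 10000;

-- The intermediate division is limited to four decimal places before the
-- multiplication happens, so precision is lost along the way.
SELECT (@amount / @rate) * @qty AS money_result;

-- Casting the inputs up only for this calculation keeps many more digits.
SELECT (CAST(@amount AS DECIMAL(19,8)) / CAST(@rate AS DECIMAL(19,8))) * @qty AS decimal_result;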
After all, if you're going to the effort of storing currency values as cents in a BIGINT column, you're already doing conversions of your data. Why not just eliminate that overhead of multiplication and division on storage and retrieval respectively, store these values as DECIMAL(19,4), and increase the precision only when you need it? In fact, sometimes all you need to do is change the order of operations in your calculation.
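To illustrate that last point with the same made-up values, multiplying before dividing keeps the intermediate result large, so far fewer digits are thrown away:

DECLARE @amount MONEY = 100,
        @rate   MONEY = 339,
        @qty    MONEY = 10000;

-- Dividing first produces a small intermediate value that loses digits...
SELECT (@amount / @rate) * @qty AS divide_first;

-- ...whereas multiplying first keeps the intermediate value large, so the
-- final division loses far less precision.
SELECT (@amount * @qty) / @rate AS multiply_first;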
When I design tables on SQL Server that are going to store a currency value, I generally use DECIMAL(19,4). If I encounter a customer database that stores these values using the MONEY data type, I have no problem whatsoever with that, because they are basically the same thing anyway.
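In practice that means a column definition along these lines (table and column names are, again, purely illustrative):

CREATE TABLE dbo.CustomerOrder
(
    OrderID    INT IDENTITY(1,1) PRIMARY KEY,
    OrderDate  DATE          NOT NULL,
    OrderTotal DECIMAL(19,4) NOT NULL   -- stored as-is; no cents conversion required
);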
I don't see the point of using BIGINT to store currency values. It causes confusion and will introduce bugs when multiplying or dividing by 100. The point is especially moot since MONEY is stored on disk in exactly the same format as BIGINT anyway.
If you're trying to save storage space, just compress your data using row or page compression (a quick sketch follows below). One thing we can all agree on, though, is never to store currency as a floating point.
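Assuming the hypothetical Invoice table from earlier, enabling page compression is a single rebuild; row compression works the same way with ROW in place of PAGE:

-- Rebuild the table with page compression to shrink its footprint on disk.
ALTER TABLE dbo.Invoice REBUILD WITH (DATA_COMPRESSION = PAGE);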
Share your thoughts in the comments below.