It’s easy: always use decimals for variables that represent money.

The reason to avoid floats and doubles: they can produce incorrect results when you calculate with them. To be fair, the errors show up only in **rare** cases, but you can avoid them entirely by using decimals!

You don’t believe me? Try this: 77.1 * 850 should be exactly 65,535.

```csharp
decimal correct = 77.1m * 850m;  // the m suffix makes these decimal literals
// correct == 65535

double incorrect = 77.1 * 850;   // plain literals give you double arithmetic
// incorrect == 65534.999999999993
```

Why is that? Because 0.1 has no exact representation in binary: it’s a repeating binary fraction. It’s sort of like how 1/3 has no exact representation in decimal. 1/3 is 0.33333333… and you have to keep writing 3’s forever; stop at any finite point and you get something inexact. A double stores only a finite number of binary digits, so the value it actually holds for 0.1 is slightly off, and the multiplication magnifies that.
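You can see the stored value directly. This is a small sketch: the `G17` format string tells .NET to print a double with full round-trip precision, revealing every digit it actually stores:

```csharp
using System;

class Program
{
    static void Main()
    {
        // A double cannot store 0.1 exactly; G17 reveals the stored value.
        Console.WriteLine(0.1.ToString("G17"));  // 0.10000000000000001
        // A decimal stores 0.1 exactly, because its mantissa is base 10.
        Console.WriteLine(0.1m);                 // 0.1
    }
}
```

With the default `ToString()`, the double also prints as `0.1`, which is exactly why this bug is easy to miss: the formatting rounds away the error, but the arithmetic doesn’t.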

So what are the downsides to decimal?

- They use a lot more space: 128 bits, versus 64 for a double. But who cares these days…
- They are slow as a dog. Again, that doesn’t matter unless you are about to program your own spreadsheet application.