I’m tempted to respond because some people seem confused by IEEE-754 floating-point behavior. If the issue is how values are rounded before display or ToString, then having Lucee round output to 15 significant digits before display should solve it, I would think. IEEE-754 doubles have a 53-bit significand (often incorrectly termed “mantissa”), which is enough for about 15.95 decimal digits, so the 16th digit can be inaccurate (see IEEE 754 - Wikipedia).
% perl -e 'print 81.32 * 100'
8132
% perl -e 'printf "%.12f", 81.32 * 100'
8131.999999999999
% perl -e 'printf "%.11f", 81.32 * 100'
8132.00000000000
% python -c 'print(81.32 * 100)'
8131.999999999999
% node
Welcome to Node.js v16.11.0.
console.log(81.32 * 100)
8131.999999999999
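A minimal Python sketch of the suggestion above, assuming the fix is simply to round to 15 significant digits before converting to a string (the %.15g-style formatting here is only an illustration, not whatever Lucee would actually call):

v = 81.32 * 100

print(v)            # 8131.999999999999  (repr: shortest string that round-trips)
print(f"{v:.16g}")  # 8131.999999999999  (asking for 16 digits exposes the error)
print(f"{v:.15g}")  # 8132               (rounded to 15 significant digits first)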
An example of an unrepresentable number in IEEE-754 is 0.1 (which can only be approximated), as can be seen below (trying to print 18 digits when only 15 or 16 are valid):
% perl -e 'printf+("%.18f\n"x6), 0.1, 0.2, 0.3, 0.1+0.1, 0.1+0.1+0.1, 0.1*3'
0.100000000000000006
0.200000000000000011
0.299999999999999989
0.200000000000000011
0.300000000000000044
0.300000000000000044
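For reference (using Python’s decimal module here just because it makes the point compactly), Decimal(float) converts the binary value exactly, so it shows precisely what the double nearest to 0.1 actually stores:

from decimal import Decimal

# Decimal(0.1) takes the stored binary double and writes out its exact decimal
# value, which is why printing 0.1 to enough digits never yields 0.1 exactly.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625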
Printing 16 digits; everything rounds nicely:
% perl -e 'printf+("%.16f\n"x6),0.1,0.2,0.3,0.1+0.1,0.1+0.1+0.1,0.1*3'
0.1000000000000000
0.2000000000000000
0.3000000000000000
0.2000000000000000
0.3000000000000000
0.3000000000000000
I’m pretty sure most people on this thread knew this already, but some posters seemed confused by the behavior. The question is really just whether rounding should be performed differently before display or a ToString call. Of course, doing the computation with BigDecimal (via PrecisionEvaluate) should eliminate the decimal rounding issues entirely.
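PrecisionEvaluate does its arithmetic on BigDecimal values; purely as an analogy (this is Python’s decimal module, not Lucee or Java code), the same idea of computing in decimal rather than binary looks like this:

from decimal import Decimal

# Operands built from decimal strings (or integers) never pass through binary
# floating point, so the results stay exact.
print(Decimal("81.32") * 100)                            # 8132.00
print(Decimal("0.1") + Decimal("0.1") + Decimal("0.1"))  # 0.3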
(But if I’m not adding anything interesting to the post, you can delete this…)