Bug: number multiplication and .99999999999


I know that decimal numbers can sometimes be a problem in computers, but here I find it hard to understand why. With Adobe ColdFusion 2021, it all works fine.

<cfset amount = 81.32>
<cfset amount = amount * 100>
<cfdump var="#amount#">

The result is a number with the value 8131.999999999999.

Any workaround?

Thank you!

OS: Windows Server 2016 (10.0) 64bit
Java Version: 1.8.0_181 (Oracle Corporation) 64bit
Tomcat Version: Apache Tomcat/8.5.33
Lucee Version: Lucee

I tried this code and most of the numbers are fine, but some have this issue. In ColdFusion 2021, they are all correct.

I round the index i because even the loop has a problem adding 0.01 to the number without producing values like .99999999.

<cfset start = 0.01>
<cfset end = 100.00>

<cfloop from="#start#" to="#end#" index="i" step="0.01">
	<cfset i = round(i, 2)>
	<cfset amount = i>
	<cfset amount = amount * 100>
</cfloop>

It looks like a bug, no?
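To see why rounding the index is necessary: repeatedly adding 0.01 accumulates binary rounding error. Since Lucee runs on the JVM, a plain Java sketch (my illustration, not Lucee code) shows the same drift:

```java
public class StepDrift {
    public static void main(String[] args) {
        // Adding 0.01 one hundred times does not land exactly on 1.0,
        // because 0.01 has no exact binary representation and each
        // addition rounds. This is why the loop index needs round(i, 2).
        double total = 0.0;
        for (int n = 0; n < 100; n++) {
            total += 0.01;
        }
        System.out.println(total == 1.0); // false
        System.out.println(total);        // slightly off from 1.0
    }
}
```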

Have a look at PrecisionEvaluate() :: Lucee Documentation
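For anyone curious what that buys you: PrecisionEvaluate() performs the arithmetic with java.math.BigDecimal instead of double. A rough Java sketch of the difference (an assumption about the mechanism based on the docs, not Lucee's actual implementation):

```java
import java.math.BigDecimal;

public class PrecisionSketch {
    public static void main(String[] args) {
        // Plain double arithmetic rounds in binary:
        System.out.println(81.32 * 100); // 8131.999999999999

        // BigDecimal works in decimal, so the result is exact:
        BigDecimal exact = new BigDecimal("81.32").multiply(new BigDecimal("100"));
        System.out.println(exact); // 8132.00
    }
}
```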


You are a life saver! Thank you!!!

The documentation and the video say you must pass it a string and it returns a string, but on my Lucee version I can pass a number and a math operation directly, like #PrecisionEvaluate(81.32 * 100)#, and it returns a number.

Am I supposed to be able to do that? Should the documentation be updated?

Honestly, this behaviour makes floats unusable if you have to wrap every operation in PrecisionEvaluate(). It looks like a bug to me. I just did a quick check; even CF10 returns the expected results.


I have to admit that this complicates things, and that I will have to revise every line of code with mathematical operations on decimal numbers. Even the old Adobe ColdFusion 8 returned the numbers as expected. The advantages of Lucee outweigh this inconvenience, though.


My priority is whether I can use the first of the following two methods:

<cfdump var="#PrecisionEvaluate(81.32 * 100)#">
<cfdump var="#PrecisionEvaluate("81.32 * 100")#">

The first returns a number, the second a string. Am I free to use whichever method best suits my situation? I wouldn't want to introduce a bug into my application by passing the mathematical formula directly rather than as a string.
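One way to think about the risk (a Java analogy, not a statement about how Lucee compiles the call): if the argument were evaluated as a double expression before the function ever saw it, the precision would already be lost, whereas a string lets every step stay decimal:

```java
import java.math.BigDecimal;

public class ArgumentOrder {
    public static void main(String[] args) {
        // Expression evaluated as doubles first: the error is baked in
        // before any conversion to BigDecimal can happen.
        BigDecimal fromDouble = new BigDecimal(81.32 * 100);
        System.out.println(fromDouble.doubleValue()); // 8131.999999999999

        // Expression kept as decimal text: every step is exact.
        BigDecimal fromString = new BigDecimal("81.32").multiply(new BigDecimal("100"));
        System.out.println(fromString); // 8132.00
    }
}
```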

I too am curious whether the CF behavior is more of a happy accident, or whether they have taken steps internally to detect these situations and catch them. While floating-point math is a well-documented issue in computer programming, it seems more reasonable that I'd encounter it when using very long decimals, and that it shouldn't be an issue when I simply have two decimal places. Perhaps there is more Lucee can do to detect and improve decimal math.



All versions are affected; Adobe CF returns correct results.

This is less a bug and more an issue of semantics. The result returned could be argued as absolutely correct – it’s the way computers compute.
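To make "the way computers compute" concrete: the literal 81.32 is stored as the nearest binary fraction, so multiplying by 100 cannot produce exactly 8132. A minimal Java illustration (Java chosen because Lucee runs on the JVM):

```java
import java.math.BigDecimal;

public class BinaryReality {
    public static void main(String[] args) {
        // new BigDecimal(double) exposes the exact binary value behind
        // the literal 81.32, which is close to, but not equal to, 81.32:
        System.out.println(new BigDecimal(81.32));

        // Multiplying that slightly-off value by 100 in binary gives:
        System.out.println(81.32 * 100); // 8131.999999999999
    }
}
```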

Lucee could apply PrecisionEvaluate() to every decimal calculation. The problem, I suspect, is that PrecisionEvaluate() adds overhead to the calculation.

Lucee gives you the fastest, lowest overhead by default. And then gives you the option of resolving that lack of precision with a dedicated function if needed in your environment.

A possible compromise would be an application level flag that allows you to turn on PrecisionEvaluate() by default.

I think what the user/developer expects is that a simple mathematical float operation returns the correct result, at least for humans. If it's a speed issue and float operations are not used intentionally, I agree that a setting would make sense.

I tried to compare and output results of PrecisionEvaluate(81.32 * 100) using TryCF.com and received a “Sorry, some CF functions/tags are disabled.” I thought that this was strange. Are there any reasons, security or otherwise, that would result in this function being blacklisted by TryCF?

(NOTE: Performing the same test on CFFiddle resulted in a "Parent directory not found" error. Performing a HelloWorld test didn't work either, so something else is broken there.)

There is already a ticket for this issue, which is quite old: [LDEV-646] Floating point arithm. inaccuracy is handled inconsistently - Lucee.

If anybody is interested in this, please upvote it!

I’m tempted to respond because some people seem confused by IEEE-754 floating point behavior. If the issue is rounding before display or ToString, then if Lucee were to round output to 15 digits before display, that should solve the issue, I would think. IEEE-754 has a 53-bit significand (often incorrectly termed “mantissa”), which is enough for about 15.95 digits, so that 16th digit can be inaccurate. ( IEEE 754 - Wikipedia )
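Following that reasoning, here is a Java sketch of the two display choices (shortest round-trip string vs. rounding to 15 significant digits; the %.15g formatting is my illustration, not what Lucee does internally):

```java
import java.util.Locale;

public class DisplayRounding {
    public static void main(String[] args) {
        double v = 81.32 * 100;
        // The shortest round-trip string needs 16 digits, exposing the error:
        System.out.println(Double.toString(v)); // 8131.999999999999
        // Rounding to 15 significant digits before display hides it:
        System.out.println(String.format(Locale.ROOT, "%.15g", v)); // 8132.00000000000
    }
}
```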

% perl -e 'print 81.32 * 100'
8132
% perl -e 'printf "%.12f", 81.32 * 100'
8131.999999999999
% perl -e 'printf "%.11f", 81.32 * 100'
8132.00000000000
% python -c 'print(81.32 * 100)'
8131.999999999999
% node
Welcome to Node.js v16.11.0.
> console.log(81.32 * 100)
8131.999999999999

An example of an unrepresentable number in IEEE-754 is 0.1 (which can only be approximated), as can be seen below (printing 18 digits when only 15 or 16 are valid):

% perl -e 'printf+("%.18f\n"x6), 0.1, 0.2, 0.3, 0.1+0.1, 0.1+0.1+0.1, 0.1*3'
0.100000000000000006
0.200000000000000011
0.299999999999999989
0.200000000000000011
0.300000000000000044
0.300000000000000044

Printing 16 digits; everything rounds nicely:

% perl -e 'printf+("%.16f\n"x6), 0.1, 0.2, 0.3, 0.1+0.1, 0.1+0.1+0.1, 0.1*3'
0.1000000000000000
0.2000000000000000
0.3000000000000000
0.2000000000000000
0.3000000000000000
0.3000000000000000

I'm pretty sure most people on this thread knew this already, but some posters seemed confused by the behavior. The question is really just whether rounding should be performed differently before display or toString. Although, of course, computation with BigDecimal (via PrecisionEvaluate) should eliminate decimal rounding issues.

(But if I’m not adding anything interesting to the post, you can delete this…)