...making Linux just a little more fun!
Ben Okopnik [ben at linuxgazette.net]
On Wed, Oct 03, 2007 at 12:36:53PM +0200, Pierre Habouzit wrote:
> Hi,
>
> I would like to report that the article "A Question Of Rounding" in
> your issue #143 is completely misleading, because its author doesn't
> understand how floating point works. Hence you published an article
> that is particularly wrong.
Thanks for your opinion; I've forwarded your response to the author.
> I'm sorry, but this article is very wrong, and gives false information
> on a matter that isn't very well understood by many programmers, hence
> I beg you to remove this article, for the sake of the teachers who
> already have to fight against enough preconceived ideas about IEEE 754
> numbers.
Sorry, that's not in the cards - but we'll be happy to publish your email in the next Mailbag.
I understood, even before I approved the article for publication, that a lot of people had rather strong feelings and opinions on this issue; i.e., the author getting flamed when he tried to file a bug report on this was a bit of a clue. Those opinions, however, don't make him wrong: whatever other evils can be ascribed to Micr0s0ft, their approach to IEEE-754 agrees with his - and is used by the majority of programmers in the world. That's not a guarantee that they (or he) are right - but it certainly implies that his argument stands on firm ground and has merit.
You are, of course, welcome to write an article that presents your viewpoint. If it meets our requirements and guidelines (https://linuxgazette.net/faq/author.html), I'd be happy to publish it.
-- * Ben Okopnik * Editor-in-Chief, Linux Gazette * https://LinuxGazette.NET *
Pierre Habouzit [madcoder at debian.org]
On Wed, Oct 03, 2007 at 12:36:22PM +0000, Ben Okopnik wrote:
> On Wed, Oct 03, 2007 at 12:36:53PM +0200, Pierre Habouzit wrote:
> > Hi,
> >
> > I would like to report that the article "A Question Of Rounding" in
> > your issue #143 is completely misleading, because its author doesn't
> > understand how floating point works. Hence you published an article
> > that is particularly wrong.
>
> Thanks for your opinion; I've forwarded your response to the author.
He was in Cc through the glibc bug anyway.
> > I'm sorry, but this article is very wrong, and gives false information
> > on a matter that isn't very well understood by many programmers, hence
> > I beg you to remove this article, for the sake of the teachers who
> > already have to fight against enough preconceived ideas about IEEE 754
> > numbers.
>
> Sorry, that's not in the cards - but we'll be happy to publish your
> email in the next Mailbag.
>
> I understood, even before I approved the article for publication, that a
> lot of people had rather strong feelings and opinions on this issue;
> i.e., the author getting flamed when he tried to file a bug report on
> this was a bit of a clue. Those opinions, however, don't make him wrong:
> whatever other evils can be ascribed to Micr0s0ft, their approach to
> IEEE-754 agrees with his - and is used by the majority of programmers in
> the world.
Their approach is just that the Microsoft libc does not use the same rounding method.
> That's not a guarantee that they (or he) are right - but it certainly > implies that his argument stands on firm ground and has merit.
The issue is that Microsoft and the glibc upstream are probably both right. Rounding 0.125 to either 0.12 or 0.13 is correct. IEEE754 defines 4 rounding modes[0]; by default glibc uses the first one, aka round to nearest even. What the author doesn't grok is that you cannot rely on any IEEE754 implementation rounding in one direction or the other. And if you read the glibc bug log carefully, you'll see that everyone in the thread is trying to explain that in different ways.
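Here is a small sketch (mine, not from the article) that shows the modes in action; rint() rounds according to the mode installed with fesetround(), and round-to-nearest-even is the default:

#include <fenv.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* rint() rounds its argument to an integer according to the
       current IEEE754 rounding mode. Build with: gcc -std=c99 -lm */
    int modes[] = { FE_TONEAREST, FE_UPWARD, FE_DOWNWARD, FE_TOWARDZERO };
    const char *names[] = { "to nearest even", "upward",
                            "downward", "toward zero" };

    for (int i = 0; i < 4; i++) {
        fesetround(modes[i]);
        printf("%-16s rint(2.5) = %g\n", names[i], rint(2.5));
    }
    return 0;   /* prints 2, 3, 2 and 2 respectively */
}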
Unlike Microsoft's libc, glibc does not only run on x86 CPUs; it also supports many, many other architectures, so it has to find common ground. An article saying "beware that the glibc and Microsoft libc behaviors differ" would have been 100% correct and OK. The current article implies that the glibc is wrong, and that's a blatant misunderstanding of the standard. That's all.
Please also note that the "Vincent Lefèvre[1]" in the glibc bug report did a PhD in computer arithmetic, was a student at one of the 3 most elite schools in France, and has a record of knowing "some stuff" about IEEE754[2]. So it is definitely not flaming. The bug report is wrong; there isn't a bug; the glibc implementation conforms to the standard (I don't say the glibc is bug-free, but what was reported isn't a bug). It is just one of the many quirks IEEE754 floating point produces if you really use it with portability in mind.
BTW, you can quote whatever you want from my mails.
Best regards,
[0] https://en.wikipedia.org/wiki/IEEE_754#Rounding_floating-point_numbers
[2] https://www.vinc17.org/research/fptest.en.html https://www.vinc17.org/cv.en.html
-- ·O· Pierre Habouzit ··O madcoder at debian.org OOO https://www.madism.org
Ben Okopnik [ben at linuxgazette.net]
On Wed, Oct 03, 2007 at 01:12:04PM +0200, Vincent Lefevre wrote:
> Hi,
>
> I'd like to warn you about Paul Sephton's article "A Question Of
> Rounding" in the October 2007 issue of the Linux Gazette.
>
> https://linuxgazette.net/143/TWDT.html#sephton
>
> Despite several people having explained to him that he was wrong
> in [1], Paul Sephton still hasn't understood the problems related to
> the IEEE-754 binary floating-point arithmetic and what the various
> standards require, and several parts of his article are nonsense. As
> an example of a paragraph from his article:
>
> "Whilst a number might be inexactly stored at a precision of 15
> decimals, that same number is exact when viewed at 14 decimals of
> precision. For example, the value 2.49999999999992 is promoted
> (using IEEE rounding) to 2.4999999999999 and then to 2.5 (with
> precision of 14) using the same rounding rules."
>
> Isn't anyone there who reviews the submitted articles?
You're always welcome to volunteer and improve our review process instead of just bitching about it. Anyone can do the latter, and I give this kind of nonsense the amount of respect that it deserves - i.e., none. If you expect a (very) small number of volunteers to be universally competent in every possible area of programming, Open Source issues, hardware, astronomy, cooking, and nuclear science, you've got a little rethinking to do. Significantly more than the author of this article would even if he was wrong, in fact.
Vincent, so far, the people who have complained about this article have almost all been the same people that participated in that bug report (and flamed Paul for asking questions.) Now, all of you are stirring the same tempest in the same teapot. Frankly, I don't see this as an issue of any concern - and if this little controversy gets more people looking at, and being aware of, this problem in glibc, then I consider my end of the job to be well completed.
I note that even you yourself admitted in that "discussion" that there's a problem:
"People are working to make it better. Since you need an arithmetic for financial applications, you should know that work has been done in this way: decimal floating-point, with some specific functions, defined in IEEE754r. But the glibc doesn't support them yet (and decimal support is quite recent and incomplete in GCC)."
This implies, among many other things, that his argument has merit - even if you disagree with him.
> As the whole article is pointless anyway (the glibc is correct),
> I think you should remove the article in question.
You have a very interesting view about what constitutes sufficient reason for removing an article. Good luck with that.
> For information about problems related to binary-decimal conversions
> and more generally decimal arithmetic, users should read the decimal
> FAQ [2].
>
> [1] https://sourceware.org/bugzilla/show_bug.cgi?id=4943
> [2] https://www2.hursley.ibm.com/decimal/decifaq.html
-- * Ben Okopnik * Editor-in-Chief, Linux Gazette * https://LinuxGazette.NET *
Vincent Lefevre [vincent at vinc17.org]
On 2007-10-03 08:09:51 -0500, Ben Okopnik wrote:
> On Wed, Oct 03, 2007 at 01:12:04PM +0200, Vincent Lefevre wrote:
> > Isn't anyone there who reviews the submitted articles?
>
> You're always welcome to volunteer and improve our review process
> instead of just bitching about it. Anyone can do the latter, and I give
> this kind of nonsense the amount of respect that it deserves - i.e.,
> none. If you expect a (very) small number of volunteers to be
> universally competent in every possible area of programming, Open Source
> issues, hardware, astronomy, cooking, and nuclear science, you've got a
> little rethinking to do. Significantly more than the author of this
> article would even if he was wrong, in fact.
That was just a question. I really know neither the number of volunteers nor the review process (I found this article only because it was mentioned in the bug report). But perhaps you can find external volunteers.
> Vincent, so far, the people who have complained about this article have
> almost all been the same people that participated in that bug report
> (and flamed Paul for asking questions.)
Paul wasn't flamed *for asking questions*, but for not listening to arguments of other people.
> I note that even you yourself admitted in that "discussion" that there's
> a problem:
>
> ``
> "People are working to make it better. Since you need an arithmetic for
> financial applications, you should know that work has been done in this
> way: decimal floating-point, with some specific functions, defined in
> IEEE754r. But the glibc doesn't support them yet (and decimal support is
> quite recent and incomplete in GCC)."
> ''
Yes, but this doesn't make Paul's article correct. If he can fix his article, then this would be fine (much information has been given in the bug report, as well as pointers to external documents); but most of it is wrong, in particular the whole conclusion.
-- Vincent Lefèvre <vincent at vinc17.org> - Web: <https://www.vinc17.org/> 100% accessible validated (X)HTML - Blog: <https://www.vinc17.org/blog/> Work: CR INRIA - computer arithmetic / Arenaire project (LIP, ENS-Lyon)
Paul Sephton [paul at inet.co.za]
On Wed, 2007-10-03 at 15:57, Vincent Lefevre wrote:
> > Vincent, so far, the people who have complained about this article have
> > almost all been the same people that participated in that bug report
> > (and flamed Paul for asking questions.)
>
> Paul wasn't flamed *for asking questions*, but for not listening to
> arguments of other people.
The track record in the bug report shows things a bit differently. It seems it was rather a case of being flamed for arguing with people. I really don't mind the flames that much, so long as you don't mind being flamed right back, and you eventually get right down and fix the problem.
Folks, if all of your arguments made ultimate logical sense, I would have dropped this matter long ago. However, that is not the case. I did promise to take this as far as it needs to go.
> > I note that even you yourself admitted in that "discussion" that there's
> > a problem:
> >
> > ``
> > "People are working to make it better. Since you need an arithmetic for
> > financial applications, you should know that work has been done in this
> > way: decimal floating-point, with some specific functions, defined in
> > IEEE754r. But the glibc doesn't support them yet (and decimal support is
> > quite recent and incomplete in GCC)."
> > ''
>
> Yes, but this doesn't make Paul's article correct. If he can fix his
> article, then this would be fine (much information has been given in
> the bug report, as well as pointers to external documents); but most
> of it is wrong, in particular the whole conclusion.
Nothing to fix there. Everything stated in the article is based on fact. The Microsoft lib does produce the value 3 when rounding 2.5, and GLibC does produce the value 2. GLibC is in keeping with IEEE rounding rules, and in contradiction to the accepted rules of decimal arithmetic.
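For anyone who wants to see it for themselves, a minimal test (the glibc results are what round-to-nearest-even produces; the Microsoft results are as reported in the article and the bug report):

#include <stdio.h>

int main(void)
{
    /* 2.5 and 0.125 are exactly representable in binary, so any
       difference below is purely a choice of decimal rounding rule. */
    printf("%.0f\n", 2.5);    /* glibc: 2 (ties to even); MSVC: 3    */
    printf("%.2f\n", 0.125);  /* glibc: 0.12;             MSVC: 0.13 */
    return 0;
}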
What's to change?
René Pfeiffer [lynx at luchs.at]
On Oct 03, 2007 at 1557 +0200, Vincent Lefevre appeared and said:
> On 2007-10-03 08:09:51 -0500, Ben Okopnik wrote:
> > On Wed, Oct 03, 2007 at 01:12:04PM +0200, Vincent Lefevre wrote:
> > > Isn't anyone there who reviews the submitted articles?
> >
> > You're always welcome to volunteer and improve our review process
> > instead of just bitching about it. Anyone can do the latter, and I give
> > this kind of nonsense the amount of respect that it deserves - i.e.,
> > none. If you expect a (very) small number of volunteers to be
> > universally competent in every possible area of programming, Open Source
> > issues, hardware, astronomy, cooking, and nuclear science, you've got a
> > little rethinking to do. Significantly more than the author of this
> > article would even if he was wrong, in fact.
>
> That was just a question.
The simple answer is that I did the technical review and I missed this crucial detail. It is as simple as that. I am a C programmer and I did quite some numerical calculations during my physics studies; nevertheless, I failed to double-check the IEEE754 standard and the bug reports that have been quoted. Rounding is tricky and bites occasionally, as we can see by the number of postings regarding the article.
> I really know neither the number of volunteers nor the review process
> (I found this article only because it was mentioned in the bug report).
> But perhaps you can find external volunteers.
Well, of course, are you interested?
> > I note that even you yourself admitted in that "discussion" that there's
> > a problem:
> >
> > ``
> > "People are working to make it better. Since you need an arithmetic for
> > financial applications, you should know that work has been done in this
> > way: decimal floating-point, with some specific functions, defined in
> > IEEE754r. But the glibc doesn't support them yet (and decimal support is
> > quite recent and incomplete in GCC)."
> > ''
>
> Yes, but this doesn't make Paul's article correct. If he can fix his
> article, then this would be fine (much information has been given in
> the bug report, as well as pointers to external documents); but most
> of it is wrong, in particular the whole conclusion.
In this case it makes sense to rectify the misleading conclusions, perhaps by writing another article as a follow-up (which I suggest to anyone who feels comfortable doing so; I promise to review the technical details more thoroughly).
Best regards, René.
Paul Sephton [paul at inet.co.za]
On Oct 03, 2007 at 1557 +0200, Vincent Lefevre appeared and said:
> "Whilst a number might be inexactly stored at a precision of 15 > decimals, that same number is exact when viewed at 14 decimals of > precision. For example, the value 2.49999999999992 is promoted > (using IEEE rounding) to 2.4999999999999 and then to 2.5 (with > precision of 14) using the same rounding rules." > > Isn't anyone there who reviews the submitted articles?
I will stick to my guns on the accuracy of the article, particularly with reference to the above complaint:
double x = 2.49999999999992;
printf("%.14f\n", x);
printf("%.13f\n", x);
printf("%.1f\n", x);

Result:
2.49999999999992
2.4999999999999
2.5
On Wed, 2007-10-03 at 18:47 +0200, René Pfeiffer wrote:
> > That was just a question.
>
> The simple answer is that I did the technical review and I missed this
> crucial detail. It is as simple as that. I am a C programmer and I did
> quite some numerical calculations during my physics studies;
> nevertheless, I failed to double-check the IEEE754 standard and the bug
> reports that have been quoted. Rounding is tricky and bites
> occasionally, as we can see by the number of postings regarding the
> article.
I have yet to see any reference to an error which needs correcting. If you do find a glaring error in my logic, or any of my statements, which might be proven factually and not simply a result of preconceptions, I would be only too happy to correct that error.
In my conclusion I state that the differences in rounding between the Microsoft & GNU libraries will lead to widespread mistrust. Many applications, including Gnumeric & OpenOffice, use floating point arithmetic. I do not doubt the conclusion of my article.
René Pfeiffer [lynx at luchs.at]
On Oct 03, 2007 at 1951 +0200, Paul Sephton appeared and said:
> On Oct 03, 2007 at 1557 +0200, Vincent Lefevre appeared and said:
> > "Whilst a number might be inexactly stored at a precision of 15
> > decimals, that same number is exact when viewed at 14 decimals of
> > precision. For example, the value 2.49999999999992 is promoted
> > (using IEEE rounding) to 2.4999999999999 and then to 2.5 (with
> > precision of 14) using the same rounding rules."
> >
> > Isn't anyone there who reviews the submitted articles?
>
> I will stick to my guns on the accuracy of the article, particularly
> with reference to the above complaint:
>
> ``
> double x = 2.49999999999992;
> printf("%.14f\n", x);
> printf("%.13f\n", x);
> printf("%.1f\n", x);
> Result:
> 2.49999999999992
> 2.4999999999999
> 2.5
> ''
Well, I remember the example, and in the light of the discussion I see it as an example of the "Round to Nearest" behaviour defined in IEEE 754. The precision you describe has nothing to do with the "exactness" of floating point numbers. Floating point numbers aren't exact. You can even have trouble converting 0.1 to the IEEE 754 binary format. Trying the converter at https://www.h-schmidt.net/FloatApplet/IEEE754.html shows this nicely.
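You can see the same thing without the applet; this little test (mine, not from the article) prints the double nearest to 0.1:

#include <stdio.h>

int main(void)
{
    /* 0.1 has no finite binary expansion; the stored double is only
       the nearest representable value. */
    printf("%.20f\n", 0.1);  /* prints 0.10000000000000000555... */
    return 0;
}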
Everyone who tries to convert "real" numbers into floating point numbers knows that inevitably errors occur. There's a nice publication that mathematically describes this effect: https://docs.sun.com/source/806-3568/ncg_goldberg.html
The experts who mailed to TAG may comment on this publication better than I can.
> On Wed, 2007-10-03 at 18:47 +0200, René Pfeiffer wrote:
> > > That was just a question.
> >
> > The simple answer is that I did the technical review and I missed this
> > crucial detail. It is as simple as that. I am a C programmer and I did
> > quite some numerical calculations during my physics studies;
> > nevertheless, I failed to double-check the IEEE754 standard and the bug
> > reports that have been quoted. Rounding is tricky and bites
> > occasionally, as we can see by the number of postings regarding the
> > article.
>
> I have yet to see any reference to an error which needs correcting.
"Whilst a number might be inexactly stored at a precision of 15 decimals, that same number is exact when viewed at 14 decimals of precision." is what I meant with "crucial point". Whenever you convert real numbers to the floating point number format you introduce errors. Everyone doing numeric calculations knows this. This is especially true if you have to code exit conditions in iterations. https://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm illustrates that.
If you are really interested in having arbitrary precision operations then you have to use other means of processing numbers. https://gmplib.org/ is one way of doing this. https://en.wikipedia.org/wiki/Bignum#Arbitrary-precision_software shows more applications.
> [...]
> In my conclusion I state that the differences in rounding between the
> Microsoft & GNU libraries will lead to widespread mistrust. Many
> applications, including Gnumeric & OpenOffice, use floating point
> arithmetic. I do not doubt the conclusion of my article.
Your conclusion is very superficial. Even the Microsoft Office Suite is struck by conversion errors to and from floating point numbers as this article in Microsoft's knowledge base shows: https://support.microsoft.com/kb/214118
So the widespread mistrust should at least be distributed equally.
Best regards, René.
Ben Okopnik [ben at linuxgazette.net]
On Wed, Oct 03, 2007 at 10:04:15PM +0200, René Pfeiffer wrote:
> > Floating point numbers aren't exact.
[blink] Pardon my ignorance, but... what's the use of them, then? Particularly since the rounding can happen (essentially arbitrarily) in any direction?
Also, given the above, is there a way of producing a meaningful fixed-point part of a number with precision?
-- * Ben Okopnik * Editor-in-Chief, Linux Gazette * https://LinuxGazette.NET *
Paul Sephton [paul at inet.co.za]
On Wed, 2007-10-03 at 22:04 +0200, René Pfeiffer wrote:
> On Oct 03, 2007 at 1951 +0200, Paul Sephton appeared and said:
> > On Oct 03, 2007 at 1557 +0200, Vincent Lefevre appeared and said:
> > > "Whilst a number might be inexactly stored at a precision of 15
> > > decimals, that same number is exact when viewed at 14 decimals of
> > > precision. For example, the value 2.49999999999992 is promoted
> > > (using IEEE rounding) to 2.4999999999999 and then to 2.5 (with
> > > precision of 14) using the same rounding rules."
> > >
> > > Isn't anyone there who reviews the submitted articles?
> >
> > I will stick to my guns on the accuracy of the article, particularly
> > with reference to the above complaint:
> >
> > ``
> > double x = 2.49999999999992;
> > printf("%.14f\n", x);
> > printf("%.13f\n", x);
> > printf("%.1f\n", x);
> > Result:
> > 2.49999999999992
> > 2.4999999999999
> > 2.5
> > ''
>
> Well, I remember the example, and in the light of the discussion I see it
> as an example of the "Round to Nearest" behaviour defined in IEEE 754.
The oft-quoted paragraph is merely intended to demonstrate how a number imprecise at one display size is rounded precisely at another display size. It does not demonstrate an error.
The paragraph is headed "GLibC and sprintf()" and should be read in that context. The problem has nothing at all to do with inexact storage, but with the fact that the GLibC applies the IEEE default rounding mode when performing decimal rounding operations while converting a number to text. The numbers 0.5, 1.5, 2.5 ... are EXACTLY represented by the FPU.
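To make that concrete, a quick sketch (mine, not from the article) of what the GLibC default does with exactly representable halves:

#include <stdio.h>

int main(void)
{
    /* 0.5, 1.5, 2.5 and 3.5 are all exact binary doubles; the only
       question is which way the tie breaks when printing. */
    for (double x = 0.5; x < 4.0; x += 1.0)
        printf("%.1f -> %.0f\n", x, x);   /* glibc: 0, 2, 2, 4 */
    return 0;
}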
The IEEE does not specify a "round away from zero" mode at all, so this makes it rather difficult to adhere to decimal arithmetic standards. Microsoft seems to manage though.
> The precision you describe has nothing to do with the "exactness" of
> floating point numbers. Floating point numbers aren't exact. You can
> even have trouble converting 0.1 to the IEEE 754 binary format. Trying
> the converter at https://www.h-schmidt.net/FloatApplet/IEEE754.html
> shows this nicely.
Oh goodness me. Don't you think I am aware of that?
> Everyone who tries to convert "real" numbers into floating point
> numbers knows that inevitably errors occur. There's a nice publication
> that mathematically describes this effect:
> https://docs.sun.com/source/806-3568/ncg_goldberg.html
... and there's a perfectly good piece of code at the end of the article demonstrating how to convert any IEEE double to decimal whilst taking inexact storage into account. Even better, the last incarnation of the code listed at the end of the linked bug report passes 10 000 000 iterations for conversion of randomly generated double to text and back at a precision of 15 without a single failure.
Point here is that it can be done, but no-one is doing it.
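For the record, this kind of brute force check is easy to sketch; the following is an illustration, not the actual code from the bug report. A decimal value with at most 15 significant digits must survive the trip text -> double -> text unchanged:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char in[32], out[32];
    srand(1);
    for (long i = 0; i < 10000000; i++) {
        /* a random decimal with 15 significant digits, e.g. 1234567.89012345 */
        snprintf(in, sizeof in, "%07d.%08d",
                 rand() % 10000000, rand() % 100000000);
        double x = strtod(in, NULL);             /* text -> binary */
        snprintf(out, sizeof out, "%016.8f", x); /* binary -> text */
        if (strcmp(in, out) != 0) {
            printf("mismatch: %s -> %s\n", in, out);
            return 1;
        }
    }
    puts("10 000 000 values round-tripped unchanged");
    return 0;
}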
> The experts who mailed to TAG may comment on this publication better
> than I can.
Um yes. Perhaps we should be consulting a mathematician here rather than a computer scientist. This has to do with decimal representation, not binary. It has more to do with arithmetic standards for rounding numbers than computers.
> If you are really interested in having arbitrary precision operations
> then you have to use other means of processing numbers.
> https://gmplib.org/ is one way of doing this.
> https://en.wikipedia.org/wiki/Bignum#Arbitrary-precision_software shows
> more applications.
Boy, am I having difficulty getting this across. I am not talking about arb precision. If I needed that, I would use the appropriate library.
I am talking about the process of displaying a floating point number to a desired precision using sprintf(). MS rounds it one way, and GNU C library does it another.
Simple stuff.
Cannot be argued.
Fact.
> > [...]
> > In my conclusion I state that the differences in rounding between the
> > Microsoft & GNU libraries will lead to widespread mistrust. Many
> > applications, including Gnumeric & OpenOffice, use floating point
> > arithmetic. I do not doubt the conclusion of my article.
>
> Your conclusion is very superficial. Even the Microsoft Office Suite is
> struck by conversion errors to and from floating point numbers as this
> article in Microsoft's knowledge base shows:
> https://support.microsoft.com/kb/214118
Quite correct. However, you refer to problems caused by inaccuracies in binary representation of decimal values, and not by rounding.
The inaccuracy in the binary representation is not as problematic as one might think. I can guarantee that the code listing at the end of the article (or rather, that listed at the end of the bug report) is not fazed by inaccuracies.
Perhaps you would like to try the code before taking such a firm stance on this issue?
Kind regards, Paul
Thomas Adam [thomas at edulinux.homeunix.org]
On Wed, Oct 03, 2007 at 03:47:28PM -0500, Ben Okopnik wrote:
> On Wed, Oct 03, 2007 at 10:04:15PM +0200, René Pfeiffer wrote:
> >
> > Floating point numbers aren't exact.
>
> [blink] Pardon my ignorance, but... what's the use of them, then?
> Particularly since the rounding can happen (essentially arbitrarily) in
> any direction?
Indeed. I'm reminded of this:
https://en.wikipedia.org/wiki/0.999...
We should go back to using whole numbers. Much easier.
-- Thomas Adam
-- "He wants you back, he screams into the night air, like a fireman going through a window that has no fire." -- Mike Myers, "This Poem Sucks".
Paul Sephton [paul at inet.co.za]
On Wed, 2007-10-03 at 15:47 -0500, Ben Okopnik wrote:
> On Wed, Oct 03, 2007 at 10:04:15PM +0200, René Pfeiffer wrote:
> >
> > Floating point numbers aren't exact.
>
> [blink] Pardon my ignorance, but... what's the use of them, then?
> Particularly since the rounding can happen (essentially arbitrarily) in
> any direction?
FP numbers are extremely useful for performing calcs very accurately. Yes, there's a loss in accuracy with calcs, but they are still very convenient. Rounding is always consistent according to the currently selected IEEE rounding mode, which averages out error. Effectively, as long as you need less than 16 significant decimals you are cooking.
> Also, given the above, is there a way of producing a meaningful > fixed-point part of a number with precision?
Yes, there is a way to produce a meaningful fixed point representation from the IEEE binary as per the code listing in the article. Currently GLibC and Microsoft disagree on rounding rules for converting the number to display, but 99% of the time they agree on the output. Inaccuracies introduced by calcs or inexact storage may further influence the output.
Regards, Paul
Ben Okopnik [ben at linuxgazette.net]
On Wed, Oct 03, 2007 at 10:59:43PM +0200, Paul Sephton wrote:
> On Wed, 2007-10-03 at 15:47 -0500, Ben Okopnik wrote:
> > On Wed, Oct 03, 2007 at 10:04:15PM +0200, René Pfeiffer wrote:
> > >
> > > Floating point numbers aren't exact.
> >
> > [blink] Pardon my ignorance, but... what's the use of them, then?
> > Particularly since the rounding can happen (essentially arbitrarily) in
> > any direction?
>
> FP numbers are extremely useful for performing calcs very accurately.
> Yes, there's a loss in accuracy with calcs, but they are still very
> convenient. Rounding is always consistent according to the currently
> selected IEEE rounding mode, which averages out error.
Ah. That makes sense. I was visualizing long calculations where the inaccuracy just kept building up.
> Effectively, as > long as you need less than 16 significant decimals you are cooking.
No trouble for me - I usually get along with three.
> > Also, given the above, is there a way of producing a meaningful
> > fixed-point part of a number with precision?
>
> Yes, there is a way to produce a meaningful fixed point representation
> from the IEEE binary as per the code listing in the article.
As long as you use C, that is. Thanks, Paul!
-- * Ben Okopnik * Editor-in-Chief, Linux Gazette * https://LinuxGazette.NET *
René Pfeiffer [lynx at luchs.at]
Well, somehow the direct reply got to me faster than the list posting and so I answered directly. Here's a follow-up with my answer.
----- Forwarded message from René Pfeiffer <lynx at luchs.at> -----
From: René Pfeiffer <lynx@luchs.at>
To: TAG <tag@lists.linuxgazette.net>
Date: Wed, 3 Oct 2007 23:17:44 +0200
To: Paul Sephton <paul at inet.co.za>
Subject: Re: [TAG] article "A Question Of Rounding" in issue #143
Message-ID: <20071003211744.GL9895 at nightfall.luchs.at>
In-Reply-To: <1191444256.16715.213.camel at wart>
Organization: GNU/Linux Manages!
X-Kosh: "A stroke of the brush does not guarantee art from the bristles."
User-Agent: Mutt/1.5.16 (2007-06-09)

Hello, Paul!
On Oct 03, 2007 at 2244 +0200, Paul Sephton appeared and said:
> On Wed, 2007-10-03 at 22:04 +0200, René Pfeiffer wrote:
> > On Oct 03, 2007 at 1951 +0200, Paul Sephton appeared and said:
> > > On Oct 03, 2007 at 1557 +0200, Vincent Lefevre appeared and said:
> > > > "Whilst a number might be inexactly stored at a precision of 15
> > > > decimals, that same number is exact when viewed at 14 decimals of
> > > > precision. For example, the value 2.49999999999992 is promoted
> > > > (using IEEE rounding) to 2.4999999999999 and then to 2.5 (with
> > > > precision of 14) using the same rounding rules."
> > > >
> > > > Isn't anyone there who reviews the submitted articles?
> > >
> > > I will stick to my guns on the accuracy of the article, particularly
> > > with reference to the above complaint:
> > >
> > > ``
> > > double x = 2.49999999999992;
> > > printf("%.14f\n", x);
> > > printf("%.13f\n", x);
> > > printf("%.1f\n", x);
> > > Result:
> > > 2.49999999999992
> > > 2.4999999999999
> > > 2.5
> > > ''
> >
> > Well, I remember the example, and in the light of the discussion I see it
> > as an example of the "Round to Nearest" behaviour defined in IEEE 754.
>
> The paragraph is headed "GLibC and sprintf()" and should be read in that
> context. The problem has nothing at all to do with inexact storage, but
> with the fact that the GLibC applies the IEEE default rounding mode when
> performing decimal rounding operations while converting a number to
> text. The numbers 0.5, 1.5, 2.5 ... are EXACTLY represented by the FPU.
Yes, but you don't seem to get the point of the nature of floating point arithmetic. https://www2.hursley.ibm.com/decimal/decifaq1.html#inexact was suggested to you multiple times, and clearly states:
"Binary floating-point cannot exactly represent decimal fractions, so if binary floating-point is used it is not possible to guarantee that results will be the same as those using decimal arithmetic."
This can lead to effects when dealing with converted numbers in any operation, including rounding. Whenever you use floating point numbers, you have to take that into account and be prepared for effects when dealing with decimal fractions converted to the binary format of FP.
> The IEEE does not specify a "round away from zero" mode at all, so this
> makes it rather difficult to adhere to decimal arithmetic standards.
> Microsoft seems to manage though.
Yes, but Microsoft fails to manage the "1*(.5-.4-.1)" case which the GNU C Library handles gracefully. BTW, what is the response of Microsoft you promised in the bug report?
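For reference, this is what that case looks like (a small sketch; the residue is just the difference between the doubles nearest to 0.4 and 0.1):

#include <stdio.h>

int main(void)
{
    /* 0.4 and 0.1 are both stored inexactly, so the result is a tiny
       negative residue instead of zero. */
    double r = 1 * (.5 - .4 - .1);
    printf("%.17g\n", r);   /* -2.7755575615628914e-17 */
    printf("%.1f\n", r);    /* glibc prints -0.0       */
    return 0;
}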
> > The precision you describe has nothing to do with the "exactness" of
> > floating point numbers. Floating point numbers aren't exact. You can
> > even have trouble converting 0.1 to the IEEE 754 binary format. Trying
> > the converter at https://www.h-schmidt.net/FloatApplet/IEEE754.html
> > shows this nicely.
>
> Oh goodness me. Don't you think I am aware of that?
After reading the conversation in the bug tracking system I am not so sure about which answers get through to you and which do not.
> > Everyone who tries to convert "real" numbers into floating point
> > numbers knows that inevitably errors occur. There's a nice publication
> > that mathematically describes this effect:
> > https://docs.sun.com/source/806-3568/ncg_goldberg.html
>
> ... and there's a perfectly good piece of code at the end of the article
> demonstrating how to convert any IEEE double to decimal whilst taking
> inexact storage into account. Even better, the last incarnation of the
> code listed at the end of the linked bug report passes 10 000 000
> iterations for conversion of randomly generated double to text and back
> at a precision of 15 without a single failure.
I don't agree with you about the precision. As far as I understand, you are talking about a precision of 15, while Theorem 15 in the article states:
"When a binary IEEE single precision number is converted to the closest eight digit decimal number, it is not always possible to uniquely recover the binary number from the decimal one. However, if nine decimal digits are used, then converting the decimal number to the closest binary number will recover the original floating-point number."
By using a precision of 15 you are already over the limit of the IEEE 754 format, IMHO.
> Point here is that it can be done, but no-one is doing it.
Maybe because people use other means of processing numbers in certain cases.
> > The experts who mailed to TAG may comment on this publication better
> > than I can.
>
> Um yes. Perhaps we should be consulting a mathematician here rather
> than a computer scientist. This has to do with decimal representation,
> not binary. It has more to do with arithmetic standards for rounding
> numbers than computers.
My background is theoretical physics and although it has been a long time since I did simulations of particle collisions I remember to stay away from the "end" of the mantissa, not using all available bits and avoiding certain library functions because they might introduce more errors than the measured/simulated data can handle.
> > If you are really interested in having arbitrary precision operations
> > then you have to use other means of processing numbers.
> > https://gmplib.org/ is one way of doing this.
> > https://en.wikipedia.org/wiki/Bignum#Arbitrary-precision_software shows
> > more applications.
>
> Boy, am I having difficulty getting this across. I am not talking about
> arb precision. If I needed that, I would use the appropriate library.
Yes, that was my mistake; after reading the bug report I know better. Maybe you should use a library that does decimal arithmetic, as was suggested to you. By doing this you save a lot of time, since you avoid potential rounding errors made either in the FPU, the GNU C Library, or other parts of the code. I never did any financial mathematics on computers, but I would really stay away from floating-point numbers. I can imagine rounding will be much easier and more exact then.
> I am talking about the process of displaying a floating point number to
> a desired precision using sprintf(). MS rounds it one way, and GNU C
> library does it another.
>
> Simple stuff.
>
> Cannot be argued.
>
> Fact.
Yes, and no one disputes this fact. It's just the behaviour of two different software packages well within the specification. This may not be the desired case, but if floating point numbers in combination with sprintf() fail, then you simply have to use another way, as was suggested multiple times. We did the same thing when doing numerical calculations in theoretical physics. Simple stuff, too. And daily tasks of developers.
> > > [...]
> > > In my conclusion I state that the differences in rounding between the
> > > Microsoft & GNU libraries will lead to widespread mistrust. Many
> > > applications, including Gnumeric & OpenOffice, use floating point
> > > arithmetic. I do not doubt the conclusion of my article.
> >
> > Your conclusion is very superficial. Even the Microsoft Office Suite is
> > struck by conversion errors to and from floating point numbers as this
> > article in Microsoft's knowledge base shows:
> > https://support.microsoft.com/kb/214118
>
> Quite correct. However, you refer to problems caused by inaccuracies in
> binary representation of decimal values, and not by rounding.
Yes, but your C code example also has to convert between the different representations.
> The inaccuracy in the binary representation is not as problematic as one
> might think. I can guarantee that the code listing at the end of the
> article (or rather, that listed at the end of the bug report) is not
> fazed by inaccuracies.
I wouldn't be so sure about that, since this claim is very hard to prove. You call an awful lot of C library functions. You would have to trace every conversion between the binary and decimal formats, and, in addition, the IEEE 754 representation after every mathematical operation. This is best done with a debugger, by collecting all computed values and comparing them after every step in order to trace potential errors and all the bits of the FP representation.
I have neither the time nor the curiosity to do this.
> Perhaps you would like to try the code before taking such a firm stance > on this issue?
I tried the code, and the results didn't surprise me. I even looked at the assembler code produced by the compiler and traced the commands (x86_64 with SSE instructions in my case). I've seen things like that many times and simply changed my code to deal with the data differently. Usually that's why I write my own code and use a compiler.
The reason why I answered the mails sent to the TAG list is the simple fact that I reviewed your article, did a sloppy job, and am trying to comment on that. I should have read the bug report thread earlier. I didn't, and here we are.
Best regards, René.
----- End forwarded message -----
René Pfeiffer [lynx at luchs.at]
On Oct 03, 2007 at 1547 -0500, Ben Okopnik appeared and said:
> On Wed, Oct 03, 2007 at 10:04:15PM +0200, René Pfeiffer wrote:
> >
> > Floating point numbers aren't exact.
>
> [blink] Pardon my ignorance, but... what's the use of them, then?
> Particularly since the rounding can happen (essentially arbitrarily) in
> any direction?
The rounding is defined as variable in the IEEE 754 standard.
> Also, given the above, is there a way of producing a meaningful > fixed-point part of a number with precision?
Yes, there is; https://www2.hursley.ibm.com/decimal/ has some thoughts on that. There are other ways of dealing with numbers, that's why I posted the link to the GMP library and others.
Best wishes, René.
Ben Okopnik [ben at linuxgazette.net]
On Wed, Oct 03, 2007 at 11:28:14PM +0200, René Pfeiffer wrote:
> On Oct 03, 2007 at 1547 -0500, Ben Okopnik appeared and said:
> > On Wed, Oct 03, 2007 at 10:04:15PM +0200, René Pfeiffer wrote:
> > >
> > > Floating point numbers aren't exact.
> >
> > [blink] Pardon my ignorance, but... what's the use of them, then?
> > Particularly since the rounding can happen (essentially arbitrarily) in
> > any direction?
>
> The rounding is defined as variable in the IEEE 754 standard.
I was wondering if it affects anything that I do (as I'd mentioned, mostly 3-place accuracy stuff.) It doesn't seem to, so I'm happy with it.
> > Also, given the above, is there a way of producing a meaningful
> > fixed-point part of a number with precision?
>
> Yes, there is; https://www2.hursley.ibm.com/decimal/ has some thoughts on
> that. There are other ways of dealing with numbers, that's why I posted
> the link to the GMP library and others.
Thanks, René.
-- * Ben Okopnik * Editor-in-Chief, Linux Gazette * https://LinuxGazette.NET *
Ben Okopnik [ben at linuxgazette.net]
On Wed, Oct 03, 2007 at 09:52:00PM +0100, Thomas Adam wrote:
> On Wed, Oct 03, 2007 at 03:47:28PM -0500, Ben Okopnik wrote:
> > On Wed, Oct 03, 2007 at 10:04:15PM +0200, René Pfeiffer wrote:
> > >
> > > Floating point numbers aren't exact.
> >
> > [blink] Pardon my ignorance, but... what's the use of them, then?
> > Particularly since the rounding can happen (essentially arbitrarily) in
> > any direction?
>
> Indeed. I'm reminded of this:
>
> https://en.wikipedia.org/wiki/0.999...
>
> We should go back to using whole numbers. Much easier.
I think we'd lose a little precision in expressing things, though...
-- * Ben Okopnik * Editor-in-Chief, Linux Gazette * https://LinuxGazette.NET *
Karl-Heinz Herrmann [khh at khherrmann.de]
On Wed, 3 Oct 2007 16:47:02 -0500 Ben Okopnik <ben at linuxgazette.net> wrote:
> I think we'd lose a little precision in expressing things, though...
just use very long long long ints and think in units of 10^-12 of whatever physical or currency unit you use....
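Something like this sketch (the milli-unit scale and the helper name are my own illustration); with 64-bit integers, "round half away from zero" is plain integer arithmetic:

#include <inttypes.h>
#include <stdio.h>

/* Store amounts as 64-bit integers in milli-units; rounding half away
   from zero to whole units is then exact integer math. */
static int64_t round_milli(int64_t m)
{
    return (m >= 0 ? m + 500 : m - 500) / 1000;
}

int main(void)
{
    printf("%" PRId64 "\n", round_milli(2500));   /*  2.500 ->  3 */
    printf("%" PRId64 "\n", round_milli(-2500));  /* -2.500 -> -3 */
    printf("%" PRId64 "\n", round_milli(2499));   /*  2.499 ->  2 */
    return 0;
}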
I very hazily recall that this was the way of COBOL, which was explicitly written for use in finance software.
It's the TeX way anyway
K.-H.
René Pfeiffer [lynx at luchs.at]
Hello, Paul!
I am beginning to understand the motivations for your article, and that's why I wish to reply to you and TAG, in order to direct your comments and mine to the right place.
On Oct 04, 2007 at 0031 +0200, Paul Sephton appeared and said:
> On Wed, 2007-10-03 at 23:17 +0200, René Pfeiffer wrote:
> > Yes, but you don't seem to get the point of the nature of floating point
> > arithmetic. https://www2.hursley.ibm.com/decimal/decifaq1.html#inexact
> > was suggested to you multiple times, and clearly states:
> >
> > "Binary floating-point cannot exactly represent decimal fractions, so if
> > binary floating-point is used it is not possible to guarantee that
> > results will be the same as those using decimal arithmetic."
>
> Um.
>
> This was a huge frustration for me in the bug report. If I don't "get"
> the point about FP after 23 years programming computers, and a
> background in chemistry, then no-one "gets" it.
Well, probably, but statements like these may be regarded as arrogant and may lead to adding more fuel to a flame war. I believe this is why the bug report got a bit out of hand.
> I am perfectly aware of the inaccuracies of FP math. I deal with and
> have dealt with it intimately. Everyone keeps telling me, though, that I
> "don't get it", and flames me when I argue with them.
I think this is due to a misunderstanding. I will come back to this later.
> > After reading the conversation in the bug tracking system I am not so
> > sure about which answers get through to you and which do not.
>
> The bug report's a mess. I understand that.
And so was my review. I should have added some comments, and that's what I am doing now.
> > [...]
> > I don't agree with you about the precision. As far as I understand,
> > you are talking about a precision of 15, while Theorem 15 in the
> > article states:
> >
> > "When a binary IEEE single precision number is converted to the closest
> > eight digit decimal number, it is not always possible to uniquely
> > recover the binary number from the decimal one. However, if nine decimal
> > digits are used, then converting the decimal number to the closest
> > binary number will recover the original floating-point number."
>
> I think you refer to the 4 byte IEEE? 8 byte doubles have 15
> significant decimal digits of precision.
Yes, you are right; I was referring to the 4 byte IEEE format.
> > By using a precision of 15 you are already over the limit of the IEEE
> > 754 format, IMHO.
>
> It depends on how you deal with the number. If you try to convert the
> number to decimal starting with the most significant digit, then for
> imprecisely stored values, you end up with a remainder at the end that
> you can just keep dividing ad infinitum. Around half the time, the
> number is below the actual, which results in rounding down. The other
> half, you end up with a value higher than the actual, which results in
> rounding up.
>
> Working from the known precision of 15 backwards allows you to carry
> the imprecise portion (after decimal 15) forward, and correctly round
> the numbers.
I wouldn't do that. I did some experiments converting decimal numbers to IEEE 754 and back to see how accurate the conversion is. As I said, I would stay away from the least significant digits, but frankly I haven't thought much about the error I'd be making then.
> > > Point here is that it can be done, but no-one is doing it.
> >
> > Maybe because people use other means of processing numbers in certain
> > cases.
>
> Yes. Mostly sprintf(), since no-one is aware of the difference between
> MS and GNU LibC. The reason for the article in the first place was to
> raise people's awareness of the discrepancy.
And I think that's what went wrong. I compared the article with the bug report. I believe the impression was that your article continued your efforts after the bug report deteriorated into a lengthy discussion and eventually a bit of a flame war. Your article can be perceived as "if you don't listen to me, I'll find someone else to talk to". During the email exchange with you, I now know that you wanted to raise awareness of the behaviour of sprintf() in combination with floating point to string conversion. This is different from the impression you left in the discussion of the bug, but it seems that emotions were boiling and everyone wanted you to be silent about the matter.
Your article can be read as an attack on the GNU C Library developers, and that may be the cause of all the reactions.
> > My background is theoretical physics and although it has been a long
> > time since I did simulations of particle collisions I remember to stay
> > away from the "end" of the mantissa, not using all available bits and
> > avoiding certain library functions because they might introduce more
> > errors than the measured/simulated data can handle.
>
> And mine is in Analytical and Nuclear Chemistry. That part of my
> training is still ingrained in me. I absolutely agree with your points
> on accuracy.
Thanks. It seems my memory is better than I think then.
> > > If you are really interested in having arbitrary precision
> > > operations
>
> I'm not. I in fact need no more than a few decimal digits of precision
> in our software. It's not as if I'm building an accounting or banking
> package and need to count beans- although I wonder if the FP arithmetic
> would not have sufficed even then- GnuCash I believe uses it quite
> successfully. But I digress.
So now I know what your intentions are. The bug report and your article didn't tell me explicitly that you were aware of all this. I thought you wanted someone to "fix the FP behaviour of glibc into submission".
> [...]
> All in all, I am not the one with the problem here. I don't need help
> "fixing" anything. I reported what I believe is a bug introduced
> through a misinterpretation of the C language spec- and incorrect
> application of the IEEE specification to the process of displaying a
> number.
>
> I would honestly like to see people owning up to that mistake and
> correcting it rather than hiding behind a ream of specifications.
I wouldn't call it hiding. Vincent Lefèvre already explained the details of the specifications and how they are implemented. The developers of the glibc can't be blamed if standards leave things open. There are a lot of other protocols and specifications that do that as well. Sometimes you hit this uncharted territory and have to make sure that your code uses reasonable defaults and catches/corrects undesired behaviour of library functions.
> > > I am talking about the process of displaying a floating point
> > > number to a desired precision using sprintf(). MS rounds it one
> > > way, and GNU C library does it another. [...]
> >
> > Yes, and no one disputes this fact. It's just the behaviour of two
> > different software packages well within the specification. This may not
> > be the desired case, but if floating point numbers in combination with
> > sprintf() fail, then you simply have to use another way, as was
> > suggested multiple times. We did the same thing when doing numerical
> > calculations in theoretical physics. Simple stuff, too. And daily tasks
> > of developers.
>
> Again, I do not have a problem. Repeated suggestions as to how I could
> correct this myself have been spurious and redundant. I am perfectly
> capable of building my own binary to text conversion, as demonstrated.
I know that now, so I don't need to suggest anything anymore.
> Rather, my arguments are on behalf of thousands of programmers who are
> blissfully unaware of the dangers of sprintf in binary to text
> conversion.
And that's the main reason why I approved your article. In my opinion every bit of information that warns developers of unsuspected "dangers" or "deviant results" is a good thing.
> > I wouldn't be so sure about that, since this claim is very hard to
> > prove. You call an awful lot of C library functions. You would have to
> > trace every conversion between the binary and decimal formats, and, in
> > addition, the IEEE 754 representation after every mathematical
> > operation. This is best done with a debugger, by collecting all
> > computed values and comparing them after every step in order to trace
> > potential errors and all the bits of the FP representation.
>
> Alternatively, prove it with the brute force approach, as I have done.
> Throw randomly generated values at it until it has covered a
> representative proportion of the problem space. I would think 10
> million random numbers without failure should be significant, don't you?
> If not, I could run it overnight and do 500 million, or over a week and
> do a couple of billion.
I could do that as well, but I doubt we would get a deeper insight into this.
> [...]
> If you reviewed the article on its own merit, and did not find it
> wanting, then there is little reason to depart from your initial view.
> Nothing I have said in the bug report is in contradiction to the
> article.
No, it isn't, and due to the comments on TAG we now have a sufficiently annotated article. I don't think that "watch out for sprintf() doing something unexpected/unwanted" is a statement that needs to be retracted. We also have more than enough proposals how to get around this peculiarity and use other methods which may be less prone to inaccuracies.
> [...]
> - neither did I claim that IEEE rounding to nearest even on boundaries
> is incorrect for FP operations or storage. I said that using the IEEE
> rounding mode to decide how to round decimal numbers for display is
> incorrect according to accepted industry standards.
Indeed, but IEEE 754 is written as it is, new proposals are coming, and everyone reading your article, the bug report, and hopefully this conversation on TAG can now decide which methods to use. Seeing everything in this light, I hope not to see any further flame wars on this topic.
> What I said right upfront is that when converting a binary value to
> text, the results are inconsistent. It was immediately proven to me
> that the results were not inconsistent, but consistently wrong- at least
> according to industry standards. Subsequently I proved that the results
> were inconsistent with Microsoft's results. Thereafter the whole bug
> report devolved into a flame war.
>
> A whole separate discussion evolved about accuracy- which was never an
> issue from my side. I am quite happy with the accuracy I have. I am
> unhappy with sprintf's rather mediocre attempt at binary to text
> conversion though.
Well, and until now I wasn't aware that you were happy about the accuracy you have.
> How would you interpret the C99 specification where it says:
>
> "Paragraph 2: Conversions involving IEC 60559 formats follow all
> pertinent recommended practice. In particular, conversion between any
> supported IEC 60559 format and decimal with DECIMAL_DIG or fewer
> significant digits is correctly rounded, which assures that conversion
> from the widest supported IEC 60559 format to decimal with DECIMAL_DIG
> digits and back is the identity function."
I think that this is done according to the specification. You take a supported IEC 60559 format, convert into a decimal representation and use a rounding scheme that allows you to reconvert the decimal representation back into a supported number in IEC 60559 format. From my point of view the phrase "correctly rounded" does not necessarily refer to the industrial standard rounding you describe. This may be wrong, but I think this is simply due to the limitation of the binary floating point format.
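That identity is easy to check; a small sketch (mine, not from the thread), with DECIMAL_DIG coming from <float.h> and covering the widest supported format:

#include <float.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    double x = 2.49999999999992;
    char buf[64];

    /* widest format -> decimal with DECIMAL_DIG digits -> back */
    snprintf(buf, sizeof buf, "%.*e", DECIMAL_DIG - 1, x);
    double y = strtod(buf, NULL);

    printf("%s -> %s\n", buf, x == y ? "identity holds" : "identity broken");
    return 0;
}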
I looked for lectures on numerical mathematics to remember how we did the rounding, and found one by Prof. Dr. R. Rannacher of the University of Heidelberg. From what I saw in his script, a common method of rounding in the binary FP format takes the number to be rounded from an interval of possible FP numbers and maps it to another number, subject to the constraint shown in formula 1.1.2.
https://web.luchs.at/gfx/numeric_rounding.png
The constraint basically limits the distance between the original and the rounded number. The last formula gives a recipe for how to do that in the IEEE format. Sadly, I am too tired to translate the German and try out the rounding formula on one of the example numbers from your C code. I'll do that as soon as my brain catches up.
> I feel very strongly about the GNU C library, and want it to be the best
> there is. Am I wrong to pursue what I perceive to be a problem that
> might affect its future?
No, I don't think you are wrong, but you have to keep in mind that people might react strongly when accused of being wrong. The Road to Hell is paved with Good Intentions. Again, until now I wasn't aware that you feel strongly about the GNU C library; your article can also be seen as a bashing of the GNU C library. Fortunately we now know that this is not the case, and I think this is all due to the language used and a load of misunderstandings.
Best wishes, René, off to bed.
Ben Okopnik [ben at linuxgazette.net]
On Thu, Oct 04, 2007 at 01:36:50AM +0200, René Pfeiffer wrote:
> Hello, Paul!
>
> I am beginning to understand the motivations for your article, and
> that's why I wish to reply to you and TAG, in order to direct your
> comments and mine to the right place.
[snip]
I've said this before, I'm sure, but - René, you rock. This is one of the few bits of light in a field full of nothing but smoke; thank you for providing it. Paul's article may yet be incorrect in some particulars - I will admit that I'm not knowledgeable enough to decide one way or the other on my own - but what I am seeing, finally, is an attempt to hear what the man is actually saying, something that's been largely absent from the beginning of this bug report and discussion. In fact, having read your email, I now do understand most of the factors that bear on the problem.
Previously, I saw a lot of verbiage fly by in this discussion, and much of it from the "developers side" was loaded with barely-tempered arrogance - which always tends to make me wonder whose particular ox is being gored, since Paul Sephton had simply asked a series of civil questions. In fact, one of the people who contacted me, Paul Zimmermann (Director of Research at INRIA) got extremely haughty and prescriptive when I suggested that his colleagues, who reportedly contacted him about this, write an article to counter the original argument; it seems that discussion, or contribution to media which does not employ a staff of professional reviewers is beneath his notice as a Scientist. I was just supposed to bow down and obey his pronouncements from on high. [shrug] Not today, I'm afraid. And all the other days on my calendar look pretty damn unlikely, too.
As a meta-issue - there's a lot of that sort of entrenched arrogance in various pockets of the Open Source community (not only, of course - but it's what I'm concerned with for now.) I've seen Rick Moen treated with high-handed snottiness on 'debian-legal' when he dared suggest that the average Open Source geek's habit of capitulating in terror when faced with a trademark infringement demand (usually without any merit to it) is wrong. That kind of nonsense needs to be dealt with before it poisons us all.
I believe that those ingrown attitudes can - and should - be pruned, or at least tempered to mitigate their malignancy. Frankly, I like being able to bring those issues out into the public view; I'm happy that LG exists as a forum in which this kind of thing can be aired. Not that I see Paul or Rick as some sort of martyrs who need to be defended - but in the Open Source community, frank and open discussion of problems needs to be the order of the day. Yes, I understand about limited time; I understand not wanting to rehash the same old issues over and over. There are costs, however, to living in a free society - the concepts of freedom, rights, and private ownership can be damned inconvenient for the police and the government when they're "just trying to do their job", for example - but they're inseparable from the ability to live in that society; they are, in fact, what created that society in the first place.
In larger groups - countries - these things are considered to be a citizen's duty. In this community, we don't have a formalized concept of that sort. This makes me very glad that LG can be at least somewhat of a voice of conscience.
-- * Ben Okopnik * Editor-in-Chief, Linux Gazette * https://LinuxGazette.NET *
Paul Sephton [paul at inet.co.za]
On Thu, 2007-10-04 at 01:36 +0200, René Pfeiffer wrote:
> Hello, Paul! > > I am beginning to understand your motivations for your article and > that's why I wish to reply to you and TAG in order to direct your and > mine comments to the right place. <Lots of snip>
Thank you, René.
I truly appreciate your comments here. Finally, I feel that there is someone who understands what I have been trying to say. I was starting to wonder whether all of this was worth the effort - whether mine, the GLibC developers', or your own.
Regards, Paul
Rick Moen [rick at linuxmafia.com]
Quoting Paul Sephton (paul at inet.co.za):
> Thank you, René. > > I truly appreciate your comments here. Finally, I feel that there is > someone who understands what I have been trying to say. I was starting > to wonder whether all of this was worth the effort - whether mine, the > GLibC developers', or your own.
For what it's worth, Paul, I likewise really appreciated your article.
Back in dinosaur days, when I aspired to be an EECS (pronounced "Eeks!") major and ended up with a mathematics degree, one of the earliest and best lessons I learned was that rounding and representation are among the more subtle and nasty problems in numerical methods. I think your article ably highlights those problems, and the flawed results that can follow from using canned heuristics without careful attention.
Thank you for taking the effort.
-- Cheers, Rick Moen, rick at linuxmafia.com
"Don't use Outlook. Outlook is really just a security hole with a small e-mail client attached to it." -- Brian Trosko in r.a.sf.w.r-j
Paul Sephton [paul at inet.co.za]
On Thu, 2007-10-04 at 00:43 -0700, Rick Moen wrote:
> Quoting Paul Sephton (paul at inet.co.za): > For what it's worth, Paul, I likewise really appreciated your article.
Hey, thanks Rick. That comment was worth far more than you think.
Regards, Paul
Ben Okopnik [ben at linuxgazette.net]
On Wed, Oct 03, 2007 at 03:03:22PM +0200, Pierre Habouzit wrote:
> On Wed, Oct 03, 2007 at 12:36:22PM +0000, Ben Okopnik wrote: > > > > I understood, even before I approved the article for publication, that a > > lot of people had rather strong feelings and opinions on this issue; > > i.e., the author getting flamed when he tried to file a bug report on > > this was a bit of a clue. Those opinions, however, don't make him wrong: > > whatever other evils can be ascribed to Micr0s0ft, their approach to > > IEEE-754 agrees with his - and is used by the majority of programmers in > > the world. > > Their approach is just that the Microsoft libc does not uses the same > rounding method.
Yes; thus my contention that one approach versus the other is not wrong /per se/.
I'm also looking at this from a somewhat different perspective than yours; the fine technical details aren't the only thing that I'm interested in, here. When I was still writing programs in C - many years ago, that is - I would have been quite surprised to see this kind of "data loss" pop up in one of my programs; as a result, I think that having this article in LG, and whatever controversy follows it, is a Good Thing. If it makes more programmers aware of the problem, I will consider the Linux community well-served.
> > That's not a guarantee that they (or he) are right - but it certainly > > implies that his argument stands on firm ground and has merit. > > The issue is that microsoft and the glibc upstream are probably both > right. Rounding 0.125 to 0.12 or to 0.13 is correct.
...Let's just say that you're right in terms of standards only. It is not "right" for, say, a patient whose dose of radioactive chemicals is calculated by this method, and who gets under- or over-medicated as a result. Perhaps it makes sense to (re)consider the standards in that light. Even the process of defining standards can stand to be brought into question once in a while.
> IEEE754 defines 4 > rounding modes[0]. By default glibc uses the first one, aka round to > nearest even. What the author doesn't grok is that you cannot rely > any IEEE754 implementation to round in one sense or the other. And if > you read the glibc bug log carefully, you'll see that everyone in the > thread is trying to explain that in different ways.
Well, no. Ulrich Drepper's response was not attempting to explain anything - it was intemperate and pointless. Most of the other people there also spent a lot of time questioning Paul's competence instead of addressing what is obviously a valid concern. This does not reflect well on their own abilities.
> Please also note that the "Vincent Lefèvre[1]" in the glibc bug report > did a PhD on computer arithmetics, has been a student in one of the 3 > most elitist schools in France, and has a record of knowing "some stuff" > about IEEE754[2]. So it is definitely not flaming.
Erm... educated people can't flame? That's a new one on me. Although I will say that Vincent has maintained a rational air through most of that discussion.
> The bug report is wrong, > there isn't a bug, the glibc implementation is conform to the standard > (I mean I don't say the glibc is bug-free, but what was reported isn't a > bug). It just is one of the many quirks IEEE754 floating point generates > if you really use it with portability in mind.
I think that you have hit the nail on the head: it is indeed a quirk. I believe that Paul's point is that it's a quirk that should be remedied - and that, rather than some nebulous "right" or "wrong", is purely a matter of opinion, one that I believe deserves some exposure.
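A minimal C sketch shows the quirk directly; the output comments assume a glibc system in its default round-to-nearest-even mode, and the three constants are the ones analyzed in detail later in this thread:

    /* Observing printf's round-to-nearest-even tie-breaking.
       A sketch; output assumes glibc with the default rounding mode. */
    #include <stdio.h>

    int main(void)
    {
        printf("%.2f\n", 2597.525);  /* "2597.53" - stored value lies just above the tie */
        printf("%.2f\n", 2597.625);  /* "2597.62" - an exact tie, rounded to even        */
        printf("%.2f\n", 2597.725);  /* "2597.72" - stored value lies just below the tie */
        return 0;
    }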
> BTW, you can quote whatever you want from my mails.
Whenever anyone writes to me in my capacity as Editor-in-Chief of LG, that's my default assumption anyway - but thank you for making it explicit.
> [0] https://en.wikipedia.org/wiki/IEEE_754#Rounding_floating-point_numbers > > [1] https://www.vinc17.org/ > > [2] https://www.vinc17.org/research/fptest.en.html > https://www.vinc17.org/cv.en.html > -- > ·O· Pierre Habouzit > ··O madcoder at debian.org > OOO https://www.madism.org
-- * Ben Okopnik * Editor-in-Chief, Linux Gazette * https://LinuxGazette.NET *
Jim Jackson [jj at franjam.org.uk]
On Wed, 3 Oct 2007, Ben Okopnik wrote:
> On Wed, Oct 03, 2007 at 10:04:15PM +0200, René Pfeiffer wrote: > > > > Floating point numbers aren't exact. > > [blink] Pardon my ignorance, but... what's the use of them, then? > Particularly since the rounding can happen (essentially arbitrarily) in > any direction? > > Also, given the above, is there a way of producing a meaningful > fixed-point part of a number with precision?
Yes. Do integer arithmetic and adjust on input and output. Integer arithmetic is accurate - modulo overflowing the max int you can store.
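A minimal sketch of that approach in C, assuming amounts are read and written as integer cents; the helper names are illustrative, and overflow checks and negative amounts are omitted:

    /* Sketch of scaled-integer ("fixed-point") arithmetic: compute in
       integer cents, convert only on input and output. Helper names are
       illustrative; overflow checks and negative amounts are omitted. */
    #include <stdio.h>

    typedef long long cents_t;              /* amounts held in cents */

    static void print_amount(cents_t c)
    {
        printf("%lld.%02lld\n", c / 100, c % 100);
    }

    int main(void)
    {
        cents_t price = 259762;             /* "2597.62", read in as an integer */
        cents_t total = 3 * price;          /* exact - no binary fractions      */
        print_amount(total);                /* prints 7792.86                   */
        return 0;
    }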
Ben Okopnik [ben at linuxgazette.net]
On Thu, Oct 04, 2007 at 02:34:06PM +0100, Jim Jackson wrote:
> > > > On Wed, 3 Oct 2007, Ben Okopnik wrote: > > > On Wed, Oct 03, 2007 at 10:04:15PM +0200, René Pfeiffer wrote: > > > > > > Floating point numbers aren't exact. > > > > [blink] Pardon my ignorance, but... what's the use of them, then? > > Particularly since the rounding can happen (essentially arbitrarily) in > > any direction? > > > > Also, given the above, is there a way of producing a meaningful > > fixed-point part of a number with precision? > > Yes. Do integer arithmetic and adjust on input and output. > Integer arithmetic is accurate - modulo overflowing the max int you can > store.
Hmm. I think I understand what you mean in general terms - e.g., multiply all the operands by some X which will turn them into integers, perform the operation, then divide the result - but I'm not sure that this is easily implementable. If I have to calculate anything beyond 'a^2 + b^2', the code is going to become Really Unwieldy (unless that's just my lack of experience speaking; it certainly seems like it would be a pain.)
I know that for Perl, there's a Math::BigInt module (uses the GMP lib if that's available), but the problem I see would be in standardizing the conversions. Seems like overloading all the operators would be a bit of a job.
-- * Ben Okopnik * Editor-in-Chief, Linux Gazette * https://LinuxGazette.NET *
Kapil Hari Paranjape [kapil at imsc.res.in]
Hello,
Having long ago been part of a similar flamefest^Wdiscussion on a similar topic, I would like to insert my own 2 paise at the risk of getting some 1 paise worth of tomato thrown at me.
Let me begin with some truisms:
1. There are technical terms that are "common language" words. The technical terms try to abstract the meaning of the words --- but once the former are defined (through "standards" or "definitions"), they acquire a life of their own. Strictly speaking, they should be treated as different words with the same spelling and pronunciation!
"Precision" is such a term when it comes to floating point computations.
2. The rules for the handling of floating point computations (printing a number is also a computation!) have been set up with the scientific and engineering community in mind. An experiment in science results in a certain "value with an error-bar". A computational simulation of such an experiment needs to be accurate in the sense that it produces the "same" result.
Rules and standards should fit the purpose for which they were designed. Expecting them to behave "reasonably" out of context is --- unreasonable.
3. One should be very careful while using floating point computations with glibc for financial computations --- perhaps floating point computations with glibc should be completely avoided in such contexts.
Such a bug could also be a feature in the eyes of some beholders.
If Paul wants to assert that his code "works properly" with Microsoft's library, I think the authors and maintainers of glibc should be willing to live with that. Note that such an assertion is largely a matter of faith, since we do not have the source code to the latter library. Moreover, I think Vincent Lefèvre has provided an example to show that such beliefs may be optimistic.
One of the respondents to the bug report has been somewhat intemperate. While criticising this we should note that Paul is not the first person to have tried to report this "feature" of glibc as a "bug". Extended discussions with my colleague did not prevent him from filing a similar bug report against glibc's floating point handling. (This was some years ago.) So glibc maintainers probably get their share of such reports.
On the whole, I tend to agree with the maintainers of glibc on the bug report. At the same time an article like this and the subsequent discussion may help some readers avoid the pitfalls of wrongly interpreting the results of floating-point computations.
Regards,
Kapil. --
Jim Jackson [jj at franjam.org.uk]
On Thu, 4 Oct 2007, Ben Okopnik wrote:
> On Thu, Oct 04, 2007 at 02:34:06PM +0100, Jim Jackson wrote: > > On Wed, 3 Oct 2007, Ben Okopnik wrote: > > > On Wed, Oct 03, 2007 at 10:04:15PM +0200, René Pfeiffer wrote: > > > > Floating point numbers aren't exact. > > > > > > [blink] Pardon my ignorance, but... what's the use of them, then? > > > Particularly since the rounding can happen (essentially arbitrarily) in > > > any direction? > > > > > > Also, given the above, is there a way of producing a meaningful > > > fixed-point part of a number with precision? > > > > Yes. Do integer arithmetic and adjust on input and output. > > Integer arithmetic is accurate - modulo overflowing the max int you can > > store. > > Hmm. I think I understand what you mean in general terms - e.g., > multiply all the operands by some X which will turn them into integers, > perform the operation, then divide the result - but I'm not sure that > this is easily implementable. If I have to calculate anything beyond > 'a^2 + b^2', the code is going to become Really Unwieldy (unless that's > just my lack of experience speaking; it certainly seems like it would be > a pain.)
Depends. I come from an era where floating point calculations were slow and integer ones fast - you soon got used to thinking scaled integer, and only using floats when you were actually dealing with data that would scale and range well beyond normal int ranges (32 bits, or even 16), or when you needed to do scientific stuff. A lot of the time you don't really need floats. However, checking that all your int calculations stay within range (32 or 64 bits nowadays) can be a bit of a pain.
However, I'd need some convincing that it would be appropriate to use floats for financial accounting[1].
Jim
[1] Probably will prompt a wave of emails saying where I'm wrong
Kapil Hari Paranjape [kapil at imsc.res.in]
On Thu, 04 Oct 2007, Ben Okopnik wrote:
> On Thu, Oct 04, 2007 at 02:34:06PM +0100, Jim Jackson wrote: > > Yes. Do integer arithmetic and adjust on input and output. > > Integer arithmetic is accurate - modulo overflowing the max int you can > > store. > > Hmm. I think I understand what you mean in general terms - e.g., > multiply all the operands by some X which will turn them into integers, > perform the operation, then divide the result - but I'm not sure that > this is easily implementable. If I have to calculate anything beyond > 'a^2 + b^2', the code is going to become Really Unwieldy (unless that's > just my lack of experience speaking; it certainly seems like it would be > a pain.)
Which is why it is not used unless one wants exact computations.
Given the time of night over here, only two contexts where such exact computations are required suggest themselves --- finance and the rendering of vector graphics.
In both these cases, programs can afford to take such an approach since the largest and smallest numbers that can occur are usually decided before the computation begins. This allows the programmer to work with "int"s of fixed sizes. As a consequence one does find programs that take exactly this approach to "floating point" for such computations.
Regards,
Kapil. --
René Pfeiffer [lynx at luchs.at]
Hello again!
On Oct 03, 2007 at 2154 -0500, Ben Okopnik appeared and said:
> On Thu, Oct 04, 2007 at 01:36:50AM +0200, René Pfeiffer wrote: > > [...] > > I am beginning to understand your motivations for your article and > > that's why I wish to reply to you and TAG in order to direct your and > > mine comments to the right place. > > [snip] > > I've said this before, I'm sure, but - René, you rock. This is one of > the few bits of light in a field full of nothing but smoke; thank you > for providing it. Paul's article may yet be incorrect in some > particulars - I will admit that I'm not knowledgeable enough to decide > one way or the other on my own - but what I am seeing, finally, is an > attempt to hear what the man is actually saying, something that's been > largely absent from the beginning of this bug report and discussion. In > fact, having read your email, I now do understand most of the factors > that bear on the problem. [...]
During the investigations for my reply I also revisited some of the discussed issues and got a clearer picture. All in all I tend to agree with Kapil's point of view on this matter. The problem, the standards and the projects addressed in our discussion are complex and cannot be "solved" by simple lookups of man pages or lines of code. I hope this is a bit more transparent now.
Personally I think that every bit of information helps developers to decide which tools to use and what to question when processing data. During the past days I held a workshop about various aspects of computer security. We also spoke about the perils of software development, and in this light I wish to quote someone I don't know but whose words I used in the chapter about code development and security measures.
Let us not look back in anger or forward in fear, but around in awareness. -- James Thurber
I think this approach is much more beneficial than yelling at each other and distributing the blame among whoever has a statement on something. Most people consider a polite report of potential problems a good thing (and yes, I am aware that some of us may get the same reports again and again and are already fed up with that - I know how that feels).
So I am looking forward to all the beautiful submissions of articles explaining to unsuspecting souls the pitfalls of writing proper code, whatever "proper" may mean in this context.
Best wishes, René.
Kapil Hari Paranjape [kapil at imsc.res.in]
Hello,
On Thu, 04 Oct 2007, Kapil Hari Paranjape wrote:
> Such a bug could also be a feature in the eyes of some beholders.
This was meant in a lighter vein but could have a grain of truth in this context.
In order that "value with an error bar" can be interpreted as (Gaussian) symmetric bell shaped curve with the value as mean and the error bar as variance one needs to ensure that rounding rules are devoid of bias.
It is perfectly possible (though I have not checked this) that rounding binary floating point numbers using the "common rules of decimal arithmetic" could introduce a bias. This may be why the IEEE standard was defined the way it was. (Imagine a Monte-Carlo simulation which was free of bias ... until the results were printed!)
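A small C sketch, assuming C99's round() and rint() and the default rounding mode, makes the bias point concrete over the exact ties 0.5 through 9.5:

    /* Sketch: accumulated rounding error on exact ties, comparing
       round-half-away-from-zero (round) with the default
       round-half-to-even (rint). Assumes C99 and the default mode. */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double bias_away = 0.0, bias_even = 0.0;
        for (int k = 0; k < 10; k++) {
            double x = k + 0.5;             /* exactly representable ties */
            bias_away += round(x) - x;      /* always rounds up for x > 0 */
            bias_even += rint(x) - x;       /* alternates up and down     */
        }
        printf("half-away bias: %+.2f\n", bias_away);  /* +5.00 */
        printf("half-even bias: %+.2f\n", bias_even);  /*  0.00 */
        return 0;
    }

Over many such ties, half-away-from-zero drifts systematically upward, while half-to-even cancels out on average.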
I learnt the perils of "floating point" from D. Knuth's book(s) on Algorithms. His approach to vector graphics avoids them altogether. Hey! We have come a long way since then --- at least we don't call them "real" numbers anymore (except in Fortran).
Now when will computer programmers stop calling numbers between 0 and 2^64-1 "integers"? People stuck with older hardware even do this for numbers between 0 and 2^32-1 ;-)
Regards,
Kapil. --
Kapil Hari Paranjape [kapil at imsc.res.in]
On Wed Oct 3 13:49:39 PDT 2007 Paul Sephton wrote:
> The numbers 0.5, 1.5, 2.5 ... are EXACTLY represented by the FPU.
Once you realise that "floating point numbers" are not "numbers" but "intervals" or (more strictly) "values with error bars", you will see why a lot of people who work with "floating point" implementations got upset with your article and other remarks. A statement such as the one above would be interpreted as "meaningless" by those "in the know".
To explain this a bit further:
Think of a programming language where assigning a long array variable L to a shorter array variable A is supposed to result in a random sub-collection chosen out of L. It may then happen that printing this sentence (which is a long list of words) "up to four words" would result in the print out:

    would happen in print

You could certainly argue that this is unexpected behaviour, as you expected:

    It may then happen

This expectation is even correct from the "common man's" perspective, but it is wrong from the perspective of how the programming language is defined.
They are implementing one type of thing and you are interested in something else.
"Doing the right thing according to decimal arithmetic" is not necessarily the same as "Doing the right thing for computer simulations"
"Floating point arithmetic" was designed for the latter.
It is only recently that a large number of people have started using floating point in the context of "decimal arithmetic". Hence there is a serious attempt to create a new standard aimed at resolving (among other things) the issues you raise.
So there are two ways out:

1. Figure out some way to do the right thing according to decimal arithmetic which is good enough for computer simulations --- since the latter needs to be right statistically, this may be possible, but it would need some probability theory to back it up.

2. Go separate ways and define two different standards --- one for decimal arithmetic and one for computer simulations. You choose which library you want to use according to your needs.
Regards,
Kapil. --
Kapil Hari Paranjape [kapil at imsc.res.in]
Hello,
On Fri, 05 Oct 2007, Kapil Hari Paranjape wrote:
> "Floating point arithmetic" was designed for [computer simulations]
Looking at the matter a little more closely (while still trying to avoid falling into deep water) I came across the pages of Professor W. Kahan. (https://www.cs.berkeley.edu/~wkahan/)
Apparently, my assertion above is optimistic at best.
It seems that the IEEE rounding rules for binary floating point are based on some sound understanding combined with a lot of hope that such simulations would come out right.
However, my other assertion --- that Microsoft's libraries may only appear to do the right thing without actually doing so --- seems to be borne out by his paper (Mindless.pdf on the same page), which says:
Apparently Excel rounds Cosmetically in a futile attempt to make Binary floating-point appear to be Decimal. This is why Excel confers supernatural powers upon some (not all) parentheses.
To keep my head dry I am now firmly switching off the browser containing Kahan's web page --- though I've saved it as a bookmark.
Regards,
Kapil. --
Paul Sephton [paul at inet.co.za]
On Fri, 2007-10-05 at 09:27 +0530, Kapil Hari Paranjape wrote:
> > On Wed Oct 3 13:49:39 PDT 2007 Paul Sephton wrote: > > > The numbers 0.5, 1.5, 2.5 ... are EXACTLY represented by the FPU. > > Once you realise that "floating point numbers" are not "numbers" but > "intervals" or (more strictly) "values with error bars", you will see > why a lot of people who work with "floating point" implementations > got upset with your article and other remarks. A statement such as the > one above would be interpreted as "meaningless" by those "in the know". > > To explain this a bit further: > > This expectation is even correct from the "common man's" perspective, > but it is wrong from the perspective of how the programming language > is defined. > > They are implementing one type of thing and you are interested in > something else.
There is a principle involved here, called the "principle of least surprise". The Wikipedia page (https://en.wikipedia.org/wiki/Principle_of_least_astonishment) states that "when two elements of an interface conflict or are ambiguous, the behaviour should be that which will least surprise the human user or programmer at the time the conflict arises."
In the context of decimal arithmetic, and conversion from a binary format to decimal, one does not generally expect banker's rounding to be the default. One would rather expect scientific rounding.
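A sketch of such "scientific" rounding in C, applied before formatting; the helper name is illustrative, and as this thread shows it is a workaround rather than a fix, since it still inherits binary representation error:

    /* Sketch of "scientific" (half-away-from-zero) rounding applied
       before formatting. The helper name is illustrative. This is a
       workaround only: the scaling multiply can itself round, and a
       constant like 2597.725 is already stored below the tie. */
    #include <stdio.h>
    #include <math.h>

    static double round_half_away(double x, int digits)
    {
        double scale = pow(10, digits);
        return copysign(floor(fabs(x) * scale + 0.5), x) / scale;
    }

    int main(void)
    {
        printf("%.2f\n", round_half_away(2597.625, 2));  /* "2597.63" */
        return 0;
    }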
> It is only recently that a large number of people have started using > floating point in the context of "decimal arithmetic". Hence there is > a serious attempt to create a new standard which is aimed to resolve > (among other things) the issues you raise.
Define "recent"? You might be interested in a quite old article entitled "How to print Floating-Point numbers accurately".
It covers VAX, IBM and IEEE arithmetic.
Paul Sephton [paul at inet.co.za]
On Fri, 2007-10-05 at 07:38, Kapil Hari Paranjape wrote:
> Hello, > > On Fri, 05 Oct 2007, Kapil Hari Paranjape wrote: > > "Floating point arithmetic" was designed for [computer simulations] > > Looking at the matter a little more closely (while still trying to > avoid falling into deep water) I came across the pages of > Professor W. Kahan. (https://www.cs.berkeley.edu/~wkahan/)
A most amazingly wonderful article!
Thanks, and regards, Paul
Ben Okopnik [ben at linuxgazette.net]
----- Forwarded message from Eric Postpischil <edp at apple.com> -----
From: Eric Postpischil <edp@apple.com>
To: TAG <tag@lists.linuxgazette.net>
To: editor at linuxgazette.net
Subject: article "A Question Of Rounding" in issue #143
Date: Fri, 5 Oct 2007 14:38:18 -0700
Dear Editor:
I understand you have received calls to withdraw the article "A Question Of Rounding" by Paul Sephton.
Appended please find my comments in the related bug report (at https://sourceware.org/bugzilla/show_bug.cgi?id=4943) and the author's response.
-- edp (Eric Postpischil)
------- Additional Comment #46 From Eric Postpischil 2007-10-05 19:20 [reply] -------
Paul Sephton's statements are consistent with a 15-decimal-digit model of arithmetic and a non-standard rounding rule. E.g., suppose sprintf behaved this way when passed a floating-point format string and an IEEE 754 double-precision number x:
1. Set y to the 15-decimal-digit number nearest to x. (This requires y be maintained in some format capable of representing decimal numbers.) (To simplify discussion, I omit consideration of underflow and overflow.)

2. Round y according to the format string, with ties rounded away from zero.
I believe this would produce the behavior he desires.
Of course, neither of these is the way that IEEE 754 floating-point arithmetic or C's sprintf function is specified to behave.
IEEE 754 defines a floating-point number in terms of a sign s, a biased exponent e, and a fraction f. It also refers to a significand, which, for normal numbers, equals 1+f. Paul Sephton used the term "mantissa," but that is incorrect. A mantissa is the fractional part of a logarithm. If x is a normally representable positive number, E is an integer, and 2**E <= x < 2**(E+1), then the significand of x is x / 2**E and the mantissa of x is log[b](x) - floor(log[b](x)), for some logarithm base b, often ten.
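In code, the two definitions can be compared directly; a minimal C99 sketch for x = 2597.625, one of the values discussed below:

    /* Sketch: significand vs. mantissa for x = 2597.625, following the
       definitions above (E chosen so that 2**E <= x < 2**(E+1), and the
       mantissa taken with logarithm base b = 10). Assumes C99 and -lm. */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double x = 2597.625;
        int E = (int)floor(log2(x));            /* E = 11                */
        double significand = x / pow(2.0, E);   /* 1.26837... = 0x1.44b4 */
        double mantissa = log10(x) - floor(log10(x));
        printf("E = %d\nsignificand = %.17g\nmantissa = %.17g\n",
               E, significand, mantissa);
        return 0;
    }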
Quoting ANSI/IEEE Std 754-1985, section 3.2.2, with some changes due to limited typography, the value of a double-precision number is "(-1)**s * 2**(e-1023) * (1.f)". That is the exact value represented. That is the entirety of the representation. IEEE 754 says nothing about considering it to be a 15-decimal-digit number. Any assertion that an IEEE 754 floating-point number represents other numbers, such as a small interval around the number, has no basis in the standard.
I will use the hexadecimal floating constant notation defined in C ISO/IEC 9899:TC2 6.4.4.2. In this notation, a number 0x1.234p56 stands for 0x1.234 * 2**56, that is, (1 + 2/16 + 3/16**2 + 4/16**3) * 2**56.
Nothing in the 1985 IEEE 754 specification indicates that double-precision numbers stand for 15-decimal-digit numbers. According to IEEE 754, if a double-precision number has sign bit 0, (unbiased) exponent 11, and fraction:
0x.44b0ccccccccd,
0x.44b4, or
0x.44b7333333333,
then the floating-point number represents, respectively:
1 * 2048 * 0x1.44b0ccccccccd,
1 * 2048 * 0x1.44b4, or
1 * 2048 * 0x1.44b7333333333.
In decimal, the number is exactly:
2597.52500000000009094947017729282379150390625,
2597.625, or
2597.72499999999990905052982270717620849609375.
Observe that this completely explains the existing sprintf behavior:
"2597.525" in source is translated at compile time to the floating-point number 0x1.44b0ccccccccdp+11. This number is passed to sprintf to be converted according to the format string "%.2f". sprintf examines 0x1.44b0ccccccccdp+11, which is 2597.52500000000009094947017729282379150390625, and it has a choice of rounding it to "2597.52" or "2597.53". Since 2597.52500000000009094947017729282379150390625 is closer to 2597.53, sprintf rounds to "2597.53". "2597.625" in source is translated to 0x1.44b4p11, which is 2597.625. Given the choices of "2597.62" and "2597.63", they are equally far from 2597.625. sprintf uses its rule for ties which is to round to even, so it produces "2597.62". "2597.725" in source is translated to 0x1.44b7333333333p11, which is 2597.72499999999990905052982270717620849609375. Given the choices of "2597.72" and "2597.73", 2597.72 is closer, so sprintf produces "2597.72".
This also demonstrates there are two problems producing the behavior Paul Sephton desires. One is that sprintf rounds ties in a way Paul Sephton does not like. The second problem is that IEEE 754 double-precision does not represent 15-decimal-digit numbers exactly. We can see this because even if sprintf's rule for ties were changed, "2597.725" in source results in passing a number smaller than 2597.725 to sprintf, so there is no tie, and sprintf rounds it down.
Is there any basis for adopting a 15-decimal-digit model of arithmetic? This is not part of the 1985 IEEE 754 specification. Paul Sephton does not cite any source for his statements regarding the behavior of floating-point arithmetic. The IEEE 754 standard says that floating-point numbers represent (-1)**s * 2**(e-1023) * (1.f). It does not say they represent anything else. Nothing in the standard tells us that 0x1.44b0ccccccccdp+11 represents 2597.525.
There is no basis for treating IEEE 754 floating-point numbers as 15-decimal-digit numbers.
It is true that many people think of 2597.525 as being represented by 0x1.44b0ccccccccdp+11. When they type "2597.525" into source or in a text string that is converted to a double-precision number, they get 0x1.44b0ccccccccdp+11. I expect this result leads them to think that 0x1.44b0ccccccccdp+11 represents 2597.525. Nevertheless, it does not. 0x1.44b0ccccccccdp+11 is only an approximation of 2597.525. It is actually itself and is not the thing it approximates for a particular user.
When sprintf receives 0x1.44b0ccccccccdp+11 or 0x1.44b7333333333p11, it has no way of knowing the user intends for these to be 2597.525 or 2597.725. It can only interpret them with the values that IEEE 754 says they have, which are 2597.52500000000009094947017729282379150390625 and 2597.72499999999990905052982270717620849609375.
Paul Sephton stated, "I would assume that Microsoft does at least loosely follow a standard." [1] "Assume" is the correct word; he does not give any basis for this belief. I know that Microsoft C neither conforms to the 1999 C standard nor attempts to, and I do not have information that it conforms to the earlier C standard or to IEEE 754.
Paul Sephton wrote: "Please. I really really really want a solution here." From the above, we know what the two problems are and we can offer a solution: sprintf is obeying its specifications and will not do what you want. You must write your own code to produce the behavior you desire.
I suspect the behavior Paul Sephton desires can be produced in his application by passing x * (1 + 0x1p-52) to sprintf in lieu of x. This makes certain assumptions about the nature of the values he has -- it is not the same as converting the double-precision argument to 15 decimal digits and then rounding, but it may produce the same results in his situation. If a double-precision number passed to sprintf is the double-precision number nearest a 15-decimal-digit number, then I think (have not checked thoroughly) that passing x * (1 + 0x1p-52) instead will yield the same result as if sprintf were somehow passed the 15-decimal-digit number in a decimal format. This is because the replacement number corrects both problems: It changes 2597.625 to a slightly higher number, avoiding the problem with ties, and it changes the double-precision number just below 2597.725 to a number slightly higher, avoiding the second problem. However, if a number being passed to sprintf is more than one ULP away from a 15-decimal-digit number, this method can fail.
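A minimal sketch of that suggestion, with the same caveats (outputs assume glibc):

    /* Sketch of the nudge described above: multiply by (1 + 2**-52)
       before formatting so that near-tie values land just above the
       decimal tie. As noted, this can fail when the argument is more
       than one ULP away from a 15-decimal-digit number. */
    #include <stdio.h>

    int main(void)
    {
        double v[] = { 2597.525, 2597.625, 2597.725 };
        for (int i = 0; i < 3; i++) {
            double nudged = v[i] * (1 + 0x1p-52);
            /* expected: 2597.53 -> 2597.53, 2597.62 -> 2597.63,
                         2597.72 -> 2597.73 */
            printf("%.2f -> %.2f\n", v[i], nudged);
        }
        return 0;
    }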
Paul Sephton refers to "the rules of decimal mathematics" but gives no citation for the rounding rule he proposes.
In summary:
If we use IEEE 754's statements about the values that double-precision numbers represent, then sprintf's behavior is fully explained.

If we use Paul Sephton's statements about floating-point numbers being 15-decimal-digit numbers, then they conflict with IEEE 754 and are inconsistent with sprintf's behavior.
This speaks for itself and is entirely sufficient to resolve the matter.
[1] See Paul's statement in Bugzilla Bug 4943, https://sourceware.org/bugzilla/show_bug.cgi?id=4943
------- Additional Comment #47 From Paul Sephton 2007-10-05 19:52 [reply]
(In reply to comment #46)

> This speaks for itself and is entirely sufficient to resolve the matter.

What can I say, other than "Thank you"? I do indeed regard this matter as resolved.

Kind regards, Paul Sephton

----- End forwarded message -----

-- * Ben Okopnik * Editor-in-Chief, Linux Gazette * https://LinuxGazette.NET *