Re: Floating Point Computation
- To: Fahlman@c.cs.cmu.edu
- Subject: Re: Floating Point Computation
- From: mcvax!nplmg1a.uucp!jrp@seismo.CSS.GOV
- Date: Thu, 30 Jan 1986 13:42:00 -0000
- Cc: nplmg1a!jrp@seismo.CSS.GOV, firstname.lastname@example.org
- In-reply-to: Your message of Wed, 29 Jan 1986 13:52 EST. <FAHLMAN.12179154269.BABYL@C.CS.CMU.EDU>
> Thanks for the reference. I'm not sure what point you are trying to
> make, however. Is there some specific recommendation lurking in here?
I don't think that I have anything very profound to say, save don't take
*anything* for granted when it comes to floating point. The reference
gives a number of simple axioms which *should* be satisfied by any
reasonable processor, but are surprisingly often not satisfied in all
cases. In any case, these axioms should permit you to determine what
you can reasonably do with floating-point in what might be called a
portable fashion.
By the way, there are a number of packages for assessing the
floating-point performance of a given processor (e.g. from NAG, the
Numerical Algorithms Group in Oxford).
> Or are you just saying that everything about floating-point is so
> screwed up that we shouldn't waste time arguing about how to make
> it right?

I would not go that far, but when it comes to "double-length"
arithmetic...