# Re: integer-decode-float usage question

• To: Barry Margolin <barmar@think.com>
• Subject: Re: integer-decode-float usage question
• From: sandra%orion@cs.utah.edu (Sandra J Loosemore)
• Date: Wed, 28 Oct 87 11:32:19 MST
• Cc: Sandra J Loosemore <sandra%orion@cs.utah.edu>, Jeff Mincy <mincy@think.com>, common-lisp@sail.stanford.edu
• In-reply-to: Barry Margolin <barmar@Think.COM>, Wed, 28 Oct 87 12:15 EST

```
From: Barry Margolin <barmar@Think.COM>

The numbers [INTEGER-DECODE-FLOAT] returns are generally the exact same bit
patterns as were in the float to begin with (except for the sign).
```

All of the float formats I've ever come across assume there is an
implied radix point in front of the fraction part, and the value of
the exponent stored in the bit pattern assumes the fraction part is
really a fraction.  If the float radix is 2, the exponent of 1.0 has a
value of 1 regardless of how many bits are used to represent the
fraction.  That's the case for the native VAX representations, the
Motorola FFP representation, the double-precision floats handled by the
E&S PS300 ACP (an implementation derived directly from Knuth's book),
and, if my memory serves me correctly, the IEEE representations as
well.
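As an aside, the "fraction in front of the radix point" convention
described above can be seen directly in Python's `math.frexp`, which
decomposes a float into a fraction in [0.5, 1) and an exponent; under
that convention the exponent of 1.0 is indeed 1.  The stored
single-precision exponent field, by contrast, is a biased (excess-127)
value.  This is only an illustrative sketch in Python, not anything
from the original message, which concerns Common Lisp:

```python
import math
import struct

# Under the fraction-in-[0.5, 1) convention, 1.0 = 0.5 * 2**1,
# so its exponent is 1 regardless of how wide the fraction is.
frac, exp = math.frexp(1.0)
print(frac, exp)  # 0.5 1

# The raw IEEE single-precision bit pattern of 1.0 stores the
# exponent in excess-127 form: the stored field is 127, not 1.
bits = struct.unpack('>I', struct.pack('>f', 1.0))[0]
stored_exp = (bits >> 23) & 0xFF
print(stored_exp)  # 127
```

The gap between the convention-dependent exponent (1) and the stored
field (127) is exactly the kind of mismatch the email is pointing at.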

The exponent returned by INTEGER-DECODE-FLOAT, which does depend on the
number of bits in the fraction part, is almost certainly not the bit
pattern actually stored in the number.  (Besides, the exponent is
often stored as an unsigned excess-N number instead of in its two's
complement form.)
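To make the point concrete, here is a hedged sketch in Python of what
an INTEGER-DECODE-FLOAT-style decomposition looks like for an IEEE
single-precision float: the significand comes back as an integer, so
the exponent must be shifted down by the 23 fraction bits, yielding a
value quite unlike the excess-127 field actually stored.  The function
name is my own, and for simplicity it handles only normalized numbers:

```python
import struct

def integer_decode_single(x):
    """Decode an IEEE single-precision float into (significand,
    exponent, sign), with the significand as an integer.
    Illustrative only; ignores zeros, subnormals, infinities, NaNs."""
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    sign = -1 if bits >> 31 else 1
    biased = (bits >> 23) & 0xFF          # stored excess-127 field
    frac = bits & ((1 << 23) - 1)
    significand = frac | (1 << 23)        # restore the hidden leading 1
    exponent = biased - 127 - 23          # shifted by the 23 fraction bits
    return significand, exponent, sign

sig, e, s = integer_decode_single(1.0)
print(sig, e, s)           # 8388608 -23 1
print(s * sig * 2.0 ** e)  # 1.0
```

Note that the decoded exponent of 1.0 is -23, tied to the 23-bit
fraction width, while the bit pattern itself stores 127; neither looks
like the other, which is the source of the confusion the email
describes.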

The way CLtL defines the "exponent" makes some sense in its own right,
but it's a different interpretation than what I believe to be the usual
one.  Again, it's the terminology that got me confused in the first place.

-Sandra
-------
