
Functions for taking apart floating-point numbers



Proposal:  Remove @f[float-exponent] and @f[float-integer-exponent], and
           return those values as the second value of @f[float-significand]
           and @f[float-integer-significand], respectively.

This would make these functions more similar to the division functions,
and emphasize the close relationship between the two values.  It also
allows a slightly more efficient implementation, particularly on systems
with unnormalized (or denormalized) numbers, and on those with
"non-numeric" floating-point values [IEEE floating-point, anyone?] that
must be checked for.  (I'm assuming that when one value is wanted, the
other is almost always wanted as well.)
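
For concreteness, here is a rough sketch of how the proposed interface
might look.  The particular scaling of the significand shown below is
only an assumption (an IEEE-style binary single-float with the
fractional significand in [1/b, 1)); the point is simply that the
exponent arrives as a second value:
@lisp
;; Sketch of the proposed interface, not the current one.
(multiple-value-bind (significand exponent)
    (float-significand 1.5)
  (list significand exponent))         ; e.g. => (0.75 1), since 1.5 = 0.75 * 2^1

(multiple-value-bind (significand exponent)
    (float-integer-significand 1.5)
  (list significand exponent))         ; e.g. => (12582912 -23), since 1.5 = 12582912 * 2^-23
@endlisp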

In addition, the comment:

    @Implementation{The typical implementation will be such that
    the difference between the values returned by @f[float-integer-exponent]
    and @f[float-exponent] will be equal to the number of internal-floating-radix
    digits used to represent the significand.  However, @clisp programs should
    not assume this to be the case.}

should either be removed or expanded to note that this is not true of
unnormalized or denormalized numbers.
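
For a normalized number the relationship is easy to see; the following
worked example assumes an IEEE-style binary single-float with a
24-digit significand and the fractional significand scaled into [1/b, 1):
@lisp
;; Illustrative values only:
;;   (float-exponent 1.5)          e.g. =>   1   since 1.5 = 0.75 * 2^1
;;   (float-integer-exponent 1.5)  e.g. => -23   since 1.5 = 12582912 * 2^-23
;; The difference, 1 - (-23) = 24, is the number of significand digits.
;; For an unnormalized or denormalized number the integer significand
;; may carry fewer significant digits, so the difference can be smaller.
@endlisp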

---------
Proposal:  Add new functions @f[float-digits] and @f[float-precision].

@f[(float-digits float)] would return the number of @f[(float-radix
float)] digits used in the representation of a floating-point number,
including the "hidden bit", if any.  For most implementations, this is
really a property of the type of @f[float].

@f[(float-precision float)] would return the number of significant
@f[(float-radix float)] digits present in @f[float].

@Implementation{For implementations using only normalized numbers, these
two functions would return the same value.  @f[(- (float-digits float)
(float-precision float))] would reflect the degree of denormalization of
@f[float].}
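
As a rough illustration, assuming an IEEE-style binary single-float with
a 24-digit significand (including the hidden bit):
@lisp
(float-radix 1.0)      ; => 2
(float-digits 1.0)     ; e.g. => 24, a property of the type of 1.0
(float-precision 1.0)  ; e.g. => 24, equal to FLOAT-DIGITS since 1.0 is normalized
;; For a denormalized X, (float-precision X) would be smaller than
;; (float-digits X), and the difference measures the denormalization.
@endlisp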

These functions are needed to simplify the movement of floating-point
numbers from one implementation to another.  They are also needed to
convert numbers from one base to another in a way that reflects the
precision (that is, without producing bogus digits).  @f[float-digits]
could be computed in most implementations by @f[(- (float-exponent 1.0)
(float-exponent single-float-epsilon))] or something like that, but
making it a function seems cleaner than some sort of typecase.
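
To make the comparison concrete, here is a rough sketch of the kind of
typecase a portable program would otherwise have to write; the epsilon
constants and the exact arithmetic are only the "something like that"
mentioned above, not a definition:
@lisp
;; Sketch only; the proposed FLOAT-DIGITS would replace all of this.
(defun my-float-digits (float)
  (etypecase float
    (short-float  (- (float-exponent 1.0s0)
                     (float-exponent short-float-epsilon)))
    (single-float (- (float-exponent 1.0f0)
                     (float-exponent single-float-epsilon)))
    (double-float (- (float-exponent 1.0d0)
                     (float-exponent double-float-epsilon)))
    (long-float   (- (float-exponent 1.0l0)
                     (float-exponent long-float-epsilon)))))
@endlisp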

---------
Proposal:  Require the result of @f[float-integer-significand] to
	   reflect the precision of the floating-point number.

Currently, @f[float-integer-significand] is only required to satisfy the
identity relating it to @f[float-integer-exponent].  That allows the
result to be scaled by various powers of the radix, so long as the
exponent is adjusted appropriately.  So if @i[b] is @f[(float-radix float)]
and @i[p] is @f[(float-precision float)], the result of
@f[(float-integer-significand float)] for non-zero @f[float] should be
greater than or equal to @f[(expt b (1- p))] and less than @f[(expt b p)].
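
Stated as a predicate, in terms of the functions discussed in this
proposal (a sketch, for any non-zero @f[float]):
@lisp
(defun significand-normalized-p (float)
  (let ((b (float-radix float))
        (p (float-precision float))
        (s (float-integer-significand float)))
    ;; The proposed requirement: b^(p-1) <= s < b^p.
    (and (<= (expt b (1- p)) s)
         (< s (expt b p)))))
@endlisp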

---------
Proposal:  Clarify the results of @f[(float-sign 0.0)] and
           @f[(float-sign -0.0)].

The possibility of signed zeros needs to be addressed.  Presumably
@lisp
(float-sign 0.0) @EQ 1.0
(float-sign -0.0) @EQ -1.0
@endlisp