Re: (declare (type fixnum ---)) considered etc.
- To: RAM@C.CS.CMU.EDU
- Subject: Re: (declare (type fixnum ---)) considered etc.
- From: Alan Snyder <snyder%hplsny@hplabs.HP.COM>
- Date: Thu, 24 Jul 86 09:21:27 PDT
- Cc: common-lisp@su-ai.ARPA
- In-reply-to: Your message of 23-Jul-86 15:53:00
    From: Rob MacLachlan <RAM@C.CS.CMU.EDU>
    Subject: (declare (type fixnum ---)) considered etc.

    Almost every other language in the entire world has an "INTEGER"
    type which has an ill-defined, fixed precision.  If Common Lisp is
    going to have comparable performance to these languages when running
    on the same hardware, then it is going to have to be comparably
    [...]

Unfortunately, I must agree. I think the PL/I experience shows
that if you force people to declare the number of bits of
precision they want, in most cases they will find out what number
produces the best results on their current machine and use that,
thus making their programs non-portable in terms of efficiency.
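To illustrate the failure mode (hypothetical code, not from either message; the 36-bit width is an invented example of a machine-tuned choice):

```lisp
;; Non-portable in terms of efficiency: the programmer found that 36
;; bits is what the current machine handles well and hard-wired it.
;; On a 32-bit machine the same declaration buys nothing -- or forces
;; the compiler into multi-word arithmetic.
(defun checksum (v)
  (declare (type (simple-array (signed-byte 36) (*)) v))
  (let ((sum 0))
    (dotimes (i (length v) sum)
      (incf sum (aref v i)))))
```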
There is no guarantee either that the maximum-array-bound
corresponds to what we think of as FIXNUMs; why shouldn't a
generous implementation allow BIGNUMs as array indexes? There
are, I admit, cases when the programmer knows how big integers
will need to be, mostly when dealing with fixed-size arrays; in
those cases, people who are concerned about efficiency should be
encouraged to declare the exact range. But I don't think
fixed-size arrays are a particularly good idea, either. The
conventional solution is to have one or more standard integer
types with strongly suggested (required?) minimum precisions. I
think that is the right pragmatic solution to this problem, given
the desire to produce efficient code on stock architectures.
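As a sketch of the "declare the exact range" suggestion, using standard Common Lisp type specifiers (the function and variable names are invented for illustration):

```lisp
;; When the array size is known, the index range can be declared
;; exactly, instead of with the implementation-dependent FIXNUM type.
;; Any implementation can map (INTEGER 0 15) onto whatever
;; representation is most efficient on its hardware.
(defun sum-row (table i)
  (declare (type (simple-array single-float (16 16)) table)
           (type (integer 0 15) i))        ; exact range: a legal row index
  (let ((sum 0.0))
    (declare (type single-float sum))
    (dotimes (j 16 sum)                    ; j reaches 16 only at exit
      (declare (type (integer 0 16) j))
      (incf sum (aref table i j)))))
```

The declarations say what the programmer actually knows -- the bounds -- and leave the choice of machine representation to the compiler.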