eval'd macros (the other point of view)



    I would like to second Kent Pitman's opinion.  Maybe no one has mentioned
    this objection because it is so obvious, but it is still very important.
    I think that it is a bad idea to have the interpreter defer expansion of
    macros because it is just one more case of differing interpreter and
    compiler semantics.  If I am to believe the statement regarding consistency
    in the introduction of CLtL: "The definition of COMMON LISP avoids such
    anomalies by explicitly requiring the interpreter and compiler to impose
    identical semantics on correct programs so far as possible," then as Kent
    suggests, the interpreter should not defer expansion because the compiler
    is required not to.

Well, if we wanted everything to be ABSOLUTELY IDENTICAL between
compiled and interpreted code, except that the compiled code runs faster,
there would be no point in having an interpreter at all.  The compiler,
in the name of efficiency, does certain things to the code that make
debugging and incremental changes harder.  For example, in most systems
you can't single-step code once it is compiled, and in some systems you
can't trace it.  It would be nice if these things kept on working after
compilation, but it is hard to arrange this.

In Lisp we go to some trouble to allow you to redefine functions, even
after a lot of calls to those functions have been compiled.  Those calls
see the new definition, whether it is compiled or interpreted.  Macros
and inline functions get burned in and cannot later be changed without
finding all the calls and recompiling them.  That's too bad, but it's
the price you pay for avoiding the overhead of a function call.  (This
immutability is sometimes deliberately used to lock out later changes,
but that is fairly rare and could be handled by some specialized kind of
call.)
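
To make the burn-in concrete, here is a minimal Common Lisp sketch;
DOUBLE and F are made-up names, not anything from this discussion:

    (defmacro double (x) `(+ ,x 2))   ; oops: adds 2 instead of doubling
    (defun f (n) (double n))
    (compile 'f)                      ; the buggy expansion is now burned into F

    (defmacro double (x) `(* 2 ,x))   ; fix the macro
    (f 5)                             ; => 7, not 10: compiled F keeps the old
                                      ; expansion until F itself is recompiled

An interpreter that defers expansion would pick up the fixed DOUBLE
on the very next call to F; the compiled version never will.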

So we have to give up the ability to change macros (or define them after
the call) in compiled code, but to live without this ability in the
interpreter would be a tremendous pain.  People who have written
macro-memoizing functions to speed up interpreters have generally gone
to a lot of trouble to put in some way of un-memoizing when a macro gets
redefined.  I don't think I'd try to develop code in any system that
expanded all the macros at DEFUN time unless it included a very
well-engineered and transparent un-memoizing function.  
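
A rough sketch of what such a memoizing expander might look like,
together with the un-memoizing hook; the cache layout and the names
here are my own invention, not any particular implementation's:

    (defvar *expansion-cache* (make-hash-table :test #'eq))

    (defun memoized-expand (form)
      ;; FORM is a macro call; cache its expansion, keyed by the
      ;; identity of the call form itself.
      (or (gethash form *expansion-cache*)
          (setf (gethash form *expansion-cache*)
                (macroexpand-1 form))))

    (defun unmemoize-macro (name)
      ;; Called whenever NAME is redefined: forget every cached
      ;; expansion whose operator is NAME, so old call sites get
      ;; re-expanded against the new definition.
      (maphash (lambda (form expansion)
                 (declare (ignore expansion))
                 (when (eq (car form) name)
                   (remhash form *expansion-cache*)))
               *expansion-cache*))

The hard part, of course, is making sure unmemoize-macro actually
gets called on every path by which a macro can be redefined.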

I grant that if the interpreter waits until use before expanding a
macro, it would be easy to create a file of code that works interpreted
and errs out when you compile it.  Any good compiler will make it very
clear what the problem is: "FOO being defined as a macro, earlier
assumed to be a function" or some such message, so it is not a treacherous
kind of incompatibility like some of the special/local stuff used to be.
So getting rid of this incompatibility is not nearly as important to me
as being able to fix up faulty macros while I am debugging.
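
For instance, a file along these lines (FOO and BAR are made-up names)
runs fine under an interpreter that defers expansion, but draws exactly
that complaint from the compiler:

    ;; BAR's call to FOO is compiled before FOO is known to be a
    ;; macro, so the compiler assumes an ordinary function call.
    ;; A deferring interpreter only looks at FOO when BAR is run,
    ;; by which time the whole file has been loaded.
    (defun bar (x)
      (foo x x))

    (defmacro foo (a b)
      `(list ,a ,b))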

-- Scott