I am reading Practical Common Lisp by Peter Seibel. In Chapter 9, he is walking the reader through creating a unit testing framework, and he includes the following macro to determine whether a list is composed of only true expressions:
(defmacro combine-results (&body forms)
  (let ((result (gensym)))
    `(let ((,result t))
       ,@(loop for form in forms collect `(unless ,form (setf ,result nil)))
       ,result)))
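For example, a call like (combine-results (= 1 2) (= 3 3)) expands into something along these lines (the actual symbol is a fresh gensym, so it won't literally be #:G1):

```lisp
(let ((#:g1 t))
  (unless (= 1 2) (setf #:g1 nil))  ; false, so the result becomes NIL
  (unless (= 3 3) (setf #:g1 nil))  ; true, so the result is untouched
  #:g1)
```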
I'm not clear what the advantage to using a macro is here, though - it seems like the following would be clearer, as well as more efficient for dynamic values:
(defun combine-results (&rest expressions)
  (let ((result t))
    (loop for expression in expressions do (unless expression (setf result nil)))
    result))
Is the advantage to the macro just that it's more efficient at runtime for any calls that are expanded at compile-time? Or is it a paradigm thing? Or is the book just trying to give excuses to practice different patterns in macros?
Your observation is basically right; in fact, your function can be reduced to a one-liner:
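One possible formulation (a sketch; the exact one-liner is a matter of taste) uses notany, which returns T exactly when no expression is NIL:

```lisp
(defun combine-results (&rest expressions)
  (notany #'null expressions))  ; T iff every expression is true
```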
Since the macro unconditionally evaluates all of its arguments from left to right, and yields T if all of them are true, it is basically just inline-optimizing something that can be done by a function. Functions can be requested to be inlined with (declaim (inline ...)). Moreover, we can write a compiler macro for the function anyway with define-compiler-macro; with that compiler macro we can produce the same expansion, yet still have a function that we can apply and otherwise indirect upon.

Other ways of calculating the result inside the function:
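A few possibilities (sketches, pick one; all preserve the evaluate-everything behaviour, since &rest arguments are fully evaluated before the function body runs):

```lisp
;; Alternative bodies for COMBINE-RESULTS:
(defun combine-results (&rest expressions)
  (every #'identity expressions))       ; T when all are true

(defun combine-results (&rest expressions)
  (notany #'null expressions))          ; equivalently: none is NIL

(defun combine-results (&rest expressions)
  (loop for e in expressions always e)) ; LOOP's ALWAYS clause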
The example does look like macro practice: making a gensym, and generating code with loop. Also, the macro is the starting point for something that might appear in a unit testing framework.
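To make the compiler-macro point concrete, here is a sketch (my own, not from the book) that pairs a plain function with define-compiler-macro, so direct call sites get the book's inline expansion while combine-results remains a first-class function:

```lisp
(defun combine-results (&rest expressions)
  (notany #'null expressions))

;; At compile time, rewrite direct calls into the same code
;; the book's macro would have generated.
(define-compiler-macro combine-results (&rest forms)
  (let ((result (gensym)))
    `(let ((,result t))
       ,@(loop for form in forms collect `(unless ,form (setf ,result nil)))
       ,result)))
```

A direct call such as (combine-results (= 1 1) (= 2 3)) can then be open-coded by the compiler, while (apply #'combine-results some-list) still goes through the ordinary function.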