I ran this in MIT/GNU Scheme:
(define x (+ 2 3))
The interpreter then prints:
;Value: x
But according to my textbook, the value returned by a define expression should be undefined. Why did the interpreter print ";Value: x" then?
If the standard report does not specify a return value, or says that it is undefined, an implementation is free to choose whatever value it returns and still conform to the standard. That also means you cannot depend on one implementation's behaviour being the same as another's.
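As a concrete illustration, take the define from the question. The expression itself is the same everywhere; only what the REPL reports about its unspecified result differs (the output shown here is indicative, check your own versions):

(define x (+ 2 3))
; MIT/GNU Scheme prints:  ;Value: x
; Racket and Guile, for example, print nothing at all, because the
; unspecified result of define is suppressed by their REPLs
x
; => 5 in every implementation, since the binding itself is well defined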
To give an example in R6RS: an implementation in which an expression such as (if #f #t) evaluates to the string "banana" is totally correct. Since the predicate evaluates to #f and there is no alternative expression supplied, the implementation simply chose "banana" as the result. Slightly unconventional, but still a perfectly good value, just not one defined by the standard.

Choosing sensible values that can actually be used might give users bad ideas and fool them into writing erroneous code that breaks when run in a different, but equally standard-compliant, implementation. Thus many implementations actually define one value to be the only undefined value; it is used in place of all the undefined values in the report and is often not printed by the REPL.
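For instance, in Racket (to pick one implementation; other Schemes use a different name and printed form for the same idea), that single value is the void object. A rough sketch at its REPL:

(void? (when #f 'anything)) ; => #t, the untaken when yields the void object
(void)                      ; the REPL prints nothing, the value is suppressed
(list (void))               ; => '(#<void>), wrapping it makes it visible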
Here are some examples evaluating
(list (if #f #t))
in different implementations. Wrapping it in a list
makes the REPL show a list with a value that might otherwise have been suppressed: