So our class was given an assignment to convert decimal numbers to their octal representations. I managed to tinker with it until it worked, but I'm having trouble understanding why it works. Could anyone explain the recursion in a simpler way? Thanks.
(define octify
  (lambda (n)
    (cond
      ;; zero has the representation 0
      ((zero? n) 0)
      ;; n < 8: a single octal digit, return it as-is
      ((zero? (quotient n 8)) n)
      ;; otherwise: recurse on all but the last octal digit,
      ;; shift the result left one decimal place, and
      ;; append the last octal digit
      (else (+ (* 10 (octify (quotient n 8)))
               (remainder n 8))))))
First of all, a "number" is not decimal or octal. A "number" is a mathematical concept, which is stored in the computer in some sort of format with a bunch of bits. Decimal and octal refer to different string representations of a number. That is, "decimal" and "octal", etc. only make sense when talking about strings, and a particular number can be converted into a string as decimal or octal or whatever.
Producing the octal (or some other base) string representation of an integer is a common basic task in programming. You have basically figured out the algorithm: take the remainder of the number by the base to get the last digit, then recurse on the quotient of the number by the base to get the rest (all but the last digit) of the number.
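A minimal sketch of just that step, collecting the octal digits into a list rather than packing them into anything (`octal-digits` is a made-up helper name, not part of the assignment):

```scheme
(define (octal-digits n)
  (if (< n 8)
      (list n)                               ; base case: a single octal digit
      (append (octal-digits (quotient n 8))  ; digits before the last one
              (list (remainder n 8)))))      ; the last octal digit

(octal-digits 42) ; → (5 2)
```

Once you have the digits, how you present them (as a string, a number, a list) is a separate decision from the recursion itself.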
What is strange about what you are doing is that you are not producing a string, as one would normally do for this task. Instead, you are packing the digits back into a number, in such a way that the resulting number's decimal representation looks like the octal representation of the original number. (This happens to be possible since any octal representation is also a valid decimal representation of some number. It wouldn't be possible with hex, for example.) In other words, you are converting a number to its octal string representation, then parsing that string into a number as if it were a decimal representation. For example, take the number 42, whose decimal representation is the string "42" and octal representation is the string "52". Your program returns the number 52 (whose octal representation is the string "64").
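To make that concrete, here is the original `octify` hand-evaluated on 42:

```scheme
;; The original definition from the question:
(define octify
  (lambda (n)
    (cond
      ((zero? n) 0)
      ((zero? (quotient n 8)) n)
      (else (+ (* 10 (octify (quotient n 8)))
               (remainder n 8))))))

;; Hand-evaluating (octify 42):
;; (octify 42)
;; = (+ (* 10 (octify (quotient 42 8))) (remainder 42 8))
;; = (+ (* 10 (octify 5)) 2)   ; quotient is 5, remainder is 2
;; = (+ (* 10 5) 2)            ; 5 < 8, so (octify 5) is just 5
;; = 52
(octify 42) ; → 52
```

Each level of the recursion peels off one octal digit with `remainder` and uses `* 10` to re-attach it as a *decimal* digit, which is exactly where the representation mix-up happens.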
You may be confused because you are typing this into an interpreter, and when you compute or print a number, it outputs the decimal representation. But it is important to understand that a number is completely different from a string. (If you computed a string in your interpreter, it would probably surround it with quotes.) It would make the most sense if your program output the string of the octal representation, rather than a number that, when printed, happens to look like it.
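For comparison, a minimal sketch of a version that returns the string directly (`octal-string` is a made-up name; `number->string` is standard Scheme):

```scheme
(define (octal-string n)
  (if (< n 8)
      (number->string n)                ; single digit: its own string
      (string-append
       (octal-string (quotient n 8))    ; all but the last digit
       (number->string (remainder n 8))))) ; the last octal digit

(octal-string 42) ; → "52"
```

In fact, standard Scheme's `number->string` accepts an optional radix, so `(number->string 42 8)` already returns `"52"` with no recursion of your own at all.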