When dealing with a pipelined architecture for executing instructions, one of the ways to avoid hazards is to use delay slots, i.e. a rule that prevents certain instructions from accessing values computed in the instructions immediately before them. My understanding is that the assembler (or compiler) tries to reorder instructions that don't depend on each other, so that independent instructions can execute while dependent ones wait. Is this kind of optimization possible, or does it happen at all, for interpreted languages that have no real compile step?
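For example (and this may itself reflect my misunderstanding), I picture the scheduler doing something like the following. The function and variable names are just made up for the sake of illustration:

```c
/* Rough sketch of what I think the compiler/assembler does. */
int example(int *a, int *b)
{
    int x = a[0];      /* load: result not ready for a cycle or two  */
    int z = b[0] + 1;  /* independent work the scheduler can hoist   */
    int y = x + 1;     /* depends on the load, so it has to wait     */

    /* Without scheduling, 'y = x + 1' would sit right after the load
     * and the pipeline would stall (or a delay slot would go unused).
     * Moving the independent 'z' computation in between gives the CPU
     * useful work to do while the load completes. */
    return y + z;
}
```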
(Note that if anything I said above reflects a gap in my understanding, please correct it; these concepts are new to me.)
Think of a computer built inside Minecraft. Minecraft is, in effect, an interpreter: a program that reads instructions and selects which of its internal functions/routines to execute for each input directive in real time, rather than via compilation.
The interpreter itself - the Minecraft program in this case - may be able to make use of CPU-level tweaks, but the application - the redstone computer - can't.
One problem the redstone computer suffers from is that it is very low level: the interpreter provides very few constructs for implementing a computer. As a result, the whole thing is very data-driven, and there is minimal opportunity for the CPU to read ahead and optimize.
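To make "data-driven" concrete, here is a minimal sketch of a bytecode-style dispatch loop; the opcodes and layout are invented for illustration, not any real interpreter's design. Which branch runs next depends entirely on data loaded at run time, so the hardware has very little it can look ahead across:

```c
#include <stdio.h>

/* Hypothetical opcodes for a toy machine. */
enum { OP_LOAD, OP_ADD, OP_STORE, OP_HALT };

/* Minimal sketch of an interpreter's inner loop. Which case executes
 * next depends entirely on the bytes in 'code', so the CPU's ability
 * to look ahead across iterations is very limited. */
static void run(const unsigned char *code, int *mem)
{
    int acc = 0, pc = 0;
    for (;;) {
        switch (code[pc++]) {              /* data-dependent branch */
        case OP_LOAD:  acc  = mem[code[pc++]]; break;
        case OP_ADD:   acc += mem[code[pc++]]; break;
        case OP_STORE: mem[code[pc++]] = acc;  break;
        case OP_HALT:  return;
        }
    }
}

int main(void)
{
    int mem[4] = { 5, 7, 0, 0 };
    const unsigned char prog[] = { OP_LOAD, 0, OP_ADD, 1, OP_STORE, 2, OP_HALT };
    run(prog, mem);
    printf("%d\n", mem[2]);   /* prints 12 */
    return 0;
}
```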
The higher level, the better: the more complex the constructs your interpreter provides, the more its programs will benefit from CPU tweaks.
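As an illustration (again with an invented name), compare a single coarse-grained opcode whose handler does a whole block of compiled work. The dispatch that reached the handler is opaque to the CPU, but inside it the hardware can pipeline and reorder as usual, so the unpredictable dispatch step becomes a smaller fraction of the total work:

```c
/* Hypothetical handler for a single high-level "vector add" opcode.
 * Once inside, this is ordinary compiled code: the CPU can pipeline,
 * reorder and prefetch across the whole loop. */
static void op_vector_add(int *dst, const int *a, const int *b, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] = a[i] + b[i];
}
```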
But no, a purely interpreted language can't do this kind of instruction reordering itself.