Reading Kyle Simpson's You Don't Know JS: Scope & Closures, I see that he argues you should stay away from both the `eval()` function and the `with` keyword, because whenever the compiler sees these two (I'm paraphrasing), it skips some optimizations related to lexical scope and to storing the locations of identifiers. These constructs could potentially modify the lexical scope, which would make the compiler's optimizations incorrect (I'm assuming the optimization is something like the compiler storing the location of each identifier so it can provide an identifier's value without searching for it when it is requested at runtime).
Now I understand why that would happen when you use `eval()`: your `eval` could be evaluating user input, and that input could declare a new variable which shadows another variable you access later in, say, the currently executing function. If the compiler had stored the static location, the access would return the value of the wrong identifier (it should have returned the value of the identifier declared by `eval()`, but it would instead return the value of the variable the compiler had stored to optimize the look-ups). So I'm assuming this is why the compiler doesn't perform its scope-related optimizations whenever it spots an `eval()` in your code.
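Here's roughly the kind of thing I'm picturing (a toy sketch of my own, in non-strict mode, where a direct `eval` can add a `var` to the enclosing function's scope):

```js
var x = "outer";

function run(userInput) {
    eval(userInput);  // might or might not declare a local `x`
    console.log(x);   // which `x` this resolves to is unknowable before run time
}

run("var x = 'injected';");  // logs "injected" (eval shadowed the outer x)
run("1 + 1;");               // logs "outer" (nothing was shadowed)
```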
But why does the compiler do the same thing for the `with` keyword? The book says it does so because `with` creates a new lexical scope at runtime and uses the properties of the object passed to `with` as the identifiers declared in that scope. I have no idea what this means, and I have a hard time visualizing it, since the compiler-related material in this book is all theory.
I know that I could be on the wrong track; in that case, please kindly correct all my misunderstandings. :)
The optimization referred to here is based on this fact: the variables declared within a function can always be determined via simple static analysis of the code (i.e., by looking at `var`/`let` and `function` declarations), and the set of declared variables within a function never changes.

`eval` violates this assumption by introducing the ability to mutate a local binding (by introducing new variables to a function's scope at run time).

`with` violates this assumption by introducing a new non-lexical binding within a function whose properties are computed at runtime. Static code analysis cannot always determine the properties of a `with` object, so the analyzer cannot determine what variables exist within a `with` block. Importantly, the object supplied to `with` may change between executions of the function, meaning that the set of variables within that lexical section of the function can never be guaranteed to be consistent.

Consider the simple function:
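Something along these lines (a minimal sketch; the body is unimportant, only the fixed set of declarations matters):

```js
function foo() {
    var a = 1;
    var b = 2;
    var c = a + b;
    return c;
}
```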
All points in `foo` have three locally-scoped variables: `a`, `b`, and `c`. An optimizer can attach a permanent kind of "note" to the function that says, "This function has three variables: `a`, `b`, and `c`. This will never change."

Now consider:
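A sketch, assuming `bar` declares the same three variables and then opens a `with` block over its `egg` argument (non-strict mode, since `with` is a syntax error in strict mode):

```js
function bar(egg) {
    var a = 1, b = 2, c = 3;
    with (egg) {
        // Do a, b, and c here refer to bar's variables, or to
        // properties of egg? It depends on the object passed in.
        console.log(a, b, c);
    }
}

bar({});         // 1 2 3  (all three resolve to bar's own variables)
bar({ b: 42 });  // 1 42 3 (b now resolves to egg.b instead)
```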
In the `with` block, there is no way of knowing what variables will or won't exist. If there is an `a`, `b`, or `c` in the `with`, we don't know until run time whether it refers to a variable of `bar` or to one created by the `with(egg)` lexical scope.

To show a semi-practical example of how this is a problem, finally consider:
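A sketch of that final case, assuming the inner function is named `baz` and closes over a `whereami` declared in `bar`:

```js
function bar(egg) {
    var whereami = "in bar";
    with (egg) {
        // baz's scope chain is: baz -> the with(egg) scope -> bar -> global
        return function baz() {
            console.log(whereami);
        };
    }
}

bar({})();                      // "in bar" (egg has no whereami property)
bar({ whereami: "in egg" })();  // "in egg" (egg's property shadows bar's variable)
```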
When the inner function executes (e.g., `bar({...})()`), the execution engine will look up the scope chain to find `whereami`. If the optimizer had been allowed to attach a permanent scope-note to `baz`, then the execution engine would immediately know to look in `baz`'s closure (i.e., `bar`'s scope) for the value of `whereami`, because that would be guaranteed to be the home of `whereami` (any similarly-named variable further up the scope chain would be shadowed by the closest one). However, it doesn't know whether `whereami` exists in the `with(egg)` scope or not, because that could be conditionally created by the contents of `egg` on the particular run of `bar` that created that inner function. Therefore, it has to check, and the optimization is not used.