Why are the variables "i" and "j" considered dead in the control flow graph?


I was going through the topic of induction variable elimination in the red dragon book, where I came across the following example.

Consider the control flow graph below:

Fig. 1: Original Control Flow Graph

Now the authors apply strength reduction to the above graph to obtain the graph below:

Fig. 2: Control flow graph after applying strength reduction

Example 10.4. After reduction in strength is applied to the inner loops around B2 and B3, the only use of i and j is to determine the outcome of the test in block B4. We know that the values of i and t2 satisfy the relationship t2 = 4*i, while those of j and t4 satisfy the relationship t4 = 4*j, so the test t2 >= t4 is equivalent to i >= j. Once this replacement is made, i in block B2 and j in block B3 become dead variables, and the assignments to them in these blocks become dead code that can be eliminated, resulting in the flow graph shown in Fig. 3 below. □

Fig. 3: Flow graph after induction variable elimination

What I do not get is the claim that "i in block B2 and j in block B3 become dead variables". Consider the following graph, following the green path through Fig. 2:

Fig.: Illustration of the doubt (the green path through the graph of Fig. 2)

Along the green path, the variables i and j appear to be live in blocks B2 and B3 respectively: each is used on the right-hand side of its own assignment (i in i := i + 1, and j in j := j - 1), and a use of a variable would normally make it live at that point.

1 Answer

Answer by rici (accepted):

The variables are no longer live because they have no observable effect.

They are incremented and decremented, but the values are never consulted for any purpose. They are not printed out. No control flow depends on them. No other variable is computed using their values. If they weren't incremented and decremented, nobody would notice.

Eliminating them will not affect program output in any way. So they should be eliminated.
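As an illustration (hypothetical functions, not from the book): the two routines below differ only in a dead increment, and they produce identical results, which is exactly why a compiler is entitled to delete it:

```python
def with_dead_increment(n: int) -> int:
    i = 0
    total = 0
    for k in range(n):
        i += 1       # i is never read afterwards: a dead assignment
        total += k   # total is returned, so it is live
    return total

def without_dead_increment(n: int) -> int:
    total = 0
    for k in range(n):
        total += k
    return total

# The observable behaviour (the return value) is identical.
assert all(with_dead_increment(n) == without_dead_increment(n)
           for n in range(100))
```

Nothing any caller can see distinguishes the two versions, so the increment has no observable effect.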


As a more formal definition of liveness, we can start with the following:

  • A variable is live (at a point in the program) if its current value will become observable (by being made visible outside the execution of the program; see below).

  • A variable is also live if its current value is used in the computation of a live value.

That recursive definition excludes uses of an otherwise-unused variable solely to compute its own value or the values of other variables which are not live. It's simply a more precise way of saying what I said in the first part of the answer: an assignment is irrelevant if eliminating it would make no observable difference in the execution of the program.
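That recursive definition is usually computed as a backward dataflow analysis. Here is a minimal sketch over a straight-line approximation of the blocks in Fig. 2 (the loop back-edges are ignored for brevity; a real analysis iterates over the whole CFG to a fixed point). Each instruction is modelled as a target plus the variables it uses; a target of None marks an observable use such as the conditional test:

```python
# (target, uses): a None target means the uses are observable (control flow).
# Straight-line approximation of B2, B3 and the test in B4 after strength
# reduction, with the test already rewritten to use t2 and t4.
prog = [
    ("i",  {"i"}),         # B2: i := i + 1
    ("t2", {"t2"}),        # B2: t2 := t2 + 4   (strength-reduced 4*i)
    ("j",  {"j"}),         # B3: j := j - 1
    ("t4", {"t4"}),        # B3: t4 := t4 - 4   (strength-reduced 4*j)
    (None, {"t2", "t4"}),  # B4: if t2 >= t4 goto ...
]

live: set[str] = set()            # live-out of the last instruction
for target, uses in reversed(prog):
    if target is not None and target not in live:
        continue                  # dead assignment: even its own uses don't count
    if target is not None:
        live.discard(target)      # the assignment kills its target...
    live |= uses                  # ...and makes its operands live

# i and j never become live: nothing observable depends on them.
assert "i" not in live and "j" not in live
assert live == {"t2", "t4"}
```

The key line is the `continue`: because the assignment i := i + 1 defines a variable that is not live below it, its right-hand-side use of i does not make i live either, which is exactly why the green-path use in the question does not keep i alive.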

The precise definition of "observable effect" will vary according to computation model, but it basically means that the value is in some way communicated to the world outside of the program execution. For example, a value is live if it is printed on the console or written to a file (including being used as the name of a file to be created, because file directories are also files). It's live if it is stored in a database, or causes a light to blink. The C standard includes reading and writing volatile memory in the category of observable behaviour, which encapsulates CPUs that use loads and stores of specific memory addresses to send and receive data from peripherals.

There's an old philosophical riddle: If a tree falls in an uninhabited forest, does it make a sound? If we ignore the anthropocentricity of that question, it seems reasonable to answer, "No", as did many 19th century scientists. "Sound", they said, is not just a vibration of the air, but rather the result of the atmospheric vibration causing a neural reaction in an ear. (Certainly, it is possible to imagine a forest without any animate life at all, not just human life, so the philosopher can take refuge in that defense.) And that's basically where this model of computational liveness ends up: a computation is observable if it could be observed by someone. [Note 1]

Now, that's still open to interpretation, because someone might, for example, "observe" a computation by measuring the amount of time it took. In that sense, every optimisation is observable, because an optimisation is pointless if it doesn't shorten computation time.

If we take that to be part of observable behaviour, then no useful optimisation is possible. So in most cases, this is not a particularly useful definition of observability. But there are a very few use cases in which preserving the amount of time a computation uses is necessary. The classic such case is countering security attacks which deduce the value of what should be secret variables by timing various different uses of the value. If you were writing code designed to maintain a highly-confidential secret -- say, the password keys required to access a bank account -- then you might want to include loops in some control flows which have no computational purpose whatsoever, but rather are intended to take exactly the same amount of time as a different control flow which uses the same secret value.
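A standard defensive pattern of this kind is a constant-time comparison, which deliberately does "useless" work so that the running time does not leak where two byte strings first differ. A sketch (Python's standard library already provides this as hmac.compare_digest; the loop version is for illustration):

```python
import hmac

def ct_equal(a: bytes, b: bytes) -> bool:
    """Compare a and b without an early exit on the first mismatch."""
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y   # accumulate differences; never branch on the secret
    return diff == 0

assert ct_equal(b"s3cret", b"s3cret")
assert not ct_equal(b"s3cret", b"s3creT")
# Agrees with the library primitive:
assert ct_equal(b"abc", b"abd") == hmac.compare_digest(b"abc", b"abd")
```

An optimiser that rewrote the loop to return at the first mismatching byte would leave the return value unchanged but reintroduce the timing leak, which is precisely the kind of behaviour the usual liveness model does not count as observable.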

For a more playful example, when I was much younger and computers used much more electricity to do much slower computations, we noticed that you could "listen" to the execution of a program by tuning a radio to pick up the electromagnetic vibrations being produced by the CPU. Then you could write different kinds of pointless loops to make different musical notes or rhythmic artefacts. This same kind of pointless loop can be used in a microcontroller to produce a blinking display, or even to directly drive an audio speaker. So there are definitely cases where you would want the compiler to not eliminate "useless" code.

Despite that, it is probably not a good idea to reject all optimisation techniques in order to enable predictable execution times. Most of the time, we would really prefer for our programs to work as fast as possible, or to consume the minimum amount of non-renewable energy; in other words, to avoid doing unnecessary work. But since there are use cases where optimisation can affect behaviour which is not normally considered observable, the compiler needs to provide the programmer with a mechanism to turn optimisation off in particular pieces of code. Those are not the cases being discussed by Aho et al., and with good reason.


Notes:

  1. George Berkeley, writing in 1710:

    … it seems no less evident that the various Sensations or Ideas imprinted on the Sense, however blended or combined together (that is, whatever Objects they compose) cannot exist otherwise than in a Mind perceiving them…

    Some philosophers of the time posited the necessity of the existence of an omniscient God, in order to avoid the chaos which Berkeley summons up in which the objects in his writing studio suddenly cease to exist when he closes his eyes and are recreated in a blink when he opens them again. In this argument, God, who continually sees all, guarantees the continuity of existence of the objects in Bishop Berkeley's studio. That has always struck me as a peculiarly menial purpose for a deity. (Surely She could delegate such a mundane task to a subordinate.) But to each their own.

    For more references and a little discussion, you can start here on Wikipedia. Or just listen to Bruce Cockburn's beautiful environmental anthem.