Can code that produces deep call stacks be considered an anti-pattern?


We are doing research on the LLVM libraries and have found that the IR library sometimes produces call stacks up to 29 method calls deep.

When I look at crashes in iOS frameworks, I also sometimes observe quite deep call stacks.

My question is whether we can reason about if there might be something wrong with the design of code that calls itself to such a depth.

Here is an example:

/usr/local/LLVM/llvm/unittests/IR/AttributesTest.cpp:54
  /usr/local/LLVM/llvm/lib/IR/LLVMContext.cpp:162
    /usr/local/LLVM/llvm/lib/IR/LLVMContext.cpp:162
      /usr/local/LLVM/llvm/lib/IR/LLVMContextImpl.cpp:54
        /usr/local/LLVM/llvm/lib/IR/LLVMContextImpl.cpp:59
          /usr/local/LLVM/llvm/lib/IR/Module.cpp:60
            /usr/local/LLVM/llvm/lib/IR/Module.cpp:62
              /usr/local/LLVM/llvm/lib/IR/Module.cpp:456
                /usr/local/LLVM/llvm/lib/IR/Function.cpp:350
                  /usr/local/LLVM/llvm/lib/IR/BasicBlock.cpp:98
                    /usr/local/LLVM/llvm/include/llvm/ADT/ilist.h:282
                      /usr/local/LLVM/llvm/include/llvm/ADT/ilist.h:267
                        /usr/local/LLVM/llvm/lib/IR/SymbolTableListTraitsImpl.h:76
                          /usr/local/LLVM/llvm/lib/IR/BasicBlock.cpp:90
                            /usr/local/LLVM/llvm/lib/IR/SymbolTableListTraitsImpl.h:58
                              /usr/local/LLVM/llvm/lib/IR/ValueSymbolTable.cpp:75
                                /usr/local/LLVM/llvm/lib/IR/ValueSymbolTable.cpp:47
                                  /usr/local/LLVM/llvm/include/llvm/Support/Casting.h:132
                                    /usr/local/LLVM/llvm/include/llvm/Support/Casting.h:112
                                      /usr/local/LLVM/llvm/include/llvm/Support/Casting.h:122
                                        /usr/local/LLVM/llvm/include/llvm/Support/Casting.h:96
                                          /usr/local/LLVM/llvm/include/llvm/IR/Value.h:777
                                            /usr/local/LLVM/llvm/include/llvm/Support/Casting.h:132
                                              /usr/local/LLVM/llvm/include/llvm/Support/Casting.h:122
                                                /usr/local/LLVM/llvm/include/llvm/Support/Casting.h:75
                                                  /usr/local/LLVM/llvm/include/llvm/IR/Value.h:771
                                                    /usr/local/LLVM/llvm/include/llvm/Support/Casting.h:132
                                                      /usr/local/LLVM/llvm/include/llvm/Support/Casting.h:122
                                                        /usr/local/LLVM/llvm/include/llvm/Support/Casting.h:75
                                                          /usr/local/LLVM/llvm/include/llvm/IR/Value.h:759

P.S. The example call stack is actually produced by the destructor of the LLVMContext class, LLVMContext::~LLVMContext(). Here is another example, from a very old post from the Java world: Java call stack – from HTTP upto JDBC as a picture.


There are 2 answers

James K. Lowden (best answer)

My question is whether we can reason about if there might be something wrong with the design of code that calls itself to such a depth.

I'm going to go out on a limb and say "yes", but there are problems with your question and that answer.

The notion of "reason about if there might be" isn't much to hang your hat on. You can reason about whether a loop terminates; you can prove it does, or does not. You can reason about whether a race condition exists. You cannot reason about whether something might exist.

No standard, no metric, no authority will tell you how deep a call stack is too deep. Bear in mind that almost any call stack is avoidable: the call stack is an artifact of the "factorization" (if you will) of the library. One can imagine replacing a function call with a macro or a C++ template. Same logical effect, slightly lower instruction count. Maybe cheaper, because inlined, or more expensive, because of duplicated code. But at least the stack pointer didn't change!
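To make that concrete, here is a minimal, entirely hypothetical C++ sketch (none of these names come from LLVM): the same addition written as an ordinary function, a macro, and an inlinable template. The logic is identical; the only difference is whether a new stack frame is likely to appear.

    #include <cstdio>

    // An ordinary function: each call normally pushes a new stack frame.
    int add_fn(int a, int b) { return a + b; }

    // A macro: textually substituted, so there is no call and no frame at all.
    #define ADD_MACRO(a, b) ((a) + (b))

    // An inlinable template: the compiler is free to expand it in place,
    // giving the same logical effect with, in practice, no extra frame.
    template <typename T>
    inline T add_tpl(T a, T b) { return a + b; }

    int main() {
        std::printf("%d %d %d\n",
                    add_fn(1, 2),      // may push a frame
                    ADD_MACRO(1, 2),   // never pushes a frame
                    add_tpl(1, 2));    // usually inlined away
        return 0;
    }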

So I interpret your question as: Is a large call stack, relative to the implemented functionality, reason to scrutinize the code for unnecessary complexity? To that, I say Yes.

Faced with a situation like the one you describe, I would wonder whether some of those calls were "manager" functions: some kind of generalized wrapper that doesn't do very much. I remember reading a marshalling library a few years ago, where the call stack to write(2) was 14 deep. Most of the code didn't do anything except shuffle the data into another abstraction.
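A hypothetical sketch of the pattern (the names Session, Formatter and Sink are made up, not taken from that library): three layers of "manager" code, each adding a frame on the way down to write(2) while doing almost nothing itself.

    #include <string>
    #include <unistd.h>  // write(2)

    // The bottom layer is the only one that does real work.
    struct Sink {
        int fd;
        ssize_t put(const std::string& s) {          // frame 3
            return ::write(fd, s.data(), s.size());  // the actual system call
        }
    };

    // The layers above merely shuffle the data into the next abstraction.
    struct Formatter {
        Sink sink;
        ssize_t emit(const std::string& s) {         // frame 2
            return sink.put(s);                      // just forwards
        }
    };

    struct Session {
        Formatter fmt;
        ssize_t send(const std::string& s) {         // frame 1
            return fmt.emit(s);                      // just forwards
        }
    };

    int main() {
        Session session{Formatter{Sink{1}}};         // fd 1 = stdout
        session.send("hello\n");                     // three frames to reach write(2)
        return 0;
    }

Each layer looks harmless in isolation, but together the wrappers account for most of the depth of the stack.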

No coincidence: that library and yours are both C++. C++ makes it easy to make implicit function calls, and destructors are an example. If you were writing that destructor as a C function, I bet it would be long and flat. Destructors are also apt to do a lot of "cleaning up" just before freeing the memory; in C you might just have called free(3) a few times, and been done with it.
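As a rough illustration (again with hypothetical types, not LLVM's), compare the implicit destructor cascade in C++ with the flat, explicit cleanup you would write in C:

    #include <cstdlib>
    #include <string>
    #include <vector>

    // C++: ~Module() is never written out, yet destroying a Module implicitly
    // calls ~vector, which calls ~string for every element, each one a hidden
    // extra level on the call stack.
    struct Module {
        std::vector<std::string> symbols;
    };

    // C-style equivalent: cleanup is long and flat, a few explicit free(3)
    // calls at a single level, with no hidden call chain.
    struct CModule {
        char** symbols;
        std::size_t count;
    };

    void cmodule_destroy(CModule* m) {
        for (std::size_t i = 0; i < m->count; ++i)
            std::free(m->symbols[i]);
        std::free(m->symbols);
    }

    int main() {
        {
            Module m;
            m.symbols = {"foo", "bar"};
        }   // the implicit destructor cascade happens here

        CModule c{static_cast<char**>(std::malloc(sizeof(char*))), 1};
        c.symbols[0] = static_cast<char*>(std::malloc(4));
        cmodule_destroy(&c);   // explicit, flat cleanup
        return 0;
    }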

So it's not really the call-stack depth per se that's a problem. But IMO your instinct is right: a large call stack over a small amount of functionality indicates a kind of super-organized spaghetti code. It certainly can't hurt to look at the functionality anew, and perhaps look for ways to reduce the number of abstractions.

Jeffrey Rennie

As a rule, yes, I get suspicious when I see deep call stacks, for much the same reasons as James K. Lowden.

But your example is the exception to the rule. It appears your code is traversing a representation of source code. Source code contains deeply nested structures, and your code is recursively traversing those structures, so the call stack will grow as deep as the code is nested. That's totally expected and fine, though I hope you have a lot of memory allocated to the stack.
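A minimal sketch of what I mean, using a made-up Expr type rather than LLVM's IR classes: the recursive walk uses one frame per level of nesting, so a deeply nested input naturally produces a deep (and perfectly healthy) call stack.

    #include <algorithm>
    #include <cstdio>
    #include <memory>
    #include <vector>

    // A toy expression tree: each node may contain arbitrarily nested children.
    struct Expr {
        int value = 0;
        std::vector<std::unique_ptr<Expr>> children;
    };

    // Recursive traversal: the call stack grows exactly as deep as the nesting.
    int depth(const Expr& e) {
        int deepest = 0;
        for (const auto& child : e.children)
            deepest = std::max(deepest, depth(*child));  // one frame per level
        return deepest + 1;
    }

    int main() {
        // Build a structure nested three levels deep, so depth() recurses three times.
        auto leaf = std::make_unique<Expr>();
        leaf->value = 42;
        auto mid = std::make_unique<Expr>();
        mid->children.push_back(std::move(leaf));
        Expr root;
        root.children.push_back(std::move(mid));

        std::printf("nesting depth: %d\n", depth(root));  // prints 3
        return 0;
    }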