Readability vs. maintainability: nested functions


What are the pros and cons of each option, considering long-term implications (increasing the number of functions / parameters, other developers taking over, etc.)?

Option 1: removes the need to pass foo and bar to each method, but will create nested functions that are hard to follow.

const myFunction = ({foo, bar}) => {
  const results = []

  const function1 = () => {
    return foo + bar;
  }

  const function2 = () => {
    return foo * bar;
  }

  const res1 = function1();
  const res2 = function2();

  results.push(res1, res2);

  return results;
}

Option 2: you pass the parameters to each function, but remove the nesting, which in my opinion makes it more readable.

const function1 = ({foo, bar}) => {
  return foo + bar;
}

const function2 = ({foo, bar}) => {
  return foo * bar;
}

const myFunction = ({foo, bar}) => {
  const results = []

  const res1 = function1({foo, bar});
  const res2 = function2({foo, bar});

  results.push(res1, res2);

  return results;
}

I'd also like to know how to improve my functional approach here. Thank you!


There are 3 answers

Aadit M Shah (accepted answer)

The second approach is more idiomatic. In fact, the second approach has a name in functional programming. A function which takes in a shared static value as an input, a.k.a. an environment, is known as a reader.

// Reader e a = e -> a

// ask : Reader e e
const ask = x => x;

// pure : a -> Reader e a
const pure = x => _ => x;

// bind : Reader e a -> (a -> Reader e b) -> Reader e b
const bind = f => g => x => g(f(x))(x);

// reader : Generator (Reader e a) -> Reader e a
const reader = gen => (function next(data) {
    const { value, done } = gen.next(data);
    return done ? value : bind(value)(next);
}(undefined));

// Environment = { foo : Number, bar : Number }

// function1 : Reader Environment Number
const function1 = reader(function* () {
    const { foo, bar } = yield ask;
    return pure(foo + bar);
}());

// function2 : Reader Environment Number
const function2 = reader(function* () {
    const { foo, bar } = yield ask;
    return pure(foo * bar);
}());

// myFunction : Reader Environment (Array Number)
const myFunction = reader(function* () {
    const res1 = yield function1;
    const res2 = yield function2;
    return pure([res1, res2]);
}());

// results : Array Number
const results = myFunction({ foo: 10, bar: 20 });

console.log(results);

In the above example, we define function1, function2, and myFunction using the monadic notation. Note that myFunction doesn't explicitly take the environment as an input. It also doesn't explicitly pass the environment to function1 and function2. All of this “plumbing” is handled by the pure and bind functions. We access the environment within the monadic context using the ask monadic action.

However, the real advantage comes when we combine the Reader monad with other monads using the ReaderT monad transformer.
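To illustrate what that might look like, here is a minimal, hedged sketch of a ReaderT-style transformer in JavaScript. The names `ReaderT`, `PromiseM`, and `fetchSum` are my own illustrations, not part of the answer above; the underlying monad is assumed to expose `pure` and `bind`.

```javascript
// ReaderT: layer an environment on top of an underlying monad m,
// where m exposes `pure` and `bind`.
const ReaderT = m => ({
  // ask : ReaderT m e e — read the environment
  ask: e => m.pure(e),
  // pure : a -> ReaderT m e a
  pure: x => _ => m.pure(x),
  // bind : ReaderT m e a -> (a -> ReaderT m e b) -> ReaderT m e b
  bind: f => g => e => m.bind(f(e))(x => g(x)(e))
});

// Use Promise as the underlying monad.
const PromiseM = {
  pure: x => Promise.resolve(x),
  bind: p => f => p.then(f)
};

const R = ReaderT(PromiseM);

// An asynchronous computation that still reads { foo, bar } implicitly.
const fetchSum = R.bind(R.ask)(({ foo, bar }) => R.pure(foo + bar));

fetchSum({ foo: 10, bar: 20 }).then(console.log); // logs 30
```

The shape is the same as the plain Reader above; the only change is that every result is wrapped in (and sequenced through) the underlying monad.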


Edit: You don't have to use the monadic notation if you don't want to. You could define function1, function2, and myFunction as follows instead.

// Reader e a = e -> a

// Environment = { foo : Number, bar : Number }

// function1 : Reader Environment Number
const function1 = ({ foo, bar }) => foo + bar;

// function2 : Reader Environment Number
const function2 = ({ foo, bar }) => foo * bar;

// myFunction : Reader Environment (Array Number)
const myFunction = env => {
    const res1 = function1(env);
    const res2 = function2(env);
    return [res1, res2];
};

// results : Array Number
const results = myFunction({ foo: 10, bar: 20 });

console.log(results);

The disadvantage is that now you're explicitly taking the environment as input and passing the environment to sub-computations. However, that's probably acceptable.


Edit: Here's yet another way to write this without using the monadic notation, but still using ask, pure, and bind.

// Reader e a = e -> a

// ask : Reader e e
const ask = x => x;

// pure : a -> Reader e a
const pure = x => _ => x;

// bind : Reader e a -> (a -> Reader e b) -> Reader e b
const bind = f => g => x => g(f(x))(x);

// Environment = { foo : Number, bar : Number }

// function1 : Reader Environment Number
const function1 = bind(ask)(({ foo, bar }) => pure(foo + bar));

// function2 : Reader Environment Number
const function2 = bind(ask)(({ foo, bar }) => pure(foo * bar));

// myFunction : Reader Environment (Array Number)
const myFunction =
    bind(function1)(res1 =>
        bind(function2)(res2 =>
            pure([res1, res2])));

// results : Array Number
const results = myFunction({ foo: 10, bar: 20 });

console.log(results);

Note that the monadic notation using generators is just syntactic sugar for the above code.

customcommander

Readability tends to be a by-product of a developer focusing on making their intent clear.

Would the next developer (or the future you) understand what you intended?

That is, IMHO, the only question you should answer, because it focuses on something a little more tangible than "Does this look nice?"

From that perspective, both versions do that.

Except that both:

  • could do with better names
  • could use newer syntax to make them easier on the eyes

const sumproduct_pair = ({a, b}) => {
  const sum = () => a + b;
  const product = () => a * b;
  return [sum(), product()];
};

or

const sum = ({a, b}) => a + b;
const product = ({a, b}) => a * b;
const sumproduct_pair = ({a, b}) => [sum({a, b}), product({a, b})];

Both versions could still be improved, though again YMMV:

In the first version, neither sum nor product needs to exist. They are obviously not meant to be reused, and they are so simple that the whole thing can be reduced to its simplest expression:

const sumproduct_pair = ({a, b}) => [a+b, a*b];

In the second version, if you intend for sum and product to be reused, then think about "design against an interface, not an implementation".

The function sumproduct_pair expects an object with properties a and b, but this doesn't mean that every other function needs to have the same interface:

const sum = (a, b) => a + b;
const product = (a, b) => a * b;
const sumproduct_pair = ({a, b}) => [sum(a, b), product(a, b)];

And while this seems like a trivial change, it has removed a few unnecessary curly brackets (if you want to improve readability, start by writing less) and, most importantly, allows both sum and product to work with any number of arguments:

const sum = (...xs) => xs.reduce((ret, x) => ret + x, 0);
const product = (...xs) => xs.reduce((ret, x) => ret * x, 1);

sum(1, 2, 3);        //=> 6
sum(1, 2, 3, 4);     //=> 10
product(1, 2, 3);    //=> 6
product(1, 2, 3, 4); //=> 24
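Put together, the variadic versions slot straight into sumproduct_pair without changing its own interface. A small sketch extending the snippets above:

```javascript
// Variadic helpers: fold any number of arguments.
const sum = (...xs) => xs.reduce((ret, x) => ret + x, 0);
const product = (...xs) => xs.reduce((ret, x) => ret * x, 1);

// sumproduct_pair keeps its {a, b} interface while delegating
// to the variadic helpers.
const sumproduct_pair = ({a, b}) => [sum(a, b), product(a, b)];

sumproduct_pair({a: 2, b: 3}); //=> [5, 6]
```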

Raikish

Both of your approaches are correct. The real question is how this code relates to the rest of the application: are these functions relevant outside the scope where they are defined and used?

Consider this example.

const Calculator = class {
    complexOperation({foo, bar}) {
      const results = []

      const res1 = this.sum({foo, bar});
      const res2 = this.dot({foo, bar});

      results.push(res1, res2);

      return results;
    }

    sum({foo, bar}) {
      return foo + bar;
    }

    dot({foo, bar}){
      return foo * bar;
    }
};

var calc = new Calculator();
calc.complexOperation({foo: 2, bar: 3})

In this example, we can see how the level of abstraction of each function reflects its intention.

Always keep The Stepdown Rule in mind.

Now let's change the application's intent: we are building an application for a legal agency and have to perform a complex operation to apply some taxes.

Now sum and dot should no longer be part of the class, because they will only be used in the complex operation. New developers don't care what function1 (which I renamed to sum) does and shouldn't have to read it, so we can lower its abstraction level. In fact, you will end up with a method made of a few small steps.

In other languages such as C#, you can freely define functions after their usage. In JavaScript, function declarations are hoisted, but local arrow functions assigned with const are not, so you cannot apply The Stepdown Rule to them. Some people therefore invert the Stepdown Rule and define all the local functions at the start of the enclosing function, so the reader's eyes just jump past the last local function's closing bracket and start reading from there.
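One nuance worth noting: this limitation applies to const arrow functions, but classic function declarations are hoisted. A quick sketch of the difference (the names declared and notYet are illustrative):

```javascript
// Function declarations are hoisted, so this call works even though
// the definition appears later in the file.
console.log(declared(2, 3)); // 5

function declared(a, b) {
  return a + b;
}

// `const` arrow functions are not hoisted: calling notYet() before this
// line would throw a ReferenceError (temporal dead zone).
const notYet = () => "defined now";
console.log(notYet()); // "defined now"
```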

const BusinessTaxes = class {
    complexOperation({foo, bar}) {
      const addTax = () => {
        return foo + bar;
      }

      const dotTax = () => {
        return foo * bar;
      }

      // Jump your eyes here
      const results = []

      const res1 = addTax();
      const res2 = dotTax();

      results.push(res1, res2);

      return results;
    }
};

var businessTaxes = new BusinessTaxes();
businessTaxes.complexOperation({foo: 2, bar: 3})

In summary, organize your code by level of abstraction, keep it structured, and be consistent with your decisions; your code will then be both readable and maintainable.