Reputation: 869
I am trying to understand the concepts and basics of functional programming. I am not going hard-core with Haskell or Clojure or Scala. Instead, I am using JavaScript.
So, if I have understood correctly, the idea behind Functional Programming is to compose a software application from Pure Functions, each handling a single responsibility in the application without any side effects.
Composition happens in such a way that the output of one function is piped in as the input to the next (according to the logic).
Below, I write two functions, for doubling and incrementing respectively, each taking an integer as an argument, followed by a utility function that composes the functions passed in as arguments.
{
// doubles the input
const double = x => x * 2
// increments the input
const increment = x => x + 1
// composes the functions
const compose = (...fns) => x => fns.reduceRight((acc, fn) => fn(acc), x)
// input of interest
const arr = [2,3,4,5,6]
// composed function
const doubleAndIncrement = compose(increment, double)
// only doubled
console.log(arr.map(double))
// only incremented
console.log(arr.map(increment))
// double and increment
console.log(arr.map(doubleAndIncrement))
}
The outputs are as follows:
[4, 6, 8, 10, 12] // double
[3, 4, 5, 6, 7] // increment
[5, 7, 9, 11, 13] // double and increment
So, my question is: won't reduceRight end up going through the array twice in this case, once for each of the two functions?
If the array gets larger, wouldn't this be inefficient?
Using a loop, it can be done in a single traversal, with both operations applied in the same pass.
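Roughly what I have in mind (just an illustrative sketch):
// the same input array as above
const arr = [2, 3, 4, 5, 6]
// both operations applied in one pass
const doubledAndIncremented = []
for (const x of arr) {
  doubledAndIncremented.push(x * 2 + 1) // double, then increment
}
console.log(doubledAndIncremented) // [5, 7, 9, 11, 13]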
How can this be prevented or is my understanding incorrect in any way?
Upvotes: 3
Views: 555
Reputation: 14189
Optimising time complexity while still keeping small, dedicated functions is a widely discussed topic in FP.
Let's take this simple case:
const double = n => 2 * n;
const square = n => n * n;
const increment = n => 1 + n;
const isOdd = n => n % 2;
const result = [1, 2, 3, 4, 5, 6, 7, 8]
.map(double)
.map(square)
.map(increment)
.filter(isOdd);
console.log('result', result);
In linear composition, as well as in chaining, this pipeline can be read as O(4n) time complexity: each of the four steps traverses the list once, so for each input we perform roughly 4 operations (e.g. on a list of 4 billion items we would perform about 16 billion operations).
We could solve this issue (the intermediate arrays plus the extra passes) by embedding all the operations (double, square, increment, isOdd) into a single reduce function... however, this would come at the cost of readability.
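For illustration, such a fused reduce might look roughly like this (a sketch with all four steps inlined by hand):
const result = [1, 2, 3, 4, 5, 6, 7, 8].reduce((acc, n) => {
  const value = 1 + (2 * n) * (2 * n); // double, square, then increment
  if (value % 2) acc.push(value);      // keep only the odd results
  return acc;
}, []);
console.log('result', result);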
In FP there is the concept of a Transducer (read here), which lets you keep the readability of single-purpose functions while still performing as few operations, and as few passes over the data, as possible.
const double = n => 2 * n;
const square = n => n * n;
const increment = n => 1 + n;
const isOdd = n => n % 2;
// R.into([], xf) is curried: given an empty array and a transformation,
// it returns a function that runs the whole pipeline in a single pass
// and collects the results into a new array. Note that with transducers,
// R.compose applies the transformations left to right.
const transducer = R.into(
[],
R.compose(R.map(double), R.map(square), R.map(increment), R.filter(isOdd)),
);
const result = transducer([1, 2, 3, 4, 5, 6, 7, 8]);
console.log('result', result);
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.26.1/ramda.js" integrity="sha256-xB25ljGZ7K2VXnq087unEnoVhvTosWWtqXB4tAtZmHU=" crossorigin="anonymous"></script>
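To see what that machinery does under the hood, here is a minimal hand-rolled sketch without Ramda (the helpers mapT, filterT, composeT and intoArray are illustrative names, not library APIs):
const double = n => 2 * n;
const square = n => n * n;
const increment = n => 1 + n;
const isOdd = n => n % 2;
// a transducer takes a reducer ("step") and returns a new reducer
const mapT = f => step => (acc, x) => step(acc, f(x));
const filterT = pred => step => (acc, x) => (pred(x) ? step(acc, x) : acc);
// ordinary right-to-left function composition; applied to transducers,
// the transformation listed first runs first on each element
const composeT = (...ts) => step => ts.reduceRight((s, t) => t(s), step);
// reducer that collects values into an array
const intoArray = (acc, x) => { acc.push(x); return acc; };
const xform = composeT(mapT(double), mapT(square), mapT(increment), filterT(isOdd));
// a single pass over the input, no intermediate arrays
console.log('result', [1, 2, 3, 4, 5, 6, 7, 8].reduce(xform(intoArray), []));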
Upvotes: 1
Reputation: 1800
It is map that traverses the array, and that happens only once. reduceRight traverses the list of composed functions (two in your example), threading the current element through that chain of functions. The equivalent inefficient version you describe would be:
// map lifted into a curried, composable helper
const map = f => xs => xs.map(f)
const doubleAndIncrement = compose(map(increment), map(double))
// double and increment inefficient
console.log(doubleAndIncrement(arr))
This reveals one of the laws that map must satisfy, the functor composition law: map(compose(g, f)) is equivalent to compose(map(g), map(f)).
But as we now know, the latter can be simplified to the former, which is more efficient and traverses the input array only once.
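To make the law concrete, here is a quick check against the question's data (assuming compose, double, increment and arr from the question are in scope, along with the map helper above):
// one traversal: compose the functions, then map once
console.log(arr.map(compose(increment, double)))        // [5, 7, 9, 11, 13]
// two traversals: map each function separately
console.log(compose(map(increment), map(double))(arr))  // [5, 7, 9, 11, 13]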
Upvotes: 5