Reputation:
I want to create a function that groups the array by a specific key, as follows:
var items = [
{name: 'n1', prop: 'p1', value: 90},
{name: 'b', prop: 'p2', value: 1},
{name: 'n1', prop: 'p3', value: 3}];
Into this:
{n1: {p1: 90, p3: 3}, b: {p2: 1}}
Basically, group by the "name" column, and for each item set its prop as a key with its value.
I know there is a groupBy function in RamdaJs, but it only accepts a function to generate the group key.
I know I can format the data afterwards, but that would be inefficient.
Is there any way to pass some kind of "transform" function which prepares the data for each item?
Thanks
Upvotes: 2
Views: 416
Reputation: 191986
An imperative for...of loop, with a bit of destructuring, is readable, albeit verbose, and performant.
const fn = arr => {
const obj = {}
for(const { name, prop, value } of arr) {
if(!obj[name]) obj[name] = {} // initialize the group if it doesn't exist
obj[name][prop] = value // add the prop and its value to the group
}
return obj
}
const items = [{name: 'n1', prop: 'p1', value: 90}, {name: 'b', prop: 'p2', value: 1}, {name: 'n1', prop: 'p3', value: 3}]
const result = fn(items)
console.log(result)
A functional solution using Ramda would be slower, but depending on the number of items in the array the difference might be negligible. I usually start with a functional solution; only if I have performance issues do I profile, and then fall back to the more performant imperative option (a rough timing sketch follows the Ramda snippet below).
A readable pointfree solution using Ramda - R.groupBy and R.map would be the basis. In this case I map each group's items to [prop, value] pairs, and then use R.fromPairs to generate the object.
const { pipe, groupBy, prop, map, props, fromPairs } = R
const fn = pipe(
groupBy(prop('name')),
map(pipe(
map(props(['prop', 'value'])),
fromPairs
))
)
const items = [{name: 'n1', prop: 'p1', value: 90}, {name: 'b', prop: 'p2', value: 1}, {name: 'n1', prop: 'p3', value: 3}]
const result = fn(items)
console.log(result)
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.26.1/ramda.js"></script>
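As an aside on profiling: for a rough sense of whether the difference matters for your data, something as simple as console.time works as a first pass. This sketch assumes the two implementations above are renamed fnLoop and fnRamda so they can share a scope; use a proper benchmark for real numbers.
// rough, illustrative timing only
console.time('loop')
for (let i = 0; i < 100000; i++) fnLoop(items) // the imperative for...of version
console.timeEnd('loop')
console.time('ramda')
for (let i = 0; i < 100000; i++) fnRamda(items) // the pointfree Ramda version
console.timeEnd('ramda')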
Upvotes: 0
Reputation: 18901
I would use reduceBy instead:
const items = [
{name: 'n1', prop: 'p1', value: 90},
{name: 'b', prop: 'p2', value: 1},
{name: 'n1', prop: 'p3', value: 3}];
// {name: 'n1', prop: 'p1', value: 90} => {p1: 90}
const kv = obj => ({[obj.prop]: obj.value});
// {p1: 90}, {name: 'n1', prop: 'p3', value: 3} -> {p1: 90, p3: 3}
const reducer = (acc, obj) => mergeRight(acc, kv(obj));
console.log(
reduceBy(reducer, {}, prop('name'), items)
)
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.26.1/ramda.min.js"></script>
<script>const {reduceBy, prop, mergeRight} = R;</script>
Upvotes: 1
Reputation: 50797
There is a trade-off between using a generic library and writing custom code for every scenario. A library like Ramda, with several hundred functions, will offer many tools that can help, but they are not likely to cover every scenario. Ramda does have a specific function that combines groupBy with some sort of fold, reduceBy. But if I didn't know that, I would write a custom version.
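(For reference, that reduceBy route might look something like the sketch below; it's untested, and reduceBy would need to be pulled from R alongside the functions imported further down.)
const groupProps = reduceBy(
  (acc, {prop, value}) => assoc(prop, value, acc), // fold each item into its group's object
  {},                                              // each group starts as an empty object
  prop('name')                                     // grouping key
)
// groupProps(items) //=> {n1: {p1: 90, p3: 3}, b: {p2: 1}}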
I would start with what works and remains simple, only worrying about performance if tests showed an issue with this specific code. Below I show a series of versions of such a function, each changed to improve performance. I'll make the main point here: I would actually stick with my first version, which I find easily readable, and not bother with any of the performance enhancements unless I had hard numbers to show that this was a bottleneck in my application.
My first pass might look like this:
const addTo = (obj, {prop, value}) =>
assoc (prop, value, obj)
const transform1 = pipe (
groupBy (prop ('name')),
map (reduce (addTo, {}))
)
const items = [{name: 'n1', prop: 'p1', value: 90}, {name: 'b', prop: 'p2', value: 1}, {name: 'n1', prop: 'p3', value: 3}];
console .log (
transform1 (items)
)
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.26.1/ramda.js"></script>
<script>const {assoc, pipe, groupBy, prop, map, reduce} = R </script>
This to me is clear and easy to read.
But there is certainly a question of efficiency, given that we have to loop over the list to group and then loop over each group to fold. So perhaps we'd be better off with a custom function. Here's a fairly straightforward modern JS version:
const transform2 = (items) =>
items .reduce(
(a, {name, prop, value}) => ({...a, [name]: {...a[name], [prop]: value}}),
{}
)
const items = [{name: 'n1', prop: 'p1', value: 90}, {name: 'b', prop: 'p2', value: 1}, {name: 'n1', prop: 'p3', value: 3}];
console .log (
transform2 (items)
)
This version only loops once, which sounds like a nice improvement... but there is a real question about the performance of what Rich Snap calls the reduce ({...spread}) anti-pattern: spreading the accumulator copies every key gathered so far on each iteration, so the reduce becomes quadratic. So perhaps we want to use a mutating reduce instead. This shouldn't cause problems, as the mutation is only internal to our function. We can write an equivalent version that doesn't involve the reduce ({...spread}) pattern:
const transform3 = (items) =>
items .reduce (
(a, {name, prop, value}) => {
const obj = a [name] || {}
obj[prop] = value
a[name] = obj
return a
},
{}
)
const items = [{name: 'n1', prop: 'p1', value: 90}, {name: 'b', prop: 'p2', value: 1}, {name: 'n1', prop: 'p3', value: 3}];
console .log (
transform3 (items)
)
Now that we've removed that pattern (I don't in fact agree that it's always an anti-pattern), we have a more performant bit of code, but there is still one thing we can do. It's well known that the Array.prototype functions such as reduce are not as fast as their plain-loop counterparts. So we can go one step further and write this with a for-loop:
const transform4 = (items) => {
const res = {}
for (let i = 0; i < items .length; i++) {
const {name, prop, value} = items [i]
const obj = res [name] || {}
obj[prop] = value
}
return res
}
const items = [{name: 'n1', prop: 'p1', value: 90}, {name: 'b', prop: 'p2', value: 1}, {name: 'n1', prop: 'p3', value: 3}];
console .log (
transform4 (items)
)
We've reached the limit of what I can think of in terms of performance optimizations.
... And we've made the code much worse! Comparing that last version with the first,
const transform1 = pipe (
groupBy (prop ('name')),
map (reduce (addTo, {}))
)
we see a hands-down winner in terms of code clarity. Without knowing the details of the addTo helper, we can still get a very good sense, up front, of what this function does on a first reading. And if we want those details more obvious, we could simply in-line that helper. The last version, though, will take a careful reading to understand how it works.
Oh wait; it doesn't work. Did you test it and see that? Do you see what's missing? I pulled this line from the end of the for-loop:
res[name] = obj;
Did you notice that in the code? It's not particularly difficult to spot, but it's not necessarily obvious at a quick glance.
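For completeness, here is the loop with that line back in place; nothing else changes:
const transform4 = (items) => {
  const res = {}
  for (let i = 0; i < items.length; i++) {
    const {name, prop, value} = items[i]
    const obj = res[name] || {}
    obj[prop] = value
    res[name] = obj // the line that was missing: store the group back on the result
  }
  return res
}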
Performance optimization, when it's needed, has to be done very carefully, as you can't take advantage of many of the tools you get used to using. So, there are times when it's very important, and I do it then, but if my cleaner, easier-to-read code performs well enough, then I'll leave it there.
A similar argument applies to pushing too hard for point-free code. It's a useful technique, and many functions become cleaner by using it. But it can be pushed beyond its usefulness. Note that the helper function, addTo, from the initial version above is not point-free. We can get rid of it: there may be simpler ways, but the first thing that comes to my mind is to replace reduce (addTo, {}) with pipe (map (lift (objOf) (prop ('prop'), prop ('value'))), mergeAll). We could write an entirely point-free version of this function by in-lining that, like this:
const transform5 = pipe (
groupBy (prop ('name')),
map (pipe (
map (lift (objOf) (
prop ('prop'),
prop ('value')
)),
mergeAll
))
)
const items = [{name: 'n1', prop: 'p1', value: 90}, {name: 'b', prop: 'p2', value: 1}, {name: 'n1', prop: 'p3', value: 3}];
console .log (
transform5 (items)
)
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.26.1/ramda.js"></script>
<script>const {pipe, groupBy, prop, map, lift, objOf, mergeAll} = R </script>
Does this gain us anything? Not that I can see. The code is much more complex and much less expressive. This is as hard to read as the for-loop variant.
So again, focus on keeping the code simple. That's my advice, and I'm sticking to it!
Upvotes: 2