Reputation: 14816
I'm building a React app using Ramda. I'm still new to functional programming (~ two months in).
I have a list of contacts like this:
const contacts = [
  {
    id: 1,
    firstName: 'Sven',
    lastName: 'Hillstedt',
    city: 'Aachen',
    company: '',
    position: 'Student',
    group: 'friends',
    tendency: 'maintain'
  },
  {
    id: 2,
    firstName: 'David',
    // ...
];
Given a string, I need to filter this (very long: 10,000-100,000 entries) list. But I only need to take into account the keys firstName, lastName, city, company and position. There is an array containing these:
const FIRST_NAME = 'firstName';
const LAST_NAME = 'lastName';
const CITY = 'city';
const COMPANY = 'company';
const POSITION = 'position';
export const stringFields = [FIRST_NAME, LAST_NAME, CITY, COMPANY, POSITION];
Now, using Ramda I wrote the following functions that take a string and a list of contacts, pick the relevant keys of each contact, lowercase their values, and then return the filtered contacts:
import { any, filter, includes, map, pick, pipe, toLower, values } from 'ramda';
const contactIncludesValue = value =>
  pipe(
    pick(stringFields),
    map(toLower),
    values,
    any(includes(value))
  );
const filterContactsByValue = value => filter(contactIncludesValue(value));
As you can see this code is messy (even though it is way prettier than doing it imperatively). I curry value => many times, which feels suboptimal. I'm also questioning whether this code only iterates over the contacts once and whether it is efficient.
How would you filter and map (pick only the relevant keys + lowerCase) a large list of contacts without iterating over it twice or more? Is there a way to avoid my currying and write this cleaner?
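For comparison, here is a plain-JavaScript sketch of the same search without Ramda (the second contact's field values are invented for the demo): it makes a single pass over the contacts, and some() exits early on the first matching field of each contact.

```javascript
// Plain-JS sketch (assumed equivalent to the Ramda version above):
// one pass over contacts; `some` short-circuits per contact.
const stringFields = ['firstName', 'lastName', 'city', 'company', 'position'];

const filterContactsByValue = (value, contacts) =>
  contacts.filter(contact =>
    stringFields.some(field =>
      (contact[field] || '').toLowerCase().includes(value)
    )
  );

const contacts = [
  {id: 1, firstName: 'Sven', lastName: 'Hillstedt', city: 'Aachen',
   company: '', position: 'Student'},
  {id: 2, firstName: 'David', lastName: 'Example', city: 'Berlin',
   company: 'Acme', position: 'Engineer'},
];

console.log(filterContactsByValue('sven', contacts).map(c => c.id)); // [1]
```

Note that the search string is expected to already be lowercase here, mirroring the original, which lowercases only the contact fields.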
Upvotes: 3
Views: 232
Reputation: 14199
R.innerJoin would surely represent the most concise way of writing it, but I'm not sure about its time complexity.
const filter = value => R.innerJoin(
  // you may lowercase, etc... here
  (record, prop) => R.propEq(prop, value, record),
  R.__,
  ['firstName', 'lastName', 'city', 'company', 'position'],
);
const onlySven = filter('Sven');
const onlyGiuseppe = filter('Giuseppe');
const data = [
  {
    id: 1,
    firstName: 'Sven',
    lastName: 'Hillstedt',
    city: 'Aachen',
    company: '',
    position: 'Student',
    group: 'friends',
    tendency: 'maintain'
  },
  // ...
];
console.log('Giuseppe', onlyGiuseppe(data));
console.log('Sven', onlySven(data));
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.26.1/ramda.min.js"></script>
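A rough plain-JS reading of what this innerJoin call does (a sketch of the intent, not Ramda's implementation): keep each record for which at least one of the listed props strictly equals the value. Note that propEq tests equality, not substring inclusion, so this variant matches whole field values only.

```javascript
// Plain-JS sketch of the innerJoin-based filter above:
// a record survives if any listed prop === value.
const fields = ['firstName', 'lastName', 'city', 'company', 'position'];

const filterBy = (value, records) =>
  records.filter(record => fields.some(prop => record[prop] === value));

const data = [
  {id: 1, firstName: 'Sven', lastName: 'Hillstedt', city: 'Aachen',
   company: '', position: 'Student'},
];

console.log(filterBy('Sven', data).map(r => r.id));     // [1]
console.log(filterBy('Giuseppe', data).map(r => r.id)); // []
```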
Upvotes: 1
Reputation: 18941
How would you filter and map (pick only the relevant keys + lowerCase) a large list of contacts without iterating over it twice or more? Is there a way to avoid my currying and write this cleaner?
If you need to filter AND transform your data in one go, I don't see how you could do this using filter alone.
For example, this won't keep a and transform it:
const list = [
  {a: 'foo'},
  {b: 'bar'}
];

console.log(
  filter(pipe(map(toUpper), has('a')), list)
);
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.26.1/ramda.min.js"></script>
<script>const {filter, pipe, map, toUpper, has} = R;</script>
For that you need to either use reduce or a transducer.
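For reference, the reduce route can be sketched in plain JavaScript (a hedged sketch, not the answer's code): fold over the list once, pushing only the records you keep, already transformed.

```javascript
// Single-pass filter + transform with reduce (plain JS sketch).
// Keeps objects whose `a` equals 1, adds 10 to `b`, returns only `b`.
const list = [
  {a: 1, b: 2},
  {a: 2, b: 20},
  {a: 1, b: 3},
];

const result = list.reduce((acc, obj) => {
  if (obj.a === 1) acc.push({b: obj.b + 10});
  return acc;
}, []);

console.log(result); // [{b: 12}, {b: 13}]
```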
Here's a solution using a transducer. In this example:
- keep objects whose a property is equal to 1
- to each b property, add 10
- return only b
const list = [
  {a: 1, b: 2},
  {a: 2, b: 20},
  {a: 1, b: 3},
  {a: 2, b: 30},
  {a: 1, b: 4},
  {a: 2, b: 40},
];

console.log(
  into([],
    compose(
      filter(propEq('a', 1)),
      map(over(lensProp('b'), add(10))),
      map(pick(['b']))
    ),
    list)
);
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.26.1/ramda.min.js"></script>
<script>const {into, compose, filter, propEq, map, pick, over, lensProp, add} = R;</script>
The nice thing about using transducers is that they decouple the logic for producing the result (an array) from the logic for transforming data.
into([]) tells Ramda that you're producing an array, so whatever comes out of your compose chain will be appended to it.
into('') tells Ramda that you're producing a string. Your compose chain only needs to return a string; into will take care of concatenating it to the final result:
const list = [
  {a: 1, b: 2},
  {a: 2, b: 20},
  {a: 1, b: 3},
  {a: 2, b: 30},
  {a: 1, b: 4},
  {a: 2, b: 40},
];

console.log(
  into('',
    compose(
      filter(propEq('a', 1)),
      map(over(lensProp('b'), add(10))),
      map(prop('b'))
    ),
    list)
);
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.26.1/ramda.min.js"></script>
<script>const {into, compose, filter, propEq, map, over, lensProp, add, prop} = R;</script>
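As a rough plain-JS analogy (not Ramda's actual transducer machinery), the only thing that changes between into([]) and into('') is the accumulation step; the filter/transform logic stays identical:

```javascript
// Plain-JS analogy: same filter/transform step, two accumulators.
const list = [
  {a: 1, b: 2},
  {a: 2, b: 20},
  {a: 1, b: 3},
];

// Keep objects with a === 1, add 10 to b, emit b as a string.
const step = (obj) => (obj.a === 1 ? [String(obj.b + 10)] : []);

const asArray  = list.flatMap(step);          // like into([], ...)
const asString = list.flatMap(step).join(''); // like into('', ...)

console.log(asArray);  // ['12', '13']
console.log(asString); // '1213'
```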
Upvotes: 2
Reputation: 50807
There are several things to respond to here.
Even if the comments were slightly snarky, @zerkms has it right. It makes little sense to try performance optimization unless you know that the code actually has poor performance, especially if it makes the code harder to write or maintain.
You do not curry value => multiple times. It's curried only up front, and the partial application of your value happens once per filtering of the list.
You only iterate your contacts a single time. But inside each one is a call to any over your list of fields. This one does an early return if it finds a match, so it's not trivial to calculate the number of calls, but it's probably O(m * n), where m is the number of fields and n is the number of contacts.
This version of your code is slightly more condensed. You might or might not find it more readable:
const contactIncludesValue = value =>
  pipe(
    props(stringFields),
    map(toLower),
    any(includes(value))
  );

const filterContactsByValue = pipe(contactIncludesValue, filter);
Note that props is more convenient than pick(...) -> values, and the intermediate map(toLower) works just as well afterward.
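A plain-JS illustration of that equivalence (the pick and props helpers here are simplified stand-ins, not Ramda's implementations): pick keeps matching key/value pairs, while props maps the keys straight to their values.

```javascript
// Simplified stand-ins for Ramda's pick and props.
const pick = (keys, obj) =>
  Object.fromEntries(keys.filter(k => k in obj).map(k => [k, obj[k]]));
const props = (keys, obj) => keys.map(k => obj[k]);

const contact = {firstName: 'Sven', lastName: 'Hillstedt', city: 'Aachen'};

console.log(Object.values(pick(['firstName', 'city'], contact))); // ['Sven', 'Aachen']
console.log(props(['firstName', 'city'], contact));               // ['Sven', 'Aachen']
```

One subtle difference in Ramda itself: props yields undefined for a missing key, while pick simply omits it; since every contact here has all five string fields, the two approaches agree.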
Upvotes: 3