Reputation: 22941
I am looping through a large dataset (contained in a multidimensional associative array $values
in this example) with many duplicate index values, with the goal of producing an array containing only the unique values from a given index 'data'.
Currently I am doing it like this:
$unique = array();
foreach ($values as $value) {
    $unique[$value['data']] = true;
}
This accomplishes the objective because duplicate array keys simply get replaced. But it feels a bit odd, since the array values are just placeholders; the actual data lives in the keys.
It was suggested that I build the array first and then use array_unique()
to remove the duplicates. I'm inclined to stick with the former method, but I'm wondering: are there pitfalls or problems I should be aware of with this approach? Or any benefits to using array_unique()
instead?
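For reference, a minimal sketch of both approaches side by side (the sample dataset and variable names here are illustrative, not from the actual data):

```php
<?php
// Illustrative dataset with duplicate 'data' values.
$values = [
    ['data' => 'apple'],
    ['data' => 'banana'],
    ['data' => 'apple'],
];

// Approach 1: key-based deduplication. Duplicate keys overwrite
// each other, so each 'data' value survives exactly once as a key.
$seen = [];
foreach ($values as $value) {
    $seen[$value['data']] = true;
}
$uniqueKeys = array_keys($seen);

// Approach 2: collect everything first, then array_unique().
// array_unique() preserves the original keys, so array_values()
// reindexes the result.
$all = [];
foreach ($values as $value) {
    $all[] = $value['data'];
}
$uniqueValues = array_values(array_unique($all));
```

One pitfall of the key-based approach worth knowing: PHP casts array keys, so numeric strings like "1" and the integer 1 collapse into the same key, whereas array_unique() uses loose string comparison of the values.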
Upvotes: 0
Views: 1867
Reputation: 86
I would do it like this:
$unique = array();
foreach ($values as $value) {
    if (!in_array($value['data'], $unique)) {
        $unique[] = $value['data'];
    }
}
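One caveat with this pattern: in_array() rescans $unique on every iteration, so the loop is O(n²) overall. An isset() check against array keys, as in the question, is a hash lookup and keeps the loop linear. A sketch (the dataset here is illustrative):

```php
<?php
// Illustrative dataset with duplicate 'data' values.
$values = [
    ['data' => 'a'],
    ['data' => 'b'],
    ['data' => 'a'],
];

// isset() on a key is a constant-time hash lookup, unlike
// in_array(), which scans the whole array each time.
$unique = [];
foreach ($values as $value) {
    if (!isset($unique[$value['data']])) {
        $unique[$value['data']] = $value['data'];
    }
}
$unique = array_values($unique);  // reindex 0, 1, 2, ...
```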
Upvotes: 1