Reputation: 1435
I have a hash of hashes like this:
authors = {"7"=> {"id"=>"60"} , "0"=> {"id"=>"60"} , "1"=> {"id"=>"99"}, "8"=> {"id"=>"99"}, "15"=> {"id"=>"19"} }
I want to keep only the first entry for each inner "id" value (in other words, remove every later entry whose inner hash has an "id" that has already appeared).
In this case, I want to end up with
authors = {"7"=> {"id"=>"60"} , "1"=> {"id"=>"99"}, "15"=> {"id"=>"19"}}
There are quite a few questions on sorting hashes of hashes, and I've been trying to get my head around them, but I don't see how to achieve this.
Upvotes: 2
Views: 56
Reputation: 110645
Here are two ways.
#1
require 'set'

st = Set.new
# Set#add? returns nil when the element is already in the set, so select
# keeps only the first entry for each distinct inner hash.
authors.select { |_, v| st.add?(v) }
#=> {"7"=>{"id"=>"60"}, "1"=>{"id"=>"99"}, "15"=>{"id"=>"19"}}
#2
authors.reverse_each.with_object({}) { |(k,v),h| h[v] = k }.
        reverse_each.with_object({}) { |(k,v),h| h[v] = k }
#=> {"7"=>{"id"=>"60"}, "1"=>{"id"=>"99"}, "15"=>{"id"=>"19"}}
or, relying on Hash#invert keeping only the last key for each duplicated value:
authors.reverse_each.to_h.invert.invert.reverse_each.to_h
#=> {"7"=>{"id"=>"60"}, "1"=>{"id"=>"99"}, "15"=>{"id"=>"19"}}
Upvotes: 4
Reputation: 30056
Try this one
authors.to_a.uniq { |item| item.last["id"] }.to_h
=> {"7"=>{"id"=>"60"}, "1"=>{"id"=>"99"}, "15"=>{"id"=>"19"}}
The uniq method with a block can do the work.
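Since Hash includes Enumerable, a variant (assuming Ruby 2.4+, where Enumerable#uniq exists) skips the explicit to_a round-trip:
# uniq yields each [key, value] pair to the block; to_h rebuilds the hash.
authors.uniq { |_, v| v["id"] }.to_h
#=> {"7"=>{"id"=>"60"}, "1"=>{"id"=>"99"}, "15"=>{"id"=>"19"}}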
Upvotes: 4