brg

Reputation: 3953

Ruby sort order of array of hashes using another array in an efficient way so processing time is constant

I have some data that I need to export as CSV. It is currently about 10,000 records and will keep growing, hence I want an efficient way to do the iteration, especially with regard to running several each loops one after the other. My question is: is there a way to avoid the many each loops I describe below, and if not, is there something else I can use besides Ruby's each/map to keep processing time constant irrespective of data size?

For instance:

  1. First I will loop through the whole data to flatten and rename the fields that hold array values, so that a field like issue that holds an array value becomes issue_1 and issue_2 if the array contains two items.

  2. Next I will do another loop to get all the unique keys in the array of hashes.

  3. Using the unique keys from step 2, I will do another loop to sort these unique keys using a different array that holds the order the keys should be arranged in.

  4. Finally, another loop to generate the CSV.

So I have iterated over the data 4 times, using Ruby's each/map every time, and the time to complete these loops will increase with data size.

Original data is in the form below:

def data
  [
     {"file"=> ["getty_883231284_200013331818843182490_335833.jpg"], "id" => "60706a8e-882c-45d8-ad5d-ae898b98535f", "date_uploaded" => "2019-12-24", "date_modified" => "2019-12-24", "book_title_1"=>"", "title"=> ["haha"], "edition"=> [""], "issue" => ["nov"], "creator" => ["yes", "some"], "publisher"=> ["Library"], "place_of_publication" => "London, UK"]},

    {"file" => ["getty_883231284_200013331818843182490_335833.jpg"], "id" => "60706a8e-882c-45d8-ad5d-ae898b98535f", "date_uploaded" => "2019-12-24", "date_modified"=>"2019-12-24", "book_title"=> [""], "title" => ["try"], "edition"=> [""], "issue"=> ["dec", 'ten'], "creator"=> ["tako", "bell", 'big mac'], "publisher"=> ["Library"], "place_of_publication" => "NY, USA"}]
end

Remapped data by flattening the arrays and renaming the keys that held them:

def csv_data
  @csv_data = [
     {"file_1"=>"getty_883231284_200013331818843182490_335833.jpg", "id"=>"60706a8e-882c-45d8-ad5d-ae898b98535f", "date_uploaded"=>"2019-12-24", "date_modified"=>"2019-12-24", "book_title_1"=>"", "title_1"=>"haha", "edition_1"=>"", "issue_1"=>"nov", "creator_1"=>"yes", "creator_2"=>"some", "publisher_1"=>"Library", "place_of_publication_1"=>"London, UK"},

    {"file_1"=>"getty_883231284_200013331818843182490_335833.jpg", "id"=>"60706a8e-882c-45d8-ad5d-ae898b98535f", "date_uploaded"=>"2019-12-24", "date_modified"=>"2019-12-24", "book_title_1"=>"", "title_1"=>"try", "edition_1"=>"", "issue_1"=>"dec", "issue_2" => 'ten', "creator_1"=>"tako", "creator_2"=>"bell", 'creator_3' => 'big mac', "publisher_1"=>"Library", "place_of_publication_1"=>"NY, USA"}]

end
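
For completeness, the flattening loop from step 1 (described above but not shown) might look roughly like this. This is a minimal sketch, assuming every field that should receive a _n suffix arrives as an array in the raw data; place_of_publication is a plain string in the sample yet ends up suffixed, so presumably it is array-valued in the real feed:

def csv_data
  @csv_data ||= data.map do |row|
    row.each_with_object({}) do |(key, value), flat|
      if value.is_a?(Array)
        # "issue" => ["dec", "ten"] becomes "issue_1" => "dec", "issue_2" => "ten"
        value.each_with_index { |v, i| flat["#{key}_#{i + 1}"] = v }
      else
        flat[key] = value
      end
    end
  end
end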

Sorting the headers for the above data:

def csv_header

  csv_order = ["id", "edition_1", "date_uploaded",  "creator_1", "creator_2", "creator_3", "book_title_1", "publisher_1", "file_1", "place_of_publication_1", "journal_title_1", "issue_1", "issue_2", "date_modified"]

  sorted_header = []
  all_keys = csv_data.flat_map(&:keys).uniq.compact

  # re-sort by numeric suffix so that e.g. creator_isni_1 comes before creator_isni_2
  all_keys = all_keys.sort_by { |name| [name[/\d+/].to_i, name] }

  # NOTE: select is used here purely for its side effect of appending matching keys
  csv_order.each { |k| all_keys.select { |e| sorted_header << e if e.start_with?(k) } }

  sorted_header.uniq
end
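
As an aside, the nested csv_order.each / select scan above does O(order × keys) work and relies on select only for its side effect. A possible alternative (a sketch, not the question's code) is to precompute a rank hash once and do a single select plus sort_by; this assumes csv_order lists every suffixed column explicitly, as it does here:

def csv_header
  csv_order = ["id", "edition_1", "date_uploaded", "creator_1", "creator_2",
               "creator_3", "book_title_1", "publisher_1", "file_1",
               "place_of_publication_1", "journal_title_1", "issue_1",
               "issue_2", "date_modified"]
  rank = csv_order.each_with_index.to_h # O(1) position lookup per key

  all_keys = csv_data.flat_map(&:keys).uniq.compact
  # keep only keys with a known rank (the original also drops unmatched
  # keys such as title_1) and order them by their position in csv_order
  all_keys.select { |k| rank.key?(k) }.sort_by { |k| rank[k] }
end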

Then generating the CSV, which also involves more loops:

def to_csv
  sorted_headers = csv_header

  CSV.generate(headers: true) do |csv|
    csv << sorted_headers
    csv_data.each do |hash|
      csv << hash.values_at(*sorted_headers)
    end
  end
end
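
Since the export will keep growing, it may also be worth streaming rows to a file instead of building one big string in memory. A hypothetical variant (the method name and path are mine, not from the question):

def write_csv(path = "export.csv")
  sorted_headers = csv_header
  CSV.open(path, "w") do |csv|
    csv << sorted_headers
    # rows are written out as they are produced instead of accumulating a string
    csv_data.each { |hash| csv << hash.values_at(*sorted_headers) }
  end
end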

Upvotes: 0

Views: 148

Answers (1)

wiesion

Reputation: 2455

To be honest, I was more intrigued to see whether I could work out your desired logic without further description than by the programming part alone (though of course I enjoyed that as well; it has been ages since I did any Ruby, so this was a good refresher). Since the mission is not clearly stated, it has to be "distilled" from your description, input data and code.

I think what you should do is keep everything in very basic, lightweight arrays and do the heavy lifting while reading the data in one single pass. I also made the assumption that if a key ends with a number, or if a value is an array, you want it returned as {key}_{n}, even if there's only one value present.
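
For example, under that assumption the rows of the feed map like this (values taken from the question's data):

# "issue"        => ["nov"]          becomes  "issue_1"   => "nov"
# "creator"      => ["yes", "some"]  becomes  "creator_1" => "yes", "creator_2" => "some"
# "book_title_1" => ""               becomes  the array-valued key "book_title"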

So far I came up with this code (logic described in the comments) and a repl demo here:

class CustomData
  # @keys array structure
  # 0: Key
  # 1: Maximum amount of values associated
  # 2: Is an array (Found a {key}_n key in feed,
  #    or value in feed was an array)
  #
  # @data: is a simple array of arrays
  attr_accessor :keys, :data
  CSV_ORDER = %w[
    id edition date_uploaded creator book_title publisher
    file place_of_publication journal_title issue date_modified
  ]

  def initialize(feed)
    @keys = CSV_ORDER.map { |key| [key, 0, false]}
    @data = []
    feed.each do |row|
      new_row = []
      # Sort keys in order to maintain the right order for {key}_{n} values
      row.sort_by { |key, _| key }.each do |key, value|
        is_array = false
        if key =~ /_\d+$/
          # If key ends with a number, extract key
          # and remember it is an array for the output
          key, is_array = key[/^(.*)_\d+$/, 1], true
        end
        if value.is_a? Array
          # If value is an array, even if the key did not end with a number,
          # we remember that for the output
          is_array = true
        else
          value = [value]
        end
        # Find position of key if exists or nil
        key_index = @keys.index { |a| a.first == key }
        if key_index
          # If you could have a combination of _n keys and array values
          # for a key in your feed, you need to change this portion here
          # to account for all previous values, which would add some complexity
          #
          # If current amount of values is greater than the saved one, override
          @keys[key_index][1] = value.length if @keys[key_index][1] < value.length
          @keys[key_index][2] = true if is_array and not @keys[key_index][2]
        else
          # It is a new key in @keys array
          key_index = @keys.length
          @keys << [key, value.length, is_array]
        end
        # Add value array at known key index
        # (will be padded with nil if idx is greater than array size)
        new_row[key_index] = value
      end
      @data << new_row
    end
  end

  def to_csv_data(headers=true)
    result, header, body = [], [], []
    if headers
      @keys.each do |key|
        if key[2]
          # If the key should hold multiple values, build the header string
          key[1].times { |i| header << "#{key[0]}_#{i+1}" }
        else
          # Otherwise it is a singular value and the header goes unmodified
          header << key[0]
        end
      end
      result << header
    end
    @data.each do |row|
      new_row = []
      row.each_with_index do |value, index|
        # Use the value counter from @keys to pad with nil values,
        # if a value is not present
        @keys[index][1].times do |count|
          new_row << value[count]
        end
      end
      body << new_row
    end
    # append rows individually so the result is a flat array of rows
    # that can be fed straight into CSV generation
    result.concat(body)
  end

end
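
A hypothetical usage example (the data method from the question and the CSV plumbing are my additions, not part of the answer itself):

require 'csv'

rows = CustomData.new(data).to_csv_data # header row first, then one row per record
csv_string = CSV.generate do |csv|
  rows.each { |row| csv << row }
end
puts csv_string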

Upvotes: 1
