Arun Vasu

Reputation: 367

Sorting a huge text file using hadoop

Is it possible to sort a huge text file lexicographically using a MapReduce job that has only map tasks and zero reduce tasks?

The records of the text file are separated by newline characters, and the file is around 1 terabyte in size.

It would be great if anyone could suggest a way to sort this huge file.

Upvotes: 1

Views: 5893

Answers (3)

fjxx

Reputation: 945

Sorting in Hadoop is done using a Partitioner: the shuffle sorts keys within each partition, and a custom partitioner controls how keys are distributed across reducers, so you can sort according to your business logic. Please see this link on writing a custom partitioner http://jugnu-life.blogspot.com/2012/05/custom-partitioner-in-hadoop.html
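For illustration, here is a minimal sketch of what such a partitioner could look like (the class name and the byte-range split are made up for this example; real data usually needs sampled boundaries, e.g. via TotalOrderPartitioner, to avoid skewed partitions):

    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Partitioner;

    // Sketch of a custom partitioner: routes keys to reducers by their
    // first byte, so reducer 0 gets the lexicographically smallest range,
    // reducer 1 the next, and so on. With ordered partitions and each
    // partition sorted by the shuffle, concatenating the reducer outputs
    // part-r-00000, part-r-00001, ... yields a globally sorted result.
    public class LexicographicPartitioner extends Partitioner<Text, Text> {
        @Override
        public int getPartition(Text key, Text value, int numPartitions) {
            if (key.getLength() == 0) {
                return 0;
            }
            // Spread the first byte (0..255) evenly over the partitions.
            int firstByte = key.getBytes()[0] & 0xFF;
            return firstByte * numPartitions / 256;
        }
    }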

I do not advocate sorting terabytes of data with the plain-vanilla Linux sort command: you would have to split the data into chunks that fit in memory and merge them to handle files this large: Parallel sort in Linux

It's better and more expedient to use Hadoop's merge sort instead: Hadoop MergeSort

You can look at some Hadoop sorting benchmarks and analysis from the Yahoo Hadoop team (now Hortonworks) here: Hadoop Sort benchmarks

Upvotes: 0

Arun Vasu

Reputation: 367

I used a TreeSet in the map method to hold all the data of the input split and then persisted it. Finally I got the sorted file!
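For reference, a minimal sketch of that approach might look like the following (the class name is hypothetical). Note that this sorts each input split independently, so a single split, or a later merge of the per-split outputs, is needed for one globally sorted file; also, a TreeSet silently drops duplicate lines, and the whole split must fit in the mapper's heap:

    import java.io.IOException;
    import java.util.TreeSet;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Sketch of a map-only sort: buffer every record of the split in a
    // TreeSet (kept sorted on insert), then emit in order from cleanup().
    // Caveats: duplicates collapse (TreeSet is a set), and the entire
    // split must fit in memory.
    public class SortingMapper
            extends Mapper<LongWritable, Text, Text, NullWritable> {

        private final TreeSet<String> buffer = new TreeSet<>();

        @Override
        protected void map(LongWritable key, Text value, Context context) {
            // Copy out of the reused Text object before buffering.
            buffer.add(value.toString());
        }

        @Override
        protected void cleanup(Context context)
                throws IOException, InterruptedException {
            Text out = new Text();
            for (String line : buffer) {
                out.set(line);
                context.write(out, NullWritable.get());
            }
        }
    }

The driver would set job.setNumReduceTasks(0) so the mapper output is written directly, with no reduce phase.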

Upvotes: 3

Amar

Reputation: 12010

There is in fact a sort example that is bundled with Hadoop. You can look at how the example code works by examining the class org.apache.hadoop.examples.Sort. This itself works pretty well, but if you want more flexibility with your sort, you can check this out.
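As a rough sketch of what that example does under the hood (identity map and reduce, with the framework's sort/shuffle doing the actual work), a driver might look like the following. The class name, paths, reducer count, and sampler parameters are placeholders, and the bundled example actually reads SequenceFiles by default rather than the text input assumed here:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.mapreduce.lib.partition.InputSampler;
    import org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner;

    // Sketch of a shuffle-based sort driver: identity map and reduce, so
    // the framework's sort/shuffle does all the work. TotalOrderPartitioner
    // plus an input sampler keeps the reducer key ranges ordered, so the
    // concatenated reducer outputs are globally sorted.
    public class SortDriver {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = Job.getInstance(conf, "sort");
            job.setJarByClass(SortDriver.class);

            // Identity mapper/reducer: records pass through unchanged.
            job.setMapperClass(Mapper.class);
            job.setReducerClass(Reducer.class);

            job.setInputFormatClass(KeyValueTextInputFormat.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(Text.class);
            job.setNumReduceTasks(4); // placeholder reducer count

            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));

            // Sample the input to pick partition boundaries, then route
            // each key range to its own reducer in order.
            job.setPartitionerClass(TotalOrderPartitioner.class);
            InputSampler.writePartitionFile(
                    job, new InputSampler.RandomSampler<Text, Text>(0.01, 1000));

            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }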

Upvotes: 2
