Jeff Storey

Reputation: 57222

Smaller scale Java distributed programming

I'm learning a bit more about Hadoop and its applications, and I understand it is geared toward massive datasets and large files. Let's say I had an application in which I was processing a relatively small number of files (say 100k), which isn't a huge number for something like Hadoop/HDFS. However, it does take a significant amount of time to run on a single machine, so I'd like to distribute the processing.

The problem can be broken down into a map-reduce style problem (e.g. each of the files can be processed independently and then I can aggregate the results). I'm open to using infrastructure such as Amazon EC2, but I'm not sure which technologies I should be exploring for actually aggregating the results of the processing. It seems like Hadoop might be a bit of overkill here.
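For context, the single-machine version is shaped roughly like the sketch below (processFile and the Long result type are just placeholders for the real per-file work):

    import java.nio.file.Path;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    // Rough shape of the current single-machine job: process each file
    // independently, then aggregate the per-file results.
    public class SingleMachineRunner {

        // Placeholder for the real (expensive) per-file work.
        static Long processFile(Path file) {
            return 1L;
        }

        static long run(List<Path> files) throws InterruptedException, ExecutionException {
            ExecutorService pool = Executors.newFixedThreadPool(
                    Runtime.getRuntime().availableProcessors());
            List<Future<Long>> partials = new ArrayList<>();
            for (Path f : files) {
                partials.add(pool.submit(() -> processFile(f))); // "map": one task per file
            }
            long total = 0;
            for (Future<Long> p : partials) {
                total += p.get();                                // "reduce": aggregate results
            }
            pool.shutdown();
            return total;
        }
    }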

Can anyone provide guidance on this type of problem?

Upvotes: 1

Views: 239

Answers (1)

Chris Shain

Reputation: 51369

First off, you may want to reconsider your assumption that you can't combine files. Even images can be combined; you just need to figure out how to do that in a way that allows you to break them out again in your mappers. Combining them with some sort of sentinel value or magic number between them might make it possible to turn them into one giant file.
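As a rough sketch of that sentinel/magic-number idea (the record layout here, a magic int followed by name length, name bytes, payload length, and payload bytes, is just an assumption for illustration; your mappers would walk the same layout to split the records back out):

    import java.io.BufferedOutputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;

    // Bundles many small files into one big file, separating records with a
    // magic number so a reader can split them apart again.
    public class FileBundler {
        private static final int MAGIC = 0xCAFEF11E; // arbitrary marker between records

        public static void bundle(List<Path> inputs, Path output) throws IOException {
            try (DataOutputStream out = new DataOutputStream(
                    new BufferedOutputStream(Files.newOutputStream(output)))) {
                for (Path p : inputs) {
                    byte[] name = p.getFileName().toString().getBytes(StandardCharsets.UTF_8);
                    byte[] data = Files.readAllBytes(p);
                    out.writeInt(MAGIC);       // record separator / sanity check
                    out.writeInt(name.length); // keep the original file name recoverable
                    out.write(name);
                    out.writeInt(data.length); // payload length lets the reader skip ahead
                    out.write(data);
                }
            }
        }
    }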

Other options include HBase, where you could store the images in cells. HBase also has a built-in TableMapper and TableReducer, and can store the results of your processing alongside the raw data in a semi-structured way.
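If you go the HBase route, a mapper over a table of images might look roughly like the sketch below; TableMapper and Result are the real HBase MapReduce classes, but the table layout (an "images" table with the raw bytes in a "d:bytes" cell) and the per-image measurement are made up for illustration:

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableMapper;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;

    // Reads one image per row from a hypothetical "images" table and emits a
    // per-image result keyed by the row key.
    public class ImageTableMapper extends TableMapper<Text, LongWritable> {
        private static final byte[] FAMILY = Bytes.toBytes("d");
        private static final byte[] QUALIFIER = Bytes.toBytes("bytes");

        @Override
        protected void map(ImmutableBytesWritable rowKey, Result row, Context context)
                throws IOException, InterruptedException {
            byte[] image = row.getValue(FAMILY, QUALIFIER);      // raw image stored in the cell
            long measurement = image == null ? 0 : image.length; // stand-in for real processing
            context.write(new Text(Bytes.toString(rowKey.get())), new LongWritable(measurement));
        }
    }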

EDIT: As for the "is Hadoop overkill" question, you need to consider the following:

  1. Hadoop adds at least one machine of overhead (the HDFS NameNode). You typically don't want to store data or run jobs on that machine, since it is a single point of failure (SPOF).

  2. Hadoop is best suited for processing data in batch, with relatively high latency. As @Raihan mentions, there are several other FOSS distributed compute architectures that may serve your needs better if you need real-time or low-latency results.

  3. 100k files isn't all that few. Even if they are only 100KB each, that's 10GB of data.

  4. Other than the above, Hadoop is a relatively low-overhead way of approaching distributed computing problems. It has a huge, helpful community behind it, so you can get help quickly if you need it. And it is focused on running on cheap hardware and a free OS, so there really isn't any significant overhead.

In short, I'd try it before you discard it for something else.
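If you do give it a shot, a bare-bones job driver looks roughly like the following; the mapper and reducer here are trivial placeholders for your per-file processing and aggregation:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class ProcessFilesJob {

        // Placeholder mapper: emits one count per input record; real per-file
        // processing would go here.
        public static class FileMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
            @Override
            protected void map(LongWritable offset, Text line, Context ctx)
                    throws IOException, InterruptedException {
                ctx.write(new Text("count"), new LongWritable(1));
            }
        }

        // Placeholder reducer: sums the per-record results into one aggregate.
        public static class SumReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
            @Override
            protected void reduce(Text key, Iterable<LongWritable> values, Context ctx)
                    throws IOException, InterruptedException {
                long sum = 0;
                for (LongWritable v : values) {
                    sum += v.get();
                }
                ctx.write(key, new LongWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "process-files");
            job.setJarByClass(ProcessFilesJob.class);
            job.setMapperClass(FileMapper.class);
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(LongWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));   // input directory
            FileOutputFormat.setOutputPath(job, new Path(args[1])); // output directory
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }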

Upvotes: 1
