Reputation: 2807
I have an existing Python program that has a sequence of operations that goes something like this:
java -jar X.jar <folder_name>

This opens every file in the folder, performs some operation on each, and writes an equal number of transformed files into another folder.

java -jar Y.jar <folder_name>

This creates multiple files of one line each, which are then merged into a single file using a merge function.

I'd like to make use of Hadoop to speed up operation Y, as it takes very long to complete when there are a) a larger number of files or b) large input files to be operated on.
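For reference, the existing driver is roughly this shape (a simplified sketch; the folder and file names here are made up):

import glob
import os
import subprocess

def merge(folder, merged_path):
    # Concatenate Y's one-line output files into a single file.
    with open(merged_path, 'w') as out:
        for name in sorted(glob.glob(os.path.join(folder, '*'))):
            with open(name) as f:
                out.write(f.read())

# Operation X: transform every file in the input folder.
subprocess.check_call(['java', '-jar', 'X.jar', 'input_folder'])
# Operation Y: the slow part I want to speed up.
subprocess.check_call(['java', '-jar', 'Y.jar', 'transformed_folder'])
merge('y_output_folder', 'merged.txt')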
What I'd like to know is if it is a good idea to go with Hadoop in the first place to do something of this nature and if threads would make more sense in this case. Bear in mind that X and Y are things that cannot be replaced or changed in any way.
I came up with this idea:
I would like to know if this makes sense at all and, especially given that the mapper expects a (key, value) pair, whether I would even have a key-value pair in this scenario.
I know this sounds like a project, and that's because it is, but I'm not looking for code, just some guidance on whether this would even work and, if it would, the right way to go about it should my proposed solution not be accurate (enough).
Thank you!
Upvotes: 0
Views: 1205
Reputation: 4832
You absolutely can use the Hadoop MapReduce framework to complete your work, but whether it's a good idea comes down to "it depends". It depends on the number and sizes of the files you want to process.
Keep in mind that HDFS is not very good at dealing with small files; it can be a disaster for the NameNode if you have a large number (say 10 million) of small files (less than 1 KB each). On the other hand, if the files are very large but only a few need to be processed, it is not a great idea to just wrap step #2 directly in a mapper, because the job won't be spread widely and evenly. (In this situation I guess the key-value pair can only be "file no. - file content" or "file name - file content", given that you mentioned X can't be changed in any way. Actually, "line no. - line" would be more suitable.)
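To make the "file name" route concrete, a streaming mapper could stage each file into its own folder and run Y on it. This is only a sketch under assumptions I'm making here: the job input is a text file listing one input file path per line, each path is readable from the mapper's machine, and Y.jar writes its one-line output files into the folder it is given.

#!/usr/bin/env python
# mapper.py - hypothetical streaming mapper; see assumptions above.
import os
import shutil
import subprocess
import sys
import tempfile

for line in sys.stdin:
    path = line.strip()
    if not path:
        continue
    workdir = tempfile.mkdtemp()
    try:
        # Give Y.jar a folder containing just this one file.
        shutil.copy(path, workdir)
        subprocess.check_call(['java', '-jar', 'Y.jar', workdir])
        # Assumption: Y drops its one-line result files into the same
        # folder. Emit "input path <TAB> line" so a single reducer can
        # concatenate everything, replacing the old merge function.
        for name in os.listdir(workdir):
            if name == os.path.basename(path):
                continue  # skip the input file itself
            with open(os.path.join(workdir, name)) as f:
                sys.stdout.write('%s\t%s\n' % (path, f.read().strip()))
    finally:
        shutil.rmtree(workdir)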
BTW, there are two ways to use the Hadoop MapReduce framework. One is to write the mapper/reducer in Java, compile them into a jar, and then run the MapReduce job with hadoop jar your_job.jar. The other is Hadoop Streaming, which lets you write the mapper/reducer in Python.
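If you take the streaming route, launching the job looks something like this (the location of the streaming jar varies between Hadoop versions and distributions, so treat the path as a placeholder):

hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming-*.jar \
    -input file_list.txt \
    -output y_out \
    -mapper mapper.py \
    -file mapper.py

Here file_list.txt would be the list of input file paths for the mapper sketched above, and -file ships mapper.py out to the cluster nodes.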
Upvotes: 2