frazman

Reputation: 33293

merging two files in hadoop

I am a newbie to the Hadoop framework, so it would help me if someone could guide me through this. I have two types of files:

dirA/ --> file_a, file_b, file_c

dirB/ --> another_file_a, another_file_b...

Files in directory A contain transaction information.

So something like:

   id, time_stamp
   1,  some_time_stamp
   2,  some_other_time_stamp
   1,  another_time_stamp

This kind of information is scattered across all the files in dirA. The first thing to do is: given a time frame (let's say last week), find all the unique ids present within that time frame.

Then save that result to a file.

Now, the dirB files contain the address information. Something like:

    id, address, zip code
    1,  fooadd,  12345
    and so on

I take all the unique ids output by the first step as input, then look up their address and zip code.

Basically, the final output is like a SQL join:

Find all the unique ids within a time frame, then merge in the address information.
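To pin down the intended semantics before writing any Hadoop code, here is a plain-Python sketch of the filter-deduplicate-join logic described above. All rows, timestamps, and addresses are made-up illustrations; the real data lives in the files under dirA and dirB.

```python
# Made-up sample data standing in for the dirA and dirB files.
transactions = [
    (1, 100),  # (id, time_stamp) rows gathered from dirA
    (2, 205),
    (1, 310),
    (3, 999),
]
addresses = {
    1: ("fooadd", "12345"),  # id -> (address, zip code) from dirB
    2: ("baradd", "23456"),
    3: ("bazadd", "34567"),
}

START_TIME, END_TIME = 100, 400  # the chosen time frame

# Step 1: unique ids whose time stamp falls inside the window.
unique_ids = {i for i, t in transactions if START_TIME <= t <= END_TIME}

# Step 2: join those ids back against the address data.
result = [(i, *addresses[i]) for i in sorted(unique_ids)]
print(result)  # [(1, 'fooadd', '12345'), (2, 'baradd', '23456')]
```

This is the same shape as an SQL inner join between the deduplicated, time-filtered transaction ids and the address table.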

I would greatly appreciate any help. Thanks

Upvotes: 0

Views: 1308

Answers (1)

Joe K

Reputation: 18434

You tagged this as pig, so I'm guessing you're looking to use it to accomplish this? If so, I think that's a great choice - this is really easy in Pig!

-- PigStorage takes a single-character delimiter, so split on ',' alone.
times = LOAD 'dirA' USING PigStorage(',') AS (id:int, time:long);
addresses = LOAD 'dirB' USING PigStorage(',') AS (id:int, address:chararray, zipcode:chararray);

-- Keep only the rows whose time stamp falls inside the window.
filtered_times = FILTER times BY (time >= $START_TIME) AND (time <= $END_TIME);

-- Project down to the id column and deduplicate.
just_ids = FOREACH filtered_times GENERATE id;
distinct_ids = DISTINCT just_ids;

-- Join the unique ids back against the address data.
result = JOIN distinct_ids BY id, addresses BY id;

Where $START_TIME and $END_TIME are parameters you can pass to the script on the command line, for example: pig -param START_TIME=1234 -param END_TIME=5678 myscript.pig

Upvotes: 1
