user1265125

Reputation: 2656

Understanding Hadoop behavior with GZ files

I have the same small JSON file in two separate folders in my S3 bucket, one stored as plain text and one gzipped. I ran the same command with the same mapper on each of the two separately.

NORMAL JSON

$ hadoop jar /home/hadoop/contrib/streaming/hadoop-streaming-1.0.3.jar -Dmapred.reduce.tasks=0 -file ./mapper.py -mapper ./mapper.py -input s3://mybucket/normaltest -output smalltest-output
14/08/28 08:33:53 WARN conf.Configuration: DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
packageJobJar: [./mapper.py, /mnt/var/lib/hadoop/tmp/hadoop-unjar6225144044327095484/] [] /tmp/streamjob6947060448653690043.jar tmpDir=null
14/08/28 08:33:56 INFO mapred.JobClient: Default number of map tasks: null
14/08/28 08:33:56 INFO mapred.JobClient: Setting default number of map tasks based on cluster size to : 160
14/08/28 08:33:56 INFO mapred.JobClient: Default number of reduce tasks: 0
14/08/28 08:33:56 INFO security.ShellBasedUnixGroupsMapping: add hadoop to shell userGroupsCache
14/08/28 08:33:56 INFO mapred.JobClient: Setting group to hadoop
14/08/28 08:33:56 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
14/08/28 08:33:56 WARN lzo.LzoCodec: Could not find build properties file with revision hash
14/08/28 08:33:56 INFO lzo.LzoCodec: Successfully loaded & initialized native-lzo library [hadoop-lzo rev UNKNOWN]
14/08/28 08:33:56 WARN snappy.LoadSnappy: Snappy native library is available
14/08/28 08:33:56 INFO snappy.LoadSnappy: Snappy native library loaded
14/08/28 08:33:58 INFO mapred.FileInputFormat: Total input paths to process : 1
14/08/28 08:33:58 INFO streaming.StreamJob: getLocalDirs(): [/mnt/var/lib/hadoop/mapred]
14/08/28 08:33:58 INFO streaming.StreamJob: Running job: job_201408260907_0053
14/08/28 08:33:58 INFO streaming.StreamJob: To kill this job, run:
14/08/28 08:33:58 INFO streaming.StreamJob: /home/hadoop/bin/hadoop job  -Dmapred.job.tracker=10.165.13.124:9001 -kill job_201408260907_0053
14/08/28 08:33:58 INFO streaming.StreamJob: Tracking URL: http://ip-10-165-13-124.ec2.internal:9100/jobdetails.jsp?jobid=job_201408260907_0053
14/08/28 08:33:59 INFO streaming.StreamJob:  map 0%  reduce 0%
14/08/28 08:34:23 INFO streaming.StreamJob:  map 1%  reduce 0%
14/08/28 08:34:26 INFO streaming.StreamJob:  map 2%  reduce 0%
14/08/28 08:34:29 INFO streaming.StreamJob:  map 9%  reduce 0%
14/08/28 08:34:32 INFO streaming.StreamJob:  map 45%  reduce 0%
14/08/28 08:34:35 INFO streaming.StreamJob:  map 56%  reduce 0%
14/08/28 08:34:36 INFO streaming.StreamJob:  map 57%  reduce 0%
14/08/28 08:34:38 INFO streaming.StreamJob:  map 84%  reduce 0%
14/08/28 08:34:39 INFO streaming.StreamJob:  map 85%  reduce 0%
14/08/28 08:34:41 INFO streaming.StreamJob:  map 99%  reduce 0%
14/08/28 08:34:44 INFO streaming.StreamJob:  map 100%  reduce 0%
14/08/28 08:34:50 INFO streaming.StreamJob:  map 100%  reduce 100%
14/08/28 08:34:50 INFO streaming.StreamJob: Job complete: job_201408260907_0053
14/08/28 08:34:50 INFO streaming.StreamJob: Output: smalltest-output

In smalltest-output, I get several small files, each containing a part of the processed JSON.

GZIPPED JSON

$ hadoop jar /home/hadoop/contrib/streaming/hadoop-streaming-1.0.3.jar -Dmapred.reduce.tasks=0 -file ./mapper.py -mapper ./mapper.py -input s3://weblablatency/gztest -output smalltest-output
14/08/28 08:39:45 WARN conf.Configuration: DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
packageJobJar: [./mapper.py, /mnt/var/lib/hadoop/tmp/hadoop-unjar2539293594337011579/] [] /tmp/streamjob301144784484156113.jar tmpDir=null
14/08/28 08:39:48 INFO mapred.JobClient: Default number of map tasks: null
14/08/28 08:39:48 INFO mapred.JobClient: Setting default number of map tasks based on cluster size to : 160
14/08/28 08:39:48 INFO mapred.JobClient: Default number of reduce tasks: 0
14/08/28 08:39:48 INFO security.ShellBasedUnixGroupsMapping: add hadoop to shell userGroupsCache
14/08/28 08:39:48 INFO mapred.JobClient: Setting group to hadoop
14/08/28 08:39:48 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
14/08/28 08:39:48 WARN lzo.LzoCodec: Could not find build properties file with revision hash
14/08/28 08:39:48 INFO lzo.LzoCodec: Successfully loaded & initialized native-lzo library [hadoop-lzo rev UNKNOWN]
14/08/28 08:39:48 WARN snappy.LoadSnappy: Snappy native library is available
14/08/28 08:39:48 INFO snappy.LoadSnappy: Snappy native library loaded
14/08/28 08:39:50 INFO mapred.FileInputFormat: Total input paths to process : 1
14/08/28 08:39:51 INFO streaming.StreamJob: getLocalDirs(): [/mnt/var/lib/hadoop/mapred]
14/08/28 08:39:51 INFO streaming.StreamJob: Running job: job_201408260907_0055
14/08/28 08:39:51 INFO streaming.StreamJob: To kill this job, run:
14/08/28 08:39:51 INFO streaming.StreamJob: /home/hadoop/bin/hadoop job  -Dmapred.job.tracker=10.165.13.124:9001 -kill job_201408260907_0055
14/08/28 08:39:51 INFO streaming.StreamJob: Tracking URL: http://ip-10-165-13-124.ec2.internal:9100/jobdetails.jsp?jobid=job_201408260907_0055
14/08/28 08:39:52 INFO streaming.StreamJob:  map 0%  reduce 0%
14/08/28 08:40:20 INFO streaming.StreamJob:  map 100%  reduce 0%
14/08/28 08:40:26 INFO streaming.StreamJob:  map 100%  reduce 100%
14/08/28 08:40:26 INFO streaming.StreamJob: Job complete: job_201408260907_0055

In smalltest-output, I get correctly processed output, but as a single file.

Why the difference, and what is happening? Is my job not being distributed properly in the gzip case?

In my actual use case I need to process ~2000 gz files totalling around 4 GB uncompressed, every 4 hours, so I can't afford performance problems caused by compression.

Upvotes: 0

Views: 83

Answers (1)

Clément MATHIEU

Reputation: 3171

Gzip is not splittable. You will find bazillions of articles and questions about this issue, so I won't go into detail. The short version: a gzip stream can only be decompressed starting from its header, so Hadoop cannot cut a .gz file into input splits; the whole file goes to a single mapper, which is why you got a single output file.
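
You can see the constraint outside Hadoop with a few lines of Python (a toy sketch; the generated records just stand in for your JSON data):

import gzip
import zlib

# A bunch of small newline-delimited JSON records, like the input in the question.
data = b"\n".join(b'{"id": %d}' % i for i in range(100000))
compressed = gzip.compress(data)

# Decompressing from the first byte works fine.
assert gzip.decompress(compressed) == data

# Now pretend to be a second mapper that was handed the second half
# of the file as its input split: decompression cannot start there.
split = compressed[len(compressed) // 2:]
try:
    zlib.decompressobj(wbits=zlib.MAX_WBITS | 16).decompress(split)
except zlib.error as err:
    print("cannot start decompression mid-stream:", err)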

Your options are:

  • Don't use gzip: either leave the data uncompressed or use a splittable compression format (bzip2, for example, or LZO with an index).
  • Use a hack that makes gzip splittable, like https://github.com/nielsbasjes/splittablegzip (see the sketch below). Each mapper will still have to read the file from the beginning, so it trades redundant read work for parallelism. Read the project's documentation to learn more.
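
As a sketch of what that looks like for a streaming job (untested; the codec class name comes from that project's README, the jar version is a placeholder, and the property names are Hadoop 1.x conventions, so verify them against the documentation — note also that io.compression.codecs replaces the default codec list, so you may need to list the default codecs alongside it):

$ hadoop jar /home/hadoop/contrib/streaming/hadoop-streaming-1.0.3.jar \
    -libjars splittablegzip-<version>.jar \
    -D io.compression.codecs=nl.basjes.hadoop.io.compress.SplittableGzipCodec \
    -D mapred.max.split.size=33554432 \
    -D mapred.reduce.tasks=0 \
    -file ./mapper.py -mapper ./mapper.py \
    -input s3://mybucket/gztest -output smalltest-output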

It depends on what you do, but for most processing, 4 GB of data is nothing. I would make sure that I really need an elephant like Hadoop for my use case. It is scalable but complex, painful to work with, and usually slow for small data sets.
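
For a sense of scale, ~2000 gzip files totalling 4 GB uncompressed is well within reach of one machine and the Python standard library; each .gz file decompresses independently, so file-level parallelism comes for free. A minimal sketch (the glob pattern and the per-record logic are placeholders for your actual layout and whatever mapper.py does, and it assumes newline-delimited JSON):

import glob
import gzip
import json
from multiprocessing import Pool

def process_file(path):
    # Stand-in for the logic in mapper.py: parse each JSON record
    # and collect one result per line.
    results = []
    with gzip.open(path, "rt") as fh:
        for line in fh:
            record = json.loads(line)
            results.append(record.get("id"))  # placeholder transformation
    return results

if __name__ == "__main__":
    # Hypothetical local copy of the S3 objects.
    paths = glob.glob("/data/gztest/*.gz")
    with Pool() as pool:
        # One worker per CPU core, one file per task.
        for results in pool.imap_unordered(process_file, paths):
            for item in results:
                print(item)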

Upvotes: 1
