tnk_peka

Reputation: 1535

Native Snappy-compressed data emitted by Hadoop cannot be decompressed with snappy-java

In Spark, after some processing, I store the result to a file using the Snappy codec with this simple code:

 data.saveAsTextFile("/data/2014-11-29",classOf[org.apache.hadoop.io.compress.SnappyCodec])

After that, when I use Spark to read files from this folder, everything works perfectly. But today I tried to use snappy-java (version 1.1.1.2) on my PC to decompress a file from the result folder (the file is one of the files from this folder, downloaded to my PC).
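(For context, the Spark read works because Hadoop's TextInputFormat recognizes the .snappy file extension and applies SnappyCodec transparently on read. A minimal Java sketch of that read path; the local context setup here is hypothetical:)

    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    // Hypothetical local context; on the cluster the existing SparkContext is used.
    JavaSparkContext sc = new JavaSparkContext("local", "read-snappy");
    // TextInputFormat selects SnappyCodec from the .snappy extension and
    // decompresses each part file transparently.
    JavaRDD<String> lines = sc.textFile("/data/2014-11-29");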

Maven dependency:

<dependency>
    <groupId>org.xerial.snappy</groupId>
    <artifactId>snappy-java</artifactId>
    <version>1.1.1.2</version>
</dependency>

I use this code to decompress:

    import java.io.File;
    import java.io.FileOutputStream;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import org.xerial.snappy.Snappy;

    File fileIn = new File("E:\\dt\\part-00000.snappy");
    File fileOut = new File("E:\\dt\\adv1417971604684.dat");
    try (FileOutputStream fos = new FileOutputStream(fileOut, true)) {
        byte[] fileBytes = Files.readAllBytes(Paths.get(fileIn.getPath()));
        byte[] fileBytesOut = Snappy.uncompress(fileBytes); // throws here
        fos.write(fileBytesOut);
    }

But I immediately get this error:

    java.io.IOException: FAILED_TO_UNCOMPRESS(5)
        at org.xerial.snappy.SnappyNative.throw_error(SnappyNative.java:84)
        at org.xerial.snappy.SnappyNative.rawUncompress(Native Method)
        at org.xerial.snappy.Snappy.rawUncompress(Snappy.java:444)
        at org.xerial.snappy.Snappy.uncompress(Snappy.java:480)
        at org.xerial.snappy.Snappy.uncompress(Snappy.java:456)
        ...

On the Spark cluster we use:

Spark 1.1.0 and Hadoop 2.5.1 (with native Hadoop Snappy)

Here is the result when I run hadoop checknative -a:

    14/12/09 16:16:57 INFO bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library system-native
    14/12/09 16:16:57 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
    Native library checking:
    hadoop: true /usr/local/hadoop/hadoop2.5.1/lib/native/libhadoop.so
    zlib:   true /lib64/libz.so.1
    snappy: true /usr/local/hadoop/hadoop2.5.1/lib/native/libsnappy.so.1
    lz4:    true revision:99
    bzip2:  true /lib64/libbz2.so.1

I downloaded and built native Snappy from https://code.google.com/p/snappy/ and the source from https://drive.google.com/file/d/0B0xs9kK-b5nMOWIxWGJhMXd6aGs/edit?usp=sharing

Could someone please explain this strange error? Is there a difference between data compressed with Hadoop's native Snappy and data compressed with java-snappy?

Upvotes: 2

Views: 2484

Answers (1)

leo

Reputation: 161

I am the developer of snappy-java. Hadoop's SnappyCodec does not exactly follow Snappy's format specification: https://code.google.com/p/snappy/source/browse/trunk/format_description.txt

SnappyCodec in Hadoop extends this format to compress large data streams. Data is split into blocks (via BlockCompressionStream), and each block has its own header followed by the compressed data. To read the compressed data with the Snappy.uncompress method, you need to extract each block and remove its header, as sketched below.
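Here is a minimal sketch of such a reader, assuming Hadoop's BlockCompressorStream layout: a 4-byte big-endian uncompressed block length, followed by one or more pairs of (4-byte big-endian compressed-chunk length, chunk bytes). The file paths and class name are hypothetical:

    import java.io.DataInputStream;
    import java.io.EOFException;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import org.xerial.snappy.Snappy;

    // Hypothetical reader for Hadoop BlockCompressorStream output
    // (framing layout as assumed above).
    public class HadoopSnappyReader {
        public static void main(String[] args) throws Exception {
            try (DataInputStream in = new DataInputStream(
                     new FileInputStream("E:\\dt\\part-00000.snappy"));
                 FileOutputStream out = new FileOutputStream("E:\\dt\\part-00000.txt")) {
                while (true) {
                    int blockLen;
                    try {
                        blockLen = in.readInt();      // uncompressed size of the next block
                    } catch (EOFException eof) {
                        break;                        // no more blocks
                    }
                    int produced = 0;
                    while (produced < blockLen) {
                        int chunkLen = in.readInt();  // compressed size of the next chunk
                        byte[] chunk = new byte[chunkLen];
                        in.readFully(chunk);
                        byte[] raw = Snappy.uncompress(chunk); // raw Snappy data, no framing
                        out.write(raw);
                        produced += raw.length;
                    }
                }
            }
        }
    }

Alternatively, if the Hadoop libraries are on your classpath, you can let SnappyCodec itself create the decompressing input stream instead of parsing the framing by hand.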

Upvotes: 5
