lvella

Reputation: 419

Checksum Exception when reading from or copying to HDFS in Apache Hadoop

I am trying to implement a parallelized algorithm using Apache Hadoop, but I am facing some issues when trying to transfer a file from the local file system to HDFS. A checksum exception is thrown when trying to read from or transfer a file.

The strange thing is that some files are copied successfully while others are not (I tried with 2 files, one slightly bigger than the other, though both are small in size). Another observation I have made is that the Java FileSystem.getFileChecksum method returns null in all cases.

A slight background on what I am trying to achieve: I am trying to write a file to HDFS, to be able to use it as a distributed cache for the MapReduce job that I have written.

I have also tried the hadoop fs -copyFromLocal command from the terminal, and the result is exactly the same behaviour as when it is done through the Java code.

I have looked all over the web, including other questions here on Stack Overflow, but I haven't managed to solve the issue. Please be aware that I am still quite new to Hadoop, so any help is greatly appreciated.

I am attaching the stack trace below which shows the exceptions being thrown. (In this case I have posted the stack trace resulting from the hadoop fs -copyFromLocal command from terminal)

name@ubuntu:~/Desktop/hadoop2$ bin/hadoop fs -copyFromLocal ~/Desktop/dtlScaleData/attr.txt /tmp/hadoop-name/dfs/data/attr2.txt

    13/03/15 15:02:51 INFO util.NativeCodeLoader: Loaded the native-hadoop library
    13/03/15 15:02:51 INFO fs.FSInputChecker: Found checksum error: b[0, 0]=
    org.apache.hadoop.fs.ChecksumException: Checksum error: /home/name/Desktop/dtlScaleData/attr.txt at 0
        at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.readChunk(ChecksumFileSystem.java:219)
        at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
        at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
        at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
        at java.io.DataInputStream.read(DataInputStream.java:100)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:68)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:230)
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:176)
        at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1183)
        at org.apache.hadoop.fs.FsShell.copyFromLocal(FsShell.java:130)
        at org.apache.hadoop.fs.FsShell.run(FsShell.java:1762)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
        at org.apache.hadoop.fs.FsShell.main(FsShell.java:1895)
    copyFromLocal: Checksum error: /home/name/Desktop/dtlScaleData/attr.txt at 0

Upvotes: 25

Views: 34619

Answers (5)

Narendra Parmar

Reputation: 1409

I faced the same problem and solved it by removing the .crc files.
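A minimal sketch of what "removing the .crc files" looks like in practice. Hadoop's local checksum filesystem stores a hidden companion file named `.<filename>.crc` next to each file; the scratch directory and file names below are my own additions so the commands are safe to run anywhere — point the `find` at the directory that actually holds your source file.

```shell
# Scratch directory standing in for the directory containing your file
workdir=$(mktemp -d)
touch "$workdir/attr.txt"
touch "$workdir/.attr.txt.crc"   # hidden checksum companion left behind by Hadoop

# Delete every stale local checksum file under the directory,
# leaving the data files themselves untouched
find "$workdir" -name '.*.crc' -type f -delete

ls -a "$workdir"
```

After the stale `.crc` files are gone, retry the copy to HDFS.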

Upvotes: 10

Akash Agrawal

Reputation: 4927

I got the exact same problem and didn't find any solution. Since this was my first Hadoop experience, I could not follow some of the instructions on the internet. I solved this problem by formatting my namenode.

hadoop namenode -format

Upvotes: -4

Kiran teja Avvaru

Reputation: 364

The CRC file holds a checksum for a particular block of data. The entire dataset is split into blocks, and each block stores its metadata along with a CRC file inside the /hdfs/data/dfs/data folder. If someone modifies the CRC files, the stored and the currently computed checksums will mismatch, and that causes the error. The best way to fix this error is to overwrite the metadata file along with the CRC file.

Upvotes: 1

lvella

Reputation: 419

Ok so I managed to solve this issue and I'm writing the answer here just in case someone else encounters the same problem.

What I did was simply create a new file and copied all the contents from the problematic file.

From what I can presume, it looks like some .crc file is being created and attached to that particular file, so by trying with another file, another CRC check is carried out. Another reason could be that I named the file attr.txt, which could conflict with some other resource. Maybe someone could expand on my answer even more, since I am not 100% sure of the technical details and these are just my observations.
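The workaround above can be sketched as follows. Copying the bytes into a file with a new name sidesteps the problem because no stale `.<newname>.crc` companion exists for Hadoop to verify against. The scratch directory and contents are illustrative, not from the original answer:

```shell
workdir=$(mktemp -d)
printf 'some data\n' > "$workdir/attr.txt"
touch "$workdir/.attr.txt.crc"   # stale checksum left from an earlier download

# Copy the contents into a freshly named file; there is no .attr2.txt.crc,
# so the local checksum filesystem has nothing stale to compare against
cat "$workdir/attr.txt" > "$workdir/attr2.txt"

ls -a "$workdir"
```

The new file can then be uploaded with hadoop fs -copyFromLocal as in the question.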

Upvotes: 1

Charles Menguy

Reputation: 41438

You are probably hitting the bug described in HADOOP-7199. When you download a file with copyToLocal, it also copies a .crc file into the same directory, so if you modify your file and then try to do copyFromLocal, it will compute a checksum of your new file, compare it with your local .crc file, and fail with a non-descriptive error message.

To fix it, check whether you have this .crc file; if you do, just remove it and try again.
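A self-contained sketch of the check-and-remove step. The file names mirror the question (attr.txt and its hidden companion .attr.txt.crc), but the scratch directory is my own addition so the commands can be run without touching real data; the hadoop retry at the end is left as a comment since it depends on your cluster:

```shell
srcdir=$(mktemp -d)     # stands in for ~/Desktop/dtlScaleData
printf 'attribute data\n' > "$srcdir/attr.txt"
touch "$srcdir/.attr.txt.crc"   # left behind by an earlier copyToLocal

ls -a "$srcdir"                 # reveals the hidden .attr.txt.crc
rm -f "$srcdir/.attr.txt.crc"   # drop the stale checksum file

# then retry the upload, e.g.:
# bin/hadoop fs -copyFromLocal "$srcdir/attr.txt" /tmp/hadoop-name/dfs/data/attr2.txt
```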

Upvotes: 70
