Steven Owens

Reputation: 21

Getting java.net.SocketTimeoutException when trying to run a Hadoop MapReduce job on a fresh install of Hortonworks

I have a fresh install of Hortonworks version 2.3_1 for Oracle VirtualBox, and I get a java.net.SocketTimeoutException whenever I try to run a MapReduce job. I changed nothing other than the memory and the cores available to the VM.

Full text of the run:

WARNING: Use "yarn jar" to launch YARN applications.  
15/09/01 01:15:17 INFO impl.TimelineClientImpl: Timeline service address: http://sandbox.hortonworks.com:8188/ws/v1/timeline/  
15/09/01 01:15:20 INFO client.RMProxy: Connecting to ResourceManager at sandbox.hortonworks.com/10.0.2.15:8050  
15/09/01 01:16:19 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.  
15/09/01 01:18:09 WARN hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block BP-601678901-10.0.2.15-1439987491556:blk_1073742292_1499  
java.net.SocketTimeoutException: 65000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.0.2.15:52924 remote=/10.0.2.15:50010]  
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)  
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)  
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)  
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)  
        at java.io.FilterInputStream.read(FilterInputStream.java:83)  
        at java.io.FilterInputStream.read(FilterInputStream.java:83)  
        at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2280)  
        at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)  
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:749)  
15/09/01 01:18:11 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/root/.staging/job_1441069639378_0001  
Exception in thread "main" java.io.IOException: All datanodes DatanodeInfoWithStorage[10.0.2.15:50010,DS-56099a5f-3cb3-426e-8e1a-ff3b53df9bf2,DISK] are bad. Aborting...  
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1117)  
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:909)  
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:412)  

Full name of the .ova file I am using: Sandbox_HDP_2.3_1_virtualbox.ova

My host is a Windows 7 Home Premium machine with eight logical processors (four hyperthreaded cores, I think).

Upvotes: 0

Views: 1431

Answers (1)

Steven Owens

Reputation: 21

The problem was exactly what it seemed: a timeout error. I fixed it by going to the Hadoop config folder and raising all the timeouts as well as the number of retries (although, judging from the log, the retries didn't come into play), and by stopping unnecessary services on both the host and guest operating systems.
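For reference, the timeout-related settings live in hdfs-site.xml in that config folder. A rough sketch of the kind of changes I mean; the property names are standard HDFS client/datanode settings, but the values shown are just illustrative, not recommendations:

```xml
<!-- hdfs-site.xml: raise HDFS socket timeouts and write retries.
     Values are examples only; tune them to your VM. -->
<property>
  <!-- Client read timeout; the 65000 ms in the log comes from this
       base timeout plus per-datanode padding -->
  <name>dfs.client.socket-timeout</name>
  <value>300000</value>
</property>
<property>
  <!-- Datanode/client write timeout -->
  <name>dfs.datanode.socket.write.timeout</name>
  <value>600000</value>
</property>
<property>
  <!-- Number of retries when writing a block (default is 3) -->
  <name>dfs.client.block.write.retries</name>
  <value>10</value>
</property>
```

After editing, restart HDFS (or the whole sandbox) so the new values take effect.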

Thanks, sunrise76; one of those issues pointed me to the config folder.

Upvotes: 1

Related Questions