Free Man

Reputation: 195

java exception: No FileSystem for scheme

The code below copies data from my local machine to HDFS:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Load the cluster configuration so FileSystem.get() can resolve fs.defaultFS
Configuration conf = new Configuration();
conf.addResource(new Path("/etc/hadoop/conf/core-site.xml"));
conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml"));

FileSystem fs = FileSystem.get(conf);

// Move the local file into HDFS
fs.moveFromLocalFile(new Path("/path/to/file"), new Path("/path/to/hdfs/"));

When I run this in Eclipse, it works perfectly. However, after compiling it to a JAR and running it standalone with:

nohup java -cp "Test.jar" Test &

I get the error below:

Exception in thread "main" java.io.IOException: No FileSystem for scheme: hdfs
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2584)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:169)
at Test.main(Test.java:37)

Upvotes: 2

Views: 4875

Answers (2)

Free Man

Reputation: 195

This was a classpath issue. The common approach for adding to the classpath is:

export CLASSPATH=/usr/lib/hadoop/client-0.20/*

Unfortunately, this didn't work for me. What did work was adding the directory containing all the Hadoop client jars directly to the nohup java command:

nohup java -cp "/usr/lib/hadoop/client-0.20/*:Test.jar" Test & 

Upvotes: 0

Alain O'Dea

Reputation: 21686

Given that Test.jar is a fat JAR (including the dependencies), something is going wrong with the registration of the protocol handlers.

To override this, if you know which package in Hadoop provides the handler, do something like this (a random, and very likely wrong, guess):

nohup java -cp Test.jar -Djava.protocol.handler.pkgs=org.apache.hadoop.fs Test &

That will work if org.apache.hadoop.fs.Handler exists and extends java.net.URLStreamHandler.

This mechanism is described in more detail in the JavaDocs for java.net.URL.
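For reference, the convention is that the JVM looks for a class named &lt;package&gt;.&lt;scheme&gt;.Handler, taking &lt;package&gt; from the pipe-separated java.protocol.handler.pkgs property. A minimal sketch of such a handler, using a made-up package (my.handlers, not anything that ships with Hadoop):

// File: my/handlers/hdfs/Handler.java -- my.handlers is a hypothetical package
package my.handlers.hdfs;

import java.io.IOException;
import java.net.URL;
import java.net.URLConnection;
import java.net.URLStreamHandler;

public class Handler extends URLStreamHandler {
    @Override
    protected URLConnection openConnection(URL url) throws IOException {
        // A real handler would return a URLConnection for hdfs:// URLs;
        // this stub only shows where the class has to live.
        throw new IOException("hdfs:// not implemented in this sketch");
    }
}

With -Djava.protocol.handler.pkgs=my.handlers on the command line, new URL("hdfs://...") would then resolve to my.handlers.hdfs.Handler.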

An alternative fix is documented on the HortonWorks forum.

Upvotes: 1
