user1357015

Reputation: 11696

reading a file in hdfs from pyspark

I'm trying to read a file in my HDFS. Here's a listing of my Hadoop file structure.

hduser@GVM:/usr/local/spark/bin$ hadoop fs -ls -R /
drwxr-xr-x   - hduser supergroup          0 2016-03-06 17:28 /inputFiles
drwxr-xr-x   - hduser supergroup          0 2016-03-06 17:31 /inputFiles/CountOfMonteCristo
-rw-r--r--   1 hduser supergroup    2685300 2016-03-06 17:31 /inputFiles/CountOfMonteCristo/BookText.txt

Here's my pyspark code:

from pyspark import SparkContext, SparkConf

conf = SparkConf().setAppName("myFirstApp").setMaster("local")
sc = SparkContext(conf=conf)

textFile = sc.textFile("hdfs://inputFiles/CountOfMonteCristo/BookText.txt")
textFile.first()

The error I get is:

Py4JJavaError: An error occurred while calling o64.partitions.
: java.lang.IllegalArgumentException: java.net.UnknownHostException: inputFiles

Is this because I'm setting up my SparkContext incorrectly? I'm running this in an Ubuntu 14.04 virtual machine through VirtualBox.

I'm not sure what I'm doing wrong here.

Upvotes: 15

Views: 63617

Answers (4)

vegetarianCoder

Reputation: 2978

First, you need to run:

export PYSPARK_PYTHON=python3.4  # or whatever your Python version is

Code:

from pyspark.sql import SparkSession
from pyspark import SparkConf, SparkContext

# create (or reuse) a SparkSession and the underlying SparkContext
spark = SparkSession.builder.appName("HDFS").getOrCreate()
sparkcont = SparkContext.getOrCreate(SparkConf().setAppName("HDFS"))
sparkcont.setLogLevel("ERROR")

# build a small DataFrame and write it to HDFS as CSV
data = [('First', 1), ('Second', 2), ('Third', 3), ('Fourth', 4), ('Fifth', 5)]
df = spark.createDataFrame(data)

df.write.csv("hdfs:///mnt/data/")
print("Data Written")

To execute the code

spark-submit --master yarn --deploy-mode client <py file>
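
To read the written data back into a DataFrame, a minimal sketch (assuming the same hdfs:///mnt/data/ path and that fs.defaultFS points at your cluster):

# read the CSV files back from the same HDFS directory
df2 = spark.read.csv("hdfs:///mnt/data/")
df2.show()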

Upvotes: 0

bekce
bekce

Reputation: 4330

There are two general ways to read files in Spark: one for huge distributed files, to process them in parallel, and one for reading small files like lookup tables and configuration files on HDFS. For the latter, you might want to read the file on the driver node or workers as a single read (not a distributed read). In that case, you should use the SparkFiles module as below.

# spark is a SparkSession instance
import json

from pyspark import SparkFiles

spark.sparkContext.addFile('hdfs:///user/bekce/myfile.json')
with open(SparkFiles.get('myfile.json'), 'rb') as handle:
    j = json.load(handle)
    or_do_whatever_with(handle)
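
For the first, distributed case you would instead read the file as an RDD (or DataFrame) so each partition is processed in parallel. A minimal sketch, reusing the question's path and assuming fs.defaultFS points at your HDFS:

# distributed read: the file is split into partitions and processed in parallel
rdd = spark.sparkContext.textFile('hdfs:///inputFiles/CountOfMonteCristo/BookText.txt')
print(rdd.count())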

Upvotes: 9

Shawn Guo

Reputation: 3228

You can access HDFS files via the full path if no configuration is provided (namenodehost is your localhost if HDFS is running in your local environment).

hdfs://namenodehost/inputFiles/CountOfMonteCristo/BookText.txt
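
For example, with the question's file, a minimal sketch assuming the NameNode listens on localhost:9000 (check fs.defaultFS in core-site.xml for the actual host and port):

# full URI, including the namenode host and port
textFile = sc.textFile("hdfs://localhost:9000/inputFiles/CountOfMonteCristo/BookText.txt")
textFile.first()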

Upvotes: 14

zero323

Reputation: 330383

Since you don't provide an authority, the URI should look like this:

hdfs:///inputFiles/CountOfMonteCristo/BookText.txt

otherwise inputFiles is interpreted as a hostname. With correct configuration you shouldn't need a scheme at all and can use:

/inputFiles/CountOfMonteCristo/BookText.txt

instead.
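
For example, with the SparkContext from the question, either of these should work (a sketch; the second form relies on fs.defaultFS being set to your HDFS):

# scheme with empty authority: falls back to the configured filesystem
textFile = sc.textFile("hdfs:///inputFiles/CountOfMonteCristo/BookText.txt")

# or, with fs.defaultFS pointing at HDFS, no scheme at all
textFile = sc.textFile("/inputFiles/CountOfMonteCristo/BookText.txt")

textFile.first()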

Upvotes: 7
