ToBeSparkShark

Reputation: 681

Using spark dataFrame to load data from HDFS

Can we use DataFrames when reading data from HDFS? I have tab-separated data in HDFS.

I googled, but only found examples that use DataFrames with NoSQL data sources.

Upvotes: 5

Views: 21097

Answers (2)

Robin East

Reputation: 156

DataFrames are certainly not limited to NoSQL data sources. Parquet, ORC and JSON support is provided natively in Spark 1.4 through 1.6.1; delimited text files are supported via the spark-csv package.

If you have your TSV file in HDFS at /demo/data, then the following code will read the file into a DataFrame:

sqlContext.read
  .format("com.databricks.spark.csv")
  .option("delimiter", "\t")
  .option("header", "true")
  .load("hdfs:///demo/data/tsvtest.tsv")
  .show()

To run the code from spark-shell, launch it with the spark-csv package on the classpath:

spark-shell --packages com.databricks:spark-csv_2.10:1.4.0

In Spark 2.0, CSV is natively supported, so you should be able to do something like this:

spark.read
  .option("delimiter", "\t")
  .option("header", "true")
  .csv("hdfs:///demo/data/tsvtest.tsv")
  .show()

Upvotes: 9

dbustosp

Reputation: 4458

If I understand correctly, you essentially want to read data from HDFS and have it automatically converted to a DataFrame.

If that is the case, I would recommend the spark-csv library. Check it out; it has very good documentation.
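As a minimal sketch of what that looks like, assuming Spark 1.x with the spark-csv package on the classpath and a hypothetical HDFS path:

```scala
// Read a tab-separated file from HDFS into a DataFrame using spark-csv.
// The path below is a placeholder; replace it with your own.
val df = sqlContext.read
  .format("com.databricks.spark.csv")
  .option("delimiter", "\t")      // tab-separated input
  .option("header", "true")       // first line contains column names
  .option("inferSchema", "true")  // infer column types instead of all strings
  .load("hdfs:///path/to/data.tsv")

df.printSchema()  // inspect the inferred schema
```

With inferSchema enabled, the reader makes an extra pass over the data to guess column types; leave it off if you prefer all columns as strings or want to supply your own schema.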

Upvotes: 1
