user1735076

Reputation: 3325

How to create an empty DataFrame with a specified schema?

I want to create a DataFrame with a specified schema in Scala. I have tried reading an empty JSON file, but I don't think that's the best practice.

Upvotes: 122

Views: 246537

Answers (12)

Krishna Tapse

Reputation: 21

# Create an empty DataFrame with spark.createDataFrame, passing an empty list and a schema

from pyspark.sql.types import StructType, StructField, StringType

schema = StructType([
    StructField('table_name', StringType()),
    StructField('row_cnt', StringType()),
])
df = spark.createDataFrame([], schema)
display(df)  # display() is Databricks-specific; use df.show() elsewhere

Upvotes: 1

Zack

Reputation: 2456

We were having issues with the emptyRDD method after upgrading to Databricks Runtime 13.3 / enabling Unity Catalog in Databricks. The solution below works as a replacement for both.

import org.apache.spark.sql.types.{StructType, StringType}
import org.apache.spark.sql.Row
import java.util.ArrayList

val schema = new StructType()
  .add("column1", StringType, true)
  .add("column2", StringType, true)

val df = spark.createDataFrame(
  new ArrayList[Row],
  schema
)
df.count()

Upvotes: 0

ZygD

Reputation: 24356

I'd like to add the following syntax which was not yet mentioned:

Seq[(String, Integer)]().toDF("k", "v")

It makes it clear that the () part is for values. It's empty, so the DataFrame is empty.

This syntax is also useful for adding null values manually: java.lang.Integer, unlike Scala's Int, is nullable, so you can include nulls in the Seq, as shown below. Other options either don't allow nulls or are overly verbose.
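
For example, a minimal sketch of seeding such a DataFrame with a null (column names are just placeholders):

import spark.implicits._

Seq[(String, Integer)](("a", null), ("b", 1)).toDF("k", "v")  // null is legal because Integer is a boxed type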

Upvotes: 0

Kity Cartman

Reputation: 896

I had a special requirement: I already had a DataFrame, but under a certain condition I had to return an empty DataFrame with the same schema, so I returned df.limit(0) instead.
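
A minimal sketch of the idea, where someDF stands in for whatever DataFrame you already have:

import spark.implicits._

val someDF = Seq(("a", 1)).toDF("k", "v")
val emptyDF = someDF.limit(0)  // keeps the schema, drops all rows
emptyDF.printSchema()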

Upvotes: 1

ss301

Reputation: 672

This is helpful for testing purposes.

Seq.empty[String].toDF()
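
Note that toDF() on a Seq[String] produces a single column named value:

Seq.empty[String].toDF().printSchema()
// root
//  |-- value: string (nullable = true)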

Upvotes: 3

Molay

Reputation: 1304

Java version to create an empty Dataset<Row>:

import java.util.ArrayList;
import java.util.List;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

public Dataset<Row> emptyDataSet() {

    SparkSession spark = SparkSession.builder().appName("Simple Application")
                .config("spark.master", "local").getOrCreate();

    // An empty list of rows plus the schema gives an empty DataFrame
    Dataset<Row> emptyDataSet = spark.createDataFrame(new ArrayList<Row>(), getSchema());

    return emptyDataSet;
}

public StructType getSchema() {

    String schemaString = "column1 column2 column3 column4 column5";

    List<StructField> fields = new ArrayList<>();

    // Leading index column of type long
    StructField indexField = DataTypes.createStructField("column0", DataTypes.LongType, true);
    fields.add(indexField);

    // The remaining columns are all nullable strings
    for (String fieldName : schemaString.split(" ")) {
        StructField field = DataTypes.createStructField(fieldName, DataTypes.StringType, true);
        fields.add(field);
    }

    return DataTypes.createStructType(fields);
}

Upvotes: 5

Fox Fairy

Reputation: 59

As of Spark 2.4.3

val df = SparkSession.builder().getOrCreate().emptyDataFrame
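
Note that emptyDataFrame has an empty schema (no columns), so it does not cover the schema part of the question:

df.printSchema()
// root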

Upvotes: -3

zero323

Reputation: 330063

Let's assume you want a DataFrame with the following schema:

root
 |-- k: string (nullable = true)
 |-- v: integer (nullable = false)

You simply define a schema for the DataFrame and use an empty RDD[Row]:

import org.apache.spark.sql.types.{
    StructType, StructField, StringType, IntegerType}
import org.apache.spark.sql.Row

val schema = StructType(
    StructField("k", StringType, true) ::
    StructField("v", IntegerType, false) :: Nil)

// Spark < 2.0
// sqlContext.createDataFrame(sc.emptyRDD[Row], schema) 
spark.createDataFrame(sc.emptyRDD[Row], schema)

The PySpark equivalent is almost identical:

from pyspark.sql.types import StructType, StructField, IntegerType, StringType

schema = StructType([
    StructField("k", StringType(), True), StructField("v", IntegerType(), False)
])

# or df = sc.parallelize([]).toDF(schema)

# Spark < 2.0 
# sqlContext.createDataFrame([], schema)
df = spark.createDataFrame([], schema)

Using implicit encoders (Scala only) with Product types like Tuple:

import spark.implicits._

Seq.empty[(String, Int)].toDF("k", "v")

or case class:

case class KV(k: String, v: Int)

Seq.empty[KV].toDF

or

spark.emptyDataset[KV].toDF

Upvotes: 158

Nilesh Shinde

Reputation: 461

Here you can create a schema using StructType in Scala and pass an empty RDD, which lets you create an empty table. The following code does exactly that.

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.sql._
import org.apache.spark.sql.Row
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.{StructType, StructField, IntegerType, BooleanType, LongType, StringType}

//import org.apache.hadoop.hive.serde2.objectinspector.StructField

object EmptyTable extends App {
  val conf = new SparkConf()
  val sc = new SparkContext(conf)

  // Create the SparkSession object
  val sparkSession = SparkSession.builder().enableHiveSupport().getOrCreate()

  // Schema for three columns
  val schema = StructType(
    StructField("Emp_ID", LongType, true) ::
      StructField("Emp_Name", StringType, false) ::
      StructField("Emp_Salary", LongType, false) :: Nil)

  // Empty RDD of rows
  val dataRDD = sc.emptyRDD[Row]

  // Pass the RDD and the schema to create the DataFrame
  val newDFSchema = sparkSession.createDataFrame(dataRDD, schema)

  newDFSchema.createOrReplaceTempView("tempSchema")

  sparkSession.sql("create table Finaltable AS select * from tempSchema")

}

Upvotes: 4

Jacek Laskowski

Reputation: 74619

As of Spark 2.0.0, you can do the following.

Case Class

Let's define a Person case class:

scala> case class Person(id: Int, name: String)
defined class Person

Import the implicit Encoders from the SparkSession (spark):

scala> import spark.implicits._
import spark.implicits._

And use SparkSession to create an empty Dataset[Person]:

scala> spark.emptyDataset[Person]
res0: org.apache.spark.sql.Dataset[Person] = [id: int, name: string]

Schema DSL

You could also use a Schema "DSL" (see Support functions for DataFrames in org.apache.spark.sql.ColumnName).

scala> val id = $"id".int
id: org.apache.spark.sql.types.StructField = StructField(id,IntegerType,true)

scala> val name = $"name".string
name: org.apache.spark.sql.types.StructField = StructField(name,StringType,true)

scala> import org.apache.spark.sql.types.StructType
import org.apache.spark.sql.types.StructType

scala> val mySchema = StructType(id :: name :: Nil)
mySchema: org.apache.spark.sql.types.StructType = StructType(StructField(id,IntegerType,true), StructField(name,StringType,true))

scala> import org.apache.spark.sql.Row
import org.apache.spark.sql.Row

scala> val emptyDF = spark.createDataFrame(sc.emptyRDD[Row], mySchema)
emptyDF: org.apache.spark.sql.DataFrame = [id: int, name: string]

scala> emptyDF.printSchema
root
 |-- id: integer (nullable = true)
 |-- name: string (nullable = true)

Upvotes: 48

braj

Reputation: 2695

Here is a solution that creates an empty DataFrame in PySpark 2.0.0 or later.

from pyspark.sql.types import StructType, StructField, StringType, IntegerType

sc = spark.sparkContext
schema = StructType([
    StructField('col1', StringType(), False),
    StructField('col2', IntegerType(), True)
])
spark.createDataFrame(sc.emptyRDD(), schema)

Upvotes: 3

Ravindra

Reputation: 133

import scala.reflect.runtime.{universe => ru}
import org.apache.spark.sql.Row
import org.apache.spark.sql.catalyst.ScalaReflection
import org.apache.spark.sql.types.StructType

// Derive the StructType schema from a case class via reflection
def createEmptyDataFrame[T: ru.TypeTag] =
  hiveContext.createDataFrame(sc.emptyRDD[Row],
    ScalaReflection.schemaFor(ru.typeTag[T].tpe).dataType.asInstanceOf[StructType]
  )

case class RawData(id: String, firstname: String, lastname: String, age: Int)
val sourceDF = createEmptyDataFrame[RawData]
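
A quick check that the derived schema matches the case class (primitive fields such as Int come out non-nullable):

sourceDF.printSchema()
// root
//  |-- id: string (nullable = true)
//  |-- firstname: string (nullable = true)
//  |-- lastname: string (nullable = true)
//  |-- age: integer (nullable = false)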

Upvotes: 3
