Amrutha K

Reputation: 204

SqlServer Datatype to Hive Datatype using Spark Scala

Spark is used to get the schema of a table from a SQL Server database. I am facing an issue while creating Hive tables with this schema because of datatype mismatches. How can we convert SQL Server datatypes to Hive datatypes in Spark Scala?

val df = sqlContext.read.format("jdbc")
  .option("url", "jdbc:sqlserver://host:port;databaseName=DB")
  .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
  .option("dbtable", "schema.tableName")
  .option("user", "Userid").option("password", "pswd")
  .load().schema
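
Note that because of the trailing .schema call, df here is a StructType (the table's schema), not a DataFrame. A quick way to inspect the fetched fields, as a sketch:

// Sketch: print each field name with the type Spark inferred.
// typeName yields Spark SQL type names (e.g. "integer", "decimal(10,0)"),
// not the original SQL Server type names.
df.fields.foreach(f => println(s"${f.name}: ${f.dataType.typeName}"))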

Upvotes: 1

Views: 1127

Answers (1)

Amrutha K

Reputation: 204

Thanks, got the solution. I created a method to map the datatypes, as given below.

def sqlToHiveDatatypeMapping(inputDatatype: String): String = inputDatatype match {
  // numeric types
  case "numeric" => "int"
  case "bit" => "smallint"
  case "long" => "bigint"
  case "dec_float" => "double"
  case "money" => "double"
  case "smallmoney" => "double"
  case "real" => "double"
  // character types
  case "char" => "string"
  case "nchar" => "string"
  case "varchar" => "string"
  case "nvarchar" => "string"
  case "text" => "string"
  case "ntext" => "string"
  case "clob" => "string"
  // binary types
  case "binary" => "binary"
  case "varbinary" => "binary"
  case "image" => "binary"
  case "blob" => "binary"
  // date/time types
  case "date" => "date"
  case "datetime" => "timestamp"
  case "datetime2" => "timestamp"
  case "smalldatetime" => "timestamp"
  case "datetimeoffset" => "timestamp"
  case "timestamp" => "timestamp"
  case "time" => "timestamp"
  // anything unmatched falls back to string
  case _ => "string"
}
val columns = df.fields
  .map(field => s"${field.name.toLowerCase} ${sqlToHiveDatatypeMapping(field.dataType.typeName.toLowerCase)}")
  .mkString(",")
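
To actually create the Hive table from this column list, a minimal sketch (not part of the original answer): the database/table names and storage format below are placeholders, and sqlContext is assumed to be a HiveContext (or a SparkSession with Hive support).

// Sketch: run the generated column list as a Hive CREATE TABLE statement.
// "mydb.tableName" and "ORC" are hypothetical; adjust to your setup.
sqlContext.sql(s"CREATE TABLE IF NOT EXISTS mydb.tableName ($columns) STORED AS ORC")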

Upvotes: 2
