Reputation: 570
So far, I am able to read a DataFrame from Teradata using the Teradata JDBC connector for Spark. The syntax for reading is as follows:
val df = hc.read.format("jdbc").options(
  Map(
    "url" -> url,
    "dbtable" -> "(sel * from tableA) as data",
    "driver" -> "com.teradata.jdbc.TeraDriver"
  )
).load()
where hc is a HiveContext and url is the JDBC connection URL for Teradata.
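For context, this is roughly how hc and url are set up in my job (the host and database names below are placeholders):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

// Spark 1.x style contexts; hc in the snippets above is this HiveContext.
val conf = new SparkConf().setAppName("teradata-jdbc-example")
val sc   = new SparkContext(conf)
val hc   = new HiveContext(sc)

// Teradata JDBC URL; host and database names are placeholders.
val url = "jdbc:teradata://teradata-host/DATABASE=mydb,TMODE=TERA,CHARSET=UTF8"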
I want to save a DataFrame to a Teradata table. I tried using the above syntax, changing dbtable to an insert statement:
val df = hc.read.format("jdbc").options(
  Map(
    "url" -> url,
    "dbtable" -> "(insert into db.tabA values (1,2,3)) as data",
    "driver" -> "com.teradata.jdbc.TeraDriver"
  )
).load()
But the above statement gave me an error:
Error: Exception in thread "main" java.sql.SQLException: [Teradata Database] [TeraJDBC 15.10.00.22] [Error 3706] [SQLState 42000] Syntax error: expected something between '(' and the 'insert' keyword.
I want to save a DataFrame to Teradata from Spark. What is the best possible way of doing it?
Upvotes: 2
Views: 4132
Reputation: 570
I was able to write data into a Teradata table using ScalikeJDBC. I used a batch update to store the results.
Sample code for inserting rows in a batch using ScalikeJDBC:
import scalikejdbc._

// Insert rows 2001..3000 as a single batch inside one local transaction.
// Emp is a SQLSyntaxSupport mapping and `column` is Emp.column.
DB localTx { implicit session =>
  val batchParams: Seq[Seq[Any]] = (2001 to 3000).map(i => Seq(i, "name" + i))
  withSQL {
    insert.into(Emp).namedValues(column.id -> sqls.?, column.name -> sqls.?)
  }.batch(batchParams: _*).apply()
}
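For completeness, the snippet above assumes a ScalikeJDBC mapping and connection pool along these lines (the table, host, and credentials are placeholders for your own setup):

// Register the Teradata driver and a singleton connection pool (placeholder URL/credentials).
Class.forName("com.teradata.jdbc.TeraDriver")
ConnectionPool.singleton("jdbc:teradata://teradata-host/DATABASE=mydb", "user", "password")

// Hypothetical table mapping used by insert.into(Emp) above.
case class Emp(id: Int, name: String)

object Emp extends SQLSyntaxSupport[Emp] {
  override val tableName = "emp"
  override val columns = Seq("id", "name")
}

val column = Emp.column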
Upvotes: 0
Reputation: 29165
AFAIK, as data is not correct; the rest looks fine to my eyes. Replace
"dbtable" -> "(insert into db.tabA values (1,2,3)) as data",
with
"dbtable" -> "(insert into db.tabA values (1,2,3))",
The below should work without any hassle.
val df = hc.read.format("jdbc").options(
  Map(
    "url" -> url,
    "dbtable" -> "(insert into db.tabA values (1,2,3))",
    "driver" -> "com.teradata.jdbc.TeraDriver"
  )
).load()
Upvotes: 0