user3569267

Reputation: 1125

Update statement in Spark SQL

Is there a way to execute an UPDATE statement on a SQL Server table using Spark SQL (with the Scala language)?

I need to perform the following query:

update  MyLog_table
set     Log_FileQueue = xx,
        Log_TotalLine = xx
where   Log_ID = xxx

I tried the following syntax:

val jdbcUrl = s"jdbc:sqlserver://${jdbcHostname}:${jdbcPort};database=${jdbcDatabase}"
val Log_FileIn = spark.read.jdbc(jdbcUrl, s"(select Log_FileIn from log where Log_ID = '${Process1Log_ID}') as sq", connectionProperties)
val newLog_FileIn = Log_FileIn.collectAsList().toString().replace("[", "").replace("]", "")

spark.sql(s"(select '${newLog_FileIn}' as Log_FileQueue, ${NbLine} as Log_TotalLine where Log_ID = '${newLog_id}')")
  .write
  .mode(SaveMode.Append)
  .jdbc(jdbcUrl, "Log", connectionProperties)

but it generates the following error:

org.apache.spark.sql.AnalysisException: cannot resolve '`Log_ID`' given input columns: []; line 1 pos 115;
'Project [test_141001.csv AS Log_FileQueue#290, 5 AS Log_TotalLine#29

I also tried using the "where" method:

spark.sql(s"(select '${newLog_FileIn}' as Log_FileQueue, ${NbLine} as Log_TotalLine where Log_ID = '${newLog_id}')")
  .where(s"Log_ID = '${newLog_id}'")
  .write
  .mode(SaveMode.Append)
  .jdbc(jdbcUrl, "Log", connectionProperties)

but that does not work either. I get the following error:

org.apache.spark.sql.AnalysisException: cannot resolve '`Log_ID`' given input columns: [Log_FileQueue, Log_TotalLine]; line 1 pos 0;
'Filter ('Log_ID = 157456)
+- AnalysisBarrier
      +- Project [ANNONCE-FNAC-VIGICOLIS-GRX-BIZ-2018hfgr071eyzdtrf2_141001.csv AS Log_FileQueue#290, 5 AS Log_TotalLine#291]

Any help would be appreciated.

Upvotes: 0

Views: 1468

Answers (1)

Ged

Reputation: 18013

That's not how it works. Spark's JDBC data source can only append to or overwrite a table; it cannot update existing rows in place. Try executeBatch.
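A minimal sketch of the plain-JDBC route, assuming the jdbcUrl and connectionProperties built in the question, that newLog_FileIn, NbLine and newLog_id already hold the values to write, and that Log_ID is a string column (use setInt if it is numeric):

import java.sql.DriverManager

val conn = DriverManager.getConnection(jdbcUrl, connectionProperties)
try {
  val stmt = conn.prepareStatement(
    "update MyLog_table set Log_FileQueue = ?, Log_TotalLine = ? where Log_ID = ?")
  try {
    stmt.setString(1, newLog_FileIn) // Log_FileQueue value
    stmt.setInt(2, NbLine)           // Log_TotalLine value
    stmt.setString(3, newLog_id)     // Log_ID of the row to update
    stmt.addBatch()                  // queue the statement; call again per row for bulk updates
    stmt.executeBatch()              // send all queued updates to SQL Server in one round trip
  } finally {
    stmt.close()
  }
} finally {
  conn.close()
}

For a single row, executeUpdate would do just as well; addBatch/executeBatch pays off when you loop over many rows, since all queued statements go to the server in one batch.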

Upvotes: 2
