Sathish

Reputation: 1

Spark job to work in two different HDFS environments

I have a requirement: I need to write a Spark job that connects to Prod (source Hive, Server A), pulls the data into a local (temp) Hive server, performs the transform, and loads the result back into Target Prod (Server B).

In earlier cases our target DB was Oracle, so we used something like the following, which overwrites the table:

AAA.write
  .format("jdbc")
  .option("url", "jdbc:oracle:thin:@//uuuuuuu:0000/gsahgjj.yyy.com")
  .option("dbtable", "TeST.try_hty")
  .option("user", "aaaaa")
  .option("password", "dsfdss")
  .option("truncate", "true")
  .mode("overwrite")
  .save()

In terms of Spark, what syntax do we need to overwrite from Server A to Server B?

When I try to establish a JDBC connection from one Hive (Server A) to Server B, it does not work. Please help.

Upvotes: 0

Views: 162

Answers (1)

Chandan Ray

Reputation: 2091

You can connect to Hive via JDBC if it is remote. Get the URL and port of your Hive Thrift server (HiveServer2) and connect via JDBC. It should work.
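For example, here is a minimal sketch of that approach, assuming the SparkSession is available as spark, HiveServer2 listens on its default port 10000 on both clusters, and the Hive JDBC driver jar is on the Spark classpath. All hostnames, database/table names, and credentials are placeholders.

// Read from the source cluster's HiveServer2 over JDBC
val sourceDF = spark.read
  .format("jdbc")
  .option("url", "jdbc:hive2://serverA-host:10000/default")
  .option("driver", "org.apache.hive.jdbc.HiveDriver")
  .option("dbtable", "source_db.source_table")
  .option("user", "hive_user")
  .option("password", "hive_password")
  .load()

// ...apply your transformations here...
val transformedDF = sourceDF

// Write the result to the target cluster's HiveServer2, overwriting the table
transformedDF.write
  .format("jdbc")
  .option("url", "jdbc:hive2://serverB-host:10000/default")
  .option("driver", "org.apache.hive.jdbc.HiveDriver")
  .option("dbtable", "target_db.target_table")
  .option("user", "hive_user")
  .option("password", "hive_password")
  .mode("overwrite")
  .save()

Note that with some Hive JDBC driver versions, columns read this way may come back prefixed with the table name, so you may need to rename them before the transform.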

Upvotes: 0
