Mattinwpg

Reputation: 1

How do I pass a date variable to Teradata using Spark?

I've looked around but haven't been able to find this question answered yet. I'm working in a Jupyter notebook writing Python code, and a lot of the datasets we use are in Teradata, so my code usually looks like this:

# Teradata subquery; the JDBC 'dbtable' option needs a derived-table alias
cs = '''
(
  select *
  from SST01.data
  where snap_dt = '2020-08-31'
) foo'''

# Read the subquery result into a Spark DataFrame over JDBC
dfclnt_status = spark.read.format('jdbc') \
    .option('url', 'jdbc:teradata://teradataservernamehere') \
    .option('driver', 'com.teradata.jdbc.TeraDriver') \
    .option('user', 'redacted') \
    .option('password', PASS) \
    .option('dbtable', cs) \
    .load()

I know that in Spark, when running code against our Hive tables, I can pass date variables using '{VAR}', but when I try to apply the same thing in queries against Teradata I get this error:

Py4JJavaError: An error occurred while calling o233.load.
: java.sql.SQLException: [Teradata Database] [TeraJDBC 16.30.00.00] [Error 3535] [SQLState 22003] A character string failed conversion to a numeric value.

How is it possible to pass date variables into Teradata?

EDIT: My variables look like this:

END_DT='2020-08-31'
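
For reference, here's roughly how I'd expect to substitute it (a sketch, assuming the substitution has to happen in Python, since Spark's JDBC reader sends the dbtable string to Teradata as-is, and assuming snap_dt is a DATE column):

END_DT = '2020-08-31'

# Substitute the Python variable into the query string before handing
# it to Spark; Teradata's ANSI date literal syntax is DATE 'YYYY-MM-DD'.
cs = f'''
(
  select *
  from SST01.data
  where snap_dt = DATE '{END_DT}'
) foo'''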

Upvotes: 0

Views: 290

Answers (1)

Lars Skaug

Reputation: 1386

The easiest way is probably to explicitly convert your date string to a date, like so:

to_date('2020-08-31')

If you're still getting an error, take a look at the table DDL: the error message suggests the snap_dt field may actually be numeric.
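
Putting that together with the Python variable from your edit, a minimal sketch (assuming snap_dt is stored as a DATE, END_DT holds a 'YYYY-MM-DD' string, and using to_date with an explicit format mask):

END_DT = '2020-08-31'

# Build the query in Python, converting the string to a DATE on the
# Teradata side with an explicit format mask.
cs = f'''
(
  select *
  from SST01.data
  where snap_dt = to_date('{END_DT}', 'YYYY-MM-DD')
) foo'''

dfclnt_status = spark.read.format('jdbc') \
    .option('url', 'jdbc:teradata://teradataservernamehere') \
    .option('driver', 'com.teradata.jdbc.TeraDriver') \
    .option('user', 'redacted') \
    .option('password', PASS) \
    .option('dbtable', cs) \
    .load()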

Upvotes: 1
