wrschneider

Reputation: 18780

Referencing a Hive view from within an AWS Glue job

I'm trying to figure out how to migrate a use case from EMR to AWS Glue involving Hive views.

In EMR today, I have Hive external tables backed by Parquet in S3, plus additional views defined like create view hive_view as select col from external_table where col = x.

Then in Spark on EMR, I can issue statements like df = spark.sql("select * from hive_view") to reference my Hive view.
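Putting that together, the EMR-side setup looks roughly like this (a minimal sketch, assuming Hive support is enabled; the S3 path, column, and filter value are placeholders):

from pyspark.sql import SparkSession

# On EMR, enabling Hive support lets Spark SQL see metastore tables and views
spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# External table backed by Parquet in S3 (placeholder names)
spark.sql("""
    create external table if not exists external_table (col string)
    stored as parquet
    location 's3://bucket/external/folder'
""")

# Hive view layered on the external table
spark.sql("create view if not exists hive_view as select col from external_table where col = 'x'")

# The Spark job references the view like any table
df = spark.sql("select * from hive_view")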

I am aware I can use the Glue catalog as a drop-in replacement for the Hive metastore, but I'm trying to migrate the Spark job itself off of EMR to Glue. So in my end state, there is no longer a Hive endpoint, only Glue.
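On EMR, that drop-in replacement amounts to pointing the Hive metastore client at the Glue Data Catalog factory class. Normally this is set through the cluster's spark-hive-site configuration classification; setting it in code, as a sketch, just shows the property involved:

from pyspark.sql import SparkSession

# Route Spark's Hive support to the AWS Glue Data Catalog instead of a Hive metastore
spark = (SparkSession.builder
    .config("hive.metastore.client.factory.class",
            "com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory")
    .enableHiveSupport()
    .getOrCreate())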

My question: is it possible to reference a Hive view like this from within a Glue job, and if so, how?

What I've tried so far: using boto3 to call glue.create_table, like this:

import boto3

glue = boto3.client('glue')
glue.create_table(
    DatabaseName='glue_db_name',
    TableInput={
        'Name': 'hive_view',
        'TableType': 'VIRTUAL_VIEW',
        'ViewExpandedText': 'select .... from ...'
    })

I can see the object created in the Glue catalog but the classification shows as "Unknown" and the references in the job fail with a corresponding error:

py4j.protocol.Py4JJavaError: An error occurred while calling o56.getCatalogSource. : 
java.lang.Error: No classification or connection in bill_glue_poc.view_test at ...

I have validated that I can use Hive views with Spark on EMR when the Glue catalog serves as the metastore: the view appears in the Glue catalog, and Spark SQL queries against it succeed. But I cannot reference the view from within a Glue job.
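For reference, boto3's create_table accepts richer view metadata than the minimal call above; a fuller TableInput might look like this (a sketch: the field names come from the Glue API, but the SQL and column list are hypothetical):

glue.create_table(
    DatabaseName='glue_db_name',
    TableInput={
        'Name': 'hive_view',
        'TableType': 'VIRTUAL_VIEW',
        # views written by Hive normally populate both SQL fields
        'ViewOriginalText': 'select col from external_table where col = x',
        'ViewExpandedText': 'select col from glue_db_name.external_table where col = x',
        # ...and carry the view's schema in the storage descriptor
        'StorageDescriptor': {
            'Columns': [{'Name': 'col', 'Type': 'string'}],  # hypothetical schema
            'SerdeInfo': {}
        }
    })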

Upvotes: 3

Views: 1518

Answers (1)

Yuriy Bondaruk

Reputation: 4750

You can create a temporary view in Spark and query it like a Hive table (Scala):

import com.amazonaws.services.glue.util.JsonOptions

// Read the external Parquet data from S3 as a DynamicFrame
val dataDyf = glueContext.getSourceWithFormat(
    connectionType = "s3",
    format = "parquet",
    options = JsonOptions(Map(
      "paths" -> Array("s3://bucket/external/folder")
    ))).getDynamicFrame()

// Convert the DynamicFrame to Spark's DataFrame and apply the view's filtering
val dataViewDf = dataDyf.toDF().where(...)

// Register the result as a temporary view, then query it with Spark SQL
dataViewDf.createOrReplaceTempView("hive_view")

val df = spark.sql("select * from hive_view")
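A rough PySpark equivalent for a Python Glue job (a sketch; the S3 path and the filter are placeholders):

from awsglue.context import GlueContext
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session

# Read the external Parquet data from S3 as a DynamicFrame
data_dyf = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    format="parquet",
    connection_options={"paths": ["s3://bucket/external/folder"]})

# Apply the view's filter, register a temp view, and query it
data_dyf.toDF().where("col = 'x'").createOrReplaceTempView("hive_view")
df = spark.sql("select * from hive_view")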

Upvotes: 1
