S. K

Reputation: 505

How to use a broadcast collection in a udf?

How can I use a broadcast collection in a Spark SQL 1.6.1 UDF? The UDF should be called from the main SQL query, as shown below:

sqlContext.sql("""Select col1,col2,udf_1(key) as value_from_udf FROM table_a""")

udf_1() should look up the key in a small broadcast collection and return the corresponding value to the main SQL query.

Upvotes: 12

Views: 11285

Answers (1)

mtoto

Reputation: 24178

Here's a minimal reproducible example in PySpark, illustrating how to use a broadcast variable to perform lookups, with a lambda function registered as a UDF and called inside a SQL statement.

# Create dummy data and register as table
df = sc.parallelize([
    (1,"a"),
    (2,"b"),
    (3,"c")]).toDF(["num","let"])
df.registerTempTable('table')

# Create broadcast variable from local dictionary
myDict = {1: "y", 2: "x", 3: "z"}
broadcastVar = sc.broadcast(myDict) 
# Alternatively, if your dict is a key-value rdd, 
# you can do sc.broadcast(rddDict.collectAsMap())

# Create lookup function and apply it
sqlContext.registerFunction("lookup", lambda x: broadcastVar.value.get(x))
sqlContext.sql('select num, let, lookup(num) as test from table').show()
+---+---+----+
|num|let|test|
+---+---+----+
|  1|  a|   y|
|  2|  b|   x|
|  3|  c|   z|
+---+---+----+
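Note that the UDF itself is just a plain Python closure over the broadcast value, so its per-row behavior can be checked without a cluster. A minimal sketch (plain Python, no Spark, assuming the same `myDict` as above), also showing how `dict.get` can supply a default for keys missing from the broadcast dictionary:

```python
# The broadcast value is an ordinary dict on each executor;
# the UDF wraps a plain dictionary lookup.
myDict = {1: "y", 2: "x", 3: "z"}

def lookup(x, default=None):
    # dict.get returns `default` instead of raising KeyError
    # when the key is absent from the broadcast dictionary.
    return myDict.get(x, default)

print(lookup(1))        # key present
print(lookup(4))        # key absent, falls back to None
print(lookup(4, "n/a")) # key absent, explicit default
```

Registering `lambda x: broadcastVar.value.get(x, "n/a")` instead would make missing keys show up as `"n/a"` rather than `null` in the SQL output.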

Upvotes: 15
