Pramod Kumar

Reputation: 81

Index a map by the value of a different column in Spark

I have a dataframe with the following schema:

|-- A: map (nullable = true)
|    |-- key: string
|    |-- value: array (valueContainsNull = true)
|    |    |-- element: struct (containsNull = true)
|    |    |    |-- uid: string (nullable = true)
|    |    |    |-- price: double (nullable = true)
|    |    |    |-- recordtype: string (nullable = true)
|-- keyindex: string (nullable = true)

For example, if I have the following data:

 {"A":{
 "innerkey_1":[{"uid":"1","price":0.01,"recordtype":"STAT"},
               {"uid":"6","price":4.3,"recordtype":"DYN"}],
 "innerkey_2":[{"uid":"2","price":2.01,"recordtype":"DYN"},
               {"uid":"4","price":6.1,"recordtype":"DYN"}]},
 "innerkey_2"}

I use the following schema to read the data into a dataframe:

val schema = new StructType()
  .add("A", MapType(StringType,
    ArrayType(new StructType()
      .add("uid", StringType)
      .add("price", DoubleType)
      .add("recordtype", StringType), true)))
  .add("keyindex", StringType)
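
For reference, the data is read roughly like this (a sketch: the file path is a placeholder, spark is the usual SparkSession, and a pretty-printed file like the sample above would also need .option("multiLine", true)):

// Placeholder path; by default Spark's JSON reader expects one object per line
val df = spark.read.schema(schema).json("/path/to/records.json")
df.printSchema()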

I am trying to figure out if I can use the keyindex to select values from the map. Since the keyindex in the example is "innerkey_2", I want the output to be

[{"uid":"2","price":2.01,"recordtype":"DYN"},
 {"uid":"4","price":6.1,"recordtype":"DYN"}]

Thanks for your help!

Upvotes: 0

Views: 643

Answers (1)

Alper t. Turker

Reputation: 35219

getItem should do the trick:

scala> val df = Seq(("innerkey2", Map("innerkey2" -> Seq(("1", 0.01, "STAT"))))).toDF("keyindex", "A")
df: org.apache.spark.sql.DataFrame = [keyindex: string, A: map<string,array<struct<_1:string,_2:double,_3:string>>>]

scala> df.select($"A"($"keyindex")).show
+---------------+
|    A[keyindex]|
+---------------+
|[[1,0.01,STAT]]|
+---------------+
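
As a footnote, $"A"($"keyindex") is Column.apply with a column key, and getItem is equivalent here, so the explicit form gives the same result on the toy dataframe above:

// Same output as above: [[1,0.01,STAT]]
df.select($"A".getItem($"keyindex")).show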

Upvotes: 1
