Ajay Ganti

Reputation: 35

Is there a way to write a PySpark DataFrame to Azure Cache for Redis?

I have a PySpark DataFrame with 2 columns, and I created an Azure Cache for Redis instance. I would like to write the DataFrame to Redis with the first column as the key and the second column as the value. How can I do this in Azure?

Upvotes: 0

Views: 1520

Answers (1)

teedak8s

Reputation: 780

You need to leverage the spark-redis library: https://github.com/RedisLabs/spark-redis, along with the associated jars (which ones you need depends on the Spark and Scala versions you are using).

In my case I installed 3 jars on the Spark cluster (latest Spark, Scala 2.12); a Maven-coordinate alternative is sketched just after the list:

  1. spark-redis_2.12-2.6.0.jar
  2. commons-pool2-2.10.0.jar
  3. jedis-3.6.0.jar
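
Instead of installing the jars by hand, you can let Spark resolve the library from Maven when the session starts. A minimal sketch, assuming the com.redislabs:spark-redis_2.12:2.6.0 coordinate (which should pull in jedis and commons-pool2 as transitive dependencies):

    from pyspark.sql import SparkSession

    # Resolve spark-redis (and its transitive dependencies) from Maven
    # instead of installing the jars manually. Must be set before the
    # session is created.
    spark = (
        SparkSession.builder
        .appName("redis-demo")
        .config("spark.jars.packages", "com.redislabs:spark-redis_2.12:2.6.0")
        .getOrCreate()
    )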

Along with the configuration for connecting to Redis:

Cluster conf setup:

    spark.redis.auth PASSWORD
    spark.redis.port 6379
    spark.redis.host xxxx.xxx.cache.windows.net
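
If you can't edit the cluster conf (e.g. in a notebook), the same spark.redis.* settings can be passed when building the session. A sketch, reusing the host and password placeholders above:

    from pyspark.sql import SparkSession

    # Equivalent to the cluster conf above, set at session creation time.
    spark = (
        SparkSession.builder
        .config("spark.redis.host", "xxxx.xxx.cache.windows.net")
        .config("spark.redis.port", "6379")
        .config("spark.redis.auth", "PASSWORD")
        .getOrCreate()
    )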

Make sure you have Azure Redis 4.0; the library might have issues with 6.0. Sample code to push:

    from pyspark.sql.types import StructType, StructField, StringType

    schema = StructType([
        StructField("id", StringType(), True),
        StructField("colA", StringType(), True),
        StructField("colB", StringType(), True)
    ])

    data = [
        ['1', '8', '2'],
        ['2', '5', '3'],
        ['3', '3', '1'],
        ['4', '7', '2']
    ]
    df = spark.createDataFrame(data, schema=schema)
    df.show()

    # Each row is written as a Redis hash keyed by "mytable:<id>".
    (
        df
        .write
        .format("org.apache.spark.sql.redis")
        .option("table", "mytable")
        .option("key.column", "id")
        .save()
    )
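
To verify the write, you can read the table back through the same data source. A sketch; without an explicit schema the values come back as strings:

    # Read the rows back from Redis to confirm they were written.
    loaded = (
        spark
        .read
        .format("org.apache.spark.sql.redis")
        .option("table", "mytable")
        .option("key.column", "id")
        .load()
    )
    loaded.show()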

 

Upvotes: 1
