la_femme_it

Reputation: 672

HiveQL to PySpark - issue with aggregated column in SELECT statement

I have the following HQL script which needs to be put into PySpark (Spark 1.6):

INSERT INTO TABLE db.temp_avg
SELECT
  a,
  avg(b),
  c
FROM db.temp WHERE flag IS NOT NULL GROUP BY a, c;
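For reference, this is what the query computes: drop rows whose `flag` is null, then average `b` within each `(a, c)` group. A plain-Python sketch of the same logic, using hypothetical sample rows (the values are made up for illustration):

```python
from collections import defaultdict

# Hypothetical rows of db.temp as (a, b, c, flag) tuples.
rows = [
    ("x", 10.0, "p", 1),
    ("x", 20.0, "p", 1),
    ("x", 99.0, "p", None),  # excluded: flag is null
    ("y", 5.0,  "q", 1),
]

# Accumulate [sum, count] of b per (a, c) group, skipping null flags.
sums = defaultdict(lambda: [0.0, 0])
for a, b, c, flag in rows:
    if flag is None:
        continue
    sums[(a, c)][0] += b
    sums[(a, c)][1] += 1

# Final averages, keyed by (a, c).
avg_b = {key: total / count for key, (total, count) in sums.items()}
print(avg_b)  # {('x', 'p'): 15.0, ('y', 'q'): 5.0}
```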

I created a few versions of Spark code, but I'm struggling with how to get this averaged column into the select.

Also I found out that grouped data cannot be written this way:

df3 = df2.groupBy...
df3.write.mode('overwrite').saveAsTable('db.temp_avg')

part of pyspark code:

temp_table = sqlContext.table("db.temp")

df = temp_table.select('a', 'avg(b)', 'c', 'flag')
df = df.where(df['flag'] != 'null')
# this of course does not work along with the avg(b)
df2 = df.groupBy('a', 'c')
df2.write.mode('overwrite').saveAsTable('db.temp_avg')

Thx for your help.

Correct solution:

import pyspark.sql.functions as F

df = sqlContext.sql("SELECT * FROM db.temp")
df = df.select('a', 'b', 'c', 'flag')\
    .filter(F.col("flag").isNotNull())\
    .groupBy('a', 'c')\
    .agg(F.avg('b').alias("avg_b"))

Upvotes: 0

Views: 48

Answers (1)

Ankit Kumar Namdeo

Reputation: 1464

import pyspark.sql.functions as F

df = sqlContext.sql("select * from db.temp")

df = df.select('a', 'b', 'c', 'flag')\
    .filter(F.col("flag").isNotNull())\
    .groupBy('a', 'c')\
    .agg(F.avg('b').alias("avg_b"))

Then you can save the table with df.write.mode('overwrite').saveAsTable("table_name")

Upvotes: 1
