Edamame

Reputation: 25366

pyspark: AnalysisException when joining two data frames

I have two data frames created from Spark SQL:

df1 = sqlContext.sql(""" ...""")
df2 = sqlContext.sql(""" ...""")

I tried to join these two data frames on the column my_id as shown below:

from pyspark.sql.functions import col

combined_df = df1.join(df2, col("df1.my_id") == col("df2.my_id"), 'inner')

Then I got the following error. Any idea what I missed? Thanks!

AnalysisException                         Traceback (most recent call last)
<ipython-input-11-45f5313387cc> in <module>()
      3 from pyspark.sql.functions import col
      4 
----> 5 combined_df = df1.join(df2, col("df1.my_id") == col("df2.my_id"), 'inner')
      6 combined_df.take(10)

/usr/local/spark-latest/python/pyspark/sql/dataframe.py in join(self, other, on, how)
    770                 how = "inner"
    771             assert isinstance(how, basestring), "how should be basestring"
--> 772             jdf = self._jdf.join(other._jdf, on, how)
    773         return DataFrame(jdf, self.sql_ctx)
    774 

/usr/local/spark-latest/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py in __call__(self, *args)
   1131         answer = self.gateway_client.send_command(command)
   1132         return_value = get_return_value(
-> 1133             answer, self.gateway_client, self.target_id, self.name)
   1134 
   1135         for temp_arg in temp_args:

/usr/local/spark-latest/python/pyspark/sql/utils.py in deco(*a, **kw)
     67                                              e.java_exception.getStackTrace()))
     68             if s.startswith('org.apache.spark.sql.AnalysisException: '):
---> 69                 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
     70             if s.startswith('org.apache.spark.sql.catalyst.analysis'):
     71                 raise AnalysisException(s.split(': ', 1)[1], stackTrace)

AnalysisException: "cannot resolve '`df1.my_id`' given input columns: [...

Upvotes: 1

Views: 4845

Answers (2)

Pushkr

Reputation: 3619

I think the issue with your code is that you are passing "df1.my_id" as a column name instead of just col('my_id'). That is why the error says it cannot resolve df1.my_id given the input columns.

You can do this without importing col:

combined_df = df1.join(df2, df1.my_id == df2.my_id, 'inner')
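
If you do want the qualified col("df1.my_id") style, a minimal sketch (assuming the same df1 and df2 from the question) is to alias the DataFrames first so the prefixes resolve:

from pyspark.sql.functions import col

# alias() attaches the "df1"/"df2" prefixes that col("df1.my_id")
# expects; without them the analyzer cannot resolve the qualified name
combined_df = df1.alias("df1").join(
    df2.alias("df2"), col("df1.my_id") == col("df2.my_id"), 'inner')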

Upvotes: 3

koiralo

Reputation: 23109

Not sure about pyspark, but this should work if you have the same field name in both data frames:

combineDf = df1.join(df2, 'my_id', 'outer')
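
As a minimal runnable sketch (the toy data here is made up for illustration), joining on the column name string keeps a single my_id column in the result:

df1 = sqlContext.createDataFrame([(1, 'a'), (2, 'b')], ['my_id', 'val1'])
df2 = sqlContext.createDataFrame([(2, 'x'), (3, 'y')], ['my_id', 'val2'])

# passing 'my_id' as a string joins on that column and deduplicates it,
# so the result has one my_id column instead of an ambiguous pair
combineDf = df1.join(df2, 'my_id', 'outer')
combineDf.show()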

Hope this helps!

Upvotes: 0
