Reputation: 8967
I'm trying to use the like function on a Column with another Column. Is it possible to pass a Column into the like function?
Sample code:
df['col1'].like(concat('%',df2['col2'], '%'))
Error log:
py4j.Py4JException: Method like([class org.apache.spark.sql.Column]) does not exist
    at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:318)
    at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:326)
    at py4j.Gateway.invoke(Gateway.java:274)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:748)
Upvotes: 3
Views: 1874
Reputation: 8523
You can do it using a SQL expression instead. The Python API's like only accepts a literal string pattern, not another Column, which is why py4j complains that Method like([class ...Column]) does not exist. For example:
from pyspark.sql import SparkSession
from pyspark.sql.functions import expr

spark = SparkSession.builder.getOrCreate()

data = [
    ("aaaa", "aa"),
    ("bbbb", "cc"),
]
df = spark.createDataFrame(data, ["value", "pattern"])

# The LIKE pattern is built from the other column inside the SQL expression
df = df.withColumn("match", expr("value like concat('%', pattern, '%')"))
df.show()
Outputs this:
+-----+-------+-----+
|value|pattern|match|
+-----+-------+-----+
| aaaa| aa| true|
| bbbb| cc|false|
+-----+-------+-----+
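Since the pattern here is wrapped in '%...%' (a plain substring match), the same result should also be reachable without a SQL string via Column.contains, which, unlike like, is documented to accept another Column in the Python API. A minimal sketch under that assumption:

from pyspark.sql.functions import col

# contains() takes a literal or a Column, so the pattern column can be
# passed in directly for the '%pattern%' (substring) case
df = df.withColumn("match", col("value").contains(col("pattern")))
df.show()

For anchored patterns like 'pattern%' you would still need the expr route above, since like itself won't take a Column.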
Upvotes: 5