Reputation: 3111
I've implemented code in Scala that calls a method written in Java. In the code below, processSale() is a Java method that takes a util.List<Sale> as a parameter. I've converted the Scala Iterable[Sale] to a Seq[Sale] and then to a util.List<Sale> with the help of scala.collection.JavaConverters._:
val parseSales: RDD[(String, Sale)] = rawSales
.map(sale => sale.Id -> sale)
.groupByKey()
.mapValues(a => SaleParser.processSale(a.toSeq.asJava))
However, when the code is executed as part of a Spark job, it fails due to a task failure with an UnsupportedOperationException. I've looked through the logs, and the cause appears to lie within the Java processSale method, on the call to Collections.sort:
Collections.sort(sales, new Comparator<Sale>() {
    @Override
    public int compare(Sale sale1, Sale sale2) {
        return Long.compare(sale1.timestamp, sale2.timestamp);
    }
});
I'm stuck at this point because I'm passing the required util.List<Sale>. Why would Collections.sort be an unsupported operation in this case?
Upvotes: 1
Views: 852
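For illustration, the same failure can be reproduced in plain Java without Scala or Spark: Collections.sort throws this exception whenever the list rejects mutation, regardless of the list's declared type. This is a minimal sketch; the Sale stub and the class and method names here are my own, not from the question.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class UnmodifiableSortDemo {
    // Minimal stand-in for the Sale class from the question.
    static class Sale {
        final long timestamp;
        Sale(long timestamp) { this.timestamp = timestamp; }
    }

    // Returns true if Collections.sort threw UnsupportedOperationException.
    static boolean sortThrows(List<Sale> list) {
        try {
            Collections.sort(list, Comparator.comparingLong((Sale s) -> s.timestamp));
            return false;
        } catch (UnsupportedOperationException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        List<Sale> sales = new ArrayList<>();
        sales.add(new Sale(2L));
        sales.add(new Sale(1L));

        // A read-only view, comparable to what asJava yields for an immutable Seq.
        List<Sale> readOnly = Collections.unmodifiableList(sales);
        System.out.println("read-only view throws: " + sortThrows(readOnly));       // true

        // A modifiable copy sorts without error.
        List<Sale> copy = new ArrayList<>(readOnly);
        System.out.println("modifiable copy throws: " + sortThrows(copy));          // false
        System.out.println("first timestamp after sort: " + copy.get(0).timestamp); // 1
    }
}
```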
Reputation: 1000
Add an emptiness check for the rawSales collection before the conversion:
val parseSales: RDD[(String, Sale)] =
  if (rawSales.nonEmpty)
    // rawSales-specific stream operations
  else
    // None, or whatever your requirements call for
Upvotes: 1
Reputation: 1572
From the JavaConverters documentation:
Because Java does not distinguish between mutable and immutable collections in their type, a conversion from, say, scala.immutable.List will yield a java.util.List, where all mutation operations throw an UnsupportedOperationException
The toSeq call in your code returns an immutable.Seq, and that's why you get the exception.
So you can convert your list to a mutable data structure such as ListBuffer:
list.to[scala.collection.mutable.ListBuffer].asJava
Upvotes: 2
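The answer above fixes the problem on the Scala side. As a complementary sketch (my own suggestion, not part of the answer), the Java method could also copy its input defensively before sorting, which makes it safe no matter what kind of List the caller passes. The Sale stub and the helper name sortedByTimestamp are hypothetical:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class SaleSorter {
    // Hypothetical stand-in for the Sale class from the question.
    static class Sale {
        final long timestamp;
        Sale(long timestamp) { this.timestamp = timestamp; }
    }

    // Sorts a private copy, so an unmodifiable input list is never mutated.
    static List<Sale> sortedByTimestamp(List<Sale> sales) {
        List<Sale> copy = new ArrayList<>(sales);
        Collections.sort(copy, Comparator.comparingLong((Sale s) -> s.timestamp));
        return copy;
    }
}
```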