Fleur

Reputation: 686

Input type must be string type but got ArrayType(StringType,true) error in Spark using Scala

I am new to Spark and am using Scala to create a basic classifier. I read a text file into a Dataset and split it into training and test sets. When I try to tokenize the training data, it fails with the following error:

Caused by: java.lang.IllegalArgumentException: requirement failed: Input type must be string type but got ArrayType(StringType,true).
at scala.Predef$.require(Predef.scala:224)
at org.apache.spark.ml.feature.RegexTokenizer.validateInputType(Tokenizer.scala:149)
at org.apache.spark.ml.UnaryTransformer.transformSchema(Transformer.scala:110)
at org.apache.spark.ml.Pipeline$$anonfun$transformSchema$4.apply(Pipeline.scala:180)
at org.apache.spark.ml.Pipeline$$anonfun$transformSchema$4.apply(Pipeline.scala:180)
at scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:57)
at scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:66)
at scala.collection.mutable.ArrayOps$ofRef.foldLeft(ArrayOps.scala:186)
at org.apache.spark.ml.Pipeline.transformSchema(Pipeline.scala:180)
at org.apache.spark.ml.PipelineStage.transformSchema(Pipeline.scala:70)
at org.apache.spark.ml.Pipeline.fit(Pipeline.scala:132)
at com.classifier.classifier_app.App$.<init>(App.scala:91)
at com.classifier.classifier_app.App$.<clinit>(App.scala)
... 1 more


The code is as follows:

val input_path = "path//to//file.txt"

case class Sentence(value: String)

import spark.implicits._  // needed for the .as[Sentence] encoder
val sentencesDS = spark.read.textFile(input_path).as[Sentence]

val Array(trainingData, testData) = sentencesDS.randomSplit(Array(0.7, 0.3))     

val tokenizer = new Tokenizer()
  .setInputCol("value")
  .setOutputCol("words")

val pipeline = new Pipeline().setStages(Array(tokenizer, regexTokenizer, remover, hashingTF, ovr))
val model = pipeline.fit(trainingData)

How do I solve this? Any help is appreciated.

I have defined all of the stages used in the pipeline, but haven't included their definitions in the snippet above; a rough sketch of how they were wired is below.
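
For reference, the omitted stages looked roughly like this (a sketch, not the exact code; the pattern and column names are approximations):

import org.apache.spark.ml.classification.{LogisticRegression, OneVsRest}
import org.apache.spark.ml.feature.{HashingTF, RegexTokenizer, StopWordsRemover}

// Approximate reconstruction of the omitted stage definitions.
// Note that regexTokenizer reads "words", the ArrayType(StringType)
// column produced by the tokenizer defined above.
val regexTokenizer = new RegexTokenizer()
  .setInputCol("words")
  .setOutputCol("tokens")
  .setPattern("\\W")

val remover = new StopWordsRemover()
  .setInputCol("tokens")
  .setOutputCol("filtered")

val hashingTF = new HashingTF()
  .setInputCol("filtered")
  .setOutputCol("features")

// OneVsRest with an assumed base classifier.
val ovr = new OneVsRest()
  .setClassifier(new LogisticRegression())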

Upvotes: 1

Views: 4059

Answers (1)

Fleur

Reputation: 686

The error was resolved by changing the order of the stages in the pipeline.

val pipeline = new Pipeline().setStages(Array(indexer, regexTokenizer, remover, hashingTF))
val model = pipeline.fit(trainingData) 

The tokenizer stage was removed; the regexTokenizer now does the tokenization on its own.
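
Why this fixes it: RegexTokenizer (like Tokenizer) requires its input column to be plain StringType, as the validateInputType frame in the stack trace shows. With both tokenizers in the pipeline, regexTokenizer received the ArrayType(StringType) "words" column that tokenizer produced, so schema validation failed at pipeline.fit. Letting regexTokenizer read the raw string column directly avoids that. Below is a minimal sketch of the corrected wiring, assuming the indexer is a StringIndexer over a label column named "label" (a placeholder, since the label column isn't shown in the question):

import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.feature.{HashingTF, RegexTokenizer, StopWordsRemover, StringIndexer}

// Assumed stage: encodes a string label column; "label" is a placeholder name.
val indexer = new StringIndexer()
  .setInputCol("label")
  .setOutputCol("labelIndex")

// Reads the raw StringType column directly, with no plain Tokenizer in front of it.
val regexTokenizer = new RegexTokenizer()
  .setInputCol("value")
  .setOutputCol("words")
  .setPattern("\\W")

val remover = new StopWordsRemover()
  .setInputCol("words")
  .setOutputCol("filtered")

val hashingTF = new HashingTF()
  .setInputCol("filtered")
  .setOutputCol("features")

val pipeline = new Pipeline().setStages(Array(indexer, regexTokenizer, remover, hashingTF))
val model = pipeline.fit(trainingData)

Each stage's inputCol now matches the schema produced by the stage before it, so Pipeline.transformSchema validates cleanly.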

Upvotes: 1
