Dinosaurius

Reputation: 8628

ValueError: Length of object (3) does not match with length of fields

I manually create a PySpark DataFrame as follows:

acdata = sc.parallelize([ 
[('timestamp', 1506340019), ('pk', 111), ('product_pk', 123), ('country_id', 'FR'), ('channel', 'web')]
])
# Convert to tuple
acdata_converted = acdata.map(lambda x: (x[0][1], x[1][1], x[2][1]))

# Define schema
acschema = StructType([
    StructField("timestamp", LongType(), True),
    StructField("pk", LongType(), True),
    StructField("product_pk", LongType(), True),
    StructField("country_id", StringType(), True),
    StructField("channel", StringType(), True)
])

df = sqlContext.createDataFrame(acdata_converted, acschema)

But when I call df.head() and run spark-submit, I get the following error:

org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/mnt/yarn/usercache/hdfs/appcache/application_1510134261242_0002/container_1510134261242_0002_01_000003/pyspark.zip/pyspark/worker.py", line 177, in main
    process()
  File "/mnt/yarn/usercache/hdfs/appcache/application_1510134261242_0002/container_1510134261242_0002_01_000003/pyspark.zip/pyspark/worker.py", line 172, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/mnt/yarn/usercache/hdfs/appcache/application_1510134261242_0002/container_1510134261242_0002_01_000003/pyspark.zip/pyspark/serializers.py", line 268, in dump_stream
    vs = list(itertools.islice(iterator, batch))
  File "/mnt/yarn/usercache/hdfs/appcache/application_1510134261242_0002/container_1510134261242_0002_01_000001/pyspark.zip/pyspark/sql/session.py", line 520, in prepare
  File "/mnt/yarn/usercache/hdfs/appcache/application_1510134261242_0002/container_1510134261242_0002_01_000003/pyspark.zip/pyspark/sql/types.py", line 1358, in _verify_type
    "length of fields (%d)" % (len(obj), len(dataType.fields)))
ValueError: Length of object (3) does not match with length of fields (12)

    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
    at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:108)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

What does this mean, and how can I solve it?

Upvotes: 4

Views: 13549

Answers (2)

iurii_n

Reputation: 1370

I'd do it this way:

acdata = sc.parallelize([{'timestamp': 1506340019, 'pk': 111, 'product_pk': 123, 'country_id': 'FR', 'channel': 'web'}, {...}])

# Define schema
acschema = StructType([
    StructField("timestamp", LongType(), True),
    StructField("pk", LongType(), True),
    StructField("product_pk", LongType(), True),
    StructField("country_id", StringType(), True),
    StructField("channel", StringType(), True)
])

df = sqlContext.createDataFrame(acdata, acschema)

Also consider whether you really need to parallelize the data: you can create a DataFrame directly from a list of dictionaries.
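For illustration, a minimal plain-Python sketch of turning such dictionaries into tuples in schema order (the `field_names` list here is an assumption mirroring the StructType above; the actual Spark call appears only as a comment):

```python
# Field names in the same order as the StructType above
field_names = ["timestamp", "pk", "product_pk", "country_id", "channel"]

rows = [{'timestamp': 1506340019, 'pk': 111, 'product_pk': 123,
         'country_id': 'FR', 'channel': 'web'}]

# Build schema-ordered tuples regardless of dict key order
tuples = [tuple(r[name] for name in field_names) for r in rows]
# These tuples could then be passed to sqlContext.createDataFrame(tuples, acschema)
print(tuples)  # [(1506340019, 111, 123, 'FR', 'web')]
```

Looking up each value by field name makes the row construction robust to the ordering of the dictionary keys.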

Upvotes: -1

Pratyush Sharma

Reputation: 289

You need to map all 5 fields to match the schema you defined; your lambda only extracts the first 3, which is why the lengths disagree.

    acdata_converted = acdata.map(lambda x: (x[0][1], x[1][1], x[2][1], x[3][1], x[4][1]))
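Outside of Spark, the corrected lambda can be sanity-checked on a plain Python list; this sketch just exercises the mapping, using a sample row in the same layout as the question:

```python
# Sample row in the same (key, value) pair layout as the question
acdata = [
    [('timestamp', 1506340019), ('pk', 111), ('product_pk', 123),
     ('country_id', 'FR'), ('channel', 'web')]
]

# The corrected lambda: extract the value from all 5 pairs, not just 3
convert = lambda x: (x[0][1], x[1][1], x[2][1], x[3][1], x[4][1])
acdata_converted = [convert(row) for row in acdata]

print(acdata_converted)  # [(1506340019, 111, 123, 'FR', 'web')]
```

Each row is now a 5-tuple, so its length matches the 5 StructFields in the schema.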

Upvotes: 2
