Leo

Reputation: 908

Python RDD count function failing

I'm trying to take the count of a Python RDD:

myrdd.count()

Error message:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 27871.0 failed 4 times, most recent failure: Lost task 0.3 in stage 27871.0 (TID 89815) (10.140.0.6 executor 109): org.apache.spark.api.python.PythonException: 'TypeError: cannot unpack non-iterable NoneType object', from <command-1230182905232156>, line 27. Full traceback below:

Full traceback:

---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
<command-1230182905232966> in <module>
----> 1 myrdd.count()

/databricks/spark/python/pyspark/rdd.py in count(self)
   1268         3
   1269         """
-> 1270         return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
   1271 
   1272     def stats(self):

/databricks/spark/python/pyspark/rdd.py in sum(self)
   1257         6.0
   1258         """
-> 1259         return self.mapPartitions(lambda x: [sum(x)]).fold(0, operator.add)
   1260 
   1261     def count(self):

/databricks/spark/python/pyspark/rdd.py in fold(self, zeroValue, op)
   1111         # zeroValue provided to each partition is unique from the one provided
   1112         # to the final reduce call
-> 1113         vals = self.mapPartitions(func).collect()
   1114         return reduce(op, vals, zeroValue)
   1115 

/databricks/spark/python/pyspark/rdd.py in collect(self)
    965         # Default path used in OSS Spark / for non-credential passthrough clusters:
    966         with SCCallSiteSync(self.context) as css:
--> 967             sock_info = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
    968         return list(_load_from_socket(sock_info, self._jrdd_deserializer))
    969 

/databricks/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py in __call__(self, *args)
   1302 
   1303         answer = self.gateway_client.send_command(command)
-> 1304         return_value = get_return_value(
   1305             answer, self.gateway_client, self.target_id, self.name)
   1306 

/databricks/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
    108     def deco(*a, **kw):
    109         try:
--> 110             return f(*a, **kw)
    111         except py4j.protocol.Py4JJavaError as e:
    112             converted = convert_exception(e.java_exception)

/databricks/spark/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    324             value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
    325             if answer[1] == REFERENCE_TYPE:
--> 326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
    328                     format(target_id, ".", name), value)

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 27977.0 failed 4 times, most recent failure: Lost task 0.3 in stage 27977.0 (TID 90220) (10.140.0.9 executor 112): org.apache.spark.api.python.PythonException: 'TypeError: cannot unpack non-iterable NoneType object', from <command-1230182905232156>, line 27. Full traceback below:
Traceback (most recent call last):
  File "/databricks/spark/python/pyspark/worker.py", line 713, in main
    process()
  File "/databricks/spark/python/pyspark/worker.py", line 703, in process
    out_iter = func(split_index, iterator)
  File "/databricks/spark/python/pyspark/rdd.py", line 2953, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "/databricks/spark/python/pyspark/rdd.py", line 2953, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "/databricks/spark/python/pyspark/rdd.py", line 2953, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "/databricks/spark/python/pyspark/rdd.py", line 419, in func
    return f(iterator)
  File "/databricks/spark/python/pyspark/rdd.py", line 1270, in <lambda>
    return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
  File "/databricks/spark/python/pyspark/rdd.py", line 1270, in <genexpr>
    return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
  File "/databricks/spark/python/pyspark/util.py", line 72, in wrapper
    return f(*args, **kwargs)
  File "<command-1230182905232159>", line 4, in <lambda>
  File "<command-1230182905232156>", line 27, in func1
TypeError: cannot unpack non-iterable NoneType object

    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:661)
    at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:813)
    at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:795)
    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:614)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    at scala.collection.Iterator.foreach(Iterator.scala:941)
    at scala.collection.Iterator.foreach$(Iterator.scala:941)
    at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
    at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
    at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
    at scala.collection.TraversableOnce.to(TraversableOnce.scala:315)
    at scala.collection.TraversableOnce.to$(TraversableOnce.scala:313)
    at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
    at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:307)
    at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:307)
    at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
    at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:294)
    at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:288)
    at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
    at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1038)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:68)
    at org.apache.spark.scheduler.Task.doRunTask(Task.scala:148)
    at org.apache.spark.scheduler.Task.run(Task.scala:117)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$10(Executor.scala:732)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1643)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:735)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2766)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2713)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2707)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2707)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1256)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1256)
    at scala.Option.foreach(Option.scala:407)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1256)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2974)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2915)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2903)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:1029)
    at org.apache.spark.SparkContext.runJobInternal(SparkContext.scala:2458)
    at org.apache.spark.rdd.RDD.$anonfun$collect$1(RDD.scala:1036)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:165)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:125)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:419)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:1034)
    at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:260)
    at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:380)
    at py4j.Gateway.invoke(Gateway.java:295)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:251)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.api.python.PythonException: 'TypeError: cannot unpack non-iterable NoneType object', from <command-1230182905232156>, line 27. Full traceback below:
Traceback (most recent call last):
  File "/databricks/spark/python/pyspark/worker.py", line 713, in main
    process()
  File "/databricks/spark/python/pyspark/worker.py", line 703, in process
    out_iter = func(split_index, iterator)
  File "/databricks/spark/python/pyspark/rdd.py", line 2953, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "/databricks/spark/python/pyspark/rdd.py", line 2953, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "/databricks/spark/python/pyspark/rdd.py", line 2953, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "/databricks/spark/python/pyspark/rdd.py", line 419, in func
    return f(iterator)
  File "/databricks/spark/python/pyspark/rdd.py", line 1270, in <lambda>
    return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
  File "/databricks/spark/python/pyspark/rdd.py", line 1270, in <genexpr>
    return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
  File "/databricks/spark/python/pyspark/util.py", line 72, in wrapper
    return f(*args, **kwargs)
  File "<command-1230182905232159>", line 4, in <lambda>
  File "<command-1230182905232156>", line 27, in func1
TypeError: cannot unpack non-iterable NoneType object

    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:661)
    at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:813)
    at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:795)
    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:614)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    at scala.collection.Iterator.foreach(Iterator.scala:941)
    at scala.collection.Iterator.foreach$(Iterator.scala:941)
    at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
    at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
    at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
    at scala.collection.TraversableOnce.to(TraversableOnce.scala:315)
    at scala.collection.TraversableOnce.to$(TraversableOnce.scala:313)
    at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
    at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:307)
    at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:307)
    at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
    at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:294)
    at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:288)
    at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
    at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1038)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:68)
    at org.apache.spark.scheduler.Task.doRunTask(Task.scala:148)
    at org.apache.spark.scheduler.Task.run(Task.scala:117)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$10(Executor.scala:732)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1643)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:735)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

The code below works if I take only a few records:

myrdd.take(10)

Result:

('AB212312123',
  -28.520553588867188,
  32.144439697265625,
  10003424,
  datetime.datetime(2021, 8, 9, 9, 4, 45),
  0.0,
  0.0,
  4662.440894206737,
  'South Africa',
  'ZA',
  'KwaZulu-Natal',
  '(South) Uthungulu DC',
  '',
  'Teza, South Africa')

The same code works in some environments, so I suspect the data is causing the issue. How can I find which record exactly is causing it?

Upvotes: 0

Views: 1048

Answers (1)

Steven

Reputation: 15318

Without knowing all the transformations you apply to the RDD before the count, it is difficult to know what is causing the issue.

Technically, you should have 3 steps in your process:

  1. You acquire your data, e.g. sc.textFile or equivalent.
  2. You apply some transformations, e.g. rdd = rdd.map(...) or rdd = rdd.reduceByKey(...).
  3. You write your data (or run another action). In your case, you are trying a count (see the sketch after this list).
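
For illustration, here is a minimal sketch of such a pipeline. The path, parsing, and key logic are made up, since the question does not show the actual transformations:

# Hypothetical three-step RDD pipeline (path and functions are illustrative only)

# 1. Acquire the data
rdd = sc.textFile("/path/to/input.csv")

# 2. Apply transformations (lazy: nothing is executed yet)
parsed = rdd.map(lambda line: line.split(","))
pairs = parsed.map(lambda cols: (cols[0], cols))
deduped = pairs.reduceByKey(lambda a, b: a)   # e.g. keep one row per key

# 3. Run an action, which actually triggers the computation
print(deduped.count())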

The 3rd step is failing, which means the issue probably comes from step 1 or 2. But RDDs are lazy, so you cannot see the error easily.
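
A toy example of that laziness (not your data): a record that will break a transformation raises nothing until the action runs.

# Toy example: the bad record only blows up when an action executes
rdd = sc.parallelize([("a", 1), ("b", 2), None])

def double(record):
    key, value = record        # "cannot unpack non-iterable NoneType object" when record is None
    return (key, value * 2)

pairs = rdd.map(double)        # still lazy: no error is raised here
pairs.count()                  # the TypeError only surfaces now, inside a worker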

Start with a count at step 1. Do you read your data properly? Good!
Next, do a count after each transformation you apply to your RDD. That is painful and time-consuming, but it is a really good way to find out where your transformations are failing.
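
Sketched with hypothetical stage names (yours will differ), the idea looks like this:

# Insert a count() after each stage to localize the failure
rdd = sc.textFile("/path/to/input")          # step 1
print("after read:", rdd.count())            # if this fails, the read itself is the problem

step2 = rdd.map(parse_line)                  # parse_line is a placeholder for your own function
print("after parse:", step2.count())         # a failure here points at parse_line

step3 = step2.reduceByKey(merge_records)     # merge_records is also a placeholder
print("after reduce:", step3.count())        # and so on, one count per transformation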

Also, try to use DataFrames; they're more efficient and easier to debug than RDDs.
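
For example, if your RDD really contains flat tuples like the one in your take(10) output, something like this could move it to a DataFrame (the column names here are made up, and it assumes the upstream transformations succeed):

# Hypothetical conversion: toDF() infers the types from the tuples in the RDD
df = myrdd.toDF(["id", "lat", "lon", "code", "ts", "a", "b", "dist",
                 "country", "country_code", "province", "district",
                 "extra", "place"])
df.count()      # DataFrame actions tend to give more structured error messages
df.show(10)     # and inspecting the data is easier than with raw tuples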

Upvotes: 2
