Hong

Reputation: 18521

How to find a clue in a "Runtime aborting..." crash

An app running on a specific Android TV device periodically has the following crash:

Cmdline: com.mydomain.myapp
pid: 28890, tid: 3124, name: CodecLooper  >>> com.mydomain.myapp <<<
Davey! duration=754ms; Flags=0, FrameTimelineVsyncId=15812495, IntendedVsync=146790515172662, Vsync=146790880252007, InputEventId=0, HandleInputStart=146790883087631, AnimationStart=146790883088506, PerformTraversalsStart=146790883617881, DrawStart=146790899276008, FrameDeadline=146790546918692, FrameInterval=146790883080923, FrameStartTime=15873015, SyncQueued=146790906193467, SyncStart=146791117319030, IssueDrawCommandsStart=146791122452364, SwapBuffers=146791474395774, FrameCompleted=146791480789108, DequeueBufferDuration=340854242, QueueBufferDuration=5631208, GpuCompleted=146791476995983, SwapBuffersCompleted=146791480789108, DisplayPresentTime=0, 


runtime.cc:669] Runtime aborting...
runtime.cc:669] Dumping all threads without mutator lock held
runtime.cc:669] All threads:
runtime.cc:669] DALVIK THREADS (260):
runtime.cc:669] "pool-139-thread-4" prio=5 tid=402 Runnable
runtime.cc:669]   | group="" sCount=0 ucsCount=0 flags=0 obj=0x162037f0 self=0xb4000075be884a40
runtime.cc:669]   | sysTid=3110 nice=0 cgrp=default sched=0/0 handle=0x721b9cacb0
runtime.cc:669]   | state=R schedstat=( 276302280598 789549905823 2816620 ) utm=22266 stm=5363 core=4 HZ=100
runtime.cc:669]   | stack=0x721b8c7000-0x721b8c9000 stackSize=1039KB
runtime.cc:669]   | held mutexes= "abort lock" "mutator lock"(shared held)
runtime.cc:669]   native: #00 pc 000000000055f850  /apex/com.android.art/lib64/libart.so (art::DumpNativeStack(std::__1::basic_ostream<char, std::__1::char_traits<char> >&, int, BacktraceMap*, char const*, art::ArtMethod*, void*, bool)+140)
runtime.cc:669]   native: #01 pc 0000000000676270  /apex/com.android.art/lib64/libart.so (art::Thread::DumpStack(std::__1::basic_ostream<char, std::__1::char_traits<char> >&, bool, BacktraceMap*, bool) const+360)
runtime.cc:669]   native: #02 pc 0000000000693f4c  /apex/com.android.art/lib64/libart.so (art::DumpCheckpoint::Run(art::Thread*)+920)
runtime.cc:669]   native: #03 pc 000000000068da70  /apex/com.android.art/lib64/libart.so (art::ThreadList::RunCheckpoint(art::Closure*, art::Closure*)+520)
runtime.cc:669]   native: #04 pc 000000000068cc84  /apex/com.android.art/lib64/libart.so (art::ThreadList::Dump(std::__1::basic_ostream<char, std::__1::char_traits<char> >&, bool)+1464)
runtime.cc:669]   native: #05 pc 0000000000626d68  /apex/com.android.art/lib64/libart.so (art::Runtime::Abort(char const*)+2164)
runtime.cc:669]   native: #06 pc 000000000001595c  /system/lib64/libbase.so (android::base::SetAborter(std::__1::function<void (char const*)>&&)::$_3::__invoke(char const*)+76)
runtime.cc:669]   native: #07 pc 0000000000006dc8  /system/lib64/liblog.so (__android_log_assert+308)
runtime.cc:669]   native: #08 pc 000000000001ad34  /system/lib64/libstagefright_foundation.so (android::ALooperRoster::registerHandler(android::sp<android::ALooper> const&, android::sp<android::AHandler> const&)+796)
runtime.cc:669]   native: #09 pc 000000000001966c  /system/lib64/libstagefright_foundation.so (android::ALooper::registerHandler(android::sp<android::AHandler> const&)+136)
runtime.cc:669]   native: #10 pc 000000000010c470  /system/lib64/libstagefright.so (android::MediaCodec::init(android::AString const&)+1556)
runtime.cc:669]   native: #11 pc 0000000000049130  /system/lib64/libmedia_jni.so (android_media_MediaCodec_reset(_JNIEnv*, _jobject*)+328)
runtime.cc:669]   at android.media.MediaCodec.native_reset(Native method)
runtime.cc:669]   at android.media.MediaCodec.reset(MediaCodec.java:1987)

The above is the one thread in the dump that looks like a suspect. The entire dump is over 2,000 lines, and most of the threads are sleeping.
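For context, the Java frames at the bottom of the dump correspond to a code path roughly like the one below. This is only a simplified, hypothetical illustration (the class and method names are made up); the point is that MediaCodec.reset() runs on a pooled worker thread ("pool-139-thread-4" in the dump), which re-initializes the codec natively (MediaCodec::init -> ALooper::registerHandler).

import android.media.MediaCodec;

import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch of the call path seen in the dump, not the actual app code.
class CodecWorker {
    private final ExecutorService executor = Executors.newSingleThreadExecutor();
    private MediaCodec codec;

    CodecWorker(String mimeType) throws IOException {
        codec = MediaCodec.createDecoderByType(mimeType);
    }

    // Called when the codec needs to be reused after an error.
    void resetCodecAsync() {
        executor.execute(() -> {
            // This is the frame that appears in the crash:
            // android.media.MediaCodec.reset(MediaCodec.java:1987)
            codec.reset();
        });
    }
}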

About 1 second before the above logcat entries, there are many entries like the following:

#17(BLAST Consumer)17](id:70da00000011,api:3,p:28890,c:28890) detachBuffer: slot 39 is not owned by the producer (state = FREE)
#17(BLAST Consumer)17](id:70da00000011,api:3,p:28890,c:28890) detachBuffer: slot 40 is not owned by the producer (state = FREE)
#17(BLAST Consumer)17](id:70da00000011,api:3,p:28890,c:28890) detachBuffer: slot 41 is not owned by the producer (state = FREE)
#17(BLAST Consumer)17](id:70da00000011,api:3,p:28890,c:28890) detachBuffer: slot 42 is not owned by the producer (state = FREE)
#17(BLAST Consumer)17](id:70da00000011,api:3,p:28890,c:28890) detachBuffer: slot 43 is not owned by the producer (state = FREE)
...
#17(BLAST Consumer)17](id:70da00000011,api:3,p:28890,c:28890) detachBuffer: slot 63 is not owned by the producer (state = FREE)

I wonder if these entries are related to the crash. To put the question plainly:

Could anyone offer a tip on how to track down the culprit of this crash?
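In case it helps, here is a minimal sketch of the kind of instrumentation I am considering adding to gather more data before the abort fires: counting how many MediaCodec instances the app has created but not yet released, so the count can be checked in logcat shortly before the "Runtime aborting..." entries. The MediaCodecTracker class and its methods are hypothetical; nothing like it exists in the app yet.

import android.media.MediaCodec;
import android.util.Log;

import java.io.IOException;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical diagnostic wrapper: logs the number of live MediaCodec instances
// at every create/release so the count can be correlated with the crash time.
final class MediaCodecTracker {
    private static final String TAG = "MediaCodecTracker";
    private static final AtomicInteger liveCodecs = new AtomicInteger();

    static MediaCodec createDecoder(String mimeType) throws IOException {
        MediaCodec codec = MediaCodec.createDecoderByType(mimeType);
        Log.i(TAG, "codec created, live=" + liveCodecs.incrementAndGet());
        return codec;
    }

    static void release(MediaCodec codec) {
        codec.release();
        Log.i(TAG, "codec released, live=" + liveCodecs.decrementAndGet());
    }

    private MediaCodecTracker() {}
}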

Upvotes: 1

Views: 279

Answers (0)
