Adrian Petrescu

Reputation: 18029

Why did RecognitionListener stop working in JellyBean?

For everyone using Android's voice recognition API, there used to be a handy RecognitionListener you could register that would push various events to your callbacks. In particular, there was the following onBufferReceived(byte[]) method:

public abstract void onBufferReceived (byte[] buffer)

Since: API Level 8

More sound has been received. The purpose of this function is to allow giving feedback to the user regarding the captured audio. There is no guarantee that this method will be called.

Parameters
buffer — a buffer containing a sequence of big-endian 16-bit integers representing a single channel audio stream. The sample rate is implementation dependent.

Although the method explicitly states that there is no guarantee it will be called, in ICS and prior it would effectively be called 100% of the time: regularly enough, at least, that by concatenating all the bytes received this way, you could reconstruct the entire audio stream and play it back.
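For reference, here is a minimal sketch of that pattern (the class name is mine; the listener would be registered via SpeechRecognizer.setRecognitionListener() before calling startListening()):

import java.io.ByteArrayOutputStream;

import android.os.Bundle;
import android.speech.RecognitionListener;

// Accumulates every buffer pushed to onBufferReceived() so the raw PCM
// stream can be replayed later. On ICS and earlier this callback fired
// reliably; on Jelly Bean it never does.
public class BufferCollectingListener implements RecognitionListener {
    private final ByteArrayOutputStream audio = new ByteArrayOutputStream();

    @Override
    public void onBufferReceived(byte[] buffer) {
        // 16-bit big-endian mono PCM; sample rate is implementation-dependent.
        audio.write(buffer, 0, buffer.length);
    }

    public byte[] getAudio() {
        return audio.toByteArray();
    }

    // The remaining callbacks are no-ops for this sketch.
    @Override public void onReadyForSpeech(Bundle params) {}
    @Override public void onBeginningOfSpeech() {}
    @Override public void onRmsChanged(float rmsdB) {}
    @Override public void onEndOfSpeech() {}
    @Override public void onError(int error) {}
    @Override public void onResults(Bundle results) {}
    @Override public void onPartialResults(Bundle partialResults) {}
    @Override public void onEvent(int eventType, Bundle params) {}
}

On ICS, getAudio() returns the complete audio stream once recognition finishes; on Jelly Bean it stays empty, which is exactly the regression described here.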

For some reason, however, in the Jelly Bean SDK this magically stopped working. There's no notice of deprecation and the code still compiles, but onBufferReceived() is now never called. Technically this isn't breaking their API (since it says there's "no guarantee" the method will be called), but clearly this is a breaking change for a lot of code that depended on this behaviour.

Does anybody know why this functionality was disabled, and if there's a way to replicate its behaviour on Jellybean?

Clarification: I realize that the whole RecognizerIntent thing is an interface with multiple implementations (including some available on the Play Store), and that they can each choose what to do with RecognitionListener. I am specifically referring to the default Google implementation that the vast majority of Jellybean phones use.

Upvotes: 14

Views: 2634

Answers (4)

Mike6679

Reputation: 6137

I have a service that implements RecognitionListener, and I also override the onBufferReceived(byte[]) method. I was investigating why speech recognition is much slower to call onResults() on ICS and earlier. The only difference I could find was that onBufferReceived() is called on phones running ICS or earlier. On Jelly Bean, onBufferReceived() is never called and onResults() is called significantly faster, and I think it's because of the overhead of calling onBufferReceived() every second or millisecond. Maybe that's why they did away with onBufferReceived()?

Upvotes: 0

stevietheTV

Reputation: 512

I ran into the same problem. The reason I didn't just accept that "this does not work" was that Google Now's "note to self" records the audio and sends it to you. What I found in logcat while running the "note to self" operation was:

02-20 14:04:59.664: I/AudioService(525):  AudioFocus  requestAudioFocus() from android.media.AudioManager@42439ca8com.google.android.voicesearch.audio.ByteArrayPlayer$1@424cca50

02-20 14:04:59.754: I/AbstractCardController.SelfNoteController(8675): #attach
02-20 14:05:01.006: I/AudioService(525):  AudioFocus  abandonAudioFocus() from android.media.AudioManager@42439ca8com.google.android.voicesearch.audio.ByteArrayPlayer$1@424cca50

02-20 14:05:05.791: I/ActivityManager(525): START u0 {act=com.google.android.gm.action.AUTO_SEND typ=text/plain cmp=com.google.android.gm/.AutoSendActivity (has extras)} from pid 8675
02-20 14:05:05.821: I/AbstractCardView.SelfNoteCard(8675): #onViewDetachedFromWindow

This makes me believe that Google releases the audio focus from Google Now (the RecognizerIntent), and that they use an audio recorder or something similar when the note-to-self tag appears in onPartialResults. I cannot confirm this; has anyone else tried to make this work?
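For readers mapping those log lines to the SDK: they correspond to the long-standing AudioManager focus API. The snippet below only illustrates that requestAudioFocus()/abandonAudioFocus() pattern; the class and method names are mine, not recovered from the Google Now app.

import android.content.Context;
import android.media.AudioManager;

public class AudioFocusSketch {
    private final AudioManager.OnAudioFocusChangeListener listener =
            new AudioManager.OnAudioFocusChangeListener() {
                @Override
                public void onAudioFocusChange(int focusChange) {
                    // react to focus gain/loss here
                }
            };

    public void playThenRelease(Context context) {
        AudioManager am =
                (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);

        // Logged as "AudioFocus requestAudioFocus() from ...":
        am.requestAudioFocus(listener, AudioManager.STREAM_MUSIC,
                AudioManager.AUDIOFOCUS_GAIN_TRANSIENT);

        // ... play back the captured audio ...

        // Logged as "AudioFocus abandonAudioFocus() from ...":
        am.abandonAudioFocus(listener);
    }
}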

Upvotes: 0

Craig Hamilton

Reputation: 21

I too was using the onBufferReceived method and was disappointed that the (non-guaranteed) call to it was dropped in Jelly Bean. Well, if we can't grab the audio with onBufferReceived(), maybe we can run an AudioRecord simultaneously with the voice recognition. Has anyone tried this? If not, I'll give it a whirl and report back.
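For anyone who wants to try the experiment, here is a rough, untested sketch of what running an AudioRecord alongside the recognizer could look like. The class name and 16 kHz sample rate are assumptions, and on many devices the microphone is exclusive, so the second client may fail to initialize or read only silence.

import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

public class ParallelRecorder {
    private static final int SAMPLE_RATE = 16000; // assumed; match the recognizer if you can

    private volatile boolean running;

    public void start() {
        final int minBuf = AudioRecord.getMinBufferSize(SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        final AudioRecord record = new AudioRecord(MediaRecorder.AudioSource.MIC,
                SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT, minBuf * 4);

        if (record.getState() != AudioRecord.STATE_INITIALIZED) {
            return; // the mic is likely held by the recognizer
        }

        running = true;
        new Thread(new Runnable() {
            @Override
            public void run() {
                byte[] buffer = new byte[minBuf];
                record.startRecording();
                while (running) {
                    int read = record.read(buffer, 0, buffer.length);
                    // write `read` bytes to a file or stream here
                }
                record.stop();
                record.release();
            }
        }).start();
    }

    public void stop() {
        running = false;
    }
}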

Upvotes: 2

AndyM

Reputation: 216

Google does not call this method in their Jelly Bean speech app (QuickSearchBox). It's simply not in the code. Unless there is an official comment from a Google engineer, I cannot give a definitive answer as to why they did this. I searched the developer forums but did not see any commentary about this decision.

The ICS default for speech recognition comes from Google's VoiceSearch.apk. You can decompile this apk and find that there is an Activity to handle an intent with the action *android.speech.action.RECOGNIZE_SPEECH*. In this apk I searched for "onBufferReceived" and found a reference to it in com.google.android.voicesearch.GoogleRecognitionService$RecognitionCallback.

With Jelly Bean, Google renamed VoiceSearch.apk to QuickSearch.apk and made a lot of new additions to the app (e.g. offline dictation). You would expect to still find an onBufferReceived call, but for some reason it is completely gone.

Upvotes: 9
