Commit messages
* … a more robust solution instead.
* … Change-Id: I8ac7bd4c1db85058f863bcfaf5ee30212644b2bd
* … to fix the build.
* … data checks."
* … voice data checks.
* … RecognizerIntent.ACTION_RECOGNIZE_SPEECH when finding a voice
  recognition service.
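  A minimal sketch of the pattern this enables, assuming an app that wants to
  check for a recognizer before showing a microphone button (names and context
  are illustrative):

      // Query the package manager for activities handling the recognition
      // intent; if none resolve, hide or disable the voice UI.
      PackageManager pm = context.getPackageManager();
      List<ResolveInfo> activities = pm.queryIntentActivities(
              new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH), 0);
      boolean recognizerAvailable = !activities.isEmpty();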
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
to be used by anyone implementing a voice recognition service. Also define
a new <recognition-service> styleable to be used in such service's metadata
xml.
Still to do: Change VoiceSearch's GoogleRecognitionService to respond to this
intent, and actually use this intent instead of ACTION_RECOGNIZE_SPEECH here
in RecognitionService.
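  For reference, the shape of a service implementing this contract, sketched
  against the android.speech.RecognitionService API as it later shipped (the
  class name and bodies are hypothetical):

      public class MyRecognitionService extends RecognitionService {
          @Override
          protected void onStartListening(Intent recognizerIntent, Callback listener) {
              // Begin capturing and decoding audio; deliver hypotheses via
              // listener.partialResults(...) and listener.results(...).
          }

          @Override
          protected void onStopListening(Callback listener) {
              // Stop capturing audio, but finish recognizing what was heard.
          }

          @Override
          protected void onCancel(Callback listener) {
              // Abort recognition and discard any pending results.
          }
      }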
|
|/
|
|
|
|
|
|
|
|
| |
for voice recognition on the device. Right now this just queries
the package manager at boot and finds the (hopefully) single
available recognizer.
TODO: Add an attribute to let recognition services expose a settings
activity, and expose the settings activity of the chosen recognition
service in the system settings for voice input & output.
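  Presumably something along these lines (a sketch, not the actual system
  code; RecognitionService.SERVICE_INTERFACE is the action such services
  handle):

      // Find installed recognition services; GET_META_DATA also loads the
      // <recognition-service> metadata xml mentioned above.
      List<ResolveInfo> services = context.getPackageManager().queryIntentServices(
              new Intent(RecognitionService.SERVICE_INTERFACE),
              PackageManager.GET_META_DATA);
      ServiceInfo recognizer = services.isEmpty()
              ? null
              : services.get(0).serviceInfo;  // the (hopefully) single recognizer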
* … onPartialResults()
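  A sketch of consuming partial results through the public SpeechRecognizer
  API (only the relevant callbacks are fleshed out):

      SpeechRecognizer recognizer = SpeechRecognizer.createSpeechRecognizer(context);
      recognizer.setRecognitionListener(new RecognitionListener() {
          @Override public void onPartialResults(Bundle partialResults) {
              // Incremental hypotheses arrive here while the user is speaking.
              ArrayList<String> hypotheses = partialResults
                      .getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
          }
          @Override public void onResults(Bundle results) { /* final results */ }
          // Remaining callbacks left empty for brevity.
          @Override public void onReadyForSpeech(Bundle params) {}
          @Override public void onBeginningOfSpeech() {}
          @Override public void onRmsChanged(float rmsdB) {}
          @Override public void onBufferReceived(byte[] buffer) {}
          @Override public void onEndOfSpeech() {}
          @Override public void onError(int error) {}
          @Override public void onEvent(int eventType, Bundle params) {}
      });
      Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
      intent.putExtra(RecognizerIntent.EXTRA_PARTIAL_RESULTS, true); // opt in
      recognizer.startListening(intent);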
* … Reinstalling VoiceIME created a problem because RecognitionService expected
  the first command to be setListener; in this version, that command is added
  if the connection is broken.
  Change-Id: Ia102fc1843053e2bdd330b380c2685a1227081b2
* … Change-Id: Ia2c13d4c7993d646956090aa5c56d1a441af9e5a
* … Specifically point out that startActivity() is not supported for
  ACTION_RECOGNIZE_SPEECH, and make the documentation on EXTRA_RESULTS
  clearer: it is part of the results, not the request.
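  In other words, callers must go through startActivityForResult() and read
  EXTRA_RESULTS off the returned intent; a sketch (REQUEST_SPEECH is a
  hypothetical request code):

      @Override
      protected void onActivityResult(int requestCode, int resultCode, Intent data) {
          super.onActivityResult(requestCode, resultCode, data);
          if (requestCode == REQUEST_SPEECH && resultCode == RESULT_OK) {
              // EXTRA_RESULTS lives on the *result* intent, not the request.
              ArrayList<String> matches =
                      data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
          }
      }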
* … the base URL to be used when interpreting HTML results given in
  EXTRA_VOICE_SEARCH_RESULT_HTML.
* … This will not be unhidden for Froyo, as nothing will implement it until
  later, but I wanted to have the definition explicit in the framework.
* … Change-Id: Ib5068fb6d42b6752d09b0828964b6cbe92d015d3
* … (or other voice search implementations) can use to implement settings in
  the system settings.
* … from system settings. For now it'll just be triggered from within the
  voice search app if you choose the settings menu item.
  Need to unhide this before we can be fully unbundled for voice search.
* … interfere with one another.
* … engine to use for text-to-speech.
* … one in eclair.
* Merge commit '3e8c0ee84223328f3e8e5b430aa719969cd4f38d'
  * commit '3e8c0ee84223328f3e8e5b430aa719969cd4f38d':
    TextToSpeech javadoc update.
* Merge commit 'b5308a7051fedacf289470c8a7e21b63af9d4db8' into eclair
  * commit 'b5308a7051fedacf289470c8a7e21b63af9d4db8':
    TextToSpeech javadoc update.
* … files. This is required for bug 2022435.
  Correct the javadoc where two intents were mislabelled as broadcast actions
  but were activity actions.
* …
  * changes:
    Propagate info about whether a "call" command was issued in
    RecognitionResult.
* … This is needed for the fix of http://b/2018041.
* … A race condition is encountered when an application invokes shutdown()
  on its TextToSpeech object while it still has speak() requests running.
  Since the TTS service destructor releases the synthesizer resources and
  sets the corresponding synth reference to null, an NPE was observed.
  The fix consists of catching NPEs whenever the sNativeSynth object is
  accessed, and returning the matching error for the call.
  This change is a "low risk" version of the fix for bug 2025765 (same
  issue), which was reverted because it was higher risk than this CL:
  it affected the logic of each call to sNativeSynth. This CL only sets
  an error code when an NPE is fired because sNativeSynth is null.
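  A sketch of the guarding pattern described, with hypothetical names standing
  in for the service internals (only sNativeSynth and the general shape come
  from the description above):

      private int speakInternal(String text) {
          try {
              // sNativeSynth can be nulled by the service destructor at any time.
              return sNativeSynth.speak(text);
          } catch (NullPointerException e) {
              // Engine already released: return the matching error for the call
              // instead of crashing the caller.
              return TextToSpeech.ERROR;
          }
      }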
* … Add new intent and matching extra to signal the completion of the
  language pack installer. This is used by CL 20513.
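  This is presumably the intent that surfaced publicly as
  TextToSpeech.Engine.ACTION_TTS_DATA_INSTALLED; assuming that mapping, a
  client could listen for it like so:

      BroadcastReceiver receiver = new BroadcastReceiver() {
          @Override public void onReceive(Context context, Intent intent) {
              // The matching extra reports whether the install succeeded.
              boolean installed = intent.getBooleanExtra(
                      TextToSpeech.Engine.EXTRA_TTS_DATA_INSTALLED, false);
          }
      };
      context.registerReceiver(receiver,
              new IntentFilter(TextToSpeech.Engine.ACTION_TTS_DATA_INSTALLED));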
* … Removed the TTS_ prefix in the TextToSpeech class to follow the standard
  naming convention.
  Moved the TTS-related intents from the Intent class to TextToSpeech and
  TextToSpeech.Engine.
  Renamed the TextToSpeech.Engine constants that are used as extras for the
  ACTION_TTS_CHECK_TTS_DATA intent to prefix them with EXTRA_.
  Cleaned up the other TextToSpeech.Engine constants to remove superfluous
  mentions of "TTS" in the name.
* … which works around the bug where a language cannot be set if the default
  language (which is loaded upon initialization) isn't eng-USA.
* … waiting to change the language right before a call to speak can
  put the engine in an unstable state.
* … for all current TextToSpeech instances by only caching the language
  value, so it is used with each subsequent utterance for this instance.
  Synchronize calls to the engine around a global mutex, since the engine
  isn't thread-safe, except for the stop() call, which is meant to interrupt
  the synthesis loop.
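  A sketch of that locking discipline (names hypothetical; only the rule that
  stop() bypasses the lock comes from the description):

      private static final Object sEngineLock = new Object();

      public int speak(String text) {
          synchronized (sEngineLock) {  // the engine is not thread-safe
              return nativeSpeak(text);
          }
      }

      public int stop() {
          // Deliberately not synchronized: stop() must be able to interrupt a
          // synthesis loop that is currently holding the lock.
          return nativeStop();
      }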
* … this method is needed to add earcons; otherwise, there is
  nothing for playEarcon to play.
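  Usage pairs the two calls; a sketch using the original HashMap-based API
  (the earcon name, package and resource are illustrative):

      // Register the earcon once, then play it by name.
      tts.addEarcon("[tick]", "com.example.app", R.raw.tick);
      tts.playEarcon("[tick]", TextToSpeech.QUEUE_ADD, null);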
* … extras for the android.intent.action.CHECK_TTS_DATA intent, and the key
  values for the parameter hashmap that can be passed by an application in
  speak(), synthesizeToFile(), playSilence() and playEarcon().
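  The resulting check-and-install flow, sketched with the constants under
  their later home in TextToSpeech.Engine (CHECK_DATA_REQUEST is a
  hypothetical request code):

      // Ask the current engine whether its voice data is present.
      startActivityForResult(
              new Intent(TextToSpeech.Engine.ACTION_CHECK_TTS_DATA),
              CHECK_DATA_REQUEST);

      // Later, in onActivityResult():
      if (resultCode == TextToSpeech.Engine.CHECK_VOICE_DATA_PASS) {
          // Safe to create a TextToSpeech instance.
      } else {
          // Send the user to the installer.
          startActivity(new Intent(TextToSpeech.Engine.ACTION_INSTALL_TTS_DATA));
      }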
* … ID specified as a hashmap param in the synthesis calls.
  Fix a bug where the cached parameters were not passed to the service
  when synthesizing to a file.
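  Sketch of tagging an utterance and being told when it completes (legacy
  HashMap-based API; the ID string is arbitrary):

      tts.setOnUtteranceCompletedListener(new TextToSpeech.OnUtteranceCompletedListener() {
          @Override public void onUtteranceCompleted(String utteranceId) {
              // "greeting-1" is delivered here once synthesis finishes.
          }
      });

      HashMap<String, String> params = new HashMap<String, String>();
      params.put(TextToSpeech.Engine.KEY_PARAM_UTTERANCE_ID, "greeting-1");
      tts.speak("Hello", TextToSpeech.QUEUE_ADD, params);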
* … the default language is determined by the current Locale, not a hardcoded
  value. Add a value for the default TTS engine to use.
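  That is, roughly (a sketch):

      // Default to the device locale rather than a hardcoded language.
      int result = tts.setLanguage(Locale.getDefault());
      if (result == TextToSpeech.LANG_MISSING_DATA
              || result == TextToSpeech.LANG_NOT_SUPPORTED) {
          // Fall back, or send the user to ACTION_INSTALL_TTS_DATA.
      }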
* … language settings with the current Locale.
* … installed already.
* … for returning this information in an Intent from checkVoiceData.