|Publication date||Feb. 8, 2000|
|Filing date||Dec. 12, 1996|
|Priority date||Dec. 12, 1996|
|Also published as||US6591237, US20010012997|
|Publication number||08763999, 763999, US 6023676 A, US 6023676A, US-A-6023676, US6023676 A, US6023676A|
|Original Assignee||Dspc Israel, Ltd.|
The present invention relates to speech recognition systems generally and to those which are activated by a keyword in particular.
Speech recognition of isolated words is used for voice-activated command and control applications. There are usually two modes of activating the recognition system, an "open microphone mode" and a "button activated" or "push-to-talk" mode. In the open microphone mode, the recognizer continuously searches for a match between the acoustic input and the vocabulary of commands which form part of the recognizer. In the button activated mode, the recognizer searches for a match only after the user pushes a button indicating that a command is expected within the next few seconds.
Many speech recognition applications have selected the button activated mode because recognizers perform better on its task: "Given the utterance, which of my N known words is most likely the one that was said?" It is far harder for a recognizer to perform the open microphone task: "Does this utterance correspond to one of my N known words at all?" The difference stems from the variability of the environment and of the manner of speaking relative to the originally trained (or "known") words.
In each case, recognition scores indicating how close the utterance is to each of the known words are determined. With the "open" vocabulary of the open microphone mode, the recognition scores are compared to an absolute threshold and are therefore affected by significant noise. With the "closed" vocabulary of the button activated mode, however, the task is only to determine which word was said, so the recognition scores are compared to each other and the best relative score is selected. Since noise generally affects all of the scores in the same way, the scores rise and fall together and the resulting comparison is not affected by this variability.
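By way of illustration only, the following sketch contrasts the two decision rules. It is not taken from the patent; it assumes distance-like scores where lower is better, and all names are invented for the example.

```python
# A minimal sketch of the two decision rules; assumes distance-like
# scores (lower = better match); all names are invented for the example.

def open_microphone_decision(scores, absolute_threshold):
    """Open vocabulary: the best-matching word must also beat an absolute
    threshold, so noise that inflates all scores degrades the decision."""
    best_word = min(scores, key=scores.get)
    return best_word if scores[best_word] < absolute_threshold else None

def button_activated_decision(scores):
    """Closed vocabulary: an utterance is known to be a command, so only
    the relative ordering matters; a common noise offset cancels out."""
    return min(scores, key=scores.get)
```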
Unfortunately, the button activated mode is not fully hands-free since the user has to push a button prior to saying the command.
A known method for improving the acceptance/rejection decision in the open microphone mode is to use background or filler templates which model background or non-relevant speech. The background or filler templates are typically produced from a large database of speech utterances which are not part of the particular vocabulary of the recognizer.
Such a method is described in the article "Word Spotting From Continuous Speech Utterances" by R. C. Rose, in Automatic Speech and Speaker Recognition: Advanced Topics, edited by C. H. Lee, F. K. Soong and K. K. Paliwal, Kluwer Academic Publishers, 1996, pp. 303-329. This method is relevant to Hidden Markov Model (HMM) based, speaker independent recognition systems, which are described in "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition" by L. R. Rabiner, Proceedings of the IEEE, Vol. 77, No. 2, Feb. 1989, pp. 257-286. Both articles are incorporated herein by reference.
In the open microphone mode, the standard measure of the rejection/acceptance capability of a recognition system is the rate of false alarms per vocabulary word for a given rate of detection. In other words, for a given rate of true recognition of a vocabulary word, how many times does the system claim that a vocabulary word was said when it was not? Unfortunately, the more words in the vocabulary, the more false alarms there are and the more of a nuisance the system is to the user. Designers have thus tried to reduce the number of vocabulary words in the open microphone mode.
One method to do so without limiting the functionality of the recognition system is to separate the recognition operation into two steps. This method is described in section 6.2 of the article by R. C. Rose and involves using a single or a few keywords, which are recognized in open microphone mode, as an activation element. Once the uttered keyword has been recognized, the method operates in the closed vocabulary mode, selecting the next utterance as one of the words in the closed vocabulary. In effect, the keywords of this method replace the button of the button activation mode described hereinabove.
The above-described two step method provides hands-free operation, as in the open microphone mode, but the number of false alarms is reduced since the vocabulary in the open microphone mode is reduced. Such a mode of operation is natural for menu-type operations where the user activates one of a few functions with a keyword and only afterwards says one of the commands which are relevant to the function.
The present invention utilizes two types of templates, that of a keyword (called herein a "keyword template") and those of a closed vocabulary (called herein "vocabulary templates").
It is an object of the present invention to provide a keyword recognition system for speaker dependent, dynamic time warping (DTW) recognition systems. The present invention uses all of the trained templates in the system (keyword and vocabulary) to determine if an utterance is a keyword utterance or not.
Initially, only the keyword template is utilized, as a first acceptance criterion. If that criterion is passed, the utterance is compared to all of the vocabulary templates and their match scores are recorded. Only if the match to the keyword is better than all of the matches to the vocabulary templates is the utterance accepted as a keyword utterance. At that point, a listening window is opened and the following utterance is compared to each of the templates of the closed vocabulary. Thus, the present invention utilizes the vocabulary templates as filler templates.
There is therefore provided, in accordance with a preferred embodiment of the present invention, a system and method for recognizing an utterance as a keyword. The system activates a speaker dependent recognition system on a plurality of vocabulary words and includes a pattern matcher and a criterion determiner. The pattern matcher initially matches the utterance to a keyword template and produces a corresponding keyword score indicating the quality of the match between the utterance and the keyword template. The pattern matcher also matches the utterance to a plurality of vocabulary templates, the result being a corresponding plurality of vocabulary scores, each indicating the quality of the match between the utterance and one of the vocabulary templates. The criterion determiner selects the utterance as the keyword if the keyword score indicates a significant match to the keyword template and if the keyword score indicates a better match than do all of the vocabulary scores. Once the utterance is accepted as the keyword, the criterion determiner activates the speaker dependent recognition system to match at least a second utterance to the words of the closed vocabulary.
Moreover, in accordance with a preferred embodiment of the present invention, the pattern matcher performs dynamic time warping between the utterance and the relevant one of the templates.
Additionally, in accordance with a preferred embodiment of the present invention, the criterion determiner opens a listening window once the utterance is accepted as the keyword thereby to recognize the words of the closed vocabulary. The pattern matcher then matches at least the second utterance to the vocabulary templates thereby to determine which word of the closed vocabulary was spoken in the second utterance.
Further, in accordance with a preferred embodiment of the present invention, the present invention also includes a preprocessing operation which selects suitable vocabulary templates for use in the keyword recognition. The suitable vocabulary templates are those which are different, by a predetermined criterion, from the keyword template.
Still further, in accordance with a further preferred embodiment of the present invention, there can be more than one keyword template where each is associated with its own vocabulary. The present invention determines which keyword is spoken and accepts the utterance only if the keyword score is large enough and better than the score of the utterance to at least a portion of all of the vocabulary words. The present invention then activates the recognition system on the vocabulary associated with the detected keyword.
The present invention will be understood and appreciated more fully from the following detailed description taken in conjunction with the drawings in which:
FIG. 1 is a block diagram illustration of a keyword recognition system, constructed and operative in accordance with a preferred embodiment of the present invention;
FIG. 2 is a flow chart illustration of a method of recognizing a keyword from among a continuous stream of utterances, operative in accordance with a preferred embodiment of the present invention and in conjunction with the system of FIG. 1;
FIG. 3 is a flow chart illustration of a method of recognizing a vocabulary word once the method of FIG. 2 has recognized a keyword;
FIG. 4 is a flow chart illustration of a method of selecting which vocabulary words to use; and
FIG. 5 is a flow chart illustration of a multiple keyword recognition method.
Reference is now made to FIGS. 1, 2 and 3 which respectively illustrate a keyword recognition system (FIG. 1) and the methods with which to operate it (FIGS. 2 and 3). The keyword recognition system comprises an utterance detector 10, a pattern matcher 12 having associated with it a keyword template 14 and a database 16 of templates for words of a closed vocabulary, and a criterion determiner 20. The words of the closed vocabulary are typically the words which the system should be able to recognize once the keyword has been said. It will be appreciated that the templates for both the keyword and the words of the closed vocabulary are trained by the user prior to operation of the system.
The utterance detector 10 receives an input acoustic signal and determines whether or not there was a speech utterance therein, providing an output only when there was, in fact, an utterance. Detector 10 can be any suitable utterance detector, such as a voice/no voice (VOX) detector which detects words spoken in isolation, or a word-spotting method capable of detecting a keyword uttered within a longer utterance of continuous speech, such as the word-spotting methods described in the article by R. C. Rose cited hereinabove. An exemplary VOX is described in the European Telecommunication Standard ETS 300 581-6, entitled "Part 6: Voice Activity Detector (VAD) for Half Rate Speech Traffic Channels (GSM 06.42)", which is incorporated herein by reference.
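For orientation only, a toy energy-based utterance detector is sketched below. The GSM 06.42 VAD cited above is considerably more elaborate; the threshold and frame-count values here are arbitrary assumptions.

```python
import numpy as np

# A toy energy-based utterance detector, to illustrate the role of
# detector 10 only; threshold values are arbitrary assumptions.

def detect_utterance(frames, energy_threshold=1e-4, min_speech_frames=10):
    """Report an utterance when enough consecutive frames exceed an
    energy threshold. `frames` has shape (n_frames, frame_len)."""
    energies = np.mean(np.asarray(frames, dtype=float) ** 2, axis=1)
    longest_run = run = 0
    for is_active in energies > energy_threshold:
        run = run + 1 if is_active else 0
        longest_run = max(longest_run, run)
    return longest_run >= min_speech_frames
```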
The pattern matcher 12 can be any suitable pattern matcher, such as those performing dynamic time warping (DTW), or any other suitable speaker dependent pattern matcher. DTW is described in U.S. Pat. No. 4,488,243 to Brown et al., which is incorporated herein by reference.
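For orientation, a textbook DTW distance is sketched below. It is not the specific variant of U.S. Pat. No. 4,488,243; the Euclidean frame distance and the length normalization are common choices assumed for the example.

```python
import numpy as np

def dtw_score(utterance, template):
    """Accumulated frame-to-frame distance along the best warping path
    between two sequences of feature vectors; lower is a better match."""
    n, m = len(utterance), len(template)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(np.asarray(utterance[i - 1], dtype=float)
                               - np.asarray(template[j - 1], dtype=float))
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m] / (n + m)  # length-normalized distance
```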
In accordance with a preferred embodiment of the present invention, the pattern matcher 12 produces match scores of the input utterance against either the keyword template 14 or the database 16 of templates for the words of the closed vocabulary.
The criterion determiner 20 and pattern matcher 12 operate together in two modes: a keyword determining mode (FIG. 2) and a vocabulary word determining mode (FIG. 3). In the first mode, shown in FIG. 2, pattern matcher 12 first matches the utterance (step 30) to the keyword template and produces a keyword score, where, in this embodiment, the lower the score (i.e. the lower the error between the utterance and the template), the better the match. Other criteria of "best" can also be utilized herein, and the tests of steps 32 and 36 should be changed accordingly. If desired, the pattern matcher 12 can normalize the keyword score by some function, such as an average of all of the other scores, in order to reduce its environmental variability.
In step 32, the criterion determiner 20 determines if the keyword score indicates that the utterance is significantly far, in absolute terms, from the keyword; in this embodiment, this is the case when the keyword score is too large. If so, the utterance is ignored and the system waits until utterance detector 10 detects a further utterance.
Otherwise, and in accordance with a preferred embodiment of the present invention, the pattern matcher 12 matches the utterance (step 34) to all of the vocabulary templates in database 16, producing a score, indicated as score(i), for each word of the closed vocabulary. Criterion determiner 20 accepts the utterance as the keyword only if the keyword score is "better" than all of the scores score(i), i = 1...N, of the vocabulary words, where, in this embodiment, "better" means "less than". In other words, the utterance not only has to be a reasonable match in absolute terms, but also has to match the keyword template better than any of the vocabulary templates in database 16. The first criterion (of step 32) is an absolute criterion and the second criterion (of step 36) is a relative one.
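By way of illustration, the following sketch combines the absolute criterion of step 32 and the relative criterion of step 36, assuming the dtw_score function sketched above; the threshold value is an arbitrary assumption.

```python
# Sketch of the keyword determining mode of FIG. 2 (steps 30-36),
# assuming dtw_score from above; the threshold value is illustrative.

def is_keyword(utterance, keyword_template, vocabulary_templates,
               absolute_threshold=0.5):
    # Step 30: match against the keyword template (lower is better).
    keyword_score = dtw_score(utterance, keyword_template)

    # Step 32: absolute criterion - ignore utterances far from the keyword.
    if keyword_score > absolute_threshold:
        return False

    # Step 34: match against every vocabulary template in database 16.
    scores = [dtw_score(utterance, t) for t in vocabulary_templates]

    # Step 36: relative criterion - the keyword must beat all vocabulary words.
    return all(keyword_score < score_i for score_i in scores)
```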
It will be appreciated that, if an utterance is not the keyword, it has roughly an equal chance of best matching any one of the N+1 templates formed by the keyword template and the N vocabulary templates. Thus, the vocabulary templates serve to reduce, to roughly 1 in N+1, the chance that a non-keyword utterance will be classified as the keyword, thereby increasing the quality of the keyword recognition.
Once criterion determiner 20 accepts the utterance as a keyword utterance (i.e. the result of step 36 is positive), the system switches modes to the vocabulary word determining mode and proceeds to the method of FIG. 3 in which it opens a listening window for utterances which will match the vocabulary words in database 16.
In step 40, the pattern matcher 12 receives an utterance from utterance detector 10 and matches the utterance to each of the vocabulary templates in database 16, producing a score, score(i), for each one. In step 42, criterion determiner 20 selects the best score from among score(i) in accordance with any suitable criterion, such as smallest. The criterion determiner 20 provides the word associated with the selected score as the matched word.
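A sketch of this vocabulary word determining mode, under the same assumptions, might read:

```python
# Sketch of the vocabulary word determining mode of FIG. 3 (steps 40-42),
# under the same assumptions; `vocabulary` maps each word to its template.

def recognize_vocabulary_word(utterance, vocabulary):
    scores = {word: dtw_score(utterance, template)      # step 40
              for word, template in vocabulary.items()}
    return min(scores, key=scores.get)                  # step 42: best score
```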
It will be appreciated that the keyword recognition system of the present invention provides a hands-free operation with a closed vocabulary.
Reference is now made to FIG. 4 which illustrates a method of processing the vocabulary words to select only those which are not similar to the keyword. The method of FIG. 4 reduces the possibility that a true keyword will not be detected due to being mistaken for a similar sounding vocabulary word.
In step 50, the pattern matcher 12 matches the keyword template to each of the vocabulary templates, producing a kscore(i) for each vocabulary template, wherein each kscore(i) indicates the closeness of the keyword to the ith vocabulary word. In step 52, each kscore(i) is compared to a similar word threshold, above which the keyword is considered different from the ith vocabulary word and below which the keyword is considered too close to the ith vocabulary word.
If kscore(i) is above the threshold, the ith vocabulary template is marked as different (step 54) and the keyword recognition process of FIG. 2 will utilize it (in step 34 thereof). If kscore(i) is below the threshold, the keyword recognition process will not utilize the ith vocabulary template.
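The preprocessing of FIG. 4 might be sketched as follows, again assuming dtw_score; the similar word threshold value is illustrative.

```python
# Sketch of the preprocessing of FIG. 4: keep only vocabulary templates
# sufficiently far from the keyword; the threshold value is illustrative.

def select_filler_templates(keyword_template, vocabulary_templates,
                            similar_word_threshold=0.3):
    selected = []
    for template in vocabulary_templates:
        kscore = dtw_score(keyword_template, template)  # step 50
        if kscore > similar_word_threshold:             # steps 52-54
            selected.append(template)                   # marked "different"
    return selected
```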
It will be appreciated that a system might have a plurality of vocabularies, each selected via a different keyword. As illustrated in FIG. 5, to which reference is now made, in this embodiment the present invention includes a keyword database 60 having a plurality M of keyword templates 62 and a plurality M of vocabulary databases 64.
Initially, the pattern matcher 12 matches (step 70) the utterance with each of the keyword templates 62 of keyword database 60. In step 72, the criterion determiner 20 selects the best keyword score, for example, the keyword score corresponding to the kth keyword template 62.
In step 74, the criterion determiner 20 determines if the kth keyword score indicates that the utterance is significantly far, in absolute terms, from the keyword. If so, the utterance is ignored and the system waits until utterance detector 10 detects a further utterance.
Otherwise, the pattern matcher 12 matches the utterance (step 76) to the vocabulary templates in all of the vocabulary databases 64. The pattern matcher 12 can match the utterance to all of the vocabulary templates or, as described hereinabove with respect to FIG. 4, to those vocabulary templates not similar to the keyword templates.
Criterion determiner 20 accepts the utterance as the kth keyword only if (step 78) the kth keyword score is better than all of the resultant scores score(i) of the vocabulary words. In step 80, criterion determiner 20 indicates to the pattern matcher to switch modes to the closed vocabulary recognition mode and to operate on the kth vocabulary database.
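By way of illustration, the multiple keyword method of FIG. 5 might be sketched as follows, under the same assumptions; here the kth keyword must beat every vocabulary word in every database, or only those marked "different" per FIG. 4.

```python
# Sketch of the multiple keyword method of FIG. 5 (steps 70-80), under the
# same assumptions; returns the index k of the accepted keyword, or None.

def detect_keyword(utterance, keyword_templates, vocabulary_databases,
                   absolute_threshold=0.5):
    # Steps 70-72: score every keyword and pick the best (lowest) one.
    kw_scores = [dtw_score(utterance, t) for t in keyword_templates]
    k = min(range(len(kw_scores)), key=kw_scores.__getitem__)

    # Step 74: absolute criterion on the best keyword score.
    if kw_scores[k] > absolute_threshold:
        return None

    # Steps 76-78: the kth keyword must beat every vocabulary word in every
    # database (or only those marked "different", per FIG. 4).
    for database in vocabulary_databases:
        for template in database:
            if dtw_score(utterance, template) <= kw_scores[k]:
                return None
    return k  # step 80: recognizer then operates on vocabulary_databases[k]
```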
It will be appreciated by persons skilled in the art that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention is defined only by the claims which follow.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US32012 *||Apr. 9, 1861||Improvement in desulphurizing coal and ores|
|US4348550 *||June 9, 1980||Sept. 7, 1982||Bell Telephone Laboratories, Incorporated||Spoken word controlled automatic dialer|
|US4489434 *||Oct. 5, 1981||Dec. 18, 1984||Exxon Corporation||Speech recognition method and apparatus|
|US4860358 *||Dec. 18, 1987||Aug. 22, 1989||American Telephone And Telegraph Company, At&T Bell Laboratories||Speech recognition arrangement with preselection|
|US4896358 *||Mar. 17, 1987||Jan. 23, 1990||Itt Corporation||Method and apparatus of rejecting false hypotheses in automatic speech recognizer systems|
|US4941178 *||May 9, 1989||July 10, 1990||Gte Laboratories Incorporated||Speech recognition using preclassification and spectral normalization|
|US4994983 *||May 2, 1989||Feb. 19, 1991||Itt Corporation||Automatic speech recognition system using seed templates|
|US5036539 *||July 6, 1989||July 30, 1991||Itt Corporation||Real-time speech processing development system|
|US5073939 *||June 8, 1989||Dec. 17, 1991||Itt Corporation||Dynamic time warping (DTW) apparatus for use in speech recognition systems|
|US5163081 *||Nov. 5, 1990||Nov. 10, 1992||At&T Bell Laboratories||Automated dual-party-relay telephone system|
|US5218668 *||Oct. 14, 1992||June 8, 1993||Itt Corporation||Keyword recognition system and method using template concantenation model|
|US5649057 *||Jan. 16, 1996||July 15, 1997||Lucent Technologies Inc.||Speech recognition employing key word modeling and non-key word modeling|
|US5710864 *||Dec. 29, 1994||Jan. 20, 1998||Lucent Technologies Inc.||Systems, methods and articles of manufacture for improving recognition confidence in hypothesized keywords|
|US5737724 *||Aug. 8, 1996||Apr. 7, 1998||Lucent Technologies Inc.||Speech recognition employing a permissive recognition criterion for a repeated phrase utterance|
|US5794196 *||June 24, 1996||Aug. 11, 1998||Kurzweil Applied Intelligence, Inc.||Speech recognition system distinguishing dictation from commands by arbitration between continuous speech and isolated word modules|
|US5799279 *||Nov. 13, 1995||Aug. 25, 1998||Dragon Systems, Inc.||Continuous speech recognition of text and commands|
|1||*||Chin-Hui Lee, Frank K. Soong, Kuldip K. Paliwal; Automatic Speech and Speaker Recognition: Advanced Topics; Kluwer Academic Publishers, 1996.|
|2||*||European digital cellular telecommunications system; Half rate speech, Part 6: Voice Activity Detector (VAD) for half rate speech traffic channels (GSM 06.42); Nov. 1995; pp. 5-23.|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US6208971 *||Oct. 30, 1998||Mar. 27, 2001||Apple Computer, Inc.||Method and apparatus for command recognition using data-driven semantic inference|
|US6594632 *||Nov. 2, 1998||July 15, 2003||Ncr Corporation||Methods and apparatus for hands-free operation of a voice recognition system|
|US6836760||Sept. 29, 2000||Dec. 28, 2004||Apple Computer, Inc.||Use of semantic inference and context-free grammar with speech recognition system|
|US7280961 *||Mar. 3, 2000||Oct. 9, 2007||Sony Corporation||Pattern recognizing device and method, and providing medium|
|US7289950||Sept. 21, 2004||Oct. 30, 2007||Apple Inc.||Extended finite state grammar for speech recognition systems|
|US7401224||June 13, 2002||July 15, 2008||Qualcomm Incorporated||System and method for managing sonic token verifiers|
|US7698136 *||Jan. 28, 2003||Apr. 13, 2010||Voxify, Inc.||Methods and apparatus for flexible speech recognition|
|US8391480||Feb. 3, 2009||Mar. 5, 2013||Qualcomm Incorporated||Digital authentication over acoustic channel|
|US8468023||Oct. 1, 2012||June 18, 2013||Google Inc.||Handsfree device with countinuous keyword recognition|
|US8892446||Dec. 21, 2012||Nov. 18, 2014||Apple Inc.||Service orchestration for intelligent automated assistant|
|US8903716||Dec. 21, 2012||Dec. 2, 2014||Apple Inc.||Personalized vocabulary for digital assistant|
|US8930191||Mar. 4, 2013||Jan. 6, 2015||Apple Inc.||Paraphrasing of user requests and results by automated digital assistant|
|US8942986||Dec. 21, 2012||Jan. 27, 2015||Apple Inc.||Determining user intent based on ontologies of domains|
|US8943583||July 14, 2008||Jan. 27, 2015||Qualcomm Incorporated||System and method for managing sonic token verifiers|
|US9081852 *||Oct. 1, 2008||July 14, 2015||Fujitsu Limited||Recommending terms to specify ontology space|
|US9117447||Dec. 21, 2012||Aug. 25, 2015||Apple Inc.||Using event alert text as input to an automated assistant|
|US9177551 *||May 28, 2008||Nov. 3, 2015||At&T Intellectual Property I, L.P.||System and method of providing speech processing in user interface|
|US9214155||May 8, 2013||Dec. 15, 2015||Google Inc.||Handsfree device with countinuous keyword recognition|
|US9262612||Mar. 21, 2011||Feb. 16, 2016||Apple Inc.||Device access using voice authentication|
|US9295086||Aug. 30, 2013||Mar. 22, 2016||Motorola Solutions, Inc.||Method for operating a radio communication device in a multi-watch mode|
|US9300784||June 13, 2014||Mar. 29, 2016||Apple Inc.||System and method for emergency calls initiated by voice command|
|US9318108||Jan. 10, 2011||Apr. 19, 2016||Apple Inc.||Intelligent automated assistant|
|US9330720||Apr. 2, 2008||May 3, 2016||Apple Inc.||Methods and apparatus for altering audio output signals|
|US9338493||Sept. 26, 2014||May 10, 2016||Apple Inc.||Intelligent automated assistant for TV user interactions|
|US9368114||Mar. 6, 2014||June 14, 2016||Apple Inc.||Context-sensitive handling of interruptions|
|US9390708 *||May 28, 2013||July 12, 2016||Amazon Technologies, Inc.||Low latency and memory efficient keywork spotting|
|US9430463||Sept. 30, 2014||Aug. 30, 2016||Apple Inc.||Exemplar-based natural language processing|
|US9483461||Mar. 6, 2012||Nov. 1, 2016||Apple Inc.||Handling speech synthesis of content for multiple languages|
|US9495129||Mar. 12, 2013||Nov. 15, 2016||Apple Inc.||Device, method, and user interface for voice-activated navigation and browsing of a document|
|US9502031||Sept. 23, 2014||Nov. 22, 2016||Apple Inc.||Method for supporting dynamic grammars in WFST-based ASR|
|US9530415||Oct. 30, 2015||Dec. 27, 2016||At&T Intellectual Property I, L.P.||System and method of providing speech processing in user interface|
|US9535906||June 17, 2015||Jan. 3, 2017||Apple Inc.||Mobile device having human language translation capability with positional feedback|
|US9548047 *||Oct. 10, 2013||Jan. 17, 2017||Google Technology Holdings LLC||Method and apparatus for evaluating trigger phrase enrollment|
|US9548050||June 9, 2012||Jan. 17, 2017||Apple Inc.||Intelligent automated assistant|
|US9576574||Sept. 9, 2013||Feb. 21, 2017||Apple Inc.||Context-sensitive handling of interruptions by intelligent digital assistant|
|US9582608||June 6, 2014||Feb. 28, 2017||Apple Inc.||Unified ranking with entropy-weighted information for phrase-based semantic auto-completion|
|US9620104||June 6, 2014||Apr. 11, 2017||Apple Inc.||System and method for user-specified pronunciation of words for speech synthesis and recognition|
|US9620105||Sept. 29, 2014||Apr. 11, 2017||Apple Inc.||Analyzing audio input for efficient speech and music recognition|
|US9626955||Apr. 4, 2016||Apr. 18, 2017||Apple Inc.||Intelligent text-to-speech conversion|
|US9633004||Sept. 29, 2014||Apr. 25, 2017||Apple Inc.||Better resolution when referencing to concepts|
|US9633660||Nov. 13, 2015||Apr. 25, 2017||Apple Inc.||User profiling for voice input processing|
|US9633674||June 5, 2014||Apr. 25, 2017||Apple Inc.||System and method for detecting errors in interactions with a voice-based digital assistant|
|US9646609||Aug. 25, 2015||May 9, 2017||Apple Inc.||Caching apparatus for serving phonetic pronunciations|
|US9646614||Dec. 21, 2015||May 9, 2017||Apple Inc.||Fast, language-independent method for user authentication by voice|
|US9668024||Mar. 30, 2016||May 30, 2017||Apple Inc.||Intelligent automated assistant for TV user interactions|
|US9668121||Aug. 25, 2015||May 30, 2017||Apple Inc.||Social reminders|
|US9697820||Dec. 7, 2015||July 4, 2017||Apple Inc.||Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks|
|US9697822||Apr. 28, 2014||July 4, 2017||Apple Inc.||System and method for updating an adaptive speech recognition model|
|US9703350 *||June 24, 2013||July 11, 2017||Maxim Integrated Products, Inc.||Always-on low-power keyword spotting|
|US9711141||Dec. 12, 2014||July 18, 2017||Apple Inc.||Disambiguating heteronyms in speech synthesis|
|US9715875||Sept. 30, 2014||July 25, 2017||Apple Inc.||Reducing the need for manual start/end-pointing and trigger phrases|
|US9721566||Aug. 31, 2015||Aug. 1, 2017||Apple Inc.||Competing devices responding to voice triggers|
|US9734193||Sept. 18, 2014||Aug. 15, 2017||Apple Inc.||Determining domain salience ranking from ambiguous words in natural speech|
|US9760559||May 22, 2015||Sept. 12, 2017||Apple Inc.||Predictive text input|
|US9785630||May 28, 2015||Oct. 10, 2017||Apple Inc.||Text prediction using combined word N-gram and unigram language models|
|US9792914||July 5, 2016||Oct. 17, 2017||Google Inc.||Speaker verification using co-location information|
|US9798393||Feb. 25, 2015||Oct. 24, 2017||Apple Inc.||Text correction processing|
|US20050038650 *||Sept. 21, 2004||Feb. 17, 2005||Bellegarda Jerome R.||Method and apparatus to use semantic inference with speech recognition systems|
|US20090044015 *||July 14, 2008||Feb. 12, 2009||Qualcomm Incorporated||System and method for managing sonic token verifiers|
|US20090094020 *||Oct. 1, 2008||Apr. 9, 2009||Fujitsu Limited||Recommending Terms To Specify Ontology Space|
|US20090141890 *||Feb. 3, 2009||June 4, 2009||Qualcomm Incorporated||Digital authentication over acoustic channel|
|US20090187410 *||May 28, 2008||July 23, 2009||At&T Labs, Inc.||System and method of providing speech processing in user interface|
|US20140281628 *||June 24, 2013||Sept. 18, 2014||Maxim Integrated Products, Inc.||Always-On Low-Power Keyword spotting|
|US20150039311 *||Oct. 10, 2013||Feb. 5, 2015||Motorola Mobility Llc||Method and Apparatus for Evaluating Trigger Phrase Enrollment|
|WO2003098866A1 *||May 15, 2002||Nov. 27, 2003||Qualcomm, Incorporated||System and method for managing sonic token verifiers|
|U.S. Classification||704/241, 704/E15.016, 704/251, 704/275|
|International Classification||G10L15/22, G10L15/00, G10L15/12|
|Dec. 12, 1996||AS||Assignment|
Owner name: DSPC ISRAEL LTD, ISRAEL
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ERELL, ADORAM;REEL/FRAME:008297/0118
Effective date: 19961127
|Nov. 15, 1999||AS||Assignment|
Owner name: D.S.P.C. TECHNOLOGIES, LTD., ISRAEL
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:D.S.P.C. ISRAEL LTD.;REEL/FRAME:010417/0018
Effective date: 19991025
|Feb. 10, 2000||AS||Assignment|
Owner name: D S P C TECHNOLOGIES LTD., ISRAEL
Free format text: CHANGE OF NAME;ASSIGNOR:DSPC ISRAEL LTD.;REEL/FRAME:010589/0501
Effective date: 19991111
|Nov. 1, 2001||AS||Assignment|
Owner name: D.S.P.C. TECHNOLOGIES LTD., ISRAEL
Free format text: CHANGE OF ADDRESS;ASSIGNOR:D.S.P.C. TECHNOLOGIES LTD.;REEL/FRAME:012252/0590
Effective date: 20011031
|Aug. 8, 2003||FPAY||Fee payment|
Year of fee payment: 4
|Nov. 9, 2006||AS||Assignment|
Owner name: INTEL CORPORATION, CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:DSPC TECHNOLOGIES LTD.;REEL/FRAME:018505/0943
Effective date: 20060926
|Aug. 6, 2007||FPAY||Fee payment|
Year of fee payment: 8
|Aug. 3, 2011||FPAY||Fee payment|
Year of fee payment: 12