US20120310642A1 - Automatically creating a mapping between text data and audio data

Automatically creating a mapping between text data and audio data

Info

Publication number
US20120310642A1
Authority
US
United States
Prior art keywords
text
audio
work
mapping
location
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/267,738
Inventor
Xiang Cao
Alan C. Cannistraro
Gregory S. Robbin
Casey M. Dougherty
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apple Inc
Original Assignee
Apple Inc
Application filed by Apple Inc
Priority to US13/267,738 (patent US20120310642A1)
Assigned to APPLE INC. Assignment of assignors' interest (see document for details). Assignors: CANNISTRARO, ALAN C.; CAO, XIANG; DOUGHERTY, CASEY M.; ROBBIN, GREGORY S.
Priority to JP2012126444A (patent JP5463385B2)
Priority to TW101119921A (patent TWI488174B)
Priority to JP2014513799A (patent JP2014519058A)
Priority to AU2012261818A (patent AU2012261818B2)
Priority to EP12729332.2A (patent EP2593846A4)
Priority to KR1020137034641A (patent KR101622015B1)
Priority to CN201280036281.5A (patent CN103703431B)
Priority to KR1020120060060A (patent KR101324910B1)
Priority to CN2012103062689A (patent CN102937959A)
Priority to PCT/US2012/040801 (patent WO2012167276A1)
Priority to KR1020167006970A (patent KR101700076B1)
Priority to KR1020157017690A (patent KR101674851B1)
Publication of US20120310642A1
Priority to JP2014008040A (patent JP2014132345A)
Priority to AU2016202974A (patent AU2016202974B2)
Current legal status: Abandoned

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/26: Speech to text systems
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/60: Information retrieval of audio data
    • G06F 16/68: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/683: Retrieval using metadata automatically derived from the content
    • G06F 16/685: Retrieval using automatically derived transcript of audio data, e.g. lyrics
    • G06F 40/00: Handling natural language data
    • G06F 40/10: Text processing
    • G06F 40/166: Editing, e.g. inserting or deleting
    • G06F 40/169: Annotation, e.g. comment data or footnotes
    • G10L 13/00: Speech synthesis; Text to speech systems
    • G10L 15/08: Speech classification or search
    • G10L 15/10: Speech classification or search using distance or distortion measures between unknown speech and reference templates
    • G10L 15/18: Speech classification or search using natural language modelling
    • G10L 15/183: Speech classification or search using context dependencies, e.g. language models
    • G10L 15/19: Grammatical context, e.g. disambiguation of the recognition hypotheses based on word sequence rules
    • G10L 15/28: Constructional details of speech recognition systems

Definitions

  • the present invention relates to automatically creating a mapping between text data and audio data by analyzing the audio data to detect words reflected therein and comparing those words to words in the document.
  • digital books, also known as "e-books," are typically read using dedicated devices known as e-book readers or "e-readers."
  • other handheld devices such as tablet computers and smart phones, although not designed solely as e-readers, have the capability to be operated as e-readers.
  • EPUB is an e-book standard maintained by the International Digital Publishing Forum (IDPF).
  • An EPUB file uses XHTML 1.1 (or DTBook) to construct the content of a book. Styling and layout are performed using a subset of CSS, referred to as OPS Style Sheets.
  • in many cases, an audio version of the written work is also created. For example, a recording of a famous individual (or one with a pleasant voice) reading a written work is created and made available for purchase, whether online or in a brick-and-mortar store.
  • a user may own both an e-book and an audio version (or "audio book") of the e-book.
  • a user reads the entirety of an e-book and then desires to listen to the audio book.
  • a user transitions between reading and listening to the book, based on the user's circumstances. For example, while engaging in sports or driving during a commute, the user will tend to listen to the audio version of the book. On the other hand, when lounging in a sofa-chair prior to bed, the user will tend to read the e-book version of the book. Unfortunately, such transitions can be painful, since the user must remember where she stopped in the e-book and manually locate where to begin in the audio book, or vice versa.
  • EPUB Media Overlays 3.0 defines a usage of SMIL (Synchronized Multimedia Integration Language), the Package Document, the EPUB Style Sheet, and the EPUB Content Document for representation of synchronized text and audio publications.
  • a pre-recorded narration of a publication can be represented as a series of audio clips, each corresponding to part of the text.
  • each single audio clip in the series of audio clips that make up a pre-recorded narration typically represents a single phrase or paragraph, but implies no order relative to the other clips or to the text of a document.
  • Media Overlays solve this problem of synchronization by tying the structured audio narration to its corresponding text in the EPUB Content Document using SMIL markup.
  • Media Overlays are a simplified subset of SMIL 3.0 that allow the playback sequence of these clips to be defined.
  • FIG. 1 is a flow diagram that depicts a process for automatically creating a mapping between text data and audio data, according to an embodiment of the invention
  • FIG. 2 is a block diagram that depicts a process that involves an audio-to-text correlator in generating a mapping between text data and audio data, according to an embodiment of the invention
  • FIG. 3 is a flow diagram that depicts a process for using a mapping in one or more of these scenarios, according to an embodiment of the invention
  • FIG. 4 is a block diagram that depicts an example system 400 that may be used to implement some of the processes described herein, according to an embodiment of the invention.
  • FIGS. 5A-B are flow diagrams that depict processes for bookmark switching, according to an embodiment of the invention.
  • FIG. 6 is a flow diagram that depicts a process for causing text, from a textual version of a work, to be highlighted while an audio version of the work is being played, according to an embodiment of the invention
  • FIG. 7 is a flow diagram that depicts a process of highlighting displayed text in response to audio input from a user, according to an embodiment of the invention.
  • FIGS. 8A-B are flow diagrams that depict processes for transferring an annotation from one media context to another, according to an embodiment of the invention.
  • FIG. 9 is a block diagram that illustrates a computer system upon which an embodiment of the invention may be implemented.
  • a mapping is automatically created where the mapping maps locations within an audio version of a work (e.g., an audio book) with corresponding locations in a textual version of the work (e.g., an e-book).
  • the mapping is created by performing a speech-to-text analysis on the audio version to identify words reflected in the audio version. The identified words are matched up with the corresponding words in the textual version of the work.
  • the mapping associates locations (within the audio version) of the identified words with locations in the textual version of the work where the identified words are found.
  • the audio data reflects an audible reading of text of a textual version of a work, such as a book, web page, pamphlet, flyer, etc.
  • the audio data may be stored in one or more audio files.
  • the one or more audio files may be in one of many file formats. Non-limiting examples of audio file formats include AAC, MP3, WAV, and PCM.
  • the text data to which the audio data is mapped may be stored in one of many document file formats.
  • document file formats include DOC, TXT, PDF, RTF, HTML, XHTML, and EPUB.
  • a typical EPUB document is accompanied by a file that (a) lists each XHTML content document, and (b) indicates an order of the XHTML content documents. For example, if a book comprises 20 chapters, then an EPUB document for that book may have 20 different XHTML documents, one for each chapter. A file that accompanies the EPUB document identifies an order of the XHTML documents that corresponds to the order of the chapters in the book. Thus, a single (logical) document (whether an EPUB document or another type of document) may comprise multiple data items or files.
  • the words or characters reflected in the text data may be in one or multiple languages.
  • one portion of the text data may be in English while another portion of the text data may be in French.
  • although examples of English words are provided herein, embodiments of the invention may be applied to other languages, including character-based languages.
  • a mapping comprises a set of mapping records, where each mapping record associates an audio location with a text location.
  • Each audio location identifies a location in audio data.
  • An audio location may indicate an absolute location within the audio data, a relative location within the audio data, or a combination of an absolute location and a relative location.
  • an audio location may indicate a time offset (e.g., 04:32:24 indicating 4 hours, 32 minutes, 24 seconds) into the audio data, or a time range, as indicated above in Example A.
  • as an example of a relative location, an audio location may indicate a chapter number, a paragraph number, and a line number.
  • the audio location may indicate a chapter number and a time offset into the chapter indicated by the chapter number.
  • each text location identifies a location in text data, such as a textual version of a work.
  • a text location may indicate an absolute location within the textual version of the work, a relative location within the textual version of the work, or a combination of an absolute location and a relative location.
  • a text location may indicate a byte offset into the textual version of the work and/or an “anchor” within the textual version of the work.
  • An anchor is metadata within the text data that identifies a specific location or portion of text. An anchor may be stored separate from the text in the text data that is displayed to an end-user or may be stored among the text that is displayed to an end-user.
  • for example, there may be an anchor prior to each word in a sentence.
  • a text location may indicate a page number, a chapter number, a paragraph number, and/or a line number.
  • a text location may indicate a chapter number and an anchor into the chapter indicated by the chapter number.
  • the “par” element includes two child elements: a “text” element and an “audio” element.
  • the text element comprises an attribute “src” that identifies a particular sentence within an XHTML document that contains content from the first chapter of a book.
  • the audio element comprises a “src” attribute that identifies an audio file that contains an audio version of the first chapter of the book, a “clipBegin” attribute that identifies where an audio clip within the audio file begins, and a “clipEnd” attribute that identifies where the audio clip within the audio file ends.
  • seconds 23 through 45 in the audio file correspond to the first sentence in Chapter 1 of the book.
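The SMIL markup just described can be illustrated with a short sketch. The fragment below is a reconstruction based on the description above (the original Example A is not reproduced in this text), so the file names, fragment identifier, and helper function are hypothetical; only the element and attribute names (par, text, audio, src, clipBegin, clipEnd) and the 23-to-45-second clip come from the surrounding description.

```python
import xml.etree.ElementTree as ET

# Reconstructed, illustrative SMIL <par> fragment; the src values are hypothetical,
# the clip times reflect the "seconds 23 through 45" example above.
SMIL_PAR = """
<par>
  <text src="chapter1.xhtml#sentence1"/>
  <audio src="chapter1_audio.m4a" clipBegin="0:00:23" clipEnd="0:00:45"/>
</par>
"""

def par_to_mapping_record(par_xml):
    """Turn one SMIL <par> element into a text-location/audio-location pair."""
    par = ET.fromstring(par_xml)
    text_el = par.find("text")
    audio_el = par.find("audio")
    return {
        "text_location": text_el.get("src"),        # anchor within an XHTML content document
        "audio_file": audio_el.get("src"),          # audio file containing the chapter
        "clip_begin": audio_el.get("clipBegin"),    # where the audio clip begins
        "clip_end": audio_el.get("clipEnd"),        # where the audio clip ends
    }

print(par_to_mapping_record(SMIL_PAR))
```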
  • a mapping between a textual version of a work and an audio version of the same work is automatically generated. Because the mapping is generated automatically, the mapping may use much finer granularity than would be practical using manual text-to-audio mapping techniques.
  • Each automatically-generated text-to-audio mapping includes multiple mapping records, each of which associates a text location in the textual version with an audio location in the audio version.
  • FIG. 1 is a flow diagram that depicts a process 100 for automatically creating a mapping between a textual version of a work and an audio version of the same work, according to an embodiment of the invention.
  • a speech-to-text analyzer receives audio data that reflects an audible version of the work.
  • the speech-to-text analyzer performs an analysis of the audio data
  • the speech-to-text analyzer generates text for portions of the audio data.
  • based on the text generated for the portions of the audio data, the speech-to-text analyzer generates a mapping between a plurality of audio locations in the audio data and a corresponding plurality of text locations in the textual version of the work.
  • Step 130 may involve the speech-to-text analyzer comparing the generated text with text in the textual version of the work to determine where, within the textual version of the work, the generated text is located. For each portion of generated text that is found in the textual version of the work, the speech-to-text analyzer associates (1) an audio location that indicates where, within the audio data, the corresponding portion of audio data is found with (2) a text location that indicates where, within the textual version of the work, the portion of text is found.
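The matching described for step 130 can be sketched as follows. This is a minimal illustration rather than the analyzer the document describes: the recognizer output is supplied as ready-made (word, time) pairs, and the names MappingRecord and build_mapping are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class MappingRecord:
    audio_location: float   # e.g. seconds offset into the audio data
    text_location: int      # e.g. word offset into the textual version of the work

def build_mapping(recognized, work_words):
    """recognized: (word, seconds_offset) pairs produced by a speech-to-text analyzer.
    work_words: the words of the textual version of the work, in order.
    Each recognized word is located in the textual version at or after the
    previous match, and the pair of locations becomes a mapping record."""
    mapping = []
    cursor = 0                                   # current position in the textual version
    for word, audio_offset in recognized:
        for i in range(cursor, len(work_words)):
            if work_words[i].lower() == word.lower():
                mapping.append(MappingRecord(audio_offset, i))
                cursor = i + 1
                break                            # unmatched (misread) words are skipped
    return mapping

# Hypothetical example data
work = "call me ishmael some years ago".split()
recognized = [("call", 0.0), ("me", 0.4), ("ishmael", 0.7), ("some", 1.5)]
for record in build_mapping(recognized, work):
    print(record)
```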
  • the textual context of a textual version of a work includes intrinsic characteristics of the textual version of the work (e.g., the language the textual version of the work is written in, the specific words that the textual version of the work uses, the grammar and punctuation that the textual version of the work uses, the way the textual version of the work is structured, etc.) and extrinsic characteristics of the work (e.g., the time period in which the work was created, the genre to which the work belongs, the author of the work, etc.).
  • the grammar used in a classic English novel may be very different from the grammar of modern poetry.
  • while a certain word order may follow the rules of one grammar, that same word order may violate the rules of another grammar.
  • the grammar used in both a classic English novel and modern poetry may differ from the grammar (or lack thereof) employed in a text message sent from one teenager to another.
  • one technique described herein automatically creates a fine granularity mapping between the audio version of a work and the textual version of the same work by performing a speech-to-text conversion of the audio version of the work.
  • the textual context of a work is used to increase the accuracy of the speech-to-text analysis that is performed on the audio version of the work.
  • the speech-to-text analyzer (or another process) may analyze the textual version of the work prior to performing a speech-to-text analysis. The speech-to-text analyzer may then make use of the grammar information thus obtained to increase the accuracy of the speech-to-text analysis of the audio version of the work.
  • a user may provide input that identifies one or more rules of grammar that are followed by the author of the work.
  • the rules associated with the identified grammar are input to the speech-to-text analyzer to assist the analyzer in recognizing words in the audio version of the work.
  • speech-to-text analyzers must be configured or designed to recognize virtually every word in the English language and, optionally, some words in other languages. Therefore, speech-to-text analyzers must have access to a large dictionary of words.
  • the dictionary from which a speech-to-text analyzer selects words during a speech-to-text operation is referred to herein as the “candidate dictionary” of the speech-to-text analyzer.
  • the number of unique words in a typical candidate dictionary is approximately 500,000.
  • text from the textual version of a work is taken into account when performing the speech-to-text analysis of the audio version of the work.
  • the candidate dictionary used by the speech-to-text analyzer is restricted to the specific set of words that are in the text version of the work.
  • the only words that are considered to be “candidates” during the speech-to-text operation performed on an audio version of a work are those words that actually appear in the textual version of the work.
  • the speech-to-text operation may be significantly improved. For example, assume that the number of unique words in a particular work is 20,000. A conventional speech-to-text analyzer may have difficulty determining to which specific word, of a 500,000 word candidate dictionary, a particular portion of audio corresponds. However, that same portion of audio may unambiguously correspond to one particular word when only the 20,000 unique words that are in the textual version of the work are considered. Thus, with such a much smaller dictionary of possible words, the accuracy of the speech-to-text analyzer may be significantly improved.
  • the candidate dictionary may be restricted to even fewer words than all of the words in the textual version of the work.
  • the candidate dictionary is limited to those words found in a particular portion of the textual version of the work. For example, during a speech-to-text translation of a work, it is possible to approximately track the “current translation position” of the translation operation relative to the textual version of the work. Such tracking may be performed, for example, by comparing (a) the text that has been generated during the speech-to-text operation so far, against (b) the textual version of the work.
  • the candidate dictionary may be further restricted based on the current translation position. For example, in one embodiment, the candidate dictionary is limited to only those words that appear, within the textual version of the work, after the current translation position. Thus, words that are found prior to the current translation position, but not thereafter, are effectively removed from the candidate dictionary. Such removal may increase the accuracy of the speech-to-text analyzer, since the smaller the candidate dictionary, the less likely the speech-to-text analyzer will translate a portion of audio data to the wrong word.
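A small sketch of the restriction just described, assuming the speech-to-text analyzer accepts an externally supplied candidate dictionary; the function name and sample text are illustrative.

```python
def candidate_dictionary(work_words, current_position):
    """Restrict the candidate dictionary to the words that appear in the textual
    version of the work at or after the current translation position."""
    return {w.lower() for w in work_words[current_position:]}

# Hypothetical textual version of a work.
work_words = "the cat sat on the mat and the dog sat by the door".split()

print(candidate_dictionary(work_words, 0))   # every unique word in the work
print(candidate_dictionary(work_words, 7))   # "cat", "on", "mat" drop out once passed
```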
  • an audio book and a digital book may be divided into a number of segments or sections.
  • the audio book may be associated with an audio section mapping and the digital book may be associated with a text section mapping.
  • the audio section mapping and the text section mapping may identify where each chapter begins or ends.
  • These respective mappings may be used by a speech-to-text analyzer to limit the candidate dictionary. For example, if the speech-to-text analyzer determines, based on the audio section mapping, that the speech-to-text analyzer is analyzing the 4th chapter of the audio book, then the speech-to-text analyzer uses the text section mapping to identify the 4th chapter of the digital book and limit the candidate dictionary to the words found in the 4th chapter.
  • the speech-to-text analyzer employs a sliding window that moves as the current translation position moves.
  • the speech-to-text analyzer moves the sliding window “across” the textual version of the work.
  • the sliding window indicates two locations within the textual version of the work.
  • the boundaries of the sliding window may be (a) the start of the paragraph that precedes the current translation position and (b) the end of the third paragraph after the current translation position.
  • the candidate dictionary is restricted to only those words that appear between those two locations.
  • the window may span any amount of text within the textual version of the work.
  • the window may span an absolute amount of text, such as 60 characters.
  • the window may span a relative amount of text from the textual version of the work, such as ten words, three “lines” of text, 2 sentences, or 1 “page” of text.
  • the speech-to-text analyzer may use formatting data within the textual version of the work to determine how much of the textual version of the work constitutes a line or a page.
  • the textual version of a work may comprise a page indicator (e.g., in the form of an HTML or XML tag) that indicates, within the content of the textual version of the work, the beginning of a page or the ending of a page.
  • the start of the window corresponds to the current translation position.
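The sliding window can be sketched the same way, assuming the textual version has already been split into paragraphs and using the boundaries mentioned above (from the paragraph preceding the current translation position through the third paragraph after it). The function name and sample paragraphs are illustrative.

```python
def window_dictionary(paragraphs, current_paragraph):
    """Candidate dictionary restricted to a window around the current translation
    position: from the paragraph preceding the current paragraph through the
    third paragraph after it."""
    start = max(0, current_paragraph - 1)
    end = min(len(paragraphs), current_paragraph + 4)   # current + 3 following paragraphs
    words = set()
    for paragraph in paragraphs[start:end]:
        words.update(w.strip(".,;").lower() for w in paragraph.split())
    return words

# Hypothetical paragraphs of a textual version of a work.
paragraphs = [
    "It was the best of times.",
    "It was the worst of times.",
    "It was the age of wisdom.",
    "It was the age of foolishness.",
    "It was the epoch of belief.",
    "It was the epoch of incredulity.",
]
print(sorted(window_dictionary(paragraphs, current_paragraph=2)))
```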
  • the speech-to-text analyzer maintains a current text location that indicates the most recently-matched word in the textual version of the work and maintains a current audio location that indicates the most recently-identified word in the audio data.
  • if the narrator (whose voice is reflected in the audio data) misreads text of the textual version of the work, adds his or her own content, or skips portions of the textual version of the work during the recording, then the next word that the speech-to-text analyzer detects in the audio data (i.e., after the current audio location) may not be the word that immediately follows the current text location in the textual version of the work.
  • Maintaining both locations may significantly increase the accuracy of the speech-to-text translation.
  • a text-to-speech generator and an audio-to-text correlator are used to automatically create a mapping between the audio version of a work and the textual version of a work.
  • FIG. 2 is a block diagram that depicts these analyzers and the data used to generate the mapping.
  • Textual version 210 of a work (such as an EPUB document) is input to text-to-speech generator 220 .
  • Text-to-speech generator 220 may be implemented in software, hardware, or a combination of hardware and software. Whether implemented in software or hardware, text-to-speech generator 220 may be implemented on a single computing device or may be distributed among multiple computing devices.
  • Text-to-speech generator 220 generates audio data 230 based on document 210 .
  • text-to-speech generator 220 (or another component not shown) creates an audio-to-document mapping 240 .
  • Audio-to-document mapping 240 maps multiple text locations within document 210 to corresponding audio locations within generated audio data 230 .
  • for example, assume that text-to-speech generator 220 generates audio data for a word located at location Y within document 210 , and that the audio data that was generated for that word is located at a location X within audio data 230 . In this case, a mapping would be created between location X and location Y.
  • because text-to-speech generator 220 knows where a word or phrase occurs in document 210 when a corresponding word or phrase of audio is generated, each mapping between the corresponding words or phrases can be easily generated.
  • Audio-to-text correlator 260 accepts, as input, generated audio data 230 , audio book 250 , and audio-to-document mapping 240 . Audio-to-text correlator 260 performs two main steps: an audio-to-audio correlation step and a look-up step. For the audio-to-audio correlation step, audio-to-text correlator 260 compares generated audio data 230 with audio book 250 to determine the correlation between portions of audio data 230 and portions of audio book 250 . For example, audio-to-text correlator 260 may determine, for each word represented in audio data 230 , the location of the corresponding word in audio book 250 .
  • the granularity at which audio data 230 is divided, for the purpose of establishing correlations, may vary from implementation to implementation. For example, a correlation may be established between each word in audio data 230 and each corresponding word in audio book 250 . Alternatively, a correlation may be established based on fixed-duration time intervals (e.g. one mapping for every 1 minute of audio). In yet another alternative, a correlation may be established for portions of audio established based on other criteria, such as at paragraph or chapter boundaries, significant pauses (e.g., silence of greater than 3 seconds), or other locations based on data in audio book 250 , such as audio markers within audio book 250 .
  • audio-to-text correlator 260 uses audio-to-document mapping 240 to identify a text location (indicated in mapping 240 ) that corresponds to the audio location within generated audio data 230 . Audio-to-text correlator 260 then associates the text location with the audio location within audio book 250 to create a mapping record in document-to-audio mapping 270 .
  • for example, assume that a portion of generated audio data 230 at location X corresponds to a portion of audio book 250 at location Z, and that a mapping record in audio-to-document mapping 240 correlates location X to location Y within document 210 . In that case, a mapping record in document-to-audio mapping 270 would be created that correlates location Z of the audio book 250 with location Y within document 210 .
  • Audio-to-text correlator 260 repeatedly performs the audio-to-audio correlation and look-up steps for each portion of audio data 230 . Therefore, document-to-audio mapping 270 comprises multiple mapping records, each mapping record mapping a location within document 210 to a location within audio book 250 .
  • the audio-to-audio correlation for each portion of audio data 230 is immediately followed by the look-up step for that portion of audio.
  • document-to-audio mapping 270 may be created for each portion of audio data 230 prior to proceeding to the next portion of audio data 230 .
  • the audio-to-audio correlation step may be performed for many or for all of the portions of audio data 230 before any look-up step is performed.
  • the look-up steps for all portions can be performed in a batch, after all of the audio-to-audio correlations have been established.
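A sketch of the look-up step performed by audio-to-text correlator 260, under the assumption that the audio-to-audio correlation step has already produced pairs of locations (one in generated audio data 230, one in audio book 250) and that audio-to-document mapping 240 is available as a dictionary keyed by locations in the generated audio. The data structures, values, and function name are illustrative.

```python
def build_document_to_audio_mapping(audio_to_audio, audio_to_document):
    """audio_to_audio: (generated_audio_location, audio_book_location) pairs from
    the audio-to-audio correlation step.
    audio_to_document: dict mapping generated_audio_location -> text location in
    the document (mapping 240).
    Returns mapping records pairing audio book locations with text locations
    (mapping 270)."""
    document_to_audio = []
    for generated_loc, audio_book_loc in audio_to_audio:
        text_loc = audio_to_document.get(generated_loc)
        if text_loc is not None:
            document_to_audio.append((audio_book_loc, text_loc))
    return document_to_audio

# Hypothetical data: location X in the generated audio corresponds to location Z
# in the audio book, and mapping 240 ties location X to location Y in the document.
audio_to_audio = [(12.0, 15.5), (13.2, 16.9)]                           # (X, Z) pairs, seconds
audio_to_document = {12.0: "ch01.xhtml#w042", 13.2: "ch01.xhtml#w043"}  # X -> Y
print(build_document_to_audio_mapping(audio_to_audio, audio_to_document))
```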
  • a mapping has a number of attributes, one of which is the mapping's size, which refers to the number of mapping records in the mapping. Another attribute of a mapping is the mapping's “granularity.”
  • the “granularity” of a mapping refers to the number of mapping records in the mapping relative to the size of the digital work.
  • the granularity of a mapping may vary from one digital work to another digital work.
  • for example, assume that a first mapping for a digital book that comprises 200 "pages" includes a mapping record only for each paragraph in the digital book; the first mapping may comprise 1000 mapping records. Assume also that a second mapping for a digital "children's" book that comprises 20 pages includes a mapping record for each word in the children's book; the second mapping may comprise 800 mapping records. Even though the first mapping comprises more mapping records than the second mapping, the granularity of the second mapping is finer than the granularity of the first mapping.
  • the granularity of a mapping may be dictated based on input to a speech-to-text analyzer that generates the mapping. For example, a user may specify a specific granularity before causing a speech-to-text analyzer to generate a mapping.
  • specific granularities include:
  • word granularity (i.e., an association for each word)
  • sentence granularity (i.e., an association for each sentence)
  • 10-second granularity (i.e., a mapping for each 10 seconds of audio)
  • a user may specify the type of digital work (e.g., novel, children's book, short story) and the speech-to-text analyzer (or another process) determines the granularity based on the work's type. For example, a children's book may be associated with word granularity while a novel may be associated with sentence granularity.
  • the granularity of a mapping may even vary within the same digital work. For example, a mapping for the first three chapters of a digital book may have sentence granularity while a mapping for the remaining chapters of the digital book have word granularity.
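Granularity selection as described above can be reduced to a small lookup. The work types and defaults below merely echo the examples in the text; everything else is illustrative.

```python
# Hypothetical defaults; the text above gives word granularity for a children's
# book and sentence granularity for a novel as examples.
DEFAULT_GRANULARITY = {
    "children's book": "word",
    "novel": "sentence",
    "short story": "sentence",
}

def choose_granularity(work_type, user_choice=None):
    """Use a user-specified granularity if given, otherwise a default based on
    the type of digital work."""
    return user_choice or DEFAULT_GRANULARITY.get(work_type, "sentence")

print(choose_granularity("children's book"))                  # -> word
print(choose_granularity("novel", user_choice="10-second"))   # user override
```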
  • an audio-to-text mapping is generated at runtime or after a user has begun to consume the audio data and/or the text data on the user's device. For example, a user reads a textual version of a digital book using a tablet computer. The tablet computer keeps track of the most recent page or section of the digital book that the tablet computer has displayed to the user. The most recent page or section is identified by a “text bookmark.”
  • later, when the user begins to listen to the audio book version of the digital book, the playback device may be the same tablet computer on which the user was reading the digital book or another device.
  • the text bookmark is retrieved, and a speech-to-text analysis is performed relative to at least a portion of the audio book.
  • “temporary” mapping records are generated to establish a correlation between the generated text and the corresponding locations within the audio book.
  • a text-to-text comparison is used to determine the generated text that corresponds to the text bookmark. Then, the temporary mapping records are used to identify the portion of the audio book that corresponds to the portion of generated text that corresponds to the text bookmark. Playback of the audio book is then initiated from that position.
  • the portion of the audio book on which the speech-to-text analysis is performed may be limited to the portion that corresponds to the text bookmark.
  • an audio section mapping may already exist that indicates where certain portions of the audio book begin and/or end.
  • an audio section mapping may indicate where each chapter begins, where one or more pages begin, etc. Such an audio section mapping may be helpful to determine where to begin the speech-to-text analysis so that a speech-to-text analysis on the entire audio book is not required to be performed.
  • for example, if the text bookmark indicates a location within the 12th chapter of the digital book and an audio section mapping associated with the audio data identifies where the 12th chapter begins in the audio data, then a speech-to-text analysis is not required to be performed on any of the first 11 chapters of the audio book.
  • the audio data may consist of 20 audio files, one audio file for each chapter. Therefore, only the audio file that corresponds to the 12th chapter is input to a speech-to-text analyzer.
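A sketch of this runtime scenario: the audio section mapping supplies the chapter's start time, speech-to-text is run on that chapter only, and the resulting temporary mapping records are searched for the record at or just before the text bookmark. The record format, numbers, and function name are hypothetical.

```python
def audio_position_for_text_bookmark(text_bookmark, audio_section_mapping,
                                     temporary_records):
    """text_bookmark: (chapter_number, word_offset_within_chapter).
    audio_section_mapping: dict chapter_number -> chapter start (seconds) in the audio book.
    temporary_records: (word_offset, seconds_into_chapter) pairs produced by running
    speech-to-text on the bookmarked chapter only and aligning it against the chapter text.
    Returns an approximate playback position in seconds."""
    chapter, word_offset = text_bookmark
    chapter_start = audio_section_mapping[chapter]
    at_or_before = [(w, t) for w, t in temporary_records if w <= word_offset]
    if not at_or_before:
        return chapter_start
    _, seconds_into_chapter = max(at_or_before)     # latest record not past the bookmark
    return chapter_start + seconds_into_chapter

# Hypothetical: bookmark 250 words into chapter 12; chapter 12 starts at 4:10:00.
section_mapping = {12: 4 * 3600 + 10 * 60}
records = [(0, 0.0), (100, 41.2), (200, 83.0), (300, 125.6)]
print(audio_position_for_text_bookmark((12, 250), section_mapping, records))
```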
  • Mapping records can be generated on-the-fly to facilitate audio-to-text transitions, as well as text-to-audio transitions. For example, assume that a user is listening to an audio book using a smart phone. The smart phone keeps track of the current location within the audio book that is being played. The current location is identified by an “audio bookmark.” Later, the user picks up a tablet computer and selects a digital book version of the audio book to display.
  • the tablet computer receives the audio bookmark (e.g., from a central server that is remote relative to the tablet computer and the smart phone), performs a speech-to-text analysis of at least a portion of the audio book, and identifies, within the audio book, a portion that corresponds to a portion of text within a textual version of the audio book that corresponds to the audio bookmark.
  • the tablet computer then begins displaying the identified portion within the textual version.
  • the portion of the audio book on which the speech-to-text analysis is performed may be limited to the portion that corresponds to the audio bookmark.
  • a speech-to-text analysis is performed on a portion of the audio book that spans one or more time segments (e.g., seconds) prior to the audio bookmark in the audio book and/or one or more time segments after the audio bookmark in the audio book.
  • the text produced by the speech-to-text analysis on that portion is compared to text in the textual version to locate where the series of words or phrases in the produced text match text in the textual version.
  • if the audio bookmark can be used to identify a section in the text section mapping, then much of the textual version need not be analyzed in order to locate where the series of words or phrases in the produced text match text in the textual version. For example, if the audio bookmark indicates a location within the 3rd chapter of the audio book and a text section mapping associated with the digital book identifies where the 3rd chapter begins in the textual version, then a speech-to-text analysis is not required to be performed on any of the first two chapters of the audio book or on any of the chapters of the audio book after the 3rd chapter.
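The audio-to-text direction additionally needs the text-to-text matching step mentioned above: the words transcribed around the audio bookmark are located within the chapter that the text section mapping points to. The naive word-sequence search below is purely illustrative.

```python
def find_text_position(transcribed_words, chapter_words):
    """Locate the transcribed word sequence (from the audio around the bookmark)
    inside the words of the chapter identified by the text section mapping.
    Returns the word offset within the chapter, or None if not found."""
    needle = [w.lower() for w in transcribed_words]
    hay = [w.lower() for w in chapter_words]
    for i in range(len(hay) - len(needle) + 1):
        if hay[i:i + len(needle)] == needle:
            return i
    return None

# Hypothetical data: a few words transcribed around the audio bookmark, and the
# words of the chapter that the text section mapping identifies.
chapter_words = "deep into that darkness peering long I stood there wondering".split()
print(find_text_position(["long", "I", "stood"], chapter_words))   # -> 5
```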
  • a mapping (whether created manually or automatically) is used to identify the locations within an audio version of a digital work (e.g., an audio book) that correspond to locations within a textual version of the digital work (e.g., an e-book). For example, a mapping may be used to identify a location within an e-book based on a “bookmark” established in an audio book. As another example, a mapping may be used to identify which displayed text corresponds to an audio recording of a person reading the text as the audio recording is being played and cause the identified text to be highlighted. Thus, while an audio book is being played, a user of an e-book reader may follow along as the e-book reader highlights the corresponding text.
  • a mapping may be used to identify a location in audio data and play audio at that location in response to input that selects displayed text from an e-book.
  • a user may select a word in an e-book, which selection causes audio that corresponds to that word to be played.
  • a user may create an annotation while “consuming” (e.g., reading or listening to) one version of a digital work (e.g., an e-book) and cause the annotation to be consumed while the user is consuming another version of the digital work (e.g., an audio book).
  • a user can make notes on a “page” of an e-book and may view those notes while listening to an audio book of the e-book.
  • a user can make a note while listening to an audio book and then can view that note when reading the corresponding e-book.
  • FIG. 3 is a flow diagram that depicts a process for using a mapping in one or more of these scenarios, according to an embodiment of the invention.
  • location data that indicates a specified location within a first media item is obtained.
  • the first media item may be a textual version of a work or audio data that corresponds to a textual version of the work.
  • This step may be performed by a device (operated by a user) that consumes the first media item.
  • the step may be performed by a server that is located remotely relative to the device that consumes the first media item.
  • the device sends the location data to the server over a network using a communication protocol.
  • a mapping is inspected to determine a first media location that corresponds to the specified location. Similarly, this step may be performed by a device that consumes the first media item or by a server that is located remotely relative to the device.
  • a second media location that corresponds to the first media location and that is indicated in the mapping is determined. For example, if the specified location is an audio "bookmark", then the first media location is an audio location indicated in the mapping and the second media location is a text location that is associated with the audio location in the mapping. Similarly, if the specified location is a text "bookmark", then the first media location is a text location indicated in the mapping and the second media location is an audio location that is associated with the text location in the mapping.
  • the second media item is processed based on the second media location. For example, if the second media item is audio data, then the second media location is an audio location and is used as a current playback position in the audio data. As another example, if the second media item is a textual version of a work, then the second media location is a text location and is used to determine which portion of the textual version of the work to display.
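The process of FIG. 3 reduces to a lookup over the mapping. The sketch below assumes the mapping is a sorted list of (audio seconds, text offset) records; the direction of the lookup determines which field is searched and which is returned, and the record at or immediately before the specified location is used. Names and data are illustrative.

```python
import bisect

def corresponding_location(mapping, specified, from_audio=True):
    """mapping: list of (audio_seconds, text_offset) records, sorted on both fields.
    specified: the location within the first media item.
    from_audio=True looks up an audio location and returns a text location;
    from_audio=False does the reverse."""
    keys = [r[0] if from_audio else r[1] for r in mapping]
    i = max(bisect.bisect_right(keys, specified) - 1, 0)   # record at or just before
    audio_loc, text_loc = mapping[i]
    return text_loc if from_audio else audio_loc

mapping = [(0.0, 0), (30.5, 42), (61.0, 88), (92.3, 131)]
print(corresponding_location(mapping, 65.0, from_audio=True))    # -> 88
print(corresponding_location(mapping, 100, from_audio=False))    # -> 61.0
```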
  • FIG. 4 is a block diagram that depicts an example system 400 that may be used to implement some of the processes described herein, according to an embodiment of the invention.
  • System 400 includes end-user device 410 , intermediary device 420 , and end-user device 430 .
  • Examples of end-user devices 410 and 430 include desktop computers, laptop computers, smart phones, tablet computers, and other handheld computing devices.
  • device 410 stores a digital media item 402 and executes a text media player 412 and an audio media player 414 .
  • Text media player 412 is configured to process electronic text data and cause device 410 to display text (e.g., on a touch screen of device 410 , not shown).
  • for example, if digital media item 402 is an e-book, then text media player 412 may be configured to process digital media item 402 , as long as digital media item 402 is in a text format that text media player 412 is configured to process.
  • Device 410 may execute one or more other media players (not shown) that are configured to process other types of media, such as video.
  • audio media player 414 is configured to process audio data and cause device 410 to generate audio (e.g., via speakers on device 410 , not shown).
  • for example, if digital media item 402 is an audio book, then audio media player 414 may be configured to process digital media item 402 , as long as digital media item 402 is in an audio format that audio media player 414 is configured to process.
  • item 402 may comprise multiple files, whether audio files or text files.
  • Device 430 similarly stores a digital media item 404 and executes an audio media player 432 that is configured to process audio data and cause device 430 to generate audio.
  • Device 430 may execute one or more other media players (not shown) that are configured to process other types of media, such as video and text.
  • Intermediary device 420 stores a mapping 406 that maps audio locations within audio data to text locations in text data. For example, mapping 406 may map audio locations within digital media item 404 to text locations within digital media item 402 . Although not depicted in FIG. 4 , intermediary device 420 may store many mappings, one for each corresponding set of audio data and text data. Also, intermediary device 420 may interact with many end-user devices not shown.
  • intermediary device 420 may store digital media items that users may access via their respective devices. Thus, instead of storing a local copy of a digital media item, a device (e.g., device 430 ) may request the digital media item from intermediary device 420 .
  • intermediary device 420 may store account data that associates one or more devices of a user with a single account. Thus, such account data may indicate that devices 410 and 430 are registered by the same user under the same account. Intermediary device 420 may also store account-item association data that associates an account with one or more digital media items owned (or purchased) by a particular user. Thus, intermediary device 420 may verify that device 430 may access a particular digital media item by determining whether the account-item association data indicates that device 430 and the particular digital media item are associated with the same account.
  • an end-user may own and operate more or fewer devices that consume digital media items, such as e-books and audio books.
  • the entity that owns and operates intermediary device 420 may operate multiple devices, each of which provides the same service or which may operate together to provide a service to the user of end-user devices 410 and 430 .
  • Network 440 may be implemented by any medium or mechanism that provides for the exchange of data between various computing devices. Examples of such a network include, without limitation, a network such as a Local Area Network (LAN), Wide Area Network (WAN), Ethernet or the Internet, or one or more terrestrial, satellite, or wireless links.
  • the network may include a combination of networks such as those described.
  • the network may transmit data according to Transmission Control Protocol (TCP), User Datagram Protocol (UDP), and/or Internet Protocol (IP).
  • mapping 406 may be stored separate from the text data and the audio data from which the mapping was generated. For example, as depicted in FIG. 4 , mapping 406 is stored separate from digital media items 402 and 404 even though mapping 406 may be used to identify a media location in one digital media item based on a media location in the other digital media item. In fact, mapping 406 is stored on a separate computing device (intermediary device 420 ) than devices 410 and 430 that store, respectively, digital media items 402 and 404 .
  • mapping 406 may be stored as part of the corresponding text data.
  • mapping 406 may be stored in digital media item 402 .
  • the mapping may not be displayed to an end-user that consumes the text data.
  • a mapping may be stored as part of the audio data.
  • mapping 406 may be stored in digital media item 404 .
  • bookmark switching refers to establishing a specified location (or “bookmark”) in one version of a digital work and using the bookmark to find the corresponding location within another version of the digital work.
  • TA bookmark switching involves using a text bookmark established in an e-book to identify a corresponding audio location in an audio book.
  • AT bookmark switching involves using an audio bookmark established in an audio book to identify a corresponding text location within an e-book.
  • FIG. 5A is a flow diagram that depicts a process 500 for TA bookmark switching, according to an embodiment of the invention.
  • FIG. 5A is described using elements of system 400 depicted in FIG. 4 .
  • a text media player 412 determines a text bookmark within digital media item 402 (e.g., a digital book).
  • Device 410 displays content from digital media item 402 to a user of device 410 .
  • the text bookmark may be determined in response to input from the user. For example, the user may touch an area on a touch screen of device 410 . Device 410 's display, at or near that area, displays one or more words. In response to the input, the text media player 412 determines the one or more words that are closest to the area. The text media player 412 determines the text bookmark based on the determined one or more words.
  • the text bookmark may be determined based on the last text data that was displayed to the user.
  • the digital media item 402 may comprise 200 electronic “pages” and page 110 was the last page that was displayed.
  • Text media player 412 determines that page 110 was the last page that was displayed.
  • Text media player 412 may establish page 110 as the text bookmark or may establish a point at the beginning of page 110 as the text bookmark, since there may be no way to know where the user stopped reading. It may be safe to assume that the user at least read the last sentence on page 109 , which sentence may have ended on page 109 or on page 110 . Therefore, the text media player 412 may establish the beginning of the next sentence (which begins on page 110 ) as the text bookmark.
  • alternatively, text media player 412 may establish the beginning of the last paragraph on page 109 as the text bookmark.
  • text media player 412 may establish the beginning of the chapter that includes page 110 as the text bookmark.
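One way to sketch the bookmark heuristic just described: because it is unknown where on the last-displayed page the user stopped, the bookmark is placed at the first sentence that begins on that page. The page model below is hypothetical and assumes the page begins with a sentence carried over from the previous page, as in the page 109/110 example.

```python
def text_bookmark_for_last_page(pages, last_displayed):
    """pages: list of page texts, in order; last_displayed: index of the last page
    shown to the user. Returns (page_index, character_offset) of the first sentence
    that begins on that page, assuming the user read at least through the sentence
    that ends on the previous page."""
    page = pages[last_displayed]
    end_of_carryover = page.find(". ")   # end of the sentence carried over from the prior page
    offset = 0 if end_of_carryover == -1 else end_of_carryover + 2
    return (last_displayed, offset)

pages = [
    "It was a bright cold day in April, and the clocks were striking",
    "thirteen. Winston Smith, his chin nuzzled into his breast, hurried home.",
]
print(text_bookmark_for_last_page(pages, last_displayed=1))   # -> (1, 10)
```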
  • text media player 412 sends, over network 440 to intermediary device 420 , data that indicates the text bookmark.
  • Intermediary device 420 may store the text bookmark in association with device 410 and/or an account of the user of device 410 .
  • the user may have established an account with an operator of intermediary device 420 .
  • the user then registered one or more devices, including device 410 , with the operator. The registration caused each of the one or more devices to be associated with the user's account.
  • One or more factors may cause the text media player 412 to send the text bookmark to intermediary device 420 .
  • Such factors may include the exiting (or closing down) of text media player 412 , the establishment of the text bookmark by the user, or an explicit instruction by the user to save the text bookmark for use when listening to the audio book that corresponds to the textual version of the work for which the text bookmark is established.
  • intermediary device 420 has access to (e.g., stores) mapping 406 , which, in this example, maps multiple audio locations in digital media item 404 with multiple text locations within digital media item 402 .
  • intermediary device 420 inspects mapping 406 to determine a particular text location, of the multiple text locations, that corresponds to the text bookmark.
  • the text bookmark may not exactly match any of the multiple text locations in mapping 406 .
  • intermediary device 420 may select the text location that is closest to the text bookmark.
  • intermediary device 420 may select the text location that is immediately before the text bookmark, which text location may or may not be the closest text location to the text bookmark.
  • for example, if the text bookmark indicates the 5th chapter, 3rd paragraph, 5th sentence, and the closest text locations in mapping 406 are (1) 5th chapter, 3rd paragraph, 1st sentence and (2) 5th chapter, 3rd paragraph, 6th sentence, then text location (1) is selected.
  • intermediary device 420 determines a particular audio location, in mapping 406 , that corresponds to the particular text location.
  • intermediary device 420 sends the particular audio location to device 430 , which, in this example, is different than device 410 .
  • device 410 may be a tablet computer and the device 430 may be a smart phone.
  • in an alternative embodiment, device 430 is not involved; in that case, intermediary device 420 may send the particular audio location to device 410 .
  • Step 510 may be performed automatically, i.e., in response to intermediary device 420 determining the particular audio location.
  • step 510 or step 506 may be performed in response to receiving, from device 430 , an indication that device 430 is about to process digital media item 404 .
  • the indication may be a request for an audio location that corresponds to the text bookmark.
  • audio media player 432 establishes the particular audio location as a current playback position of the audio data in digital media item 404 . This establishment may be performed in response to receiving the particular audio location from intermediary device 420 . Because the current playback position becomes the particular audio location, audio media player 432 is not required to play any of the audio that precedes the particular audio location in the audio data. For example, if the particular audio location indicates 2:56:03 (2 hours, 56 minutes, and 3 seconds), then audio media player 432 establishes that time in the audio data as the current playback position. Thus, if the user of device 430 selects a "play" button (whether graphical or physical) on device 430 , then audio media player 432 begins processing the audio data at that 2:56:03 mark.
  • device 410 stores mapping 406 (or a copy thereof). Therefore, in place of steps 504 - 508 , text media player 412 inspects mapping 406 to determine a particular text location, of the multiple text locations, that corresponds to the text bookmark. Then, text media player 412 determines a particular audio location, in mapping 406 , that corresponds to the particular text location. The text media player 412 may then cause the particular audio location to be sent to intermediary device 420 to allow device 430 to retrieve the particular audio location and establish a current playback position in the audio data to be the particular audio location.
  • Text media player 412 may also cause the particular text location (or text bookmark) to be sent to intermediary device 420 to allow device 410 (or another device, not shown) to later retrieve the particular text location to allow another text media player executing on the other device to display a portion (e.g., a page) of another copy of digital media item 402 , where the portion corresponds to the particular text location.
  • in yet another embodiment, intermediary device 420 and device 430 are not involved. In that case, steps 504 and 510 are not performed, and device 410 performs all other steps in FIG. 5A , including steps 506 and 508 .
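A sketch of the lookup behind steps 506 and 508, using (chapter, paragraph, sentence) tuples as text locations as in the example above; tuple comparison gives the "at or immediately before" selection directly. The mapping contents, audio offsets, and function name are illustrative.

```python
def audio_location_for_text_bookmark(mapping, text_bookmark):
    """mapping: list of (text_location, audio_location) records, where a text
    location is a (chapter, paragraph, sentence) tuple and an audio location is a
    time offset in seconds. Selects the text location at or immediately before
    the bookmark and returns its associated audio location."""
    candidates = [(t, a) for t, a in mapping if t <= text_bookmark]
    if not candidates:
        return mapping[0][1]
    _, audio_location = max(candidates)
    return audio_location

mapping = [
    ((5, 3, 1), 10563.0),   # 5th chapter, 3rd paragraph, 1st sentence
    ((5, 3, 6), 10601.5),   # 5th chapter, 3rd paragraph, 6th sentence
]
# Text bookmark at 5th chapter, 3rd paragraph, 5th sentence: record (5, 3, 1) is chosen.
print(audio_location_for_text_bookmark(mapping, (5, 3, 5)))   # -> 10563.0
```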
  • FIG. 5B is a flow diagram that depicts a process 550 for AT bookmark switching, according to an embodiment of the invention. Similarly to FIG. 5A , FIG. 5B is described using elements of system 400 depicted in FIG. 4 .
  • audio media player 432 determines an audio bookmark within digital media item 404 (e.g., an audio book).
  • the audio bookmark may be determined in response to input from the user. For example, the user may stop the playback of the audio data, for example, by selecting a “stop” button that is displayed on a touch screen of device 430 . Audio media player 432 determines the location within audio data of digital media item 404 that corresponds to where playback stopped. Thus, the audio bookmark may simply be the last place where the user stopped listening to the audio generated from digital media item 404 . Additionally or alternatively, the user may select one or more graphical buttons on the touch screen of device 430 to establish a particular location within digital media item 404 as the audio bookmark. For example, device 430 displays a timeline that corresponds to the length of the audio data in digital media item 404 . The user may select a position on the timeline and then provide one or more additional inputs that are used by audio media player 432 to establish the audio bookmark.
  • device 430 sends, over network 440 to intermediary device 420 , data that indicates the audio bookmark.
  • the intermediary device 420 may store the audio bookmark in association with device 430 and/or an account of the user of device 430 .
  • the user established an account with an operator of intermediary device 420 .
  • the user then registered one or more devices, including device 430 , with the operator. The registration caused each of the one or more devices to be associated with the user's account.
  • Intermediary device 420 also has access to (e.g., stores) mapping 406 .
  • Mapping 406 maps multiple audio locations in the audio data of digital media item 404 with multiple text locations within text data of digital media item 402 .
  • One or more factors may cause audio media player 432 to send the audio bookmark to intermediary device 420 .
  • Such factors may include the exiting (or closing down) of audio media player 432 , the establishment of the audio bookmark by the user, or an explicit instruction by the user to save the audio bookmark for use when displaying portions of the textual version of the work (reflected in digital media item 402 ) that corresponds to digital media item 404 , for which the audio bookmark is established.
  • intermediary device 420 inspects mapping 406 to determine a particular audio location, of the multiple audio locations, that corresponds to the audio bookmark.
  • the audio bookmark may not exactly match any of the multiple audio locations in mapping 406 .
  • intermediary device 420 may select the audio location that is closest to the audio bookmark.
  • intermediary device 420 may select the audio location that is immediately before the audio bookmark, which audio location may or may not be the closest audio location to the audio bookmark. For example, if the audio bookmark indicates 02:43:19 (or 2 hours, 43 minutes, and 19 seconds) and the closest audio locations in mapping 406 are (1) 02:41:07 and (2) 02:43:56, then audio location (1) is selected, even though audio location (2) is closest to the audio bookmark.
  • intermediary device 420 determines a particular text location, in mapping 406 , that corresponds to the particular audio location.
  • intermediary device 420 sends the particular text location to device 410 , which, in this example, is different than device 430 .
  • device 410 may be a tablet computer and device 430 may be a smart phone that is configured to process audio data and generate audible sounds.
  • Step 560 may be performed automatically, i.e., in response to intermediary device 420 determining the particular text location.
  • step 560 (or step 556 ) may be performed in response to receiving, from device 410 , an indication that device 410 is about to process the digital media item 402 .
  • the indication may be a request for a text location that corresponds to the audio bookmark.
  • text media player 412 displays information about the particular text location. Step 562 may be performed in response to receiving the particular text location from intermediary device 420 .
  • Device 410 is not required to display any of the content that precedes the particular text location in the textual version of the work reflected in digital media item 402 .
  • for example, if the particular text location indicates Chapter 3, paragraph 2, sentence 4, then device 410 displays a page that includes that sentence.
  • Text media player 412 may cause a marker to be displayed at the particular text location in the page that visually indicates, to a user of device 410 , where to begin reading in the page.
  • the user is able to immediately read the textual version of the work beginning at a location that corresponds to the last words spoken by a narrator in the audio book.
  • the device 410 stores mapping 406 . Therefore, in place of steps 556 - 560 , after step 554 (wherein the device 430 sends data that indicates the audio bookmark to intermediary device 420 ), intermediary device 420 sends the audio bookmark to device 410 . Then, text media player 412 inspects mapping 406 to determine a particular audio location, of the multiple audio locations, that corresponds to the audio bookmark. Then, text media player 412 determines a particular text location, in mapping 406 , that corresponds to the particular audio location. This alternative process then proceeds to step 562 , described above.
  • intermediary device 420 is not involved. Thus, steps 554 and 560 are not performed, and device 430 performs all other steps in FIG. 5B , including steps 556 and 558.
  • text from a portion of a textual version of a work is highlighted or “lit up” while audio data that corresponds to the textual version of the work is played.
  • the audio data is an audio version of a textual version of the work and may reflect a reading of text from the textual version by a human user.
  • highlighting refers to a media player (e.g., an “e-reader”) visually distinguishing that text from other text that is concurrently displayed with the highlighted text.
  • Highlighting text may involve changing the font of the text, changing the font style of the text (e.g., italicize, bold, underline), changing the size of the text, changing the color of the text, changing the background color of the text, or creating an animation associated with the text.
  • An example of creating an animation is causing the text (or background of the text) to blink on and off or to change colors.
  • Another example of creating an animation is creating a graphic to appear above, below, or around the text. For example, in response to the word “toaster” being played and detected by a media player, the media player displays a toaster image above the word “toaster” in the displayed text.
  • Another example of an animation is a bouncing ball that “bounces” on a portion of text (e.g., word, syllable, or letter) when that portion is detected in audio data that is played.
  • FIG. 6 is a flow diagram that depicts a process 600 for causing text, from a textual version of a work, to be highlighted while an audio version of the work is being played, according to an embodiment of the invention.
  • the current playback position (which is constantly changing) of audio data of the audio version is determined. This step may be performed by a media player executing on a user's device. The media player processes the audio data to generate audio for the user.
  • a mapping record in a mapping is identified.
  • the current playback position may match or nearly match the audio location identified in the mapping record.
  • Step 620 may be performed by the media player if the media player has access to a mapping that maps multiple audio locations in the audio data with multiple text locations in the textual version of the work.
  • step 620 may be performed by another process executing on the user's device or by a server that receives the current playback position from the user's device over a network.
  • the text location identified in the mapping record is identified.
  • At step 640, a portion of the textual version of the work that corresponds to the text location is caused to be highlighted. This step may be performed by the media player or another software application executing on the user's device. If a server performs the look-up steps (620 and 630), then step 640 may further involve the server sending the text location to the user's device. In response, the media player, or another software application, accepts the text location as input and causes the corresponding text to be highlighted.
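  • A minimal sketch of steps 610 through 640 follows, assuming the same in-memory list of (audio offset, text location) pairs used above; the player object and the highlight callback are hypothetical stand-ins for the media player and the display logic.

import bisect
import time

def follow_playback(player, mapping, highlight, poll_seconds=0.1):
    """Poll the player's current playback position (step 610), look up the most
    recently reached mapping record (steps 620 and 630), and highlight the text
    at that record's text location (step 640).  `player` and `highlight` are
    hypothetical hooks, not part of the patent."""
    offsets = [audio for audio, _ in mapping]
    last = None
    while player.is_playing():
        position = player.position()
        i = bisect.bisect_right(offsets, position) - 1
        if i >= 0 and i != last:
            highlight(mapping[i][1])
            last = i
        time.sleep(poll_seconds)
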
  • mappings are associated with different types of highlighting.
  • one text location in the mapping may be associated with the changing of the font color from black to red while another text location in the mapping may be associated with an animation, such as a toaster graphic that shows a piece of toast “popping” out of a toaster.
  • each mapping record in the mapping may include “highlighting data” that indicates how the text identified by the corresponding text location is to be highlighted.
  • the media player uses the highlighting data to determine how to highlight the text. If a mapping record does not include highlighting data, then the media player may not highlight the corresponding text. Alternatively, if a mapping record in the mapping does not include highlighting data, then the media player may use a “default” highlight technique (e.g., bolding the text) to highlight the text.
  • FIG. 7 is a flow diagram that depicts a process 700 of highlighting displayed text in response to audio input from a user, according to an embodiment of the invention.
  • a mapping is not required.
  • the audio input is used to highlight text in a portion of a textual version of a work that is concurrently displayed to the user.
  • audio input is received.
  • the audio input may be based on a user reading aloud text from a textual version of a work.
  • the audio input may be received by a device that displays a portion of the textual version.
  • the device may prompt the user to read aloud a word, phrase, or entire sentence.
  • the prompt may be visual or audio.
  • As an example of a visual prompt, the device may cause the following text to be displayed: “Please read the underlined text” while or immediately before the device displays a sentence that is underlined.
  • the device may cause a computer-generated voice to read “Please read the underlined text” or cause a pre-recorded human voice to be played, where the pre-recorded human voice provides the same instruction.
  • a speech-to-text analysis is performed on the audio input to detect one or more words reflected in the audio input.
  • the particular set of words may be all the words that are currently displayed by a computing device (e.g., an e-reader). Alternatively, the particular set of words may be all the words that the user was prompted to read.
  • the device causes that matching word to be highlighted.
  • the steps depicted in process 700 may be performed by a single computing device that displays text from a textual version of a work. Alternatively, the steps depicted in process 700 may be performed by one or more computing devices that are different than the computing device that displays text from the textual version.
  • the audio input from a user in step 710 may be sent from the user's device over a network to a network server that performs the speech-to-text analysis. The network server may then send highlight data to the user's device to cause the user's device to highlight the appropriate text.
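  • The word-matching portion of process 700 can be sketched as follows. This is illustrative only: the speech-to-text analysis is assumed to have already produced a list of recognized words, and the function names are hypothetical.

def words_to_highlight(recognized_words, displayed_words):
    """Compare the words detected by the speech-to-text analysis of the audio
    input against the particular set of words currently displayed (or prompted),
    and return the indices of the displayed words that were read aloud so that
    the device can highlight them."""
    def norm(word):
        return "".join(c for c in word.lower() if c.isalnum())
    spoken = {norm(w) for w in recognized_words}
    return [i for i, w in enumerate(displayed_words) if norm(w) in spoken]

# Example: the user was prompted to read an underlined sentence aloud.
displayed = ["Please", "read", "the", "underlined", "text"]
print(words_to_highlight(["please", "read"], displayed))   # -> [0, 1]
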
  • a user of a media player that displays portions of a textual version of a work may select portions of displayed text and cause the corresponding audio to be played. For example, if a displayed word from the digital book is “donut” and the user selects that word (e.g., by touching a portion of the media player's touch screen that displays that word), then the audio of “donut” may be played.
  • a mapping that maps text locations in a textual version of the work with audio locations in audio data is used to identify the portion of the audio data that corresponds to the selected text.
  • the user may select a single word, a phrase, or even one or more sentences.
  • the media player may identify one or more text locations. For example, the media player may identify a single text location that corresponds to the selected portion, even if the selected portion comprises multiple lines or sentences. The identified text location may correspond to the beginning of the selected portion. As another example, the media player may identify a first text location that corresponds to the beginning of the selected portion and a second text location that corresponds to the ending of the selected portion.
  • the media player uses the identified text location to look up a mapping record in the mapping that indicates a text location that is closest (or closest prior) to the identified text location.
  • the media player uses the audio location indicated in the mapping record to identify where, in the audio data, to begin processing the audio data in order to generate audio. If only a single text location is identified, then only the word or sounds at or near the audio location may be played. Thus, after the word or sounds are played, the media player ceases to play any more audio.
  • the media player begins playing at or near the audio location and does not cease playing the audio that follows the audio location until (a) the end of the audio data is reached, (b) further input is received from the user (e.g., selection of a “stop” button), or (c) a pre-designated stopping point in the audio data is reached (e.g., the end of a page or chapter that requires further input to proceed).
  • If the media player identifies two text locations based on the selected portion, then two audio locations are identified and may be used to identify where to begin playing and where to stop playing the corresponding audio.
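  • A minimal sketch of mapping a selected text span to an audio playback range follows, assuming a mapping held as (character offset, audio seconds) pairs sorted by character offset; the closest-prior record gives the starting point and the record after the end of the selection gives the stopping point. All names are illustrative.

import bisect

# Hypothetical mapping sorted by text offset: (character offset, audio seconds).
mapping = [(0, 0.0), (120, 7.5), (260, 15.2), (400, 23.9)]

def audio_range_for_selection(mapping, selection_start, selection_end):
    """Map the start and end text offsets of a selected portion to the audio
    location at which playback should begin (closest-prior record) and the
    audio location at which playback should stop (the record after the
    selection), if any."""
    offsets = [text for text, _ in mapping]
    start_i = max(bisect.bisect_right(offsets, selection_start) - 1, 0)
    end_i = max(bisect.bisect_right(offsets, selection_end) - 1, 0)
    begin = mapping[start_i][1]
    stop = mapping[end_i + 1][1] if end_i + 1 < len(mapping) else None
    return begin, stop

# Selecting characters 130 through 270 plays audio from 7.5 s until 23.9 s.
print(audio_range_for_selection(mapping, 130, 270))   # -> (7.5, 23.9)
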
  • the audio data identified by the audio location may be played slowly (i.e., at a slow playback speed) or continuously without advancing the current playback position in the audio data. For example, if a user of a tablet computer selects the displayed word “two” by touching a touch screen of the tablet computer with his finger and continuously touches the displayed word (i.e., without lifting his finger and without moving his finger to another displayed word), then the tablet computer plays the corresponding audio so that the word sounds as if it were read as “twoooooooooooooooo”.
  • the speed at which a user drags his finger across displayed text on a touch screen of a media player causes the corresponding audio to be played at the same or similar speed. For example, a user selects the letter “d” of the displayed word “donut” and then slowly moves his finger across the displayed word.
  • the media player identifies the corresponding audio data (using the mapping) and plays the corresponding audio at the same speed at which the user moves his finger. Therefore, the media player creates audio that sounds as if the reader of the text of the textual version of the work pronounced the word “donut” as “dooooooonnnnnnnuuuuuuut.”
  • the time that a user “touches” a word displayed on a touch screen dictates how quickly or slowly the audio version of the word is played. For example, a quick tap of a displayed word by the user's finger causes the corresponding audio to be played at a normal speed, whereas the user holding down his finger on the selected word for more than 1 second causes the corresponding audio to be played at ½ the normal speed.
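  • A sketch of the touch-duration behavior follows; the one-second threshold and half-speed rate come from the example above, but the function and its defaults are otherwise illustrative.

def playback_rate_for_touch(touch_duration_seconds, hold_threshold=1.0):
    """Choose a playback rate for the audio of a selected word: a quick tap
    plays at normal speed, while a press held past the threshold plays at half
    the normal speed."""
    return 0.5 if touch_duration_seconds >= hold_threshold else 1.0

print(playback_rate_for_touch(0.2))   # -> 1.0 (normal speed)
print(playback_rate_for_touch(1.4))   # -> 0.5 (half speed)
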
  • a user initiates the creation of annotations to one media version (e.g., audio) of a digital work and causes the annotations to be associated with another media version (e.g., text) of the digital work.
  • an annotation may be created in the context of one type of media
  • the annotation may be consumed in the context of another type of media.
  • the “context” in which an annotation is created or consumed refers to whether text is being displayed or audio is being played when the creation or consumption occurs.
  • an indication of the annotation may be displayed, by a device, at the beginning or the end of the corresponding textual version or on each “page” of the corresponding textual version.
  • the text that is displayed when an annotation is created in the text context is not used when consuming the annotation in the audio context.
  • an indication of the annotation may be displayed, by a device, at the beginning or end of the corresponding audio version or continuously while the corresponding audio version is being played.
  • an audio indication of the annotation may be played. For example, a “beep” is played simultaneously with the audio track in such a way that both the beep and the audio track can be heard.
  • FIGS. 8A-B are flow diagrams that depict processes for transferring an annotation from one context to another, according to an embodiment of the invention.
  • FIG. 8A is a flow diagram that depicts a process 800 for creating an annotation in the “text” context and consuming the annotation in the “audio” context.
  • FIG. 8B is a flow diagram that depicts a process 850 for creating an annotation in the “audio” context and consuming the annotation in the “text” context.
  • the creation and consumption of an annotation may occur on the same computing device (e.g., device 410 ) or on separate computing devices (e.g., devices 410 and 430 ).
  • FIG. 8A describes a scenario where the annotation is created and consumed on device 410
  • FIG. 8B describes a scenario where the annotation is created on device 430 and later consumed on device 410 .
  • text media player 412 executing on device 410 , causes text (e.g., in the form of a page) from digital media item 402 to be displayed.
  • text media player 412 determines a text location within a textual version of the work reflected in digital media item 402 .
  • the text location is eventually stored in association with an annotation.
  • the text location may be determined in a number of ways.
  • text media player 412 may receive input that selects the text location within the displayed text.
  • the input may be a user touching a touch screen (that displays the text) of device 410 for a period of time.
  • the input may select a specific word, a number of words, the beginning or ending of a page, before or after a sentence, etc.
  • the input may also include first selecting a button, which causes text media player 412 to change to a “create annotation” mode where an annotation may be created and associated with the text location.
  • text media player 412 determines the text location automatically (without user input) based on which portion of the textual version of the work (reflected in digital media item 402 ) is being displayed. For example, if device 410 is displaying page 20 of the textual version of the work, then the annotation will be associated with page 20 .
  • text media player 412 receives input that selects a “Create Annotation” button that may be displayed on the touch screen. Such a button may be displayed in response to input in step 804 that selects the text location, where, for example, the user touches the touch screen for a period of time, such as one second.
  • Although step 804 is depicted as occurring before step 806 , the selection of the “Create Annotation” button may alternatively occur prior to the determination of the text location.
  • text media player 412 receives input that is used to create annotation data.
  • the input may be voice data (such as the user speaking into a microphone of device 410 ) or text data (such as the user selecting keys on a keyboard, whether physical or graphical). If the annotation data is voice data, text media player 412 (or another process) may perform speech-to-text analysis on the voice data to create a textual version of the voice data.
  • text media player 412 stores the annotation data in association with the text location.
  • Text media player 412 uses a mapping (e.g., a copy of mapping 406 ) to identify a particular text location, in the mapping, that is closest to the text location. Then, using the mapping, text media player 412 identifies an audio location that corresponds to the particular text location.
  • text media player 412 sends, over network 440 to intermediary device 420 , the annotation data and the text location.
  • intermediary device 420 stores the annotation data in association with the text location.
  • Intermediary device 420 uses a mapping (e.g., mapping 406 ) to identify a particular text location, in mapping 406 , that is closest to the text location. Then, using mapping 406 , intermediary device 420 identifies an audio location that corresponds to the particular text location.
  • Intermediary device 420 sends the identified audio location over network 440 to device 410 .
  • Intermediary device 420 may send the identified audio location in response to a request, from device 410 , for certain audio data and/or for annotations associated with certain audio data. For example, in response to a request for an audio book version of “The Tale of Two Cities”, intermediary device 420 determines whether there is any annotation data associated with that audio book and, if so, sends the annotation data to device 410 .
  • Step 810 may also comprise storing date and/or time information that indicates when the annotation was created. This information may be displayed later when the annotation is consumed in the audio context.
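  • A sketch of storing an annotation with its text location and resolving the corresponding audio location via the mapping (as in step 810) follows, assuming the same (text offset, audio seconds) representation used in the earlier sketches; the in-memory annotation list and all names are hypothetical.

import bisect
import time

# Hypothetical mapping sorted by text offset: (character offset, audio seconds).
mapping = [(0, 0.0), (510, 31.4), (1024, 63.0)]

annotations = []   # stands in for the annotation store on device 410 or device 420

def store_annotation(text_location, annotation_data):
    """Store annotation data in association with its text location, resolve the
    closest-prior mapping record, and also record the corresponding audio
    location and a creation timestamp so the annotation can later be surfaced
    in the audio context."""
    offsets = [text for text, _ in mapping]
    i = max(bisect.bisect_right(offsets, text_location) - 1, 0)
    annotations.append({
        "text_location": text_location,
        "audio_location": mapping[i][1],
        "data": annotation_data,
        "created": time.time(),
    })

store_annotation(600, "Compare this passage with the earlier chapter.")
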
  • audio media player 414 plays audio by processing audio data of digital media item 404 , which, in this example (although not shown), may be stored on device 410 or may be streamed to device 410 from intermediary device 420 over network 440 .
  • audio media player 414 determines when the current playback position in the audio data matches or nearly matches the audio location identified in step 810 using mapping 406 .
  • audio media player 414 may cause data that indicates that an annotation is available to be displayed, regardless of where the current playback position is located and without having to play any audio, as indicated in step 812 .
  • step 812 is unnecessary.
  • a user may launch audio media player 414 and cause audio media player 414 to load the audio data of digital media item 404 .
  • Audio media player 414 determines that annotation data is associated with the audio data.
  • Audio media player 414 causes information about the audio data (e.g., title, artist, genre, length, etc.) to be displayed without generating any audio associated with the audio data.
  • the information may include a reference to the annotation data and information about a location within the audio data that is associated with the annotation data, where the location corresponds to the audio location identified in step 810 .
  • audio media player 414 consumes the annotation data. If the annotation data is voice data, then consuming the annotation data may involve processing the voice data to generate audio or converting the voice data to text data and displaying the text data. If the annotation data is text data, then consuming the annotation data may involve displaying the text data, for example, in a side panel of a GUI that displays attributes of the audio data that is played or in a new window that appears separate from the GUI.
  • attributes include time length of the audio data, the current playback position, which may indicate an absolute location within the audio data (e.g., a time offset) or a relative position within the audio data (e.g., chapter or section number), a waveform of the audio data, and title of the digital work.
  • FIG. 8B describes a scenario, as noted previously, where an annotation is created on device 430 and later consumed on device 410 .
  • audio media player 432 processes audio data from digital media item 404 to play audio.
  • audio media player 432 determines an audio location within the audio data.
  • the audio location is eventually stored in association with an annotation.
  • the audio location may be determined in a number of ways.
  • audio media player 432 may receive input that selects the audio location within the audio data.
  • the input may be a user touching a touch screen (that displays attributes of the audio data) of device 430 for a period of time.
  • the input may select an absolute position within a timeline that reflects the length of the audio data or a relative position within the audio data, such as a chapter number and a paragraph number.
  • the input may also comprise first selecting a button, which causes audio media player 432 to change to a “create annotation” mode where an annotation may be created and associated with the audio location.
  • audio media player 432 determines the audio location automatically (without user input) based on which portion of the audio data is being processed. For example, if audio media player 432 is processing a portion of the audio data that corresponds to chapter 20 of a digital work reflected in digital media item 404 , then audio media player 432 determines that the audio location is at least somewhere within chapter 20.
  • audio media player 432 receives input that selects a “Create Annotation” button that may be displayed on the touch screen of device 430 .
  • a button may be displayed in response to input in step 854 that selects the audio location, where, for example, the user touches the touch screen continuously for a period of time, such as one second.
  • Although step 854 is depicted as occurring before step 856 , the selection of the “Create Annotation” button may alternatively occur prior to the determination of the audio location.
  • the first media player receives input that is used to create annotation data, similar to step 808 .
  • audio media player 432 stores the annotation data in association with the audio location.
  • Audio media player 432 uses a mapping (e.g., mapping 406 ) to identify a particular audio location, in the mapping, that is closest to the audio location determined in step 854 . Then, using the mapping, audio media player 432 identifies a text location that corresponds to the particular audio location.
  • audio media player 432 sends, over network 440 to intermediary device 420 , the annotation data and the audio location.
  • intermediary device 420 stores the annotation data in association with the audio location.
  • Intermediary device 420 uses mapping 406 to identify a particular audio location, in the mapping, that is closest to the audio location determined in step 854 . Then, using mapping 406 , intermediary device 420 identifies a text location that corresponds to the particular audio location.
  • Intermediary device 420 sends the identified text location over network 440 to device 410 .
  • Intermediary device 420 may send the identified text location in response to a request, from device 410 , for certain text data and/or for annotations associated with certain text data. For example, in response to a request for a digital book of “The Grapes of Wrath”, intermediary device 420 determines whether there is any annotation data associated with that digital book and, if so, sends the annotation data to device 410 .
  • Step 860 may also comprise storing date and/or time information that indicates when the annotation was created. This information may be displayed later when the annotation is consumed in the text context.
  • device 410 displays text data associated with digital media item 402 , which is a textual version of digital media item 404 .
  • Device 410 displays the text data of digital media item 402 based on a locally-stored copy of digital media item 402 or, if a locally-stored copy does not exist, may display the text data while the text data is streamed from intermediary device 420 .
  • device 410 determines when a portion of the textual version of the work (reflected in digital media item 402 ) that includes the text location (identified in step 860 ) is displayed. Alternatively, device 410 may display data that indicates that an annotation is available regardless of what portion of the textual version of the work, if any, is displayed.
  • text media player 412 consumes the annotation data. If the annotation data is voice data, then consuming the annotation data may comprise playing the voice data or converting the voice data to text data and displaying the text data. If the annotation data is text data, then consuming the annotation data may comprise displaying the text data, for example, in a side panel of a GUI that displays a portion of the textual version of the work or in a new window that appears separate from the GUI.
  • the techniques described herein are implemented by one or more special-purpose computing devices.
  • the special-purpose computing devices may be hard-wired to perform the techniques, or may include digital electronic devices such as one or more application-specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs) that are persistently programmed to perform the techniques, or may include one or more general purpose hardware processors programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination.
  • Such special-purpose computing devices may also combine custom hard-wired logic, ASICs, or FPGAs with custom programming to accomplish the techniques.
  • the special-purpose computing devices may be desktop computer systems, portable computer systems, handheld devices, networking devices or any other device that incorporates hard-wired and/or program logic to implement the techniques.
  • FIG. 9 is a block diagram that illustrates a computer system 900 upon which an embodiment of the invention may be implemented.
  • Computer system 900 includes a bus 902 or other communication mechanism for communicating information, and a hardware processor 904 coupled with bus 902 for processing information.
  • Hardware processor 904 may be, for example, a general purpose microprocessor.
  • Computer system 900 also includes a main memory 906 , such as a random access memory (RAM) or other dynamic storage device, coupled to bus 902 for storing information and instructions to be executed by processor 904 .
  • Main memory 906 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 904 .
  • Such instructions, when stored in non-transitory storage media accessible to processor 904 , render computer system 900 into a special-purpose machine that is customized to perform the operations specified in the instructions.
  • Computer system 900 further includes a read only memory (ROM) 908 or other static storage device coupled to bus 902 for storing static information and instructions for processor 904 .
  • a storage device 910 , such as a magnetic disk or optical disk, is provided and coupled to bus 902 for storing information and instructions.
  • Computer system 900 may be coupled via bus 902 to a display 912 , such as a cathode ray tube (CRT), for displaying information to a computer user.
  • An input device 914 is coupled to bus 902 for communicating information and command selections to processor 904 .
  • Another type of user input device is cursor control 916 , such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 904 and for controlling cursor movement on display 912 .
  • This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane.
  • Computer system 900 may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system 900 to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system 900 in response to processor 904 executing one or more sequences of one or more instructions contained in main memory 906 . Such instructions may be read into main memory 906 from another storage medium, such as storage device 910 . Execution of the sequences of instructions contained in main memory 906 causes processor 904 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.
  • Non-volatile media includes, for example, optical or magnetic disks, such as storage device 910 .
  • Volatile media includes dynamic memory, such as main memory 906 .
  • Common forms of storage media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, or any other memory chip or cartridge.
  • Storage media is distinct from but may be used in conjunction with transmission media.
  • Transmission media participates in transferring information between storage media.
  • transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 902 .
  • transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
  • Various forms of media may be involved in carrying one or more sequences of one or more instructions to processor 904 for execution.
  • the instructions may initially be carried on a magnetic disk or solid state drive of a remote computer.
  • the remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem.
  • a modem local to computer system 900 can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal.
  • An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus 902 .
  • Bus 902 carries the data to main memory 906 , from which processor 904 retrieves and executes the instructions.
  • the instructions received by main memory 906 may optionally be stored on storage device 910 either before or after execution by processor 904 .
  • Computer system 900 also includes a communication interface 918 coupled to bus 902 .
  • Communication interface 918 provides a two-way data communication coupling to a network link 920 that is connected to a local network 922 .
  • communication interface 918 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line.
  • communication interface 918 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN.
  • Wireless links may also be implemented.
  • communication interface 918 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
  • Network link 920 typically provides data communication through one or more networks to other data devices.
  • network link 920 may provide a connection through local network 922 to a host computer 924 or to data equipment operated by an Internet Service Provider (ISP) 926 .
  • ISP 926 in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet” 928 .
  • Internet 928 uses electrical, electromagnetic or optical signals that carry digital data streams.
  • the signals through the various networks and the signals on network link 920 and through communication interface 918 , which carry the digital data to and from computer system 900 , are example forms of transmission media.
  • Computer system 900 can send messages and receive data, including program code, through the network(s), network link 920 and communication interface 918 .
  • a server 930 might transmit a requested code for an application program through Internet 928 , ISP 926 , local network 922 and communication interface 918 .
  • the received code may be executed by processor 904 as it is received, and/or stored in storage device 910 , or other non-volatile storage for later execution.

Abstract

Techniques are provided for creating a mapping that maps locations in audio data (e.g., an audio book) to corresponding locations in text data (e.g., an e-book). Techniques are provided for using a mapping between audio data and text data, whether the mapping is created automatically or manually. A mapping may be used for bookmark switching where a bookmark established in one version of a digital work is used to identify a corresponding location within another version of the digital work. Alternatively, the mapping may be used to play audio that corresponds to text selected by a user. Alternatively, the mapping may be used to automatically highlight text in response to audio that corresponds to the text being played. Alternatively, the mapping may be used to determine where an annotation created in one media context (e.g., audio) will be consumed in another media context (e.g., text).

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • The present application claims priority to U.S. Provisional Patent Application No. 61/493,372, entitled “Automatically Creating A Mapping Between Text Data And Audio Data And Switching Between Text Data And Audio Data Based On A Mapping,” filed on Jun. 3, 2011, invented by Alan C. Cannistraro, et al., the entire disclosure of which is incorporated by reference for all purposes as if fully set forth herein.
  • The present application claims priority to U.S. Provisional Patent Application No. 61/494,375, entitled “Automatically Creating A Mapping Between Text Data And Audio Data And Switching Between Text Data And Audio Data Based On A Mapping,” filed on Jun. 7, 2011, invented by Alan C. Cannistraro, et al., the entire disclosure of which is incorporated by reference for all purposes as if fully set forth herein.
  • The present application is related to U.S. patent application Ser. No. ______ entitled “Switching Between Text Data and Audio Data Based on a Mapping,” filed on the same day herewith, the entire disclosure of which is incorporated by reference for all purposes as if fully set forth herein.
  • FIELD OF THE INVENTION
  • The present invention relates to automatically creating a mapping between text data and audio data by analyzing the audio data to detect words reflected therein and comparing those words to words in the document.
  • BACKGROUND
  • With the cost of handheld electronic devices decreasing and the demand for digital content growing, creative works that were once published on printed media are increasingly becoming available as digital media. For example, digital books (also known as “e-books”) are increasingly popular, along with specialized handheld electronic devices known as e-book readers (or “e-readers”). Also, other handheld devices, such as tablet computers and smart phones, although not designed solely as e-readers, have the capability to be operated as e-readers.
  • A common standard by which e-books are formatted is the EPUB standard (short for “electronic publication”), which is a free and open e-book standard by the International Digital Publishing Forum (IDPF). An EPUB file uses XHTML 1.1 (or DTBook) to construct the content of a book. Styling and layout are performed using a subset of CSS, referred to as OPS Style Sheets.
  • For some written works, especially those that become popular, an audio version of the written work is created. For example, a recording of a famous individual (or one with a pleasant voice) reading a written work is created and made available for purchase, whether online or in a brick and mortar store.
  • It is not uncommon for consumers to purchase both an e-book and an audio version (or “audio book”) of the e-book. In some cases, a user reads the entirety of an e-book and then desires to listen to the audio book. In other cases, a user transitions between reading and listening to the book, based on the user's circumstances. For example, while engaging in sports or driving during a commute, the user will tend to listen to the audio version of the book. On the other hand, when lounging in a sofa-chair prior to bed, the user will tend to read the e-book version of the book. Unfortunately, such transitions can be painful, since the user must remember where she stopped in the e-book and manually locate where to begin in the audio book, or vice versa. Even if the user remembers clearly what was happening in the book where the user left off, such transitions can still be painful because knowing what is happening does not necessarily make it easy to find the portion of an e-book or audio book that corresponds to those happenings. Thus, switching between an e-book and an audio book may be extremely time-consuming.
  • The specification “EPUB Media Overlays 3.0” defines a usage of SMIL (Synchronized Multimedia Integration Language), the Package Document, the EPUB Style Sheet, and the EPUB Content Document for representation of synchronized text and audio publications. A pre-recorded narration of a publication can be represented as a series of audio clips, each corresponding to part of the text. Each single audio clip, in the series of audio clips that make up a pre-recorded narration, typically represents a single phrase or paragraph, but infers no order relative to the other clips or to the text of a document. Media Overlays solve this problem of synchronization by tying the structured audio narration to its corresponding text in the EPUB Content Document using SMIL markup. Media Overlays are a simplified subset of SMIL 3.0 that allow the playback sequence of these clips to be defined.
  • Unfortunately, creating Media Overlay files is largely a manual process. Consequently, the granularity of the mapping between audio and textual versions of a work is very coarse. For example, a media overlay file may associate the beginning of each paragraph in an e-book with a corresponding location in an audio version of the book. The reason that media overlay files, especially for novels, do not contain a mapping at any finer level of granularity, such as on a word-by-word basis, is that creating such a highly granular media overlay file might take countless hours of human labor.
  • The approaches described in this section are approaches that could be pursued, but not necessarily approaches that have been previously conceived or pursued. Therefore, unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • In the drawings:
  • FIG. 1 is a flow diagram that depicts a process for automatically creating a mapping between text data and audio data, according to an embodiment of the invention;
  • FIG. 2 is a block diagram that depicts a process that involves an audio-to-text correlator in generating a mapping between text data and audio data, according to an embodiment of the invention;
  • FIG. 3 is a flow diagram that depicts a process for using a mapping in one or more of these scenarios, according to an embodiment of the invention;
  • FIG. 4 is a block diagram that depicts an example system 400 that may be used to implement some of the processes described herein, according to an embodiment of the invention;
  • FIGS. 5A-B are flow diagrams that depict processes for bookmark switching, according to an embodiment of the invention;
  • FIG. 6 is a flow diagram that depicts a process for causing text, from a textual version of a work, to be highlighted while an audio version of the work is being played, according to an embodiment of the invention;
  • FIG. 7 is a flow diagram that depicts a process of highlighting displayed text in response to audio input from a user, according to an embodiment of the invention;
  • FIGS. 8A-B are flow diagrams that depict processes for transferring an annotation from one media context to another, according to an embodiment of the invention; and
  • FIG. 9 is a block diagram that illustrates a computer system upon which an embodiment of the invention may be implemented.
  • DETAILED DESCRIPTION
  • In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the present invention.
  • Overview of Automatic Generation of Audio-to-Text Mapping
  • According to one approach, a mapping is automatically created where the mapping maps locations within an audio version of a work (e.g., an audio book) with corresponding locations in a textual version of the work (e.g., an e-book). The mapping is created by performing a speech-to-text analysis on the audio version to identify words reflected in the audio version. The identified words are matched up with the corresponding words in the textual version of the work. The mapping associates locations (within the audio version) of the identified words with locations in the textual version of the work where the identified words are found.
  • Audio Version Formats
  • The audio data reflects an audible reading of text of a textual version of a work, such as a book, web page, pamphlet, flyer, etc. The audio data may be stored in one or more audio files. The one or more audio files may be in one of many file formats. Non-limiting examples of audio file formats include AAC, MP3, WAV, and PCM.
  • Textual Version Formats
  • Similarly, the text data to which the audio data is mapped may be stored in one of many document file formats. Non-limiting examples of document file formats include DOC, TXT, PDF, RTF, HTML, XHTML, and EPUB.
  • A typical EPUB document is accompanied by a file that (a) lists each XHTML content document, and (b) indicates an order of the XHTML content documents. For example, if a book comprises 20 chapters, then an EPUB document for that book may have 20 different XHTML documents, one for each chapter. A file that accompanies the EPUB document identifies an order of the XHTML documents that corresponds to the order of the chapters in the book. Thus, a single (logical) document (whether an EPUB document or another type of document) may comprise multiple data items or files.
  • The words or characters reflected in the text data may be in one or multiple languages. For example, one portion of the text data may be in English while another portion of the text data may be in French. Although examples of English words are provided herein, embodiments of the invention may be applied to other languages, including character-based languages.
  • Audio and Text Locations in Mapping
  • As described herein, a mapping comprises a set of mapping records, where each mapping record associates an audio location with a text location.
  • Each audio location identifies a location in audio data. An audio location may indicate an absolute location within the audio data, a relative location within the audio data, or a combination of an absolute location and a relative location. As an example of an absolute location, an audio location may indicate a time offset (e.g., 04:32:24 indicating 4 hours, 32 minutes, 24 seconds) into the audio data, or a time range, as indicated below in Example A. As an example of a relative location, an audio location may indicate a chapter number, a paragraph number, and a line number. As an example of a combination of an absolute location and a relative location, the audio location may indicate a chapter number and a time offset into the chapter indicated by the chapter number.
  • Similarly, each text location identifies a location in text data, such as a textual version of a work. A text location may indicate an absolute location within the textual version of the work, a relative location within the textual version of the work, or a combination of an absolute location and a relative location. As an example of an absolute location, a text location may indicate a byte offset into the textual version of the work and/or an “anchor” within the textual version of the work. An anchor is metadata within the text data that identifies a specific location or portion of text. An anchor may be stored separate from the text in the text data that is displayed to an end-user or may be stored among the text that is displayed to an end-user. For example, text data may include the following sentence: “Why did the chicken <i name=“123”/>cross the road?” where “<i name=“123”/>” is the anchor. When that sentence is displayed to a user, the user only sees “Why did the chicken cross the road?” Similarly, the same sentence may have multiple anchors as follows: “<i name=“123”/>Why <i name=“124”/>did <i name=“125”/>the <i name=“126”/>chicken <i name=“127”/>cross <i name=“128”/>the <i name=“129”/>road?” In this example, there is an anchor prior to each word in the sentence.
  • As an example of a relative location, a text location may indicate a page number, a chapter number, a paragraph number, and/or a line number. As an example of a combination of an absolute location and a relative location, a text location may indicate a chapter number and an anchor into the chapter indicated by the chapter number.
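  • One possible in-memory representation of these absolute, relative, and combined location forms is sketched below; the field names are illustrative and are not drawn from the patent.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AudioLocation:
    # Absolute form: a time offset into the audio data, in seconds.
    offset_seconds: Optional[float] = None
    # Relative form: e.g., a chapter number (may be combined with the offset).
    chapter: Optional[int] = None

@dataclass
class TextLocation:
    # Absolute forms: a byte offset or a named anchor within the text data.
    byte_offset: Optional[int] = None
    anchor: Optional[str] = None
    # Relative forms: chapter, paragraph, and line numbers.
    chapter: Optional[int] = None
    paragraph: Optional[int] = None
    line: Optional[int] = None

# A combined audio location: 24 seconds into chapter 4.
location = AudioLocation(offset_seconds=24.0, chapter=4)
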
  • Examples of how to represent a text location and an audio location are provided in the specification entitled “EPUB Media Overlays 3.0,” which defines a usage of SMIL (Synchronized Multimedia Integration Language), an EPUB Style Sheet, and an EPUB Content Document. An example of an association that associates a text location with an audio location and that is provided in the specification is as follows:
  • <par>
      <text src="chapter1.xhtml#sentence1"/>
      <audio src="chapter1_audio.mp3" clipBegin="23s" clipEnd="45s"/>
    </par>
  • Example A
  • In Example A, the “par” element includes two child elements: a “text” element and an “audio” element. The text element comprises an attribute “src” that identifies a particular sentence within an XHTML document that contains content from the first chapter of a book. The audio element comprises a “src” attribute that identifies an audio file that contains an audio version of the first chapter of the book, a “clipBegin” attribute that identifies where an audio clip within the audio file begins, and a “clipEnd” attribute that identifies where the audio clip within the audio file ends. Thus, seconds 23 through 45 in the audio file correspond to the first sentence in Chapter 1 of the book.
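  • A sketch of turning a <par> element such as Example A into a mapping record, using Python's standard XML parser, is shown below; only the plain “Ns” clock form used in the example is handled, and the record layout is illustrative.

import xml.etree.ElementTree as ET

smil = """
<par>
  <text src="chapter1.xhtml#sentence1"/>
  <audio src="chapter1_audio.mp3" clipBegin="23s" clipEnd="45s"/>
</par>
"""

def mapping_record_from_par(par_xml):
    """Turn one SMIL <par> element, such as Example A, into a mapping record
    that pairs a text location with an audio clip."""
    par = ET.fromstring(par_xml)
    text_src = par.find("text").attrib["src"]
    audio = par.find("audio").attrib
    to_seconds = lambda value: float(value.rstrip("s"))
    return {
        "text_location": text_src,
        "audio_file": audio["src"],
        "clip_begin": to_seconds(audio["clipBegin"]),
        "clip_end": to_seconds(audio["clipEnd"]),
    }

print(mapping_record_from_par(smil))
# {'text_location': 'chapter1.xhtml#sentence1', 'audio_file': 'chapter1_audio.mp3',
#  'clip_begin': 23.0, 'clip_end': 45.0}
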
  • Creating a Mapping Between Text and Audio
  • According to an embodiment, a mapping between a textual version of a work and an audio version of the same work is automatically generated. Because the mapping is generated automatically, the mapping may use much finer granularity than would be practical using manual text-to-audio mapping techniques. Each automatically-generated text-to-audio mapping includes multiple mapping records, each of which associates a text location in the textual version with an audio location in the audio version.
  • FIG. 1 is a flow diagram that depicts a process 100 for automatically creating a mapping between a textual version of a work and an audio version of the same work, according to an embodiment of the invention. At step 110, a speech-to-text analyzer receives audio data that reflects an audible version of the work. At step 120, while the speech-to-text analyzer performs an analysis of the audio data, the speech-to-text analyzer generates text for portions of the audio data. At step 130, based on the text generated for the portions of the audio data, the speech-to-text analyzer generates a mapping between a plurality of audio locations in the audio data and a corresponding plurality of text locations in the textual version of the work.
  • Step 130 may involve the speech-to-text analyzer comparing the generated text with text in the textual version of the work to determine where, within the textual version of the work, the generated text is located. For each portion of generated text that is found in the textual version of the work, the speech-to-text analyzer associates (1) an audio location that indicates where, within the audio data, the corresponding portion of audio data is found with (2) a text location that indicates where, within the textual version of the work, the portion of text is found.
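  • The association performed in step 130 can be sketched as follows; this is a minimal illustration in which the speech-to-text analysis is assumed to have already produced (word, audio offset) pairs, and word indices stand in for richer text locations.

def build_mapping(recognized, document_words):
    """Sketch of step 130: `recognized` is a list of (word, audio offset in
    seconds) pairs produced by the speech-to-text analysis of step 120, and
    `document_words` is the textual version of the work split into words.
    Each recognized word is matched to its next occurrence at or after the
    current position in the textual version, and the pair of locations becomes
    a mapping record."""
    records = []
    position = 0   # current position within the textual version of the work
    for word, audio_seconds in recognized:
        for i in range(position, len(document_words)):
            if document_words[i].lower().strip(".,;!?\"'") == word.lower():
                records.append({"audio": audio_seconds, "text_word_index": i})
                position = i + 1
                break
        # If a word is not found (e.g., the narrator ad-libbed), no record is added.
    return records

document = "Why did the chicken cross the road".split()
recognized = [("why", 0.0), ("did", 0.4), ("chicken", 1.1), ("road", 2.0)]
print(build_mapping(recognized, document))
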
  • Textual Context
  • Every document has a “textual context”. The textual context of a textual version of a work includes intrinsic characteristics of the textual version of the work (e.g., the language the textual version of the work is written in, the specific words that the textual version of the work uses, the grammar and punctuation that the textual version of the work uses, the way the textual version of the work is structured, etc.) and extrinsic characteristics of the work (e.g., the time period in which the work was created, the genre to which the work belongs, the author of the work, etc.).
  • Different works may have significantly different textual contexts. For example, the grammar used in a classic English novel may be very different from the grammar of modern poetry. Thus, while a certain word order may follow the rules of one grammar, that same word order may violate the rules of another grammar. Similarly, the grammar used in both a classic English novel and modern poetry may differ from the grammar (or lack thereof) employed in a text message sent from one teenager to another.
  • As mentioned above, one technique described herein automatically creates a fine granularity mapping between the audio version of a work and the textual version of the same work by performing a speech-to-text conversion of the audio version of the work. In an embodiment, the textual context of a work is used to increase the accuracy of the speech-to-text analysis that is performed on the audio version of the work. For example, in order to determine the grammar employed in a work, the speech-to-text analyzer (or another process) may analyze the textual version of the work prior to performing a speech-to-text analysis. The speech-to-text analyzer may then make use of the grammar information thus obtained to increase the accuracy of the speech-to-text analysis of the audio version of the work.
  • Instead of or in addition to automatically determining the grammar of a work based on the textual version of the work, a user may provide input that identifies one or more rules of grammar that are followed by the author of the work. The rules associated with the identified grammar are input to the speech-to-text analyzer to assist the analyzer in recognizing words in the audio version of the work.
  • Limiting the Candidate Dictionary Based on Textual Version
  • Typically, speech-to-text analyzers must be configured or designed to recognize virtually every word in the English language and, optionally, some words in other languages. Therefore, speech-to-text analyzers must have access to a large dictionary of words. The dictionary from which a speech-to-text analyzer selects words during a speech-to-text operation is referred to herein as the “candidate dictionary” of the speech-to-text analyzer. The number of unique words in a typical candidate dictionary is approximately 500,000.
  • In an embodiment, text from the textual version of a work is taken into account when performing the speech-to-text analysis of the audio version of the work. Specifically, in one embodiment, during the speech-to-text analysis of an audio version of a work, the candidate dictionary used by the speech-to-text analyzer is restricted to the specific set of words that are in the text version of the work. In other words, the only words that are considered to be “candidates” during the speech-to-text operation performed on an audio version of a work are those words that actually appear in the textual version of the work.
  • By limiting the candidate dictionary used in the speech-to-text translation of a particular work to those words that appear in the textual version of the work, the speech-to-text operation may be significantly improved. For example, assume that the number of unique words in a particular work is 20,000. A conventional speech-to-text analyzer may have difficulty determining to which specific word, of a 500,000 word candidate dictionary, a particular portion of audio corresponds. However, that same portion of audio may unambiguously correspond to one particular word when only the 20,000 unique words that are in the textual version of the work are considered. Thus, with such a much smaller dictionary of possible words, the accuracy of the speech-to-text analyzer may be significantly improved.
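  • A minimal sketch of building such a restricted candidate dictionary from the textual version of the work follows; the tokenization is simplistic and purely illustrative.

import re

def candidate_dictionary(textual_version):
    """Restrict the candidate dictionary to the unique words that actually
    appear in the textual version of the work, rather than a general-purpose
    dictionary of hundreds of thousands of words."""
    return {w.lower() for w in re.findall(r"[A-Za-z']+", textual_version)}

text = "It was the best of times, it was the worst of times."
print(sorted(candidate_dictionary(text)))
# ['best', 'it', 'of', 'the', 'times', 'was', 'worst']
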
  • Limiting the Candidate Dictionary Based on Current Position
  • To improve accuracy, the candidate dictionary may be restricted to even fewer words than all of the words in the textual version of the work. In one embodiment, the candidate dictionary is limited to those words found in a particular portion of the textual version of the work. For example, during a speech-to-text translation of a work, it is possible to approximately track the “current translation position” of the translation operation relative to the textual version of the work. Such tracking may be performed, for example, by comparing (a) the text that has been generated during the speech-to-text operation so far, against (b) the textual version of the work.
  • Once the current translation position has been determined, the candidate dictionary may be further restricted based on the current translation position. For example, in one embodiment, the candidate dictionary is limited to only those words that appear, within the textual version of the work, after the current translation position. Thus, words that are found prior to the current translation position, but not thereafter, are effectively removed from the candidate dictionary. Such removal may increase the accuracy of the speech-to-text analyzer, since the smaller the candidate dictionary, the less likely the speech-to-text analyzer will translate a portion of audio data to the wrong word.
  • As another example, prior to a speech-to-text analysis, an audio book and a digital book may be divided into a number of segments or sections. The audio book may be associated with an audio section mapping and the digital book may be associated with a text section mapping. For example, the audio section mapping and the text section mapping may identify where each chapter begins or ends. These respective mappings may be used by a speech-to-text analyzer to limit the candidate dictionary. For example, if the speech-to-text analyzer determines, based on the audio section mapping, that the speech-to-text analyzer is analyzing the 4th chapter of the audio book, then the speech-to-text analyzer uses the text section mapping to identify the 4th chapter of the digital book and limit the candidate dictionary to the words found in the 4th chapter.
  • In a related embodiment, the speech-to-text analyzer employs a sliding window that moves as the current translation position moves. As the speech-to-text analyzer is analyzing the audio data, the speech-to-text analyzer moves the sliding window “across” the textual version of the work. The sliding window indicates two locations within the textual version of the work. For example, the boundaries of the sliding window may be (a) the start of the paragraph that precedes the current translation position and (b) the end of the third paragraph after the current translation position. The candidate dictionary is restricted to only those words that appear between those two locations.
  • While a specific example was given above, the window may span any amount of text within the textual version of the work. For example, the window may span an absolute amount of text, such as 60 characters. As another example, the window may span a relative amount of text from the textual version of the work, such as ten words, three “lines” of text, 2 sentences, or 1 “page” of text. In the relative amount scenario, the speech-to-text analyzer may use formatting data within the textual version of the work to determine how much of the textual version of the work constitutes a line or a page. For example, the textual version of a work may comprise a page indicator (e.g., in the form of an HTML or XML tag) that indicates, within the content of the textual version of the work, the beginning of a page or the ending of a page.
  • In an embodiment, the start of the window corresponds to the current translation position. For example, the speech-to-text analyzer maintains a current text location that indicates the most recently-matched word in the textual version of the work and maintains a current audio location that indicates the most recently-identified word in the audio data. Unless the narrator (whose voice is reflected in the audio data) misreads text of the textual version of the work, adds his/her own content, or skips portions of the textual version of the work during the recording, the next word that the speech-to-text analyzer detects in the audio data (i.e., after the current audio location) is most likely the next word in the textual version of the work (i.e., after the current text location). Maintaining both locations may significantly increase the accuracy of the speech-to-text translation.
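  • As a non-authoritative illustration of the sliding-window restriction described above, the following Python sketch builds a candidate dictionary from a window of paragraphs around the current translation position. The function name candidate_dictionary and the paragraph-level representation of the window are assumptions made for this example and are not part of the described embodiments.

        def candidate_dictionary(paragraphs, current_paragraph_index,
                                 paragraphs_before=1, paragraphs_after=3):
            """Return the set of words inside a sliding window around the
            current translation position (expressed here as a paragraph index)."""
            start = max(0, current_paragraph_index - paragraphs_before)
            end = min(len(paragraphs), current_paragraph_index + paragraphs_after + 1)
            words = set()
            for paragraph in paragraphs[start:end]:
                words.update(w.strip('.,;:!?"').lower() for w in paragraph.split())
            return words

        # Example: restrict the dictionary to the window around paragraph 2.
        paragraphs = [
            "Call me Ishmael.",
            "Some years ago, never mind how long precisely,",
            "having little or no money in my purse,",
            "I thought I would sail about a little.",
        ]
        print(candidate_dictionary(paragraphs, current_paragraph_index=2))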
  • Creating a Mapping Using Audio-to-Audio Correlation
  • In an embodiment, a text-to-speech generator and an audio-to-text correlator are used to automatically create a mapping between the audio version of a work and the textual version of a work. FIG. 2 is a block diagram that depicts these analyzers and the data used to generate the mapping. Textual version 210 of a work (such as an EPUB document) is input to text-to-speech generator 220. Text-to-speech generator 220 may be implemented in software, hardware, or a combination of hardware and software. Whether implemented in software or hardware, text-to-speech generator 220 may be implemented on a single computing device or may be distributed among multiple computing devices.
  • Text-to-speech generator 220 generates audio data 230 based on document 210. During the generation of the audio data 230, text-to-speech generator 220 (or another component not shown) creates an audio-to-document mapping 240. Audio-to-document mapping 240 maps multiple text locations within document 210 to corresponding audio locations within generated audio data 230.
  • For example, assume that text-to-speech generator 220 generates audio data for a word located at location Y within document 210. Further assume that the audio data that was generated for that word is located at location X within audio data 230. To reflect the correlation between the location of the word within document 210 and the location of the corresponding audio in audio data 230, a mapping would be created between location X and location Y.
  • Because text-to-speech generator 220 knows where a word or phrase occurs in document 210 when a corresponding word or phrase of audio is generated, each mapping between the corresponding words or phrases can be easily generated.
  • Audio-to-text correlator 260 accepts, as input, generated audio data 230, audio book 250, and audio-to-document mapping 240. Audio-to-text correlator 260 performs two main steps: an audio-to-audio correlation step and a look-up step. For the audio-to-audio correlation step, audio-to-text correlator 260 compares generated audio data 230 with audio book 250 to determine the correlation between portions of audio data 230 and portions of audio book 250. For example, audio-to-text correlator 260 may determine, for each word represented in audio data 230, the location of the corresponding word in audio book 250.
  • The granularity at which audio data 230 is divided, for the purpose of establishing correlations, may vary from implementation to implementation. For example, a correlation may be established between each word in audio data 230 and each corresponding word in audio book 250. Alternatively, a correlation may be established based on fixed-duration time intervals (e.g., one mapping for every minute of audio). In yet another alternative, a correlation may be established for portions of audio established based on other criteria, such as at paragraph or chapter boundaries, significant pauses (e.g., silence of greater than 3 seconds), or other locations based on data in audio book 250, such as audio markers within audio book 250.
  • After a correlation between a portion of audio data 230 and a portion of audio book 250 is identified, audio-to-text correlator 260 uses audio-to-document mapping 240 to identify a text location (indicated in mapping 240) that corresponds to the audio location within generated audio data 230. Audio-to-text correlator 260 then associates the text location with the audio location within audio book 250 to create a mapping record in document-to-audio mapping 270.
  • For example, assume that a portion of audio book 250 (located at location Z) matches the portion of generated audio data 230 that is located at location X. Based on a mapping record (in audio-to-document mapping 240) that correlates location X to location Y within document 210, a mapping record in document-to-audio mapping 270 would be created that correlates location Z of the audio book 250 with location Y within document 210.
  • Audio-to-text correlator 260 repeatedly performs the audio-to-audio correlation and look-up steps for each portion of audio data 230. Therefore, document-to-audio mapping 270 comprises multiple mapping records, each mapping record mapping a location within document 210 to a location within audio book 250.
  • In an embodiment, the audio-to-audio correlation for each portion of audio data 230 is immediately followed by the look-up step for that portion of audio. Thus, a mapping record in document-to-audio mapping 270 may be created for each portion of audio data 230 before proceeding to the next portion of audio data 230. Alternatively, the audio-to-audio correlation step may be performed for many or for all of the portions of audio data 230 before any look-up step is performed. In that case, the look-up steps for all portions can be performed in a batch, after all of the audio-to-audio correlations have been established.
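  • The look-up step described above can be viewed as composing two relations: the audio-to-document mapping (generated-audio location X to text location Y) and the audio-to-audio correlation (generated-audio location X to audio-book location Z). The following Python sketch, with illustrative location values and a hypothetical function name, shows one way such a composition might be expressed; it is not a definitive implementation.

        def compose_document_to_audio(audio_to_document, generated_to_audiobook):
            """For each generated-audio location X that maps to text location Y
            (audio-to-document mapping) and that was correlated with audio-book
            location Z (audio-to-audio step), emit a mapping record Y -> Z."""
            document_to_audio = {}
            for generated_location, text_location in audio_to_document.items():
                audiobook_location = generated_to_audiobook.get(generated_location)
                if audiobook_location is not None:
                    document_to_audio[text_location] = audiobook_location
            return document_to_audio

        # Locations are seconds into the generated audio / audio book; text
        # locations are (chapter, word offset) pairs; all values are illustrative.
        audio_to_document = {0.0: (1, 0), 2.5: (1, 7), 5.1: (1, 15)}
        generated_to_audiobook = {0.0: 12.0, 2.5: 15.2, 5.1: 18.4}
        print(compose_document_to_audio(audio_to_document, generated_to_audiobook))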
  • Mapping Granularity
  • A mapping has a number of attributes, one of which is the mapping's size, which refers to the number of mapping records in the mapping. Another attribute of a mapping is the mapping's “granularity.” The “granularity” of a mapping refers to the number of mapping records in the mapping relative to the size of the digital work. Thus, the granularity of a mapping may vary from one digital work to another digital work. For example, a first mapping for a digital book that comprises 200 “pages” includes a mapping record only for each paragraph in the digital book. Thus, the first mapping may comprise 1000 mapping records. On the other hand, a second mapping for a digital “children's” book that comprises 20 pages includes a mapping record for each word in the children's book. Thus, the second mapping may comprise 800 mapping records. Even though the first mapping comprises more mapping records than the second mapping, the granularity of the second mapping is finer than the granularity of the first mapping.
  • In an embodiment, the granularity of a mapping may be dictated based on input to a speech-to-text analyzer that generates the mapping. For example, a user may specify a specific granularity before causing a speech-to-text analyzer to generate a mapping. Non-limiting examples of specific granularities include:
  • word granularity (i.e., an association for each word),
  • sentence granularity (i.e., an association for each sentence),
  • paragraph granularity (i.e., an association for each paragraph),
  • 10-word granularity (i.e., a mapping for each 10-word portion in the digital work), and
  • 10-second granularity (i.e., a mapping for each 10 seconds of audio).
  • As another example, a user may specify the type of digital work (e.g., novel, children's book, short story) and the speech-to-text analyzer (or another process) determines the granularity based on the work's type. For example, a children's book may be associated with word granularity while a novel may be associated with sentence granularity.
  • The granularity of a mapping may even vary within the same digital work. For example, a mapping for the first three chapters of a digital book may have sentence granularity while a mapping for the remaining chapters of the digital book has word granularity.
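  • Purely as an illustration of the granularity concept, the following Python sketch represents mapping records as (text location, audio location) pairs and derives a coarser-granularity mapping from a word-granularity mapping. The names MappingRecord and coarsen, and the numeric values, are hypothetical.

        from collections import namedtuple

        # A mapping record pairs a location in the text with a location in the audio.
        MappingRecord = namedtuple("MappingRecord", ["text_location", "audio_seconds"])

        def coarsen(word_level_records, every_n_words):
            """Reduce a word-granularity mapping to a coarser granularity by keeping
            one record for every `every_n_words` words (e.g., 10-word granularity)."""
            return word_level_records[::every_n_words]

        word_level = [MappingRecord(text_location=i, audio_seconds=0.4 * i)
                      for i in range(40)]
        print(len(coarsen(word_level, every_n_words=10)))  # 4 records instead of 40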
  • On-the-Fly Mapping Generation During Text-to-Audio Transitions
  • While an audio-to-text mapping will, in many cases, be generated prior to a user needing to rely on one, in one embodiment, an audio-to-text mapping is generated at runtime or after a user has begun to consume the audio data and/or the text data on the user's device. For example, a user reads a textual version of a digital book using a tablet computer. The tablet computer keeps track of the most recent page or section of the digital book that the tablet computer has displayed to the user. The most recent page or section is identified by a “text bookmark.”
  • Later, the user selects to play an audio book version of the same work. The playback device may be the same tablet computer on which the user was reading the digital book or another device. Regardless of the device upon which the audio book is to be played, the text bookmark is retrieved, and a speech-to-text analysis is performed relative to at least a portion of the audio book. During the speech-to-text analysis, “temporary” mapping records are generated to establish a correlation between the generated text and the corresponding locations within the audio book.
  • Once the text and correlation records have been generated, a text-to-text comparison is used to determine the generated text that corresponds to the text bookmark. Then, the temporary mapping records are used to identify the portion of the audio book that corresponds to the portion of generated text that corresponds to the text bookmark. Playback of the audio book is then initiated from that position.
  • The portion of the audio book on which the speech-to-text analysis is performed may be limited to the portion that corresponds to the text bookmark. For example, an audio section mapping may already exist that indicates where certain portions of the audio book begin and/or end. For example, an audio section mapping may indicate where each chapter begins, where one or more pages begin, etc. Such an audio section mapping may be helpful to determine where to begin the speech-to-text analysis so that a speech-to-text analysis on the entire audio book is not required to be performed. For example, if the text bookmark indicates a location within the 12th chapter of the digital book and an audio section mapping associated with the audio data identifies where the 12th chapter begins in the audio data, then a speech-to-text analysis is not required to be performed on any of the first 11 chapters of the audio book. For example, the audio data may consist of 20 audio files, one audio file for each chapter. Therefore, only the audio file that corresponds to the 12th chapter is input to a speech-to-text analyzer.
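  • The following Python sketch illustrates, under simplifying assumptions, how a text bookmark and an audio section mapping might be used to limit the speech-to-text work to a single chapter and to locate a playback position. Here, transcribe is a stand-in for an actual speech-to-text analyzer, and all names and values are hypothetical.

        def playback_position_for_text_bookmark(text_bookmark_chapter,
                                                text_bookmark_words,
                                                audio_section_mapping,
                                                transcribe):
            """Given a text bookmark (chapter number plus the last few words read),
            limit the speech-to-text work to the chapter named by the bookmark and
            return the audio offset at which playback should resume.

            `transcribe(chapter)` stands in for a speech-to-text analyzer and is
            expected to return (word, audio_offset_seconds) pairs -- the "temporary"
            mapping records described above."""
            temporary_records = transcribe(text_bookmark_chapter)
            # Text-to-text comparison: find where the bookmarked words occur in the
            # generated text, then return the corresponding audio offset.
            generated_words = [word.lower() for word, _ in temporary_records]
            target = [w.lower() for w in text_bookmark_words.split()]
            for i in range(len(generated_words) - len(target) + 1):
                if generated_words[i:i + len(target)] == target:
                    return temporary_records[i][1]
            # Fall back to the start of the chapter if no match is found.
            return audio_section_mapping[text_bookmark_chapter]

        # Illustrative data: chapter 12 starts at 4,210 seconds into the audio book.
        audio_section_mapping = {12: 4210.0}
        fake_transcript = [("it", 4301.0), ("was", 4301.3), ("the", 4301.5),
                           ("best", 4301.8), ("of", 4302.0), ("times", 4302.2)]
        print(playback_position_for_text_bookmark(
            12, "best of times", audio_section_mapping, lambda ch: fake_transcript))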
  • On-the-Fly Mapping Generation During Audio-to-Text Transitions
  • Mapping records can be generated on-the-fly to facilitate audio-to-text transitions, as well as text-to-audio transitions. For example, assume that a user is listening to an audio book using a smart phone. The smart phone keeps track of the current location within the audio book that is being played. The current location is identified by an "audio bookmark." Later, the user picks up a tablet computer and selects a digital book version of the audio book to display. The tablet computer receives the audio bookmark (e.g., from a central server that is remote relative to the tablet computer and the smart phone), performs a speech-to-text analysis of at least a portion of the audio book, and identifies, within the textual version of the work, the portion of text that corresponds to the audio bookmark. The tablet computer then begins displaying the identified portion of the textual version.
  • The portion of the audio book on which the speech-to-text analysis is performed may be limited to the portion that corresponds to the audio bookmark. For example, a speech-to-text analysis is performed on a portion of the audio book that spans one or more time segments (e.g., seconds) prior to the audio bookmark in the audio book and/or one or more time segments after the audio bookmark in the audio book. The text produced by the speech-to-text analysis on that portion is compared to text in the textual version to locate where the series of words or phrases in the produced text match text in the textual version.
  • If there exists a text section mapping that indicates where certain portions of the textual version begin or end and the audio bookmark can be used to identify a section in the text section mapping, then much of the textual version need not be analyzed in order to locate where the series of words or phrases in the produced text match text in the textual version. For example, if the audio bookmark indicates a location within the 3rd chapter of the audio book and a text section mapping associated with the digital book identifies where the 3rd chapter begins in the textual version, then the produced text does not need to be compared against any of the first two chapters of the textual version or against any chapter of the textual version after the 3rd chapter.
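  • As a simplified illustration of this reverse direction, the following Python sketch locates a phrase produced by speech-to-text within the chapter text identified by a text section mapping. The function name and the character-offset representation of text locations are assumptions for the example, not part of the described embodiments.

        def text_position_for_audio_bookmark(transcribed_phrase, chapter_text,
                                             chapter_start_offset):
            """Locate the words transcribed around the audio bookmark inside the
            chapter of the textual version identified by the text section mapping,
            and return a character offset into the textual version."""
            index = chapter_text.lower().find(transcribed_phrase.lower())
            if index < 0:
                return chapter_start_offset          # fall back to the chapter start
            return chapter_start_offset + index

        # Chapter 3 of the textual version begins at character 54,200 (illustrative).
        chapter_text = "It was a bright cold day in April, and the clocks were striking."
        print(text_position_for_audio_bookmark("cold day in April",
                                               chapter_text, 54200))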
  • Overview of Use of Audio-to-Text Mappings
  • According to one approach, a mapping (whether created manually or automatically) is used to identify the locations within an audio version of a digital work (e.g., an audio book) that correspond to locations within a textual version of the digital work (e.g., an e-book). For example, a mapping may be used to identify a location within an e-book based on a “bookmark” established in an audio book. As another example, a mapping may be used to identify which displayed text corresponds to an audio recording of a person reading the text as the audio recording is being played and cause the identified text to be highlighted. Thus, while an audio book is being played, a user of an e-book reader may follow along as the e-book reader highlights the corresponding text. As another example, a mapping may be used to identify a location in audio data and play audio at that location in response to input that selects displayed text from an e-book. Thus, a user may select a word in an e-book, which selection causes audio that corresponds to that word to be played. As another example, a user may create an annotation while “consuming” (e.g., reading or listening to) one version of a digital work (e.g., an e-book) and cause the annotation to be consumed while the user is consuming another version of the digital work (e.g., an audio book). Thus, a user can make notes on a “page” of an e-book and may view those notes while listening to an audio book of the e-book. Similarly, a user can make a note while listening to an audio book and then can view that note when reading the corresponding e-book.
  • FIG. 3 is a flow diagram that depicts a process for using a mapping in one or more of these scenarios, according to an embodiment of the invention.
  • At step 310, location data that indicates a specified location within a first media item is obtained. The first media item may be a textual version of a work or audio data that corresponds to a textual version of the work. This step may be performed by a device (operated by a user) that consumes the first media item. Alternatively, the step may be performed by a server that is located remotely relative to the device that consumes the first media item. In the latter case, the device sends the location data to the server over a network using a communication protocol.
  • At step 320, a mapping is inspected to determine a first media location that corresponds to the specified location. Similarly, this step may be performed by a device that consumes the first media item or by a server that is located remotely relative to the device.
  • At step 330, a second media location that corresponds to the first media location and that is indicated in the mapping is determined. For example, if the specified location is an audio "bookmark", then the first media location is an audio location indicated in the mapping and the second media location is a text location that is associated with the audio location in the mapping. Similarly, if the specified location is a text "bookmark", then the first media location is a text location indicated in the mapping and the second media location is an audio location that is associated with the text location in the mapping.
  • At step 340, the second media item is processed based on the second media location. For example, if the second media item is audio data, then the second media location is an audio location and is used as a current playback position in the audio data. As another example, if the second media item is a textual version of a work, then the second media location is a text location and is used to determine which portion of the textual version of the work to display.
  • Examples of using process 300 in specific scenarios are provided below.
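  • A minimal sketch of the look-up of steps 320 and 330, assuming the mapping is kept as a sorted list of (first media location, second media location) pairs, might look as follows; the helper name lookup and the data layout are hypothetical.

        from bisect import bisect_right

        def lookup(mapping, specified_location):
            """Find the mapping record whose first-media location is closest to
            (and not after) the specified location and return the associated
            second-media location.  `mapping` is a sorted list of
            (first_media_location, second_media_location) pairs."""
            keys = [first for first, _ in mapping]
            i = bisect_right(keys, specified_location) - 1
            return mapping[max(i, 0)][1]

        # Audio-to-text example: audio offsets (seconds) mapped to page numbers.
        mapping = [(0.0, 1), (95.0, 2), (190.0, 3)]
        print(lookup(mapping, specified_location=120.0))  # page 2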
  • Architecture Overview
  • Each of the example scenarios mentioned above and described in detail below may involve one or more computing devices. FIG. 4 is a block diagram that depicts an example system 400 that may be used to implement some of the processes described herein, according to an embodiment of the invention. System 400 includes end-user device 410, intermediary device 420, and end-user device 430. Non-limiting examples of end-user devices 410 and 430 include desktop computers, laptop computers, smart phones, tablet computers, and other handheld computing devices.
  • As depicted in FIG. 4, device 410 stores a digital media item 402 and executes a text media player 412 and an audio media player 414. Text media player 412 is configured to process electronic text data and cause device 410 to display text (e.g., on a touch screen of device 410, not shown). Thus, if digital media item 402 is an e-book, then text media player 412 may be configured to process digital media item 402, as long as digital media item 402 is in a text format that text media player 412 is configured to process. Device 410 may execute one or more other media players (not shown) that are configured to process other types of media, such as video.
  • Similarly, audio media player 414 is configured to process audio data and cause device 410 to generate audio (e.g., via speakers on device 410, not shown). Thus, if digital media item 402 is an audio book, then audio media player 414 may be configured to process digital media item 402, as long as digital media item 402 is in an audio format that audio media player 414 is configured to process. Whether item 402 is an e-book or an audio book, item 402 may comprise multiple files, whether audio files or text files.
  • Device 430 similarly stores a digital media item 404 and executes an audio media player 432 that is configured to process audio data and cause device 430 to generate audio. Device 430 may execute one or more other media players (not shown) that are configured to process other types of media, such as video and text.
  • Intermediary device 420 stores a mapping 406 that maps audio locations within audio data to text locations in text data. For example, mapping 406 may map audio locations within digital media item 404 to text locations within digital media item 402. Although not depicted in FIG. 4, intermediary device 420 may store many mappings, one for each corresponding set of audio data and text data. Also, intermediary device 420 may interact with many end-user devices not shown.
  • Also, intermediary device 420 may store digital media items that users may access via their respective devices. Thus, instead of storing a local copy of a digital media item, a device (e.g., device 430) may request the digital media item from intermediary device 420.
  • Additionally, intermediary device 420 may store account data that associates one or more devices of a user with a single account. Thus, such account data may indicate that devices 410 and 430 are registered by the same user under the same account. Intermediary device 420 may also store account-item association data that associates an account with one or more digital media items owned (or purchased) by a particular user. Thus, intermediary device 420 may verify that device 430 may access a particular digital media item by determining whether the account-item association data indicates that device 430 and the particular digital media item are associated with the same account.
  • Although only two end-user devices are depicted, an end-user may own and operate more or fewer devices that consume digital media items, such as e-books and audio books. Similarly, although only a single intermediary device 420 is depicted, the entity that owns and operates intermediary device 420 may operate multiple devices, each of which provides the same service or which operate together to provide a service to the user of end-user devices 410 and 430.
  • Communication between intermediary device 420 and end-user devices 410 and 430 is made possible via network 440. Network 440 may be implemented by any medium or mechanism that provides for the exchange of data between various computing devices. Examples of such a network include, without limitation, a Local Area Network (LAN), a Wide Area Network (WAN), an Ethernet network or the Internet, or one or more terrestrial, satellite, or wireless links. The network may include a combination of networks such as those described. The network may transmit data according to Transmission Control Protocol (TCP), User Datagram Protocol (UDP), and/or Internet Protocol (IP).
  • Storage Location of Mapping
  • A mapping may be stored separate from the text data and the audio data from which the mapping was generated. For example, as depicted in FIG. 4, mapping 406 is stored separate from digital media items 402 and 404 even though mapping 406 may be used to identify a media location in one digital media item based on a media location in the other digital media item. In fact, mapping 406 is stored on a separate computing device (intermediary device 420) than devices 410 and 430 that store, respectively, digital media items 402 and 404.
  • Additionally or alternatively, a mapping may be stored as part of the corresponding text data. For example, mapping 406 may be stored in digital media item 402. However, even if the mapping is stored as part of the text data, the mapping may not be displayed to an end-user that consumes the text data. Additionally or alternatively still, a mapping may be stored as part of the audio data. For example, mapping 406 may be stored in digital media item 404.
  • Bookmark Switching
  • “Bookmark switching” refers to establishing a specified location (or “bookmark”) in one version of a digital work and using the bookmark to find the corresponding location within another version of the digital work. There are two types of bookmark switching: text-to-audio (TA) bookmark switching and audio-to-text (AT) bookmark switching. TA bookmark switching involves using a text bookmark established in an e-book to identify a corresponding audio location in an audio book. Conversely, AT bookmark switching involves using an audio bookmark established in an audio book to identify a corresponding text location within an e-book.
  • Text-to-Audio Bookmark Switching
  • FIG. 5A is a flow diagram that depicts a process 500 for TA bookmark switching, according to an embodiment of the invention. FIG. 5A is described using elements of system 400 depicted in FIG. 4.
  • At step 502, a text media player 412 (e.g., an e-reader) determines a text bookmark within digital media item 402 (e.g., a digital book). Device 410 displays content from digital media item 402 to a user of device 410.
  • The text bookmark may be determined in response to input from the user. For example, the user may touch an area on a touch screen of device 410. The display of device 410, at or near that area, displays one or more words. In response to the input, text media player 412 determines the one or more words that are closest to the area. Text media player 412 then determines the text bookmark based on the determined one or more words.
  • Alternatively, the text bookmark may be determined based on the last text data that was displayed to the user. For example, the digital media item 402 may comprise 200 electronic "pages" and page 110 was the last page that was displayed. Text media player 412 determines that page 110 was the last page that was displayed. Text media player 412 may establish page 110 as the text bookmark or may establish a point at the beginning of page 110 as the text bookmark, since there may be no way to know where the user stopped reading. It may be safe to assume that the user at least read the last sentence on page 109, which sentence may have ended on page 109 or on page 110. Therefore, if the granularity of the mapping is at the sentence level, text media player 412 may establish the beginning of the next sentence (which begins on page 110) as the text bookmark. However, if the granularity of the mapping is at the paragraph level, then text media player 412 may establish the beginning of the last paragraph on page 109 as the text bookmark. Similarly, if the granularity of the mapping is at the chapter level, then text media player 412 may establish the beginning of the chapter that includes page 110 as the text bookmark.
  • At step 504, text media player 412 sends, over network 440 to intermediary device 420, data that indicates the text bookmark. Intermediary device 420 may store the text bookmark in association with device 410 and/or an account of the user of device 410. Previous to step 502, the user may have established an account with an operator of intermediary device 420. The user then registered one or more devices, including device 410, with the operator. The registration caused each of the one or more devices to be associated with the user's account.
  • One or more factors may cause the text media player 412 to send the text bookmark to intermediary device 420. Such factors may include the exiting (or closing down) of text media player 412, the establishment of the text bookmark by the user, or an explicit instruction by the user to save the text bookmark for use when listening to the audio book that corresponds to the textual version of the work for which the text bookmark is established.
  • As noted previously, intermediary device 420 has access to (e.g., stores) mapping 406, which, in this example, maps multiple audio locations in digital media item 404 with multiple text locations within digital media item 402.
  • At step 506, intermediary device 420 inspects mapping 406 to determine a particular text location, of the multiple text locations, that corresponds to the text bookmark. The text bookmark may not exactly match any of the multiple text locations in mapping 406. However, intermediary device 420 may select the text location that is closest to the text bookmark. Alternatively, intermediary device 420 may select the text location that is immediately before the text bookmark, which text location may or may not be the closest text location to the text bookmark. For example, if the text bookmark indicates 5th chapter, 3rd paragraph, 5th sentence and the closest text locations in mapping 406 are (1) 5th chapter, 3rd paragraph, 1st sentence and (2) 5th chapter, 3rd paragraph, 6th sentence, then text location (1) is selected.
  • At step 508, once the particular text location in the mapping is identified, intermediary device 420 determines a particular audio location, in mapping 406, that corresponds to the particular text location.
  • At step 510, intermediary device 420 sends the particular audio location to device 430, which, in this example, is different than device 410. For example, device 410 may be a tablet computer and device 430 may be a smart phone. In a related embodiment, device 430 is not involved. Thus, intermediary device 420 may send the particular audio location to device 410.
  • Step 510 may be performed automatically, i.e., in response to intermediary device 420 determining the particular audio location. Alternatively, step 510 (or step 506) may be performed in response to receiving, from device 430, an indication that device 430 is about to process digital media item 404. The indication may be a request for an audio location that corresponds to the text bookmark.
  • At step 512, audio media player 432 establishes the particular audio location as a current playback position of the audio data in digital media item 404. This establishment may be performed in response to receiving the particular audio location from intermediary device 420. Because the current playback position becomes the particular audio location, audio media player 432 is not required to play any of the audio that precedes the particular audio location in the audio data. For example, if the particular audio location indicates 2:56:03 (2 hours, 56 minutes, and 3 seconds), then audio media player 432 establishes that time in the audio data as the current playback position. Thus, if the user of device 430 selects a "play" button (whether graphical or physical) on device 430, then audio media player 432 begins processing the audio data at that 2:56:03 mark.
  • In an alternative embodiment, device 410 stores mapping 406 (or a copy thereof). Therefore, in place of steps 504-508, text media player 412 inspects mapping 406 to determine a particular text location, of the multiple text locations, that corresponds to the text bookmark. Then, text media player 412 determines a particular audio location, in mapping 406, that corresponds to the particular text location. The text media player 412 may then cause the particular audio location to be sent to intermediary device 420 to allow device 430 to retrieve the particular audio location and establish a current playback position in the audio data to be the particular audio location. Text media player 412 may also cause the particular text location (or text bookmark) to be sent to intermediary device 420 to allow device 410 (or another device, not shown) to later retrieve the particular text location to allow another text media player executing on the other device to display a portion (e.g., a page) of another copy of digital media item 402, where the portion corresponds to the particular text location.
  • In another alternative embodiment, intermediary device 420 and device 430 are not involved. Thus, steps 504 and 510 are not performed, and device 410 performs all other steps in FIG. 5A, including steps 506 and 508.
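  • To illustrate the closest-prior selection of steps 506 and 508, the following Python sketch uses (chapter, paragraph, sentence) tuples as text locations and time strings as audio locations; the function name and all values are illustrative assumptions rather than the claimed method.

        def audio_location_for_text_bookmark(mapping, text_bookmark):
            """Pick the mapping record whose text location (a (chapter, paragraph,
            sentence) tuple) is immediately before or equal to the text bookmark,
            then return its audio location ("HH:MM:SS")."""
            candidates = [(text, audio) for text, audio in mapping
                          if text <= text_bookmark]
            if not candidates:
                return mapping[0][1]
            return max(candidates)[1]

        # Mapping records at sentence granularity (all values illustrative).
        mapping = [((5, 3, 1), "02:41:07"), ((5, 3, 6), "02:43:56")]
        print(audio_location_for_text_bookmark(mapping, (5, 3, 5)))  # "02:41:07"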
  • Audio-to-Text Bookmark Switching
  • FIG. 5B is a flow diagram that depicts a process 550 for AT bookmark switching, according to an embodiment of the invention. Similarly to FIG. 5A, FIG. 5B is described using elements of system 400 depicted in FIG. 4.
  • At step 552, audio media player 432 determines an audio bookmark within digital media item 404 (e.g., an audio book).
  • The audio bookmark may be determined in response to input from the user. For example, the user may stop the playback of the audio data, for example, by selecting a “stop” button that is displayed on a touch screen of device 430. Audio media player 432 determines the location within audio data of digital media item 404 that corresponds to where playback stopped. Thus, the audio bookmark may simply be the last place where the user stopped listening to the audio generated from digital media item 404. Additionally or alternatively, the user may select one or more graphical buttons on the touch screen of device 430 to establish a particular location within digital media item 404 as the audio bookmark. For example, device 430 displays a timeline that corresponds to the length of the audio data in digital media item 404. The user may select a position on the timeline and then provide one or more additional inputs that are used by audio media player 432 to establish the audio bookmark.
  • At step 554, device 430 sends, over network 440 to intermediary device 420, data that indicates the audio bookmark. The intermediary device 420 may store the audio bookmark in association with device 430 and/or an account of the user of device 430. Previous to step 552, the user established an account with an operator of intermediary device 420. The user then registered one or more devices, including device 430, with the operator. The registration caused each of the one or more devices to be associated with the user's account.
  • Intermediary device 420 also has access to (e.g., stores) mapping 406. Mapping 406 maps multiple audio locations in the audio data of digital media item 404 with multiple text locations within text data of digital media item 402.
  • One or more factors may cause audio media player 432 to send the audio bookmark to intermediary device 420. Such factors may include the exiting (or closing down) of audio media player 432, the establishment of the audio bookmark by the user, or an explicit instruction by the user to save the audio bookmark for use when displaying portions of the textual version of the work (reflected in digital media item 402) that corresponds to digital media item 404, for which the audio bookmark is established.
  • At step 556, intermediary device 420 inspects mapping 406 to determine a particular audio location, of the multiple audio locations, that corresponds to the audio bookmark. The audio bookmark may not exactly match any of the multiple audio locations in mapping 406. However, intermediary device 420 may select the audio location that is closest to the audio bookmark. Alternatively, intermediary device 420 may select the audio location that is immediately before the audio bookmark, which audio location may or may not be the closest audio location to the audio bookmark. For example, if the audio bookmark indicates 02:43:19 (or 2 hours, 43 minutes, and 19 seconds) and the closest audio locations in mapping 406 are (1) 02:41:07 and (2) 02:43:56, then audio location (1) is selected, even though audio location (2) is closest to the audio bookmark.
  • At step 558, once the particular audio location in the mapping is identified, intermediary device 420 determines a particular text location, in mapping 406, that corresponds to the particular audio location.
  • At step 560, intermediary device 420 sends the particular text location to device 410, which, in this example, is different than device 430. For example, device 410 may be a tablet computer and device 430 may be a smart phone that is configured to process audio data and generate audible sounds.
  • Step 560 may be performed automatically, i.e., in response to intermediary device 420 determining the particular text location. Alternatively, step 560 (or step 556) may be performed in response to receiving, from device 410, an indication that device 410 is about to process the digital media item 402. The indication may be a request for a text location that corresponds to the audio bookmark.
  • At step 562, text media player 412 displays information about the particular text location. Step 562 may be performed in response to receiving the particular text location from intermediary device 420. Device 410 is not required to display any of the content that precedes the particular text location in the textual version of the work reflected in digital media item 402. For example, if the particular text location indicates Chapter 3, paragraph 2, sentence 4, then device 410 displays a page that includes that sentence. Text media player 412 may cause a marker to be displayed at the particular text location in the page that visually indicates, to a user of device 410, where to begin reading in the page. Thus, the user is able to immediately read the textual version of the work beginning at a location that corresponds to the last words spoken by a narrator in the audio book.
  • In an alternative embodiment, device 410 stores mapping 406. Therefore, in place of steps 556-560, after step 554 (in which device 430 sends data that indicates the audio bookmark to intermediary device 420), intermediary device 420 sends the audio bookmark to device 410. Then, text media player 412 inspects mapping 406 to determine a particular audio location, of the multiple audio locations, that corresponds to the audio bookmark. Then, text media player 412 determines a particular text location, in mapping 406, that corresponds to the particular audio location. This alternative process then proceeds to step 562, described above.
  • In another alternative embodiment, intermediary device 420 is not involved. Thus, steps 554 and 560 are not performed, and device 430 performs all other steps in FIG. 5B, including steps 556 and 558.
  • Highlight Text in Response to Playing Audio
  • In an embodiment, text from a portion of a textual version of a work is highlighted or “lit up” while audio data that corresponds to the textual version of the work is played. As noted previously, the audio data is an audio version of a textual version of the work and may reflect a reading, of text from the textual version, by a human user. As used herein, “highlighting” text refers to a media player (e.g., an “e-reader”) visually distinguishing that text from other text that is concurrently displayed with the highlighted text. Highlighting text may involve changing the font of the text, changing the font style of the text (e.g., italicize, bold, underline), changing the size of the text, changing the color of the text, changing the background color of the text, or creating an animation associated with the text. An example of creating an animation is causing the text (or background of the text) to blink on and off or to change colors. Another example of creating an animation is creating a graphic to appear above, below, or around the text. For example, in response to the word “toaster” being played and detected by a media player, the media player displays a toaster image above the word “toaster” in the displayed text. Another example of an animation is a bouncing ball that “bounces” on a portion of text (e.g., word, syllable, or letter) when that portion is detected in audio data that is played.
  • FIG. 6 is a flow diagram that depicts a process 600 for causing text, from a textual version of a work, to be highlighted while an audio version of the work is being played, according to an embodiment of the invention.
  • At step 610, the current playback position (which is constantly changing) of audio data of the audio version is determined. This step may be performed by a media player executing on a user's device. The media player processes the audio data to generate audio for the user.
  • At step 620, based on the current playback position, a mapping record in a mapping is identified. The current playback position may match or nearly match the audio location identified in the mapping record.
  • Step 620 may be performed by the media player if the media player has access to a mapping that maps multiple audio locations in the audio data with multiple text locations in the textual version of the work. Alternatively, step 620 may be performed by another process executing on the user's device or by a server that receives the current playback position from the user's device over a network.
  • At step 630, the text location identified in the mapping record is identified.
  • At step 640, a portion of the textual version of the work that corresponds to the text location is caused to be highlighted. This step may be performed by the media player or another software application executing on the user's device. If a server performs the look-up steps (620 and 630), then step 640 may further involve the server sending the text location to the user's device. In response, the media player, or another software application, accepts the text location as input and causes the corresponding text to be highlighted.
  • In an embodiment, different text locations that are identified, by the media player, in the mapping are associated with different types of highlighting. For example, one text location in the mapping may be associated with the changing of the font color from black to red while another text location in the mapping may be associated with an animation, such as a toaster graphic that shows a piece of toast "popping" out of a toaster. Therefore, each mapping record in the mapping may include "highlighting data" that indicates how the text identified by the corresponding text location is to be highlighted. Thus, for each mapping record in the mapping that the media player identifies and that includes highlighting data, the media player uses the highlighting data to determine how to highlight the text. If a mapping record does not include highlighting data, then the media player may not highlight the corresponding text. Alternatively, if a mapping record in the mapping does not include highlighting data, then the media player may use a "default" highlight technique (e.g., bolding the text) to highlight the text.
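  • The per-record highlighting data described above might be represented as in the following Python sketch, in which records without highlighting data fall back to a default style; the field names and style identifiers are hypothetical.

        def highlight_style_for(mapping_record, default_style="bold"):
            """Return how the text at this record's text location should be
            highlighted.  Records may carry optional "highlighting data"; records
            without it fall back to a default style (or to no highlighting at all,
            depending on the embodiment)."""
            return mapping_record.get("highlight", default_style)

        records = [
            {"audio_seconds": 12.0, "text_location": 340, "highlight": "red-font"},
            {"audio_seconds": 13.1, "text_location": 347,
             "highlight": "toaster-animation"},
            {"audio_seconds": 14.6, "text_location": 355},   # no highlighting data
        ]
        for record in records:
            print(record["text_location"], highlight_style_for(record))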
  • Highlighting Text Based on Audio Input
  • FIG. 7 is a flow diagram that depicts a process 700 of highlighting displayed text in response to audio input from a user, according to an embodiment of the invention. In this embodiment, a mapping is not required. The audio input is used to highlight text in a portion of a textual version of a work that is concurrently displayed to the user.
  • At step 710, audio input is received. The audio input may be based on a user reading aloud text from a textual version of a work. The audio input may be received by a device that displays a portion of the textual version. The device may prompt the user to read aloud a word, phrase, or entire sentence. The prompt may be visual or audio. As an example of a visual prompt, the device may cause the following text to be displayed: “Please read the underlined text” while or immediately before the device displays a sentence that is underlined. As an example of an audio prompt, the device may cause a computer-generated voice to read “Please read the underlined text” or cause a pre-recorded human voice to be played, where the pre-recorded human voice provides the same instruction.
  • At step 720, a speech-to-text analysis is performed on the audio input to detect one or more words reflected in the audio input.
  • At step 730, for each detected word reflected in the audio input, that detected word is compared to a particular set of words. The particular set of words may be all the words that are currently displayed by a computing device (e.g., an e-reader). Alternatively, the particular set of words may be all the words that the user was prompted to read.
  • At step 740, for each detected word that matches a word in the particular set, the device causes that matching word to be highlighted.
  • The steps depicted in process 700 may be performed by a single computing device that displays text from a textual version of a work. Alternatively, the steps depicted in process 700 may be performed by one or more computing devices that are different than the computing device that displays text from the textual version. For example, the audio input from a user in step 710 may be sent from the user's device over a network to a network server that performs the speech-to-text analysis. The network server may then send highlight data to the user's device to cause the user's device to highlight the appropriate text.
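  • A minimal sketch of steps 730 and 740, assuming the detected and displayed words are available as simple lists, is shown below; the function name words_to_highlight is hypothetical and the matching is deliberately simplistic.

        def words_to_highlight(detected_words, displayed_words):
            """Compare each word detected by speech-to-text against the set of
            words currently displayed (or prompted) and return the ones that
            match and should therefore be highlighted."""
            displayed = {w.lower() for w in displayed_words}
            return [w for w in detected_words if w.lower() in displayed]

        displayed = "Please read the underlined text aloud".split()
        detected = ["please", "reed", "the", "underlined", "text"]   # "reed" misheard
        print(words_to_highlight(detected, displayed))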
  • Playing Audio in Response to Text Selection
  • In an embodiment, a user of a media player that displays portions of a textual version of a work may select portions of displayed text and cause the corresponding audio to be played. For example, if a displayed word from the digital book is “donut” and the user selects that word (e.g., by touching a portion of the media player's touch screen that displays that word), then the audio of “donut” may be played.
  • A mapping that maps text locations in a textual version of the work with audio locations in audio data is used to identify the portion of the audio data that corresponds to the selected text. The user may select a single word, a phrase, or even one or more sentences. In response to input that selects a portion of the displayed text, the media player may identify one or more text locations. For example, the media player may identify a single text location that corresponds to the selected portion, even if the selected portion comprises multiple lines or sentences. The identified text location may correspond to the beginning of the selected portion. As another example, the media player may identify a first text location that corresponds to the beginning of the selected portion and a second text location that corresponds to the ending of the selected portion.
  • The media player uses the identified text location to look up a mapping record in the mapping that indicates a text location that is closest (or closest prior) to the identified text location. The media player uses the audio location indicated in the mapping record to identify where, in the audio data, to begin processing the audio data in order to generate audio. If only a single text location is identified, then only the word or sounds at or near the audio location may be played. Thus, after the word or sounds are played, the media player ceases to play any more audio. Alternatively, the media player begins playing at or near the audio location and does not cease playing the audio that follows the audio location until (a) the end of the audio data is reached, (b) further input is received from the user (e.g., selection of a "stop" button), or (c) a pre-designated stopping point in the audio data is reached (e.g., the end of a page or chapter that requires further input to proceed).
  • If the media player identifies two text locations based on the selected portion, then two audio locations are identified and may be used to identify where to begin playing and where to stop playing the corresponding audio.
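  • As an illustration of mapping a selected text range to an audio range, the following Python sketch uses character offsets for text locations and seconds for audio locations; the closest-prior record bounds the start of playback and the closest-following record bounds the end. All names and values are assumptions made for the example.

        from bisect import bisect_right

        def audio_range_for_selection(mapping, selection_start, selection_end):
            """Map the start and end text locations of a selection to an audio
            range (start_seconds, end_seconds).  `mapping` is a sorted list of
            (text_offset, audio_seconds) pairs."""
            text_offsets = [t for t, _ in mapping]
            start_i = max(bisect_right(text_offsets, selection_start) - 1, 0)
            end_i = min(bisect_right(text_offsets, selection_end), len(mapping) - 1)
            return mapping[start_i][1], mapping[end_i][1]

        # Word-granularity mapping: character offsets to seconds (illustrative).
        mapping = [(0, 0.0), (6, 0.5), (12, 1.1), (18, 1.7)]
        print(audio_range_for_selection(mapping, selection_start=7, selection_end=13))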
  • In an embodiment, the audio data identified by the audio location may be played slowly (i.e., at a slow playback speed) or continuously without advancing the current playback position in the audio data. For example, if a user of a tablet computer selects the displayed word "two" by touching a touch screen of the tablet computer with his finger and continuously touches the displayed word (i.e., without lifting his finger and without moving his finger to another displayed word), then the tablet computer plays the corresponding audio continuously, creating a sound as if the word were being read as "twoooooooooooooooo".
  • In a similar embodiment, the speed at which a user drags her finger across displayed text on a touch screen of a media player causes the corresponding audio to be played at the same or a similar speed. For example, a user selects the letter "d" of the displayed word "donut" and then slowly moves her finger across the displayed word. In response to this input, the media player identifies the corresponding audio data (using the mapping) and plays the corresponding audio at the same speed at which the user moves her finger. Therefore, the media player creates audio that sounds as if the reader of the text of the textual version of the work pronounced the word "donut" as "dooooooonnnnnnuuuuuut."
  • In a similar embodiment, the time that a user “touches” a word displayed on a touch screen dictates how quickly or slowly the audio version of the word is played. For example, a quick tap of a displayed word by the user's finger causes the corresponding audio to be played at a normal speed, whereas the user holding down his finger on the selected word for more than 1 second causes the corresponding audio to be played at ½ the normal speed.
  • Transferring User Annotations
  • In an embodiment, a user initiates the creation of annotations to one media version (e.g., audio) of a digital work and causes the annotations to be associated with another media version (e.g., text) of the digital work. Thus, while an annotation may be created in the context of one type of media, the annotation may be consumed in the context of another type of media. The “context” in which an annotation is created or consumed refers to whether text is being displayed or audio is being played when the creation or consumption occurs.
  • Although the following examples involve determining an audio location or a text location when an annotation is created, some embodiments of the invention are not so limited. For example, the current playback position within an audio file when an annotation is created in the audio context is not used when consuming the annotation in the text context. Instead, an indication of the annotation may be displayed, by a device, at the beginning or the end of the corresponding textual version or on each "page" of the corresponding textual version. As another example, the text that is displayed when an annotation is created in the text context is not used when consuming the annotation in the audio context. Instead, an indication of the annotation may be displayed, by a device, at the beginning or end of the corresponding audio version or continuously while the corresponding audio version is being played. Additionally or alternatively to a visual indication, an audio indication of the annotation may be played. For example, a "beep" is played simultaneously with the audio track in such a way that both the beep and the audio track can be heard.
  • FIGS. 8A-B are flow diagrams that depict processes for transferring an annotation from one context to another, according to an embodiment of the invention. Specifically, FIG. 8A is a flow diagram that depicts a process 800 for creating an annotation in the "text" context and consuming the annotation in the "audio" context, while FIG. 8B is a flow diagram that depicts a process 850 for creating an annotation in the "audio" context and consuming the annotation in the "text" context. The creation and consumption of an annotation may occur on the same computing device (e.g., device 410) or on separate computing devices (e.g., devices 410 and 430). FIG. 8A describes a scenario where the annotation is created and consumed on device 410, while FIG. 8B describes a scenario where the annotation is created on device 430 and later consumed on device 410.
  • At step 802 in FIG. 8A, text media player 412, executing on device 410, causes text (e.g., in the form of a page) from digital media item 402 to be displayed.
  • At step 804, text media player 412 determines a text location within a textual version of the work reflected in digital media item 402. The text location is eventually stored in association with an annotation. The text location may be determined in a number of ways. For example, text media player 412 may receive input that selects the text location within the displayed text. The input may be a user touching a touch screen (that displays the text) of device 410 for a period of time. The input may select a specific word, a number of words, the beginning or ending of a page, before or after a sentence, etc. The input may also include first selecting a button, which causes text media player 412 to change to a “create annotation” mode where an annotation may be created and associated with the text location.
  • As another example of determining a text location, text media player 412 determines the text location automatically (without user input) based on which portion of the textual version of the work (reflected in digital media item 402) is being displayed. For example, if device 410 is displaying page 20 of the textual version of the work, then the annotation will be associated with page 20.
  • At step 806, text media player 412 receives input that selects a “Create Annotation” button that may be displayed on the touch screen. Such a button may be displayed in response to input in step 804 that selects the text location, where, for example, the user touches the touch screen for a period of time, such as one second.
  • Although step 804 is depicted as occurring before step 806, alternatively, the selection of the “Create Annotation” button may occur prior to the determination of the text location.
  • At step 808, text media player 412 receives input that is used to create annotation data. The input may be voice data (such as the user speaking into a microphone of device 410) or text data (such as the user selecting keys on a keyboard, whether physical or graphical). If the annotation data is voice data, text media player 412 (or another process) may perform speech-to-text analysis on the voice data to create a textual version of the voice data.
  • At step 810, text media player 412 stores the annotation data in association with the text location. Text media player 412 uses a mapping (e.g., a copy of mapping 406) to identify a particular text location, in the mapping, that is closest to the text location. Then, using the mapping, text media player 412 identifies an audio location that corresponds to the particular text location.
  • Alternatively to step 810, text media player 412 sends, over network 440 to intermediary device 420, the annotation data and the text location. In response, intermediary device 420 stores the annotation data in association with the text location. Intermediary device 420 uses a mapping (e.g., mapping 406) to identify a particular text location, in mapping 406, that is closest to the text location. Then, using mapping 406, intermediary device 420 identifies an audio location that corresponds to the particular text location. Intermediary device 420 sends the identified audio location over network 440 to device 410. Intermediary device 420 may send the identified audio location in response to a request, from device 410, for certain audio data and/or for annotations associated with certain audio data. For example, in response to a request for an audio book version of “The Tale of Two Cities”, intermediary device 420 determines whether there is any annotation data associated with that audio book and, if so, sends the annotation data to device 410.
  • Step 810 may also comprise storing date and/or time information that indicates when the annotation was created. This information may be displayed later when the annotation is consumed in the audio context.
  • At step 812, audio media player 414 plays audio by processing audio data of digital media item 404, which, in this example (although not shown), may be stored on device 410 or may be streamed to device 410 from intermediary device 420 over network 440.
  • At step 814, audio media player 414 determines when the current playback position in the audio data matches or nearly matches the audio location identified in step 810 using mapping 406. Alternatively, audio media player 414 may cause data that indicates that an annotation is available to be displayed, regardless of where the current playback position is located and without having to play any audio as described in step 812. In other words, in this alternative, step 812 is unnecessary. For example, a user may launch audio media player 414 and cause audio media player 414 to load the audio data of digital media item 404. Audio media player 414 determines that annotation data is associated with the audio data. Audio media player 414 causes information about the audio data (e.g., title, artist, genre, length, etc.) to be displayed without generating any audio associated with the audio data. The information may include a reference to the annotation data and information about a location within the audio data that is associated with the annotation data, where the location corresponds to the audio location identified in step 810.
  • At step 816, audio media player 414 consumes the annotation data. If the annotation data is voice data, then consuming the annotation data may involve processing the voice data to generate audio or converting the voice data to text data and displaying the text data. If the annotation data is text data, then consuming the annotation data may involve displaying the text data, for example, in a side panel of a GUI that displays attributes of the audio data that is played or in a new window that appears separate from the GUI. Non-limiting examples of attributes include time length of the audio data, the current playback position, which may indicate an absolute location within the audio data (e.g., a time offset) or a relative position within the audio data (e.g., chapter or section number), a waveform of the audio data, and title of the digital work.
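  • The following Python sketch illustrates, under simplifying assumptions, how an annotation created in the text context (step 810) might be stored together with a corresponding audio location resolved through the mapping, and later surfaced when the playback position reaches that location (step 814). The data layout, function names, and tolerance value are hypothetical.

        def store_annotation(annotations, mapping, text_location, note):
            """Attach a note to the closest text location in the mapping and
            remember the corresponding audio location so the note can later be
            surfaced while the audio version is playing."""
            closest_text, audio_seconds = min(
                mapping, key=lambda record: abs(record[0] - text_location))
            annotations.append({"text_location": closest_text,
                                "audio_seconds": audio_seconds,
                                "note": note})

        def annotations_due(annotations, current_playback_seconds, tolerance=1.0):
            """Return annotations whose audio location matches (or nearly
            matches) the current playback position."""
            return [a for a in annotations
                    if abs(a["audio_seconds"] - current_playback_seconds) <= tolerance]

        mapping = [(100, 30.0), (120, 38.5), (140, 47.0)]   # text offset -> seconds
        annotations = []
        store_annotation(annotations, mapping, text_location=118, note="Check this!")
        print(annotations_due(annotations, current_playback_seconds=38.0))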
  • FIG. 8B describes a scenario, as noted previously, where an annotation is created on device 430 and later consumed on device 410.
  • At step 852, audio media player 432 processes audio data from digital media item 404 to play audio.
  • At step 854, audio media player 432 determines an audio location within the audio data. The audio location is eventually stored in association with an annotation. The audio location may be determined in a number of ways. For example, audio media player 432 may receive input that selects the audio location within the audio data. The input may be a user touching a touch screen (that displays attributes of the audio data) of device 430 for a period of time. The input may select an absolute position within a timeline that reflects the length of the audio data or a relative position within the audio data, such as a chapter number and a paragraph number. The input may also comprise first selecting a button, which causes audio media player 432 to change to a “create annotation” mode where an annotation may be created and associated with the audio location.
  • As another example of determining an audio location, audio media player 432 determines the audio location automatically (without user input) based on which portion of the audio data is being processed. For example, if audio media player 432 is processing a portion of the audio data that corresponds to chapter 20 of a digital work reflected in digital media item 404, then audio media player 432 determines that the audio location is at least somewhere within chapter 20. A sketch of such a chapter lookup appears below.
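One simple way to derive such a coarse audio location automatically is to look the current playback offset up in a table of chapter start times. The sketch below is illustrative; the chapter boundaries are invented for the example and are not taken from the specification.

```python
import bisect

# Hypothetical chapter start times in seconds; chapter numbers are 1-based.
CHAPTER_STARTS = [0.0, 1800.0, 3720.0, 5430.0]

def chapter_for(playback_secs):
    """Return the chapter number that contains the given playback position."""
    return bisect.bisect_right(CHAPTER_STARTS, playback_secs)

# A playback position of 4000 seconds falls in chapter 3 of this made-up table.
print(chapter_for(4000.0))
```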
  • At step 856, audio media player 432 receives input that selects a “Create Annotation” button that may be displayed on the touch screen of device 430. Such a button may be displayed in response to input in step 854 that selects the audio location, where, for example, the user touches the touch screen continuously for a period of time, such as one second.
  • Although step 854 is depicted as occurring before step 856, alternatively, the selection of the “Create Annotation” button may occur prior to the determination of the audio location.
  • At step 858, audio media player 432 receives input that is used to create annotation data, similar to step 808.
  • At step 860, audio media player 432 stores the annotation data in association with the audio location. Audio media player 432 uses a mapping (e.g., mapping 406) to identify a particular audio location, in the mapping, that is closest to the audio location determined in step 854. Then, using the mapping, audio media player 432 identifies a text location that corresponds to the particular audio location.
  • As an alternative to step 860, audio media player 432 sends, over network 440 to intermediary device 420, the annotation data and the audio location. In response, intermediary device 420 stores the annotation data in association with the audio location. Intermediary device 420 uses mapping 406 to identify a particular audio location, in the mapping, that is closest to the audio location determined in step 854. Then, using mapping 406, intermediary device 420 identifies a text location that corresponds to the particular audio location. Intermediary device 420 sends the identified text location over network 440 to device 410. Intermediary device 420 may send the identified text location in response to a request, from device 410, for certain text data and/or for annotations associated with certain text data. For example, in response to a request for a digital book of "The Grapes of Wrath", intermediary device 420 determines whether there is any annotation data associated with that digital book and, if so, sends the annotation data to device 410.
  • Step 860 may also comprise storing date and/or time information that indicates when the annotation was created. This information may be displayed later when the annotation is consumed in the text context.
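Step 860 performs the reverse of the lookup sketched after step 810: the stored audio location is compared against the audio offsets in the mapping, and the text location paired with the closest one is returned. A minimal sketch under the same illustrative assumptions about the mapping's structure:

```python
def text_location_for(audio_location, mapping):
    """Return the text offset paired with the mapped audio offset closest to audio_location."""
    return min(mapping, key=lambda pair: abs(pair[1] - audio_location))[0]

# With the illustrative mapping used earlier, an annotation recorded at
# 70 seconds of audio is tied to text offset 1210.
print(text_location_for(70.0, [(0, 0.0), (540, 31.2), (1210, 74.8), (1905, 120.5)]))
```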
  • At step 862, device 410 displays text data associated with digital media item 402, which is a textual version of digital media item 404. Device 410 displays the text data of digital media item 402 based on a locally-stored copy of digital media item 402 or, if a locally-stored copy does not exist, while the text data is streamed from intermediary device 420.
  • At step 864, device 410 determines when a portion of the textual version of the work (reflected in digital media item 402) that includes the text location (identified in step 860) is displayed. Alternatively, device 410 may display data that indicates that an annotation is available regardless of what portion of the textual version of the work, if any, is displayed.
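The check in step 864 amounts to asking whether the annotation's text location falls within the range of text currently rendered. A minimal sketch, assuming for illustration that the displayed portion is tracked as a half-open character range:

```python
def annotation_visible(text_location, displayed_range):
    """True when the annotated text location lies within the displayed character range."""
    start, end = displayed_range
    return start <= text_location < end

# If characters 1000-2200 are on screen, an annotation at offset 1210 is visible.
print(annotation_visible(1210, (1000, 2200)))
```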
  • At step 866, text media player 412 consumes the annotation data. If the annotation data is voice data, then consuming the annotation data may comprise playing the voice data or converting the voice data to text data and displaying the text data. If the annotation data is text data, then consuming the annotation data may comprise displaying the text data, for example, in a side panel of a GUI that displays a portion of the textual version of the work or in a new window that appears separate from the GUI.
  • Hardware Overview
  • According to one embodiment, the techniques described herein are implemented by one or more special-purpose computing devices. The special-purpose computing devices may be hard-wired to perform the techniques, or may include digital electronic devices such as one or more application-specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs) that are persistently programmed to perform the techniques, or may include one or more general purpose hardware processors programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination. Such special-purpose computing devices may also combine custom hard-wired logic, ASICs, or FPGAs with custom programming to accomplish the techniques. The special-purpose computing devices may be desktop computer systems, portable computer systems, handheld devices, networking devices or any other device that incorporates hard-wired and/or program logic to implement the techniques.
  • For example, FIG. 9 is a block diagram that illustrates a computer system 900 upon which an embodiment of the invention may be implemented. Computer system 900 includes a bus 902 or other communication mechanism for communicating information, and a hardware processor 904 coupled with bus 902 for processing information. Hardware processor 904 may be, for example, a general purpose microprocessor.
  • Computer system 900 also includes a main memory 906, such as a random access memory (RAM) or other dynamic storage device, coupled to bus 902 for storing information and instructions to be executed by processor 904. Main memory 906 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 904. Such instructions, when stored in non-transitory storage media accessible to processor 904, render computer system 900 into a special-purpose machine that is customized to perform the operations specified in the instructions.
  • Computer system 900 further includes a read only memory (ROM) 908 or other static storage device coupled to bus 902 for storing static information and instructions for processor 904. A storage device 910, such as a magnetic disk or optical disk, is provided and coupled to bus 902 for storing information and instructions.
  • Computer system 900 may be coupled via bus 902 to a display 912, such as a cathode ray tube (CRT), for displaying information to a computer user. An input device 914, including alphanumeric and other keys, is coupled to bus 902 for communicating information and command selections to processor 904. Another type of user input device is cursor control 916, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 904 and for controlling cursor movement on display 912. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane.
  • Computer system 900 may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system 900 to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system 900 in response to processor 904 executing one or more sequences of one or more instructions contained in main memory 906. Such instructions may be read into main memory 906 from another storage medium, such as storage device 910. Execution of the sequences of instructions contained in main memory 906 causes processor 904 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.
  • The term “storage media” as used herein refers to any non-transitory media that store data and/or instructions that cause a machine to operate in a specific fashion. Such storage media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device 910. Volatile media includes dynamic memory, such as main memory 906. Common forms of storage media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, and any other memory chip or cartridge.
  • Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 902. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
  • Various forms of media may be involved in carrying one or more sequences of one or more instructions to processor 904 for execution. For example, the instructions may initially be carried on a magnetic disk or solid state drive of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system 900 can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus 902. Bus 902 carries the data to main memory 906, from which processor 904 retrieves and executes the instructions. The instructions received by main memory 906 may optionally be stored on storage device 910 either before or after execution by processor 904.
  • Computer system 900 also includes a communication interface 918 coupled to bus 902. Communication interface 918 provides a two-way data communication coupling to a network link 920 that is connected to a local network 922. For example, communication interface 918 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 918 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface 918 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
  • Network link 920 typically provides data communication through one or more networks to other data devices. For example, network link 920 may provide a connection through local network 922 to a host computer 924 or to data equipment operated by an Internet Service Provider (ISP) 926. ISP 926 in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet” 928. Local network 922 and Internet 928 both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link 920 and through communication interface 918, which carry the digital data to and from computer system 900, are example forms of transmission media.
  • Computer system 900 can send messages and receive data, including program code, through the network(s), network link 920 and communication interface 918. In the Internet example, a server 930 might transmit a requested code for an application program through Internet 928, ISP 926, local network 922 and communication interface 918.
  • The received code may be executed by processor 904 as it is received, and/or stored in storage device 910, or other non-volatile storage for later execution.
  • In the foregoing specification, embodiments of the invention have been described with reference to numerous specific details that may vary from implementation to implementation. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. The sole and exclusive indicator of the scope of the invention, and what is intended by the applicants to be the scope of the invention, is the literal and equivalent scope of the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction.

Claims (30)

1. A method comprising:
receiving audio data that reflects an audible version of a work for which a textual version exists;
performing a speech-to-text analysis of the audio data to generate text for portions of the audio data; and
based on the text generated for the portions of the audio data, generating a mapping between a plurality of audio locations in the audio data and a corresponding plurality of text locations in the textual version of the work;
wherein the method is performed by one or more computing devices.
2. The method of claim 1 wherein generating text for portions of the audio data includes generating text for portions of the audio data based, at least in part, on textual context of the work.
3. The method of claim 2, wherein generating text for portions of the audio data based, at least in part, on textual context of the work includes generating text based, at least in part, on one or more rules of grammar used in the textual version of the work.
4. The method of claim 2, wherein generating text for portions of the audio data based, at least in part, on textual context of the work includes limiting which words the portions can be translated to based on which words are in the textual version of the work, or a subset thereof.
5. The method of claim 4, wherein limiting which words the portions can be translated to based on which words are in the textual version of the work includes, for a given portion of the audio data, identifying a sub-section of the textual version of the work that corresponds to the given portion and limiting the words to only those words in the sub-section of the textual version of the work.
6. The method of claim 5, wherein:
identifying the sub-section of the textual version of the work includes maintaining a current text location in the textual version of the work that corresponds to a current audio location, in the audio data, of the speech-to-text analysis; and
the sub-section of the textual version of the work is a section associated with the current text location.
7. The method of claim 1, wherein the portions include portions that correspond to individual words, and the mapping maps the locations of the portions that correspond to individual words to individual words in the textual version of the work.
8. The method of claim 1, wherein the portions include portions that correspond to individual sentences, and the mapping maps the locations of the portions that correspond to individual sentences to individual sentences in the textual version of the work.
9. The method of claim 1, wherein the portions include portions that correspond to fixed amounts of data, and the mapping maps the locations of the portions that correspond to fixed amounts of data to corresponding locations in the textual version of the work.
10. The method of claim 1, wherein generating the mapping includes: (1) embedding anchors in the audio data; (2) embedding anchors in the textual version of the work; or (3) storing the mapping in a media overlay that is stored in association with the audio data or the textual version of the work.
11. The method of claim 1, wherein each of one or more text locations of the plurality of text locations indicates a relative location in the textual version of the work.
12. The method of claim 1, wherein one text location, of the plurality of text locations, indicates a relative location in the textual version of the work and another text location, of the plurality of text locations, indicates an absolute location from the relative location.
13. The method of claim 1, wherein each of one or more text locations of the plurality of text locations indicates an anchor within the textual version of the work.
14. A method comprising:
receiving a textual version of a work;
performing a text-to-speech analysis of the textual version to generate first audio data;
based on the first audio data and the textual version, generating a first mapping between a first plurality of audio locations in the first audio data and a corresponding plurality of text locations in the textual version of the work;
receiving second audio data that reflects an audible version of the work for which the textual version exists; and
based on (1) a comparison of the first audio data and the second audio data and (2) the first mapping, generating a second mapping between a second plurality of audio locations in the second audio data and the plurality of text locations in the textual version of the work;
wherein the method is performed by one or more computing devices.
15. A method comprising:
receiving audio input;
performing a speech-to-text analysis of the audio input to generate text for portions of the audio input;
determining whether the text generated for portions of the audio input matches text that is currently displayed; and
in response to determining that the text matches text that is currently displayed, causing the text that is currently displayed to be highlighted;
wherein the method is performed by one or more computing devices.
16. One or more storage media storing instructions which, when executed by one or more processors, causes performance of the method recited in claim 1.
17. One or more storage media storing instructions which, when executed by one or more processors, causes performance of the method recited in claim 2.
18. One or more storage media storing instructions which, when executed by one or more processors, causes performance of the method recited in claim 3.
19. One or more storage media storing instructions which, when executed by one or more processors, causes performance of the method recited in claim 4.
20. One or more storage media storing instructions which, when executed by one or more processors, causes performance of the method recited in claim 5.
21. One or more storage media storing instructions which, when executed by one or more processors, causes performance of the method recited in claim 6.
22. One or more storage media storing instructions which, when executed by one or more processors, causes performance of the method recited in claim 7.
23. One or more storage media storing instructions which, when executed by one or more processors, causes performance of the method recited in claim 8.
24. One or more storage media storing instructions which, when executed by one or more processors, causes performance of the method recited in claim 9.
25. One or more storage media storing instructions which, when executed by one or more processors, causes performance of the method recited in claim 10.
26. One or more storage media storing instructions which, when executed by one or more processors, causes performance of the method recited in claim 11.
27. One or more storage media storing instructions which, when executed by one or more processors, causes performance of the method recited in claim 12.
28. One or more storage media storing instructions which, when executed by one or more processors, causes performance of the method recited in claim 13.
29. One or more storage media storing instructions which, when executed by one or more processors, causes performance of the method recited in claim 14.
30. One or more storage media storing instructions which, when executed by one or more processors, causes performance of the method recited in claim 15.
US13/267,738 2011-06-03 2011-10-06 Automatically creating a mapping between text data and audio data Abandoned US20120310642A1 (en)

Priority Applications (15)

Application Number Priority Date Filing Date Title
US13/267,738 US20120310642A1 (en) 2011-06-03 2011-10-06 Automatically creating a mapping between text data and audio data
JP2012126444A JP5463385B2 (en) 2011-06-03 2012-06-01 Automatic creation of mapping between text data and audio data
TW101119921A TWI488174B (en) 2011-06-03 2012-06-01 Automatically creating a mapping between text data and audio data
KR1020157017690A KR101674851B1 (en) 2011-06-03 2012-06-04 Automatically creating a mapping between text data and audio data
KR1020137034641A KR101622015B1 (en) 2011-06-03 2012-06-04 Automatically creating a mapping between text data and audio data
PCT/US2012/040801 WO2012167276A1 (en) 2011-06-03 2012-06-04 Automatically creating a mapping between text data and audio data
EP12729332.2A EP2593846A4 (en) 2011-06-03 2012-06-04 Automatically creating a mapping between text data and audio data
JP2014513799A JP2014519058A (en) 2011-06-03 2012-06-04 Automatic creation of mapping between text data and audio data
CN201280036281.5A CN103703431B (en) 2011-06-03 2012-06-04 Automatically create the mapping between text data and voice data
KR1020120060060A KR101324910B1 (en) 2011-06-03 2012-06-04 Automatically creating a mapping between text data and audio data
CN2012103062689A CN102937959A (en) 2011-06-03 2012-06-04 Automatically creating a mapping between text data and audio data
AU2012261818A AU2012261818B2 (en) 2011-06-03 2012-06-04 Automatically creating a mapping between text data and audio data
KR1020167006970A KR101700076B1 (en) 2011-06-03 2012-06-04 Automatically creating a mapping between text data and audio data
JP2014008040A JP2014132345A (en) 2011-06-03 2014-01-20 Automatically creating mapping between text data and audio data
AU2016202974A AU2016202974B2 (en) 2011-06-03 2016-05-09 Automatically creating a mapping between text data and audio data

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201161493372P 2011-06-03 2011-06-03
US201161494375P 2011-06-07 2011-06-07
US13/267,738 US20120310642A1 (en) 2011-06-03 2011-10-06 Automatically creating a mapping between text data and audio data

Publications (1)

Publication Number Publication Date
US20120310642A1 US20120310642A1 (en) 2012-12-06

Family

ID=47262337

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/267,738 Abandoned US20120310642A1 (en) 2011-06-03 2011-10-06 Automatically creating a mapping between text data and audio data
US13/267,749 Active 2035-05-15 US10672399B2 (en) 2011-06-03 2011-10-06 Switching between text data and audio data based on a mapping

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/267,749 Active 2035-05-15 US10672399B2 (en) 2011-06-03 2011-10-06 Switching between text data and audio data based on a mapping

Country Status (7)

Country Link
US (2) US20120310642A1 (en)
EP (1) EP2593846A4 (en)
JP (1) JP2014519058A (en)
KR (4) KR101674851B1 (en)
CN (1) CN103703431B (en)
AU (2) AU2012261818B2 (en)
WO (1) WO2012167276A1 (en)

Cited By (238)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130054609A1 (en) * 2011-08-30 2013-02-28 International Business Machines Corporation Accessing Anchors in Voice Site Content
US20130268826A1 (en) * 2012-04-06 2013-10-10 Google Inc. Synchronizing progress in audio and text versions of electronic books
US20140013192A1 (en) * 2012-07-09 2014-01-09 Sas Institute Inc. Techniques for touch-based digital document audio and user interface enhancement
US20140059076A1 (en) * 2006-10-13 2014-02-27 Syscom Inc. Method and system for converting audio text files originating from audio files to searchable text and for processing the searchable text
US20140223272A1 (en) * 2013-02-04 2014-08-07 Audible, Inc. Selective synchronous presentation
WO2014137074A1 (en) * 2013-03-05 2014-09-12 Lg Electronics Inc. Mobile terminal and method of controlling the mobile terminal
US8892446B2 (en) 2010-01-18 2014-11-18 Apple Inc. Service orchestration for intelligent automated assistant
WO2015002585A1 (en) 2013-07-03 2015-01-08 Telefonaktiebolaget L M Ericsson (Publ) Providing an electronic book to a user equipment
US8948892B2 (en) 2011-03-23 2015-02-03 Audible, Inc. Managing playback of synchronized content
US8977255B2 (en) 2007-04-03 2015-03-10 Apple Inc. Method and system for operating a multi-function portable electronic device using voice-activation
US9031493B2 (en) 2011-11-18 2015-05-12 Google Inc. Custom narration of electronic books
US9047356B2 (en) 2012-09-05 2015-06-02 Google Inc. Synchronizing multiple reading positions in electronic books
US9063641B2 (en) 2011-02-24 2015-06-23 Google Inc. Systems and methods for remote collaborative studying using electronic books
US9075760B2 (en) 2012-05-07 2015-07-07 Audible, Inc. Narration settings distribution for content customization
US9099089B2 (en) 2012-08-02 2015-08-04 Audible, Inc. Identifying corresponding regions of content
US9141257B1 (en) 2012-06-18 2015-09-22 Audible, Inc. Selecting and conveying supplemental content
US9141404B2 (en) 2011-10-24 2015-09-22 Google Inc. Extensible framework for ereader tools
US9190062B2 (en) 2010-02-25 2015-11-17 Apple Inc. User profiling for voice input processing
US9213705B1 (en) * 2011-12-19 2015-12-15 Audible, Inc. Presenting content related to primary audio content
US9223830B1 (en) 2012-10-26 2015-12-29 Audible, Inc. Content presentation analysis
US9262612B2 (en) 2011-03-21 2016-02-16 Apple Inc. Device access using voice authentication
US9300784B2 (en) 2013-06-13 2016-03-29 Apple Inc. System and method for emergency calls initiated by voice command
US9317486B1 (en) 2013-06-07 2016-04-19 Audible, Inc. Synchronizing playback of digital content with captured physical content
US9317500B2 (en) 2012-05-30 2016-04-19 Audible, Inc. Synchronizing translated digital content
US9330720B2 (en) 2008-01-03 2016-05-03 Apple Inc. Methods and apparatus for altering audio output signals
US9338493B2 (en) 2014-06-30 2016-05-10 Apple Inc. Intelligent automated assistant for TV user interactions
US9368114B2 (en) 2013-03-14 2016-06-14 Apple Inc. Context-sensitive handling of interruptions
US9367196B1 (en) 2012-09-26 2016-06-14 Audible, Inc. Conveying branched content
US9430463B2 (en) 2014-05-30 2016-08-30 Apple Inc. Exemplar-based natural language processing
US9431006B2 (en) 2009-07-02 2016-08-30 Apple Inc. Methods and apparatuses for automatic speech recognition
US9472113B1 (en) 2013-02-05 2016-10-18 Audible, Inc. Synchronizing playback of digital content with physical content
US9483461B2 (en) 2012-03-06 2016-11-01 Apple Inc. Handling speech synthesis of content for multiple languages
US9489360B2 (en) 2013-09-05 2016-11-08 Audible, Inc. Identifying extra material in companion content
US9495129B2 (en) 2012-06-29 2016-11-15 Apple Inc. Device, method, and user interface for voice-activated navigation and browsing of a document
US9502031B2 (en) 2014-05-27 2016-11-22 Apple Inc. Method for supporting dynamic grammars in WFST-based ASR
US9535906B2 (en) 2008-07-31 2017-01-03 Apple Inc. Mobile device having human language translation capability with positional feedback
US9536439B1 (en) 2012-06-27 2017-01-03 Audible, Inc. Conveying questions with content
US9576574B2 (en) 2012-09-10 2017-02-21 Apple Inc. Context-sensitive handling of interruptions by intelligent digital assistant
US9582608B2 (en) 2013-06-07 2017-02-28 Apple Inc. Unified ranking with entropy-weighted information for phrase-based semantic auto-completion
US20170060365A1 (en) * 2015-08-27 2017-03-02 LENOVO ( Singapore) PTE, LTD. Enhanced e-reader experience
US20170083214A1 (en) * 2015-09-18 2017-03-23 Microsoft Technology Licensing, Llc Keyword Zoom
US9620104B2 (en) 2013-06-07 2017-04-11 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
US9620105B2 (en) 2014-05-15 2017-04-11 Apple Inc. Analyzing audio input for efficient speech and music recognition
US9626955B2 (en) 2008-04-05 2017-04-18 Apple Inc. Intelligent text-to-speech conversion
US9632647B1 (en) * 2012-10-09 2017-04-25 Audible, Inc. Selecting presentation positions in dynamic content
US9633004B2 (en) 2014-05-30 2017-04-25 Apple Inc. Better resolution when referencing to concepts
US9633674B2 (en) 2013-06-07 2017-04-25 Apple Inc. System and method for detecting errors in interactions with a voice-based digital assistant
US9646609B2 (en) 2014-09-30 2017-05-09 Apple Inc. Caching apparatus for serving phonetic pronunciations
US9646614B2 (en) 2000-03-16 2017-05-09 Apple Inc. Fast, language-independent method for user authentication by voice
US9668121B2 (en) 2014-09-30 2017-05-30 Apple Inc. Social reminders
US9679608B2 (en) 2012-06-28 2017-06-13 Audible, Inc. Pacing content
US9684641B1 (en) * 2012-09-21 2017-06-20 Amazon Technologies, Inc. Presenting content in multiple languages
US9697822B1 (en) 2013-03-15 2017-07-04 Apple Inc. System and method for updating an adaptive speech recognition model
US9697820B2 (en) 2015-09-24 2017-07-04 Apple Inc. Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks
US9703781B2 (en) 2011-03-23 2017-07-11 Audible, Inc. Managing related digital content
US9711141B2 (en) 2014-12-09 2017-07-18 Apple Inc. Disambiguating heteronyms in speech synthesis
WO2017123419A1 (en) * 2016-01-11 2017-07-20 Microsoft Technology Licensing, Llc Organization, retrieval, annotation and presentation of media data files using signals captured from a viewing environment
US9715875B2 (en) 2014-05-30 2017-07-25 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US9721566B2 (en) 2015-03-08 2017-08-01 Apple Inc. Competing devices responding to voice triggers
US9734153B2 (en) 2011-03-23 2017-08-15 Audible, Inc. Managing related digital content
US9734193B2 (en) 2014-05-30 2017-08-15 Apple Inc. Determining domain salience ranking from ambiguous words in natural speech
US9760559B2 (en) 2014-05-30 2017-09-12 Apple Inc. Predictive text input
US9785630B2 (en) 2014-05-30 2017-10-10 Apple Inc. Text prediction using combined word N-gram and unigram language models
US9792027B2 (en) 2011-03-23 2017-10-17 Audible, Inc. Managing playback of synchronized content
US9798393B2 (en) 2011-08-29 2017-10-24 Apple Inc. Text correction processing
US20170315976A1 (en) * 2016-04-29 2017-11-02 Seagate Technology Llc Annotations for digital media items post capture
US9818400B2 (en) 2014-09-11 2017-11-14 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US9842105B2 (en) 2015-04-16 2017-12-12 Apple Inc. Parsimonious continuous-space phrase representations for natural language processing
US9842101B2 (en) 2014-05-30 2017-12-12 Apple Inc. Predictive conversion of language input
US9858925B2 (en) 2009-06-05 2018-01-02 Apple Inc. Using context information to facilitate processing of commands in a virtual assistant
US9865280B2 (en) 2015-03-06 2018-01-09 Apple Inc. Structured dictation using intelligent automated assistants
US9886432B2 (en) 2014-09-30 2018-02-06 Apple Inc. Parsimonious handling of word inflection via categorical stem + suffix N-gram language models
US9886953B2 (en) 2015-03-08 2018-02-06 Apple Inc. Virtual assistant activation
US9899019B2 (en) 2015-03-18 2018-02-20 Apple Inc. Systems and methods for structured stem and suffix language models
US9922642B2 (en) 2013-03-15 2018-03-20 Apple Inc. Training an at least partial voice command system
US9934775B2 (en) 2016-05-26 2018-04-03 Apple Inc. Unit-selection text-to-speech synthesis based on predicted concatenation parameters
US9953088B2 (en) 2012-05-14 2018-04-24 Apple Inc. Crowd sourcing information to fulfill user requests
US9959870B2 (en) 2008-12-11 2018-05-01 Apple Inc. Speech recognition involving a mobile device
US9966068B2 (en) 2013-06-08 2018-05-08 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
US9966065B2 (en) 2014-05-30 2018-05-08 Apple Inc. Multi-command single utterance input method
US9972304B2 (en) 2016-06-03 2018-05-15 Apple Inc. Privacy preserving distributed evaluation framework for embedded personalized systems
US9971774B2 (en) 2012-09-19 2018-05-15 Apple Inc. Voice-based media searching
WO2018098093A1 (en) * 2016-11-28 2018-05-31 Microsoft Technology Licensing, Llc Audio landmarking for aural user interface
US10019995B1 (en) 2011-03-01 2018-07-10 Alice J. Stiebel Methods and systems for language learning based on a series of pitch patterns
US10038886B2 (en) 2015-09-18 2018-07-31 Microsoft Technology Licensing, Llc Inertia audio scrolling
US10043516B2 (en) 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
US10049668B2 (en) 2015-12-02 2018-08-14 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10049663B2 (en) 2016-06-08 2018-08-14 Apple, Inc. Intelligent automated assistant for media exploration
US10057736B2 (en) 2011-06-03 2018-08-21 Apple Inc. Active transport based notifications
US10067938B2 (en) 2016-06-10 2018-09-04 Apple Inc. Multilingual word prediction
US10074360B2 (en) 2014-09-30 2018-09-11 Apple Inc. Providing an indication of the suitability of speech recognition
US10078631B2 (en) 2014-05-30 2018-09-18 Apple Inc. Entropy-guided text prediction using combined word and character n-gram language models
US10079014B2 (en) 2012-06-08 2018-09-18 Apple Inc. Name recognition system
US10083688B2 (en) 2015-05-27 2018-09-25 Apple Inc. Device voice control for selecting a displayed affordance
US10089072B2 (en) 2016-06-11 2018-10-02 Apple Inc. Intelligent device arbitration and control
WO2018187234A1 (en) * 2017-04-03 2018-10-11 Ex-Iq, Inc. Hands-free annotations of audio text
US10101822B2 (en) 2015-06-05 2018-10-16 Apple Inc. Language input correction
US10108312B2 (en) 2013-10-17 2018-10-23 Samsung Electronics Co., Ltd. Apparatus and method for processing information list in terminal device
US10127220B2 (en) 2015-06-04 2018-11-13 Apple Inc. Language identification from short strings
US10127911B2 (en) 2014-09-30 2018-11-13 Apple Inc. Speaker identification and unsupervised speaker adaptation techniques
US10134385B2 (en) 2012-03-02 2018-11-20 Apple Inc. Systems and methods for name pronunciation
US10170123B2 (en) 2014-05-30 2019-01-01 Apple Inc. Intelligent assistant for home automation
US10176167B2 (en) 2013-06-09 2019-01-08 Apple Inc. System and method for inferring user intent from speech inputs
US10186254B2 (en) 2015-06-07 2019-01-22 Apple Inc. Context-based endpoint detection
US10185542B2 (en) 2013-06-09 2019-01-22 Apple Inc. Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant
US10192552B2 (en) 2016-06-10 2019-01-29 Apple Inc. Digital assistant providing whispered speech
US10199051B2 (en) 2013-02-07 2019-02-05 Apple Inc. Voice trigger for a digital assistant
US10210860B1 (en) 2018-07-27 2019-02-19 Deepgram, Inc. Augmented generalized deep learning with special vocabulary
US10223066B2 (en) 2015-12-23 2019-03-05 Apple Inc. Proactive assistance based on dialog communication between devices
US10241752B2 (en) 2011-09-30 2019-03-26 Apple Inc. Interface for a virtual digital assistant
US10241644B2 (en) 2011-06-03 2019-03-26 Apple Inc. Actionable reminder entries
US10249300B2 (en) 2016-06-06 2019-04-02 Apple Inc. Intelligent list reading
US10255907B2 (en) 2015-06-07 2019-04-09 Apple Inc. Automatic accent detection using acoustic models
US10269345B2 (en) 2016-06-11 2019-04-23 Apple Inc. Intelligent task discovery
US10276170B2 (en) 2010-01-18 2019-04-30 Apple Inc. Intelligent automated assistant
US10289433B2 (en) 2014-05-30 2019-05-14 Apple Inc. Domain specific language for encoding assistant dialog
US10297253B2 (en) 2016-06-11 2019-05-21 Apple Inc. Application integration with a digital assistant
US10303715B2 (en) 2017-05-16 2019-05-28 Apple Inc. Intelligent automated assistant for media exploration
US10311144B2 (en) 2017-05-16 2019-06-04 Apple Inc. Emoji word sense disambiguation
US10318871B2 (en) 2005-09-08 2019-06-11 Apple Inc. Method and apparatus for building an intelligent automated assistant
US10331304B2 (en) 2015-05-06 2019-06-25 Microsoft Technology Licensing, Llc Techniques to automatically generate bookmarks for media files
US10332518B2 (en) 2017-05-09 2019-06-25 Apple Inc. User interface for correcting recognition errors
US10356243B2 (en) 2015-06-05 2019-07-16 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US10354011B2 (en) 2016-06-09 2019-07-16 Apple Inc. Intelligent automated assistant in a home environment
US10366158B2 (en) 2015-09-29 2019-07-30 Apple Inc. Efficient word encoding for recurrent neural network language models
US10395654B2 (en) 2017-05-11 2019-08-27 Apple Inc. Text normalization based on a data-driven learning network
US10403278B2 (en) 2017-05-16 2019-09-03 Apple Inc. Methods and systems for phonetic matching in digital assistant services
US10403283B1 (en) 2018-06-01 2019-09-03 Apple Inc. Voice interaction at a primary device to access call functionality of a companion device
US10410637B2 (en) 2017-05-12 2019-09-10 Apple Inc. User-specific acoustic models
US10417266B2 (en) 2017-05-09 2019-09-17 Apple Inc. Context-aware ranking of intelligent response suggestions
US10446141B2 (en) 2014-08-28 2019-10-15 Apple Inc. Automatic speech recognition based on user feedback
US10446143B2 (en) 2016-03-14 2019-10-15 Apple Inc. Identification of voice inputs providing credentials
US10445429B2 (en) 2017-09-21 2019-10-15 Apple Inc. Natural language understanding using vocabularies with compressed serialized tries
US10474753B2 (en) 2016-09-07 2019-11-12 Apple Inc. Language identification using recurrent neural networks
US10482874B2 (en) 2017-05-15 2019-11-19 Apple Inc. Hierarchical belief states for digital assistants
US10490187B2 (en) 2016-06-10 2019-11-26 Apple Inc. Digital assistant providing automated status report
US10489110B2 (en) 2016-11-22 2019-11-26 Microsoft Technology Licensing, Llc Implicit narration for aural user interface
US10496753B2 (en) 2010-01-18 2019-12-03 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US10496705B1 (en) 2018-06-03 2019-12-03 Apple Inc. Accelerated task performance
US10509862B2 (en) 2016-06-10 2019-12-17 Apple Inc. Dynamic phrase expansion of language input
US10521466B2 (en) 2016-06-11 2019-12-31 Apple Inc. Data driven natural language event detection and classification
US10552013B2 (en) 2014-12-02 2020-02-04 Apple Inc. Data detection
US10553209B2 (en) 2010-01-18 2020-02-04 Apple Inc. Systems and methods for hands-free notification summaries
US20200043473A1 (en) * 2018-07-31 2020-02-06 Korea Electronics Technology Institute Audio segmentation method based on attention mechanism
US10567477B2 (en) 2015-03-08 2020-02-18 Apple Inc. Virtual assistant continuity
WO2020046269A1 (en) * 2018-08-27 2020-03-05 Google Llc Algorithmic determination of a story readers discontinuation of reading
US10593346B2 (en) 2016-12-22 2020-03-17 Apple Inc. Rank-reduced token representation for automatic speech recognition
US10592604B2 (en) 2018-03-12 2020-03-17 Apple Inc. Inverse text normalization for automatic speech recognition
US10592095B2 (en) 2014-05-23 2020-03-17 Apple Inc. Instantaneous speaking of content on touch devices
US10606940B2 (en) 2013-09-20 2020-03-31 Kabushiki Kaisha Toshiba Annotation sharing method, annotation sharing apparatus, and computer program product
US10636424B2 (en) 2017-11-30 2020-04-28 Apple Inc. Multi-turn canned dialog
US10643611B2 (en) 2008-10-02 2020-05-05 Apple Inc. Electronic devices with voice command and contextual data processing capabilities
US10652394B2 (en) 2013-03-14 2020-05-12 Apple Inc. System and method for processing voicemail
WO2020095021A1 (en) * 2018-11-06 2020-05-14 Arm Ip Limited Resources and methods for tracking progression in a literary work
US10659851B2 (en) 2014-06-30 2020-05-19 Apple Inc. Real-time digital assistant knowledge updates
US10657328B2 (en) 2017-06-02 2020-05-19 Apple Inc. Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling
US10671428B2 (en) 2015-09-08 2020-06-02 Apple Inc. Distributed personal assistant
US10672399B2 (en) 2011-06-03 2020-06-02 Apple Inc. Switching between text data and audio data based on a mapping
US10679605B2 (en) 2010-01-18 2020-06-09 Apple Inc. Hands-free list-reading by intelligent automated assistant
US10684703B2 (en) 2018-06-01 2020-06-16 Apple Inc. Attention aware virtual assistant dismissal
US10691473B2 (en) 2015-11-06 2020-06-23 Apple Inc. Intelligent automated assistant in a messaging environment
US10706373B2 (en) 2011-06-03 2020-07-07 Apple Inc. Performing actions associated with task items that represent tasks to perform
US10705794B2 (en) 2010-01-18 2020-07-07 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US10726832B2 (en) 2017-05-11 2020-07-28 Apple Inc. Maintaining privacy of personal information
US10733375B2 (en) 2018-01-31 2020-08-04 Apple Inc. Knowledge-based framework for improving natural language understanding
US10733982B2 (en) 2018-01-08 2020-08-04 Apple Inc. Multi-directional dialog
US10733993B2 (en) 2016-06-10 2020-08-04 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US10748546B2 (en) 2017-05-16 2020-08-18 Apple Inc. Digital assistant services based on device capabilities
US10747498B2 (en) 2015-09-08 2020-08-18 Apple Inc. Zero latency digital assistant
US10755703B2 (en) 2017-05-11 2020-08-25 Apple Inc. Offline personal assistant
US10755051B2 (en) 2017-09-29 2020-08-25 Apple Inc. Rule-based natural language processing
US10762293B2 (en) 2010-12-22 2020-09-01 Apple Inc. Using parts-of-speech tagging and named entity recognition for spelling correction
US10791176B2 (en) 2017-05-12 2020-09-29 Apple Inc. Synchronization and task delegation of a digital assistant
US10791216B2 (en) 2013-08-06 2020-09-29 Apple Inc. Auto-activating smart responses based on activities from remote devices
US10789959B2 (en) 2018-03-02 2020-09-29 Apple Inc. Training speaker recognition models for digital assistants
US10789945B2 (en) 2017-05-12 2020-09-29 Apple Inc. Low-latency intelligent automated assistant
US10789041B2 (en) 2014-09-12 2020-09-29 Apple Inc. Dynamic thresholds for always listening speech trigger
US10810274B2 (en) 2017-05-15 2020-10-20 Apple Inc. Optimizing dialogue policy decisions for digital assistants using implicit feedback
US10818288B2 (en) 2018-03-26 2020-10-27 Apple Inc. Natural assistant interaction
US10839159B2 (en) 2018-09-28 2020-11-17 Apple Inc. Named entity normalization in a spoken dialog system
US10892996B2 (en) 2018-06-01 2021-01-12 Apple Inc. Variable latency device coordination
US10909331B2 (en) 2018-03-30 2021-02-02 Apple Inc. Implicit identification of translation payload with neural machine translation
US10930284B2 (en) 2019-04-11 2021-02-23 Advanced New Technologies Co., Ltd. Information processing system, method, device and equipment
US10928918B2 (en) 2018-05-07 2021-02-23 Apple Inc. Raise to speak
US10984780B2 (en) 2018-05-21 2021-04-20 Apple Inc. Global semantic word embeddings using bi-directional recurrent neural networks
US11010127B2 (en) 2015-06-29 2021-05-18 Apple Inc. Virtual assistant for media playback
US11010561B2 (en) 2018-09-27 2021-05-18 Apple Inc. Sentiment prediction from textual data
US11010550B2 (en) 2015-09-29 2021-05-18 Apple Inc. Unified language modeling framework for word prediction, auto-completion and auto-correction
US11023513B2 (en) 2007-12-20 2021-06-01 Apple Inc. Method and apparatus for searching using an active ontology
US11025565B2 (en) 2015-06-07 2021-06-01 Apple Inc. Personalized prediction of responses for instant messaging
US11062615B1 (en) 2011-03-01 2021-07-13 Intelligibility Training LLC Methods and systems for remote language learning in a pandemic-aware world
US11070949B2 (en) 2015-05-27 2021-07-20 Apple Inc. Systems and methods for proactively identifying and surfacing relevant content on an electronic device with a touch-sensitive display
US11127392B2 (en) * 2019-07-09 2021-09-21 Google Llc On-device speech synthesis of textual segments for training of on-device speech recognition model
US11140099B2 (en) 2019-05-21 2021-10-05 Apple Inc. Providing message response suggestions
US11145294B2 (en) 2018-05-07 2021-10-12 Apple Inc. Intelligent automated assistant for delivering content from user experiences
US11170166B2 (en) 2018-09-28 2021-11-09 Apple Inc. Neural typographical error modeling via generative adversarial networks
US11204787B2 (en) 2017-01-09 2021-12-21 Apple Inc. Application integration with a digital assistant
US11217251B2 (en) 2019-05-06 2022-01-04 Apple Inc. Spoken notifications
US11227589B2 (en) 2016-06-06 2022-01-18 Apple Inc. Intelligent list reading
US11231904B2 (en) 2015-03-06 2022-01-25 Apple Inc. Reducing response latency of intelligent automated assistants
US11237797B2 (en) 2019-05-31 2022-02-01 Apple Inc. User activity shortcut suggestions
US11269678B2 (en) 2012-05-15 2022-03-08 Apple Inc. Systems and methods for integrating third party services with a digital assistant
WO2022047516A1 (en) * 2020-09-04 2022-03-10 The University Of Melbourne System and method for audio annotation
US11281993B2 (en) 2016-12-05 2022-03-22 Apple Inc. Model and ensemble compression for metric learning
US11289073B2 (en) 2019-05-31 2022-03-29 Apple Inc. Device text to speech
US11301477B2 (en) 2017-05-12 2022-04-12 Apple Inc. Feedback analysis of a digital assistant
US11307752B2 (en) 2019-05-06 2022-04-19 Apple Inc. User configurable task triggers
US11314370B2 (en) 2013-12-06 2022-04-26 Apple Inc. Method for extracting salient dialog usage from live data
US11348573B2 (en) 2019-03-18 2022-05-31 Apple Inc. Multimodality in digital assistant systems
US11354920B2 (en) * 2019-10-12 2022-06-07 International Business Machines Corporation Updating and implementing a document from an audio proceeding
US11360641B2 (en) 2019-06-01 2022-06-14 Apple Inc. Increasing the relevance of new available information
US11386266B2 (en) 2018-06-01 2022-07-12 Apple Inc. Text correction
US11417325B2 (en) 2018-09-04 2022-08-16 Google Llc Detection of story reader progress for pre-caching special effects
US11423908B2 (en) 2019-05-06 2022-08-23 Apple Inc. Interpreting spoken requests
US11462215B2 (en) 2018-09-28 2022-10-04 Apple Inc. Multi-modal inputs for voice commands
US11467802B2 (en) 2017-05-11 2022-10-11 Apple Inc. Maintaining privacy of personal information
US11468282B2 (en) 2015-05-15 2022-10-11 Apple Inc. Virtual assistant in a communication session
US11475884B2 (en) 2019-05-06 2022-10-18 Apple Inc. Reducing digital assistant latency when a language is incorrectly determined
US11475898B2 (en) 2018-10-26 2022-10-18 Apple Inc. Low-latency multi-speaker speech recognition
US11488406B2 (en) 2019-09-25 2022-11-01 Apple Inc. Text detection using global geometry estimators
US11496600B2 (en) 2019-05-31 2022-11-08 Apple Inc. Remote execution of machine-learned models
US11495218B2 (en) 2018-06-01 2022-11-08 Apple Inc. Virtual assistant operation in multi-device environments
US11501769B2 (en) 2018-08-31 2022-11-15 Google Llc Dynamic adjustment of story time special effects based on contextual data
US11526671B2 (en) 2018-09-04 2022-12-13 Google Llc Reading progress estimation based on phonetic fuzzy matching and confidence interval
US11532306B2 (en) 2017-05-16 2022-12-20 Apple Inc. Detecting a trigger of a digital assistant
US11587559B2 (en) 2015-09-30 2023-02-21 Apple Inc. Intelligent device identification
US11638059B2 (en) 2019-01-04 2023-04-25 Apple Inc. Content playback on multiple devices
US11657813B2 (en) 2019-05-31 2023-05-23 Apple Inc. Voice identification in digital assistant systems
US11696060B2 (en) 2020-07-21 2023-07-04 Apple Inc. User identification using headphones
US11755276B2 (en) 2020-05-12 2023-09-12 Apple Inc. Reducing description length based on confidence
US11765209B2 (en) 2020-05-11 2023-09-19 Apple Inc. Digital assistant hardware abstraction
US11790914B2 (en) 2019-06-01 2023-10-17 Apple Inc. Methods and user interfaces for voice-based control of electronic devices
US11798547B2 (en) 2013-03-15 2023-10-24 Apple Inc. Voice activated device for use with a voice-based digital assistant
US11809483B2 (en) 2015-09-08 2023-11-07 Apple Inc. Intelligent automated assistant for media search and playback
US11838734B2 (en) 2020-07-20 2023-12-05 Apple Inc. Multi-device audio adjustment coordination
US11853536B2 (en) 2015-09-08 2023-12-26 Apple Inc. Intelligent automated assistant in a media environment
US11886805B2 (en) 2015-11-09 2024-01-30 Apple Inc. Unconventional virtual assistant interactions
US11914848B2 (en) 2020-05-11 2024-02-27 Apple Inc. Providing relevant data items based on context

Families Citing this family (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108629033B (en) * 2010-01-11 2022-07-08 苹果公司 Manipulation and display of electronic text
JP5941264B2 (en) * 2011-11-01 2016-06-29 キヤノン株式会社 Information processing apparatus and information processing method
US20130129310A1 (en) * 2011-11-22 2013-05-23 Pleiades Publishing Limited Inc. Electronic book
US9117195B2 (en) * 2012-02-13 2015-08-25 Google Inc. Synchronized consumption modes for e-books
US8933312B2 (en) * 2012-06-01 2015-01-13 Makemusic, Inc. Distribution of audio sheet music as an electronic book
US9542936B2 (en) * 2012-12-29 2017-01-10 Genesys Telecommunications Laboratories, Inc. Fast out-of-vocabulary search in automatic speech recognition systems
US20140191976A1 (en) * 2013-01-07 2014-07-10 Microsoft Corporation Location Based Augmentation For Story Reading
KR102045281B1 (en) * 2013-06-04 2019-11-15 삼성전자주식회사 Method for processing data and an electronis device thereof
KR20150024188A (en) * 2013-08-26 2015-03-06 삼성전자주식회사 A method for modifiying text data corresponding to voice data and an electronic device therefor
US20150089368A1 (en) * 2013-09-25 2015-03-26 Audible, Inc. Searching within audio content
CN106033678A (en) * 2015-03-18 2016-10-19 珠海金山办公软件有限公司 Playing content display method and apparatus thereof
US10089059B1 (en) * 2015-03-30 2018-10-02 Audible, Inc. Managing playback of media content with location data
US20170098324A1 (en) * 2015-10-05 2017-04-06 Vitthal Srinivasan Method and system for automatically converting input text into animated video
KR101663300B1 (en) * 2015-11-04 2016-10-07 주식회사 디앤피코퍼레이션 Apparatus and method for implementing interactive fairy tale book
US10147416B2 (en) * 2015-12-09 2018-12-04 Amazon Technologies, Inc. Text-to-speech processing systems and methods
CN105632484B (en) * 2016-02-19 2019-04-09 云知声(上海)智能科技有限公司 Speech database for speech synthesis pause information automatic marking method and system
CN108885869B (en) * 2016-03-16 2023-07-18 索尼移动通讯有限公司 Method, computing device, and medium for controlling playback of audio data containing speech
JP6891879B2 (en) * 2016-04-27 2021-06-18 ソニーグループ株式会社 Information processing equipment, information processing methods, and programs
CN106527845B (en) * 2016-10-11 2019-12-10 东南大学 Method and device for carrying out voice annotation and reproducing mouse operation in text
US10475438B1 (en) * 2017-03-02 2019-11-12 Amazon Technologies, Inc. Contextual text-to-speech processing
CN107122179A (en) * 2017-03-31 2017-09-01 阿里巴巴集团控股有限公司 The function control method and device of voice
CN107657973B (en) * 2017-09-27 2020-05-08 风变科技(深圳)有限公司 Text and audio mixed display method and device, terminal equipment and storage medium
CN107885430B (en) * 2017-11-07 2020-07-24 Oppo广东移动通信有限公司 Audio playing method and device, storage medium and electronic equipment
CN108255386B (en) * 2018-02-12 2019-07-05 掌阅科技股份有限公司 The display methods of the hand-written notes of e-book calculates equipment and computer storage medium
CN108460120A (en) * 2018-02-13 2018-08-28 广州视源电子科技股份有限公司 Data save method, device, terminal device and storage medium
JP6918255B1 (en) * 2018-06-27 2021-08-11 グーグル エルエルシーGoogle LLC Rendering the response to the user's oral utterance using a local text response map
KR102493141B1 (en) * 2018-07-19 2023-01-31 돌비 인터네셔널 에이비 Method and system for generating object-based audio content
CN109522427B (en) * 2018-09-30 2021-12-10 北京光年无限科技有限公司 Intelligent robot-oriented story data processing method and device
CN109491740B (en) * 2018-10-30 2021-09-10 北京云测信息技术有限公司 Automatic multi-version funnel page optimization method based on context background information
EP3660848A1 (en) 2018-11-29 2020-06-03 Ricoh Company, Ltd. Apparatus, system, and method of display control, and carrier means
KR20200092763A (en) * 2019-01-25 2020-08-04 삼성전자주식회사 Electronic device for processing user speech and controlling method thereof
CN110110136A (en) * 2019-02-27 2019-08-09 咪咕数字传媒有限公司 A kind of text sound matching process, electronic equipment and storage medium
US11350185B2 (en) * 2019-12-13 2022-05-31 Bank Of America Corporation Text-to-audio for interactive videos using a markup language
US10805665B1 (en) 2019-12-13 2020-10-13 Bank Of America Corporation Synchronizing text-to-audio with interactive videos in the video framework
USD954967S1 (en) * 2020-02-21 2022-06-14 Bone Foam, Inc. Dual leg support device
CN112530472B (en) * 2020-11-26 2022-06-21 北京字节跳动网络技术有限公司 Audio and text synchronization method and device, readable medium and electronic equipment
CN112990173B (en) * 2021-02-04 2023-10-27 上海哔哩哔哩科技有限公司 Reading processing method, device and system
US11798536B2 (en) 2021-06-14 2023-10-24 International Business Machines Corporation Annotation of media files with convenient pause points
KR102553832B1 (en) 2021-07-27 2023-07-07 울산과학기술원 Controlling and assisting device for listening means
US11537781B1 (en) 2021-09-15 2022-12-27 Lumos Information Services, LLC System and method to support synchronization, closed captioning and highlight within a text document or a media file
US20230177258A1 (en) * 2021-12-02 2023-06-08 At&T Intellectual Property I, L.P. Shared annotation of media sub-content

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5649060A (en) * 1993-10-18 1997-07-15 International Business Machines Corporation Automatic indexing and aligning of audio and text using speech recognition
US6081780A (en) * 1998-04-28 2000-06-27 International Business Machines Corporation TTS and prosody based authoring system
US6260011B1 (en) * 2000-03-20 2001-07-10 Microsoft Corporation Methods and apparatus for automatically synchronizing electronic audio files with electronic text files
US20020099552A1 (en) * 2001-01-25 2002-07-25 Darryl Rubin Annotating electronic information with audio clips
US6442518B1 (en) * 1999-07-14 2002-08-27 Compaq Information Technologies Group, L.P. Method for refining time alignments of closed captions
US20070055514A1 (en) * 2005-09-08 2007-03-08 Beattie Valerie L Intelligent tutoring feedback
US20080140652A1 (en) * 2006-12-07 2008-06-12 Jonathan Travis Millman Authoring tool
US20090112572A1 (en) * 2007-10-30 2009-04-30 Karl Ola Thorn System and method for input of text to an application operating on a device
US20100278453A1 (en) * 2006-09-15 2010-11-04 King Martin T Capture and display of annotations in paper and electronic documents
US20100324905A1 (en) * 2009-01-15 2010-12-23 K-Nfb Reading Technology, Inc. Voice models for document narration
US20110054901A1 (en) * 2009-08-28 2011-03-03 International Business Machines Corporation Method and apparatus for aligning texts
US20110153330A1 (en) * 2009-11-27 2011-06-23 i-SCROLL System and method for rendering text synchronized audio

Family Cites Families (791)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US626011A (en) * 1899-05-30 Stuckwisch
US3828132A (en) 1970-10-30 1974-08-06 Bell Telephone Labor Inc Speech synthesis by concatenation of formant encoded words
US3704345A (en) 1971-03-19 1972-11-28 Bell Telephone Labor Inc Conversion of printed text into synthetic speech
US3979557A (en) 1974-07-03 1976-09-07 International Telephone And Telegraph Corporation Speech processor system for pitch period extraction using prediction filters
BG24190A1 (en) 1976-09-08 1978-01-10 Antonov Method of synthesis of speech and device for effecting same
JPS597120B2 (en) 1978-11-24 1984-02-16 NEC Corporation Speech analysis device
US4310721A (en) 1980-01-23 1982-01-12 The United States Of America As Represented By The Secretary Of The Army Half duplex integral vocoder modem system
US4348553A (en) 1980-07-02 1982-09-07 International Business Machines Corporation Parallel pattern verifier with dynamic time warping
US5047617A (en) 1982-01-25 1991-09-10 Symbol Technologies, Inc. Narrow-bodied, single- and twin-windowed portable laser scanning head for reading bar code symbols
DE3382796T2 (en) 1982-06-11 1996-03-28 Mitsubishi Electric Corp Intermediate image coding device.
US4688195A (en) 1983-01-28 1987-08-18 Texas Instruments Incorporated Natural-language interface generating system
JPS603056A (en) 1983-06-21 1985-01-09 Toshiba Corp Information rearranging device
DE3335358A1 (en) 1983-09-29 1985-04-11 Siemens AG, 1000 Berlin und 8000 München Method for determining speech spectra for automatic speech recognition and speech coding
US5164900A (en) 1983-11-14 1992-11-17 Colman Bernath Method and device for phonetically encoding Chinese textual data for data processing entry
US4726065A (en) 1984-01-26 1988-02-16 Horst Froessl Image manipulation by speech signals
US4955047A (en) 1984-03-26 1990-09-04 Dytel Corporation Automated attendant with direct inward system access
US4811243A (en) 1984-04-06 1989-03-07 Racine Marsh V Computer aided coordinate digitizing system
US4692941A (en) 1984-04-10 1987-09-08 First Byte Real-time text-to-speech conversion system
US4783807A (en) 1984-08-27 1988-11-08 John Marley System and method for sound recognition with feature selection synchronized to voice pitch
US4718094A (en) 1984-11-19 1988-01-05 International Business Machines Corp. Speech recognition system
US5165007A (en) 1985-02-01 1992-11-17 International Business Machines Corporation Feneme-based Markov models for words
US4944013A (en) 1985-04-03 1990-07-24 British Telecommunications Public Limited Company Multi-pulse speech coder
US4819271A (en) 1985-05-29 1989-04-04 International Business Machines Corporation Constructing Markov model word baseforms from multiple utterances by concatenating model sequences for word segments
US4833712A (en) 1985-05-29 1989-05-23 International Business Machines Corporation Automatic generation of simple Markov model stunted baseforms for words in a vocabulary
US4829583A (en) 1985-06-03 1989-05-09 Sino Business Machines, Inc. Method and apparatus for processing ideographic characters
EP0218859A3 (en) 1985-10-11 1989-09-06 International Business Machines Corporation Signal processor communication interface
US4776016A (en) 1985-11-21 1988-10-04 Position Orientation Systems, Inc. Voice control system
JPH0833744B2 (en) 1986-01-09 1996-03-29 Toshiba Corp Speech synthesizer
US4724542A (en) 1986-01-22 1988-02-09 International Business Machines Corporation Automatic reference adaptation during dynamic signature verification
US5128752A (en) 1986-03-10 1992-07-07 Kohorn H Von System and method for generating and redeeming tokens
US5759101A (en) 1986-03-10 1998-06-02 Response Reward Systems L.C. Central and remote evaluation of responses of participatory broadcast audience with automatic crediting and couponing
US5032989A (en) 1986-03-19 1991-07-16 Realpro, Ltd. Real estate search and location system and method
DE3779351D1 (en) 1986-03-28 1992-07-02 American Telephone And Telegraph Co., New York, N.Y., Us
US4903305A (en) 1986-05-12 1990-02-20 Dragon Systems, Inc. Method for representing word models for use in speech recognition
EP0262938B1 (en) 1986-10-03 1993-12-15 BRITISH TELECOMMUNICATIONS public limited company Language translation system
WO1988002975A1 (en) 1986-10-16 1988-04-21 Mitsubishi Denki Kabushiki Kaisha Amplitude-adapted vector quantizer
US4829576A (en) 1986-10-21 1989-05-09 Dragon Systems, Inc. Voice recognition system
US4852168A (en) 1986-11-18 1989-07-25 Sprague Richard P Compression of stored waveforms for artificial speech
US4727354A (en) 1987-01-07 1988-02-23 Unisys Corporation System for selecting best fit vector code in vector quantization encoding
US4827520A (en) 1987-01-16 1989-05-02 Prince Corporation Voice actuated control system for use in a vehicle
US5179627A (en) 1987-02-10 1993-01-12 Dictaphone Corporation Digital dictation system
US4965763A (en) 1987-03-03 1990-10-23 International Business Machines Corporation Computer method for automatic extraction of commonly specified information from business correspondence
US5644727A (en) 1987-04-15 1997-07-01 Proprietary Financial Products, Inc. System for the operation and management of one or more financial accounts through the use of a digital communication and computation system for exchange, investment and borrowing
EP0293259A3 (en) 1987-05-29 1990-03-07 Kabushiki Kaisha Toshiba Voice recognition system used in telephone apparatus
DE3723078A1 (en) 1987-07-11 1989-01-19 Philips Patentverwaltung Method for detecting continuously spoken words
CA1288516C (en) 1987-07-31 1991-09-03 Leendert M. Bijnagte Apparatus and method for communicating textual and image information between a host computer and a remote display terminal
US4974191A (en) 1987-07-31 1990-11-27 Syntellect Software Inc. Adaptive natural language computer interface system
US4827518A (en) 1987-08-06 1989-05-02 Bell Communications Research, Inc. Speaker verification system using integrated circuit cards
US5022081A (en) 1987-10-01 1991-06-04 Sharp Kabushiki Kaisha Information recognition system
US4852173A (en) 1987-10-29 1989-07-25 International Business Machines Corporation Design and construction of a binary-tree system for language modelling
EP0314908B1 (en) 1987-10-30 1992-12-02 International Business Machines Corporation Automatic determination of labels and markov word models in a speech recognition system
US5072452A (en) 1987-10-30 1991-12-10 International Business Machines Corporation Automatic determination of labels and Markov word models in a speech recognition system
US4914586A (en) 1987-11-06 1990-04-03 Xerox Corporation Garbage collector for hypermedia systems
US4992972A (en) 1987-11-18 1991-02-12 International Business Machines Corporation Flexible context searchable on-line information system with help files and modules for on-line computer system documentation
US5220657A (en) 1987-12-02 1993-06-15 Xerox Corporation Updating local copy of shared data in a collaborative system
US4984177A (en) 1988-02-05 1991-01-08 Advanced Products And Technologies, Inc. Voice language translator
US5194950A (en) 1988-02-29 1993-03-16 Mitsubishi Denki Kabushiki Kaisha Vector quantizer
US4914590A (en) 1988-05-18 1990-04-03 Emhart Industries, Inc. Natural language understanding system
FR2636163B1 (en) 1988-09-02 1991-07-05 Hamon Christian Method and device for synthesizing speech by overlap-adding waveforms
US4839853A (en) 1988-09-15 1989-06-13 Bell Communications Research, Inc. Computer information retrieval using latent semantic structure
JPH0293597A (en) 1988-09-30 1990-04-04 Nippon I B M Kk Speech recognition device
US4905163A (en) 1988-10-03 1990-02-27 Minnesota Mining & Manufacturing Company Intelligent optical navigator dynamic information presentation and navigation system
US5282265A (en) 1988-10-04 1994-01-25 Canon Kabushiki Kaisha Knowledge information processing system
DE3837590A1 (en) 1988-11-05 1990-05-10 Ant Nachrichtentech Process for reducing the data rate of digital image data
DE68913669T2 (en) 1988-11-23 1994-07-21 Digital Equipment Corp Pronunciation of names by a synthesizer.
US5027406A (en) 1988-12-06 1991-06-25 Dragon Systems, Inc. Method for interactive speech recognition and training
US5127055A (en) 1988-12-30 1992-06-30 Kurzweil Applied Intelligence, Inc. Speech recognition apparatus & method having dynamic reference pattern adaptation
US5293448A (en) 1989-10-02 1994-03-08 Nippon Telegraph And Telephone Corporation Speech analysis-synthesis method and apparatus therefor
SE466029B (en) 1989-03-06 1991-12-02 Ibm Svenska Ab DEVICE AND PROCEDURE FOR ANALYSIS OF NATURAL LANGUAGES IN A COMPUTER-BASED INFORMATION PROCESSING SYSTEM
JPH0782544B2 (en) 1989-03-24 1995-09-06 International Business Machines Corporation DP matching method and apparatus using multi-template
US4977598A (en) 1989-04-13 1990-12-11 Texas Instruments Incorporated Efficient pruning algorithm for hidden markov model speech recognition
US5197005A (en) 1989-05-01 1993-03-23 Intelligent Business Systems Database retrieval system having a natural language interface
US5010574A (en) 1989-06-13 1991-04-23 At&T Bell Laboratories Vector quantizer search arrangement
JP2940005B2 (en) 1989-07-20 1999-08-25 NEC Corporation Audio coding device
US5091945A (en) 1989-09-28 1992-02-25 At&T Bell Laboratories Source dependent channel coding with error protection
CA2027705C (en) 1989-10-17 1994-02-15 Masami Akamine Speech coding system utilizing a recursive computation technique for improvement in processing speed
US5020112A (en) 1989-10-31 1991-05-28 At&T Bell Laboratories Image recognition method using two-dimensional stochastic grammars
US5220639A (en) 1989-12-01 1993-06-15 National Science Council Mandarin speech input method for Chinese computers and a mandarin speech recognition machine
US5021971A (en) 1989-12-07 1991-06-04 Unisys Corporation Reflective binary encoder for vector quantization
US5179652A (en) 1989-12-13 1993-01-12 Anthony I. Rozmanith Method and apparatus for storing, transmitting and retrieving graphical and tabular data
CH681573A5 (en) 1990-02-13 1993-04-15 Astral Automatic teller arrangement involving bank computers - is operated by user data card carrying personal data, account information and transaction records
DE69133296T2 (en) 1990-02-22 2004-01-29 Nec Corp speech
US5301109A (en) 1990-06-11 1994-04-05 Bell Communications Research, Inc. Computerized cross-language document retrieval using latent semantic indexing
JP3266246B2 (en) 1990-06-15 2002-03-18 International Business Machines Corporation Natural language analysis apparatus and method, and knowledge base construction method for natural language analysis
US5202952A (en) 1990-06-22 1993-04-13 Dragon Systems, Inc. Large-vocabulary continuous speech prefiltering and processing system
GB9017600D0 (en) 1990-08-10 1990-09-26 British Aerospace An assembly and method for binary tree-searched vector quantisation data compression processing
US5309359A (en) 1990-08-16 1994-05-03 Boris Katz Method and apparatus for generating and utilizing annotations to facilitate computer text retrieval
US5404295A (en) 1990-08-16 1995-04-04 Katz; Boris Method and apparatus for utilizing annotations to facilitate computer retrieval of database material
US5297170A (en) 1990-08-21 1994-03-22 Codex Corporation Lattice and trellis-coded quantization
US5400434A (en) 1990-09-04 1995-03-21 Matsushita Electric Industrial Co., Ltd. Voice source for synthetic speech system
JPH0833739B2 (en) 1990-09-13 1996-03-29 Mitsubishi Electric Corp Pattern expression model learning device
US5216747A (en) 1990-09-20 1993-06-01 Digital Voice Systems, Inc. Voiced/unvoiced estimation of an acoustic signal
US5128672A (en) 1990-10-30 1992-07-07 Apple Computer, Inc. Dynamic predictive keyboard
US5325298A (en) 1990-11-07 1994-06-28 Hnc, Inc. Methods for generating or revising context vectors for a plurality of word stems
US5317507A (en) 1990-11-07 1994-05-31 Gallant Stephen I Method for document retrieval and for word sense disambiguation using neural networks
US5247579A (en) 1990-12-05 1993-09-21 Digital Voice Systems, Inc. Methods for speech transmission
US5345536A (en) 1990-12-21 1994-09-06 Matsushita Electric Industrial Co., Ltd. Method of speech recognition
US5127053A (en) 1990-12-24 1992-06-30 General Electric Company Low-complexity method for improving the performance of autocorrelation-based pitch detectors
US5133011A (en) 1990-12-26 1992-07-21 International Business Machines Corporation Method and apparatus for linear vocal control of cursor position
US5268990A (en) 1991-01-31 1993-12-07 Sri International Method for recognizing speech using linguistically-motivated hidden Markov models
GB9105367D0 (en) 1991-03-13 1991-04-24 Univ Strathclyde Computerised information-retrieval database systems
US5303406A (en) 1991-04-29 1994-04-12 Motorola, Inc. Noise squelch circuit with adaptive noise shaping
US5500905A (en) 1991-06-12 1996-03-19 Microelectronics And Computer Technology Corporation Pattern recognition neural network with saccade-like operation
US5475587A (en) 1991-06-28 1995-12-12 Digital Equipment Corporation Method and apparatus for efficient morphological text analysis using a high-level language for compact specification of inflectional paradigms
US5293452A (en) 1991-07-01 1994-03-08 Texas Instruments Incorporated Voice log-in using spoken name input
US5687077A (en) 1991-07-31 1997-11-11 Universal Dynamics Limited Method and apparatus for adaptive control
US5199077A (en) 1991-09-19 1993-03-30 Xerox Corporation Wordspotting for voice editing and indexing
JP2662120B2 (en) 1991-10-01 1997-10-08 International Business Machines Corporation Speech recognition device and processing unit for speech recognition
US5222146A (en) 1991-10-23 1993-06-22 International Business Machines Corporation Speech recognition apparatus having a speech coder outputting acoustic prototype ranks
US5757979A (en) 1991-10-30 1998-05-26 Fuji Electric Co., Ltd. Apparatus and method for nonlinear normalization of image
KR940002854B1 (en) 1991-11-06 1994-04-04 Korea Telecommunications Authority Sound synthesizing system
US5386494A (en) 1991-12-06 1995-01-31 Apple Computer, Inc. Method and apparatus for controlling a speech recognition function using a cursor control device
US5903454A (en) 1991-12-23 1999-05-11 Hoffberg; Linda Irene Human-factored interface incorporating adaptive pattern recognition based controller apparatus
US6081750A (en) 1991-12-23 2000-06-27 Hoffberg; Steven Mark Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
US5502790A (en) 1991-12-24 1996-03-26 Oki Electric Industry Co., Ltd. Speech recognition method and system using triphones, diphones, and phonemes
US5349645A (en) 1991-12-31 1994-09-20 Matsushita Electric Industrial Co., Ltd. Word hypothesizer for continuous speech decoding using stressed-vowel centered bidirectional tree searches
US5267345A (en) 1992-02-10 1993-11-30 International Business Machines Corporation Speech recognition apparatus which predicts word classes from context and words from word classes
EP0559349B1 (en) 1992-03-02 1999-01-07 AT&T Corp. Training method and apparatus for speech recognition
US6055514A (en) 1992-03-20 2000-04-25 Wren; Stephen Corey System for marketing foods and services utilizing computerized central and remote facilities
US5317647A (en) 1992-04-07 1994-05-31 Apple Computer, Inc. Constrained attribute grammars for syntactic pattern recognition
US5412804A (en) 1992-04-30 1995-05-02 Oracle Corporation Extending the semantics of the outer join operator for un-nesting queries to a data base
US5862233A (en) 1992-05-20 1999-01-19 Industrial Research Limited Wideband assisted reverberation system
US5293584A (en) 1992-05-21 1994-03-08 International Business Machines Corporation Speech recognition system for natural language translation
US5390281A (en) 1992-05-27 1995-02-14 Apple Computer, Inc. Method and apparatus for deducing user intent and providing computer implemented services
US5434777A (en) 1992-05-27 1995-07-18 Apple Computer, Inc. Method and apparatus for processing natural language
US5734789A (en) 1992-06-01 1998-03-31 Hughes Electronics Voiced, unvoiced or noise modes in a CELP vocoder
JPH064093A (en) 1992-06-18 1994-01-14 Matsushita Electric Ind Co Ltd HMM generating device, HMM storage device, likelihood calculating device, and recognizing device
US5333275A (en) 1992-06-23 1994-07-26 Wheatley Barbara J System and method for time aligning speech
US5325297A (en) 1992-06-25 1994-06-28 System Of Multiple-Colored Images For Internationally Listed Estates, Inc. Computer implemented method and system for storing and retrieving textual data and compressed image data
JPH0619965A (en) 1992-07-01 1994-01-28 Canon Inc Natural language processor
US5999908A (en) 1992-08-06 1999-12-07 Abelow; Daniel H. Customer-based product design module
US5412806A (en) 1992-08-20 1995-05-02 Hewlett-Packard Company Calibration of logical cost formulae for queries in a heterogeneous DBMS using synthetic database
GB9220404D0 (en) 1992-08-20 1992-11-11 Nat Security Agency Method of identifying,retrieving and sorting documents
US5333236A (en) 1992-09-10 1994-07-26 International Business Machines Corporation Speech recognizer having a speech coder for an acoustic match based on context-dependent speech-transition acoustic models
US5384893A (en) 1992-09-23 1995-01-24 Emerson & Stern Associates, Inc. Method and apparatus for speech synthesis based on prosodic analysis
FR2696036B1 (en) 1992-09-24 1994-10-14 France Telecom Method of measuring resemblance between sound samples and device for implementing this method.
JPH0772840B2 (en) 1992-09-29 1995-08-02 IBM Japan, Ltd. Speech model configuration method, speech recognition method, speech recognition device, and speech model training method
US5758313A (en) 1992-10-16 1998-05-26 Mobile Information Systems, Inc. Method and apparatus for tracking vehicle location
US6092043A (en) 1992-11-13 2000-07-18 Dragon Systems, Inc. Apparatuses and method for training and operating speech recognition systems
US5983179A (en) 1992-11-13 1999-11-09 Dragon Systems, Inc. Speech recognition system which turns its voice response on for confirmation when it has been turned off without confirmation
DE69327774T2 (en) 1992-11-18 2000-06-21 Canon Information Syst Inc Processor for converting data into speech and sequence control for this
US5455888A (en) 1992-12-04 1995-10-03 Northern Telecom Limited Speech bandwidth extension method and apparatus
US5412756A (en) 1992-12-22 1995-05-02 Mitsubishi Denki Kabushiki Kaisha Artificial intelligence software shell for plant operation simulation
US5734791A (en) 1992-12-31 1998-03-31 Apple Computer, Inc. Rapid tree-based method for vector quantization
US5390279A (en) 1992-12-31 1995-02-14 Apple Computer, Inc. Partitioning speech rules by context for speech recognition
US5613036A (en) 1992-12-31 1997-03-18 Apple Computer, Inc. Dynamic categories for a speech recognition system
US5384892A (en) 1992-12-31 1995-01-24 Apple Computer, Inc. Dynamic language model for speech recognition
US6122616A (en) 1993-01-21 2000-09-19 Apple Computer, Inc. Method and apparatus for diphone aliasing
US5864844A (en) 1993-02-18 1999-01-26 Apple Computer, Inc. System and method for enhancing a user interface with a computer based training tool
CA2091658A1 (en) 1993-03-15 1994-09-16 Matthew Lennig Method and apparatus for automation of directory assistance using speech recognition
US6055531A (en) 1993-03-24 2000-04-25 Engate Incorporated Down-line transcription system having context sensitive searching capability
US5536902A (en) 1993-04-14 1996-07-16 Yamaha Corporation Method of and apparatus for analyzing and synthesizing a sound by extracting and controlling a sound parameter
US5444823A (en) 1993-04-16 1995-08-22 Compaq Computer Corporation Intelligent search engine for associated on-line documentation having questionless case-based knowledge base
US5574823A (en) 1993-06-23 1996-11-12 Her Majesty The Queen In Right Of Canada As Represented By The Minister Of Communications Frequency selective harmonic coding
US5515475A (en) 1993-06-24 1996-05-07 Northern Telecom Limited Speech recognition method using a two-pass search
JPH0756933A (en) 1993-06-24 1995-03-03 Xerox Corp Method for retrieval of document
JP3685812B2 (en) 1993-06-29 2005-08-24 Sony Corp Audio signal transmitter/receiver
US5794207A (en) 1996-09-04 1998-08-11 Walker Asset Management Limited Partnership Method and apparatus for a cryptographically assisted commercial network system designed to facilitate buyer-driven conditional purchase offers
AU7323694A (en) 1993-07-07 1995-02-06 Inference Corporation Case-based organizing and querying of a database
US5495604A (en) 1993-08-25 1996-02-27 Asymetrix Corporation Method and apparatus for the modeling and query of database structures using natural language-like constructs
US5619694A (en) 1993-08-26 1997-04-08 Nec Corporation Case database storage/retrieval system
US5940811A (en) 1993-08-27 1999-08-17 Affinity Technology Group, Inc. Closed loop financial transaction method and apparatus
US5377258A (en) 1993-08-30 1994-12-27 National Medical Research Council Method and apparatus for an automated and interactive behavioral guidance system
US5873056A (en) 1993-10-12 1999-02-16 The Syracuse University Natural language processing system for semantic vector representation which accounts for lexical ambiguity
US5578808A (en) 1993-12-22 1996-11-26 Datamark Services, Inc. Data card that can be used for transactions involving separate card issuers
WO1995017711A1 (en) 1993-12-23 1995-06-29 Diacom Technologies, Inc. Method and apparatus for implementing user feedback
US5621859A (en) 1994-01-19 1997-04-15 Bbn Corporation Single tree method for grammar directed, very large vocabulary speech recognizer
US5584024A (en) 1994-03-24 1996-12-10 Software Ag Interactive database query system and method for prohibiting the selection of semantically incorrect query parameters
US5642519A (en) 1994-04-29 1997-06-24 Sun Microsystems, Inc. Speech interpreter with a unified grammar compiler
DE69520302T2 (en) 1994-05-25 2001-08-09 Victor Company Of Japan Data transfer device with variable transmission rate
US5493677A (en) 1994-06-08 1996-02-20 Systems Research & Applications Corporation Generation, archiving, and retrieval of digital images with evoked suggestion-set captions and natural language interface
US5812697A (en) 1994-06-10 1998-09-22 Nippon Steel Corporation Method and apparatus for recognizing hand-written characters using a weighting dictionary
US5675819A (en) 1994-06-16 1997-10-07 Xerox Corporation Document information retrieval using global word co-occurrence patterns
JPH0869470A (en) 1994-06-21 1996-03-12 Canon Inc Natural language processing device and method
US5948040A (en) 1994-06-24 1999-09-07 Delorme Publishing Co. Travel reservation information and planning system
DE69533479T2 (en) 1994-07-01 2005-09-22 Palm Computing, Inc., Los Altos Character set with characters formed from multiple strokes and handwriting recognition system
US5682539A (en) 1994-09-29 1997-10-28 Conrad; Donovan Anticipated meaning natural language interface
GB2293667B (en) 1994-09-30 1998-05-27 Intermation Limited Database management system
US5715468A (en) 1994-09-30 1998-02-03 Budzinski; Robert Lucius Memory system for storing and retrieving experience and knowledge with natural language
US5845255A (en) 1994-10-28 1998-12-01 Advanced Health Med-E-Systems Corporation Prescription management system
US5577241A (en) 1994-12-07 1996-11-19 Excite, Inc. Information retrieval system and method with implementation extensible query architecture
US5748974A (en) 1994-12-13 1998-05-05 International Business Machines Corporation Multimodal natural language interface for cross-application tasks
US5794050A (en) 1995-01-04 1998-08-11 Intelligent Text Processing, Inc. Natural language understanding system
DE69637733D1 (en) 1995-02-13 2008-12-11 Intertrust Tech Corp Systems and method for safe transmission
US5701400A (en) 1995-03-08 1997-12-23 Amado; Carlos Armando Method and apparatus for applying if-then-else rules to data sets in a relational data base and generating from the results of application of said rules a database of diagnostics linked to said data sets to aid executive analysis of financial data
US5749081A (en) 1995-04-06 1998-05-05 Firefly Network, Inc. System and method for recommending items to a user
US5642464A (en) 1995-05-03 1997-06-24 Northern Telecom Limited Methods and apparatus for noise conditioning in digital speech compression systems using linear predictive coding
US5812698A (en) 1995-05-12 1998-09-22 Synaptics, Inc. Handwriting recognition system and method
TW338815B (en) 1995-06-05 1998-08-21 Motorola Inc Method and apparatus for character recognition of handwritten input
US6496182B1 (en) 1995-06-07 2002-12-17 Microsoft Corporation Method and system for providing touch-sensitive screens for the visually impaired
US5664055A (en) 1995-06-07 1997-09-02 Lucent Technologies Inc. CS-ACELP speech compression system with adaptive pitch prediction filter gain based on a measure of periodicity
US5991441A (en) 1995-06-07 1999-11-23 Wang Laboratories, Inc. Real time handwriting recognition system
US5710886A (en) 1995-06-16 1998-01-20 Sellectsoft, L.C. Electric couponing method and apparatus
JP3284832B2 (en) 1995-06-22 2002-05-20 Seiko Epson Corp Speech recognition dialogue processing method and speech recognition dialogue device
US6038533A (en) 1995-07-07 2000-03-14 Lucent Technologies Inc. System and method for selecting training text
US6026388A (en) 1995-08-16 2000-02-15 Textwise, Llc User interface and other enhancements for natural language information retrieval system and method
JP3697748B2 (en) 1995-08-21 2005-09-21 Seiko Epson Corp Terminal, voice recognition device
US5712957A (en) 1995-09-08 1998-01-27 Carnegie Mellon University Locating and correcting erroneously recognized portions of utterances by rescoring based on two n-best lists
US5737734A (en) 1995-09-15 1998-04-07 Infonautics Corporation Query word relevance adjustment in a search of an information retrieval system
US6173261B1 (en) 1998-09-30 2001-01-09 At&T Corp Grammar fragment acquisition using syntactic and semantic clustering
US5790978A (en) 1995-09-15 1998-08-04 Lucent Technologies, Inc. System and method for determining pitch contours
US5884323A (en) 1995-10-13 1999-03-16 3Com Corporation Extendible method and apparatus for synchronizing files on two different computer systems
US5799276A (en) 1995-11-07 1998-08-25 Accent Incorporated Knowledge-based speech recognition system and methods having frame length computed based upon estimated pitch period of vocalic intervals
US5794237A (en) 1995-11-13 1998-08-11 International Business Machines Corporation System and method for improving problem source identification in computer systems employing relevance feedback and statistical source ranking
US6064959A (en) 1997-03-28 2000-05-16 Dragon Systems, Inc. Error correction in speech recognition
US5706442A (en) 1995-12-20 1998-01-06 Block Financial Corporation System for on-line financial services using distributed objects
WO1997026612A1 (en) 1996-01-17 1997-07-24 Personal Agents, Inc. Intelligent agents for electronic commerce
US6119101A (en) 1996-01-17 2000-09-12 Personal Agents, Inc. Intelligent agents for electronic commerce
US6125356A (en) 1996-01-18 2000-09-26 Rosefaire Development, Ltd. Portable sales presentation system with selective scripted seller prompts
US5987404A (en) 1996-01-29 1999-11-16 International Business Machines Corporation Statistical natural language understanding using hidden clumpings
US5729694A (en) 1996-02-06 1998-03-17 The Regents Of The University Of California Speech coding, reconstruction and recognition using acoustics and electromagnetic waves
US6076088A (en) 1996-02-09 2000-06-13 Paik; Woojin Information extraction system and method using concept relation concept (CRC) triples
US5835893A (en) 1996-02-15 1998-11-10 Atr Interpreting Telecommunications Research Labs Class-based word clustering for speech recognition using a three-level balanced hierarchical similarity
US5901287A (en) 1996-04-01 1999-05-04 The Sabre Group Inc. Information aggregation and synthesization system
US5867799A (en) 1996-04-04 1999-02-02 Lang; Andrew K. Information system and method for filtering a massive flow of information entities to meet user information classification needs
US5987140A (en) 1996-04-26 1999-11-16 Verifone, Inc. System, method and article of manufacture for secure network electronic payment and credit collection
US5963924A (en) 1996-04-26 1999-10-05 Verifone, Inc. System, method and article of manufacture for the use of payment instrument holders and payment instruments in network electronic commerce
US5913193A (en) 1996-04-30 1999-06-15 Microsoft Corporation Method and system of runtime acoustic unit selection for speech synthesis
US5857184A (en) 1996-05-03 1999-01-05 Walden Media, Inc. Language and method for creating, organizing, and retrieving data from a database
FR2748342B1 (en) 1996-05-06 1998-07-17 France Telecom Method and device for filtering a speech signal by equalization, using a statistical model of this signal
US5828999A (en) 1996-05-06 1998-10-27 Apple Computer, Inc. Method and system for deriving a large-span semantic language model for large-vocabulary recognition systems
US5826261A (en) 1996-05-10 1998-10-20 Spencer; Graham System and method for querying multiple, distributed databases by selective sharing of local relative significance information for terms related to the query
US6366883B1 (en) 1996-05-15 2002-04-02 Atr Interpreting Telecommunications Concatenation of speech segments by use of a speech synthesizer
US5727950A (en) 1996-05-22 1998-03-17 Netsage Corporation Agent based instruction system and method
US6556712B1 (en) 1996-05-23 2003-04-29 Apple Computer, Inc. Methods and apparatus for handwriting recognition
US5966533A (en) 1996-06-11 1999-10-12 Excite, Inc. Method and system for dynamically synthesizing a computer program by differentially resolving atoms based on user context data
US5915249A (en) 1996-06-14 1999-06-22 Excite, Inc. System and method for accelerated query evaluation of very large full-text databases
US5987132A (en) 1996-06-17 1999-11-16 Verifone, Inc. System, method and article of manufacture for conditionally accepting a payment method utilizing an extensible, flexible architecture
US5825881A (en) 1996-06-28 1998-10-20 Allsoft Distributing Inc. Public network merchandising system
US6070147A (en) 1996-07-02 2000-05-30 Tecmark Services, Inc. Customer identification and marketing analysis systems
CA2261262C (en) 1996-07-22 2007-08-21 Cyva Research Corporation Personal information security and exchange tool
US6453281B1 (en) 1996-07-30 2002-09-17 Vxi Corporation Portable audio database device with icon-based graphical user-interface
EP0829811A1 (en) 1996-09-11 1998-03-18 Nippon Telegraph And Telephone Corporation Method and system for information retrieval
US6181935B1 (en) 1996-09-27 2001-01-30 Software.Com, Inc. Mobility extended telephone application programming interface and method of use
US5794182A (en) 1996-09-30 1998-08-11 Apple Computer, Inc. Linear predictive speech encoding systems with efficient combination pitch coefficients computation
US5721827A (en) 1996-10-02 1998-02-24 James Logan System for electrically distributing personalized information
US6199076B1 (en) * 1996-10-02 2001-03-06 James Logan Audio program player including a dynamic program selection controller
US5732216A (en) 1996-10-02 1998-03-24 Internet Angles, Inc. Audio message exchange system
US5913203A (en) 1996-10-03 1999-06-15 Jaesent Inc. System and method for pseudo cash transactions
US5930769A (en) 1996-10-07 1999-07-27 Rose; Andrea System and method for fashion shopping
US5836771A (en) 1996-12-02 1998-11-17 Ho; Chi Fai Learning method and system based on questioning
US6282511B1 (en) 1996-12-04 2001-08-28 At&T Voiced interface with hyperlinked information
US6665639B2 (en) 1996-12-06 2003-12-16 Sensory, Inc. Speech recognition in consumer electronic products
US6078914A (en) 1996-12-09 2000-06-20 Open Text Corporation Natural language meta-search system and method
US5839106A (en) 1996-12-17 1998-11-17 Apple Computer, Inc. Large-vocabulary speech recognition using an integrated syntactic and semantic statistical language model
US5966126A (en) 1996-12-23 1999-10-12 Szabo; Andrew J. Graphic user interface for database system
US5932869A (en) 1996-12-27 1999-08-03 Graphic Technology, Inc. Promotional system with magnetic stripe and visual thermo-reversible print surfaced medium
JP3579204B2 (en) 1997-01-17 2004-10-20 Fujitsu Limited Document summarizing apparatus and method
US5941944A (en) 1997-03-03 1999-08-24 Microsoft Corporation Method for providing a substitute for a requested inaccessible object by identifying substantially similar objects using weights corresponding to object features
US5930801A (en) 1997-03-07 1999-07-27 Xerox Corporation Shared-data environment in which each file has independent security properties
US6076051A (en) 1997-03-07 2000-06-13 Microsoft Corporation Information retrieval utilizing semantic representation of text
US6260013B1 (en) 1997-03-14 2001-07-10 Lernout & Hauspie Speech Products N.V. Speech recognition system employing discriminatively trained models
AU6566598A (en) 1997-03-20 1998-10-12 Schlumberger Technologies, Inc. System and method of transactional taxation using secure stored data devices
US5822743A (en) 1997-04-08 1998-10-13 1215627 Ontario Inc. Knowledge-based information retrieval system
US5970474A (en) 1997-04-24 1999-10-19 Sears, Roebuck And Co. Registry information system for shoppers
US5895464A (en) 1997-04-30 1999-04-20 Eastman Kodak Company Computer program product and a method for using natural language for the description, search and retrieval of multi-media objects
DE69816185T2 (en) 1997-06-12 2004-04-15 Hewlett-Packard Co. (N.D.Ges.D.Staates Delaware), Palo Alto Image processing method and device
US6017219A (en) 1997-06-18 2000-01-25 International Business Machines Corporation System and method for interactive reading and language instruction
EP1008084A1 (en) 1997-07-02 2000-06-14 Philippe J. M. Coueignoux System and method for the secure discovery, exploitation and publication of information
US5860063A (en) 1997-07-11 1999-01-12 At&T Corp Automated meaningful phrase clustering
US5933822A (en) 1997-07-22 1999-08-03 Microsoft Corporation Apparatus and methods for an information retrieval system that employs natural language processing of search results to improve overall precision
US5974146A (en) 1997-07-30 1999-10-26 Huntington Bancshares Incorporated Real time bank-centric universal payment system
US6016476A (en) 1997-08-11 2000-01-18 International Business Machines Corporation Portable information and transaction processing system and method utilizing biometric authorization and digital certificate security
US5895466A (en) 1997-08-19 1999-04-20 At&T Corp Automated natural language understanding customer service system
US6081774A (en) 1997-08-22 2000-06-27 Novell, Inc. Natural language information retrieval system and method
US6404876B1 (en) 1997-09-25 2002-06-11 Gte Intelligent Network Services Incorporated System and method for voice activated dialing and routing under open access network control
US6023684A (en) 1997-10-01 2000-02-08 Security First Technologies, Inc. Three tier financial transaction system with cache memory
DE69712485T2 (en) 1997-10-23 2002-12-12 Sony Int Europe Gmbh Voice interface for a home network
US6108627A (en) 1997-10-31 2000-08-22 Nortel Networks Corporation Automatic transcription tool
US5943670A (en) 1997-11-21 1999-08-24 International Business Machines Corporation System and method for categorizing objects in combined categories
US5960422A (en) 1997-11-26 1999-09-28 International Business Machines Corporation System and method for optimized source selection in an information retrieval system
US6026375A (en) 1997-12-05 2000-02-15 Nortel Networks Corporation Method and apparatus for processing orders from customers in a mobile environment
US6064960A (en) 1997-12-18 2000-05-16 Apple Computer, Inc. Method and apparatus for improved duration modeling of phonemes
US6094649A (en) 1997-12-22 2000-07-25 Partnet, Inc. Keyword searches of structured databases
US6173287B1 (en) 1998-03-11 2001-01-09 Digital Equipment Corporation Technique for ranking multimedia annotations of interest
US6195641B1 (en) 1998-03-27 2001-02-27 International Business Machines Corp. Network universal spoken language vocabulary
US6026393A (en) 1998-03-31 2000-02-15 Casebank Technologies Inc. Configuration knowledge as an aid to case retrieval
US6233559B1 (en) 1998-04-01 2001-05-15 Motorola, Inc. Speech control of multiple applications using applets
US6173279B1 (en) 1998-04-09 2001-01-09 At&T Corp. Method of using a natural language interface to retrieve information from one or more data resources
US6088731A (en) 1998-04-24 2000-07-11 Associative Computing, Inc. Intelligent assistant for use with a local computer and with the internet
AU3717099A (en) 1998-04-27 1999-11-16 British Telecommunications Public Limited Company Database access tool
US6029132A (en) 1998-04-30 2000-02-22 Matsushita Electric Industrial Co. Method for letter-to-sound in text-to-speech synthesis
US6016471A (en) 1998-04-29 2000-01-18 Matsushita Electric Industrial Co., Ltd. Method and apparatus using decision trees to generate and score multiple pronunciations for a spelled word
US6285786B1 (en) 1998-04-30 2001-09-04 Motorola, Inc. Text recognizer and method using non-cumulative character scoring in a forward search
US6144938A (en) 1998-05-01 2000-11-07 Sun Microsystems, Inc. Voice user interface with personality
US6778970B2 (en) 1998-05-28 2004-08-17 Lawrence Au Topological methods to organize semantic network data flows for conversational applications
US20070094222A1 (en) 1998-05-28 2007-04-26 Lawrence Au Method and system for using voice input for performing network functions
US7711672B2 (en) 1998-05-28 2010-05-04 Lawrence Au Semantic network methods to disambiguate natural language meaning
US6144958A (en) 1998-07-15 2000-11-07 Amazon.Com, Inc. System and method for correcting spelling errors in search queries
US6105865A (en) 1998-07-17 2000-08-22 Hardesty; Laurence Daniel Financial transaction system with retirement saving benefit
US6434524B1 (en) 1998-09-09 2002-08-13 One Voice Technologies, Inc. Object interactive user interface using speech recognition and natural language processing
US6499013B1 (en) 1998-09-09 2002-12-24 One Voice Technologies, Inc. Interactive user interface using speech recognition and natural language processing
US6266637B1 (en) 1998-09-11 2001-07-24 International Business Machines Corporation Phrase splicing and variable substitution using a trainable speech synthesizer
DE29825146U1 (en) 1998-09-11 2005-08-18 Püllen, Rainer Audio on demand system
US6792082B1 (en) 1998-09-11 2004-09-14 Comverse Ltd. Voice mail system with personal assistant provisioning
US6317831B1 (en) 1998-09-21 2001-11-13 Openwave Systems Inc. Method and apparatus for establishing a secure connection over a one-way data path
US6275824B1 (en) 1998-10-02 2001-08-14 Ncr Corporation System and method for managing data privacy in a database management system
US7137126B1 (en) 1998-10-02 2006-11-14 International Business Machines Corporation Conversational computing via conversational virtual machine
GB9821969D0 (en) 1998-10-08 1998-12-02 Canon Kk Apparatus and method for processing natural language
US6928614B1 (en) 1998-10-13 2005-08-09 Visteon Global Technologies, Inc. Mobile office with speech recognition
US6453292B2 (en) 1998-10-28 2002-09-17 International Business Machines Corporation Command boundary identifier for conversational natural language
US6208971B1 (en) 1998-10-30 2001-03-27 Apple Computer, Inc. Method and apparatus for command recognition using data-driven semantic inference
US6321092B1 (en) 1998-11-03 2001-11-20 Signal Soft Corporation Multiple input data management for wireless location-based applications
US6839669B1 (en) 1998-11-05 2005-01-04 Scansoft, Inc. Performing actions identified in recognized speech
US6519565B1 (en) 1998-11-10 2003-02-11 Voice Security Systems, Inc. Method of comparing utterances for security control
US6446076B1 (en) 1998-11-12 2002-09-03 Accenture Llp. Voice interactive web-based agent system responsive to a user location for prioritizing and formatting information
EP1138038B1 (en) 1998-11-13 2005-06-22 Lernout &amp; Hauspie Speech Products N.V. Speech synthesis using concatenation of speech waveforms
US6606599B2 (en) 1998-12-23 2003-08-12 Interactive Speech Technologies, Llc Method for integrating computing processes with an interface controlled by voice actuated grammars
US6246981B1 (en) 1998-11-25 2001-06-12 International Business Machines Corporation Natural language task-oriented dialog manager and method
US7082397B2 (en) 1998-12-01 2006-07-25 Nuance Communications, Inc. System for and method of creating and browsing a voice web
US6260024B1 (en) 1998-12-02 2001-07-10 Gary Shkedy Method and apparatus for facilitating buyer-driven purchase orders on a commercial network system
US7881936B2 (en) 1998-12-04 2011-02-01 Tegic Communications, Inc. Multimodal disambiguation of speech recognition
US6317707B1 (en) 1998-12-07 2001-11-13 At&T Corp. Automatic clustering of tokens from a corpus for grammar acquisition
US6177905B1 (en) 1998-12-08 2001-01-23 Avaya Technology Corp. Location-triggered reminder for mobile user devices
US6308149B1 (en) 1998-12-16 2001-10-23 Xerox Corporation Grouping words with equivalent substrings by automatic clustering based on suffix relationships
US6523172B1 (en) 1998-12-17 2003-02-18 Evolutionary Technologies International, Inc. Parser translator system and method
US6460029B1 (en) 1998-12-23 2002-10-01 Microsoft Corporation System for improving search text
US6523061B1 (en) 1999-01-05 2003-02-18 Sri International, Inc. System, method, and article of manufacture for agent-based navigation in a speech-based data navigation system
US7036128B1 (en) 1999-01-05 2006-04-25 Sri International Offices Using a community of distributed electronic agents to support a highly mobile, ambient computing environment
US6757718B1 (en) 1999-01-05 2004-06-29 Sri International Mobile navigation of network-based electronic information using spoken input
US6742021B1 (en) 1999-01-05 2004-05-25 Sri International, Inc. Navigating network-based electronic information using spoken input with multimodal error feedback
US6851115B1 (en) 1999-01-05 2005-02-01 Sri International Software-based architecture for communication and cooperation among distributed electronic agents
US6513063B1 (en) 1999-01-05 2003-01-28 Sri International Accessing network-based electronic information through scripted online interfaces using spoken input
US7152070B1 (en) 1999-01-08 2006-12-19 The Regents Of The University Of California System and method for integrating and accessing multiple data sources within a data warehouse architecture
JP2000207167A (en) 1999-01-14 2000-07-28 Hewlett Packard Co Method for describing language for hyper presentation, hyper presentation system, mobile computer and hyper presentation method
US6282507B1 (en) 1999-01-29 2001-08-28 Sony Corporation Method and apparatus for interactive source language expression recognition and alternative hypothesis presentation and selection
US6505183B1 (en) 1999-02-04 2003-01-07 Authoria, Inc. Human resource knowledge modeling and delivery system
US20020095290A1 (en) 1999-02-05 2002-07-18 Jonathan Kahn Speech recognition program mapping tool to align an audio file to verbatim text
US6317718B1 (en) 1999-02-26 2001-11-13 Accenture Properties (2) B.V. System, method and article of manufacture for location-based filtering for shopping agent in the physical world
GB9904662D0 (en) 1999-03-01 1999-04-21 Canon Kk Natural language search method and apparatus
US6356905B1 (en) 1999-03-05 2002-03-12 Accenture Llp System, method and article of manufacture for mobile communication utilizing an interface support framework
US6928404B1 (en) 1999-03-17 2005-08-09 International Business Machines Corporation System and methods for acoustic and language modeling for automatic speech recognition with large vocabularies
US6584464B1 (en) 1999-03-19 2003-06-24 Ask Jeeves, Inc. Grammar template query system
WO2000058946A1 (en) 1999-03-26 2000-10-05 Koninklijke Philips Electronics N.V. Client-server speech recognition
US6356854B1 (en) 1999-04-05 2002-03-12 Delphi Technologies, Inc. Holographic object position and type sensing system and method
US6631346B1 (en) 1999-04-07 2003-10-07 Matsushita Electric Industrial Co., Ltd. Method and apparatus for natural language parsing using multiple passes and tags
WO2000060435A2 (en) 1999-04-07 2000-10-12 Rensselaer Polytechnic Institute System and method for accessing personal information
US6647260B2 (en) 1999-04-09 2003-11-11 Openwave Systems Inc. Method and system facilitating web based provisioning of two-way mobile communications devices
US6924828B1 (en) 1999-04-27 2005-08-02 Surfnotes Method and apparatus for improved information representation
US6697780B1 (en) 1999-04-30 2004-02-24 At&T Corp. Method and apparatus for rapid acoustic unit selection from a large speech corpus
US20020032564A1 (en) 2000-04-19 2002-03-14 Farzad Ehsani Phrase-based dialogue modeling with particular application to creating a recognition grammar for a voice-controlled user interface
WO2000073936A1 (en) 1999-05-28 2000-12-07 Sehda, Inc. Phrase-based dialogue modeling with particular application to creating recognition grammars for voice-controlled user interfaces
US6931384B1 (en) 1999-06-04 2005-08-16 Microsoft Corporation System and method providing utility-based decision making about clarification dialog given communicative uncertainty
US6598039B1 (en) 1999-06-08 2003-07-22 Albert-Inc. S.A. Natural language interface for searching database
US8065155B1 (en) 1999-06-10 2011-11-22 Gazdzinski Robert F Adaptive advertising apparatus and methods
US7711565B1 (en) 1999-06-10 2010-05-04 Gazdzinski Robert F “Smart” elevator system and method
US6615175B1 (en) 1999-06-10 2003-09-02 Robert F. Gazdzinski “Smart” elevator system and method
US7093693B1 (en) 1999-06-10 2006-08-22 Gazdzinski Robert F Elevator access control system and method
US6711585B1 (en) 1999-06-15 2004-03-23 Kanisa Inc. System and method for implementing a knowledge management system
JP3361291B2 (en) 1999-07-23 2003-01-07 Konami Corp Speech synthesis method, speech synthesis device, and computer-readable medium recording speech synthesis program
US6421672B1 (en) 1999-07-27 2002-07-16 Verizon Services Corp. Apparatus for and method of disambiguation of directory listing searches utilizing multiple selectable secondary search keys
US6628808B1 (en) 1999-07-28 2003-09-30 Datacard Corporation Apparatus and method for verifying a scanned image
US6622121B1 (en) 1999-08-20 2003-09-16 International Business Machines Corporation Testing speech recognition systems using test data generated by text-to-speech conversion
EP1079387A3 (en) 1999-08-26 2003-07-09 Matsushita Electric Industrial Co., Ltd. Mechanism for storing information about recorded television broadcasts
US6697824B1 (en) 1999-08-31 2004-02-24 Accenture Llp Relationship management in an E-commerce application framework
US6601234B1 (en) 1999-08-31 2003-07-29 Accenture Llp Attribute dictionary in a business logic services environment
US6912499B1 (en) 1999-08-31 2005-06-28 Nortel Networks Limited Method and apparatus for training a multilingual speech model set
US7127403B1 (en) 1999-09-13 2006-10-24 Microstrategy, Inc. System and method for personalizing an interactive voice broadcast of a voice service based on particulars of a request
US6601026B2 (en) 1999-09-17 2003-07-29 Discern Communications, Inc. Information retrieval by natural language querying
US6625583B1 (en) 1999-10-06 2003-09-23 Goldman, Sachs & Co. Handheld trading system interface
US6505175B1 (en) 1999-10-06 2003-01-07 Goldman, Sachs & Co. Order centric tracking system
US7020685B1 (en) 1999-10-08 2006-03-28 Openwave Systems Inc. Method and apparatus for providing internet content to SMS-based wireless devices
AU8030300A (en) 1999-10-19 2001-04-30 Sony Electronics Inc. Natural language interface control system
US6807574B1 (en) 1999-10-22 2004-10-19 Tellme Networks, Inc. Method and apparatus for content personalization over a telephone interface
JP2001125896A (en) 1999-10-26 2001-05-11 Victor Co Of Japan Ltd Natural language interactive system
US7310600B1 (en) 1999-10-28 2007-12-18 Canon Kabushiki Kaisha Language recognition using a similarity measure
GB2355834A (en) 1999-10-29 2001-05-02 Nokia Mobile Phones Ltd Speech recognition
US9076448B2 (en) 1999-11-12 2015-07-07 Nuance Communications, Inc. Distributed real time speech recognition system
US6633846B1 (en) 1999-11-12 2003-10-14 Phoenix Solutions, Inc. Distributed realtime speech recognition system
US6615172B1 (en) 1999-11-12 2003-09-02 Phoenix Solutions, Inc. Intelligent query engine for processing voice based queries
US7392185B2 (en) 1999-11-12 2008-06-24 Phoenix Solutions, Inc. Speech based learning/training system using semantic decoding
US6665640B1 (en) 1999-11-12 2003-12-16 Phoenix Solutions, Inc. Interactive speech based learning/training system formulating search queries based on natural language parsing of recognized user queries
US7050977B1 (en) 1999-11-12 2006-05-23 Phoenix Solutions, Inc. Speech-enabled server for internet website and method
US7725307B2 (en) 1999-11-12 2010-05-25 Phoenix Solutions, Inc. Query engine for processing voice based queries including semantic decoding
US7412643B1 (en) * 1999-11-23 2008-08-12 International Business Machines Corporation Method and apparatus for linking representation and realization data
US6532446B1 (en) 1999-11-24 2003-03-11 Openwave Systems Inc. Server based speech recognition user interface for wireless devices
US6526382B1 (en) 1999-12-07 2003-02-25 Comverse, Inc. Language-oriented user interfaces for voice activated services
US7024363B1 (en) 1999-12-14 2006-04-04 International Business Machines Corporation Methods and apparatus for contingent transfer and execution of spoken language interfaces
US6397186B1 (en) 1999-12-22 2002-05-28 Ambush Interactive, Inc. Hands-free, voice-operated remote control transmitter
US6526395B1 (en) 1999-12-31 2003-02-25 Intel Corporation Application of personality models and interaction with synthetic characters in a computing system
US6556983B1 (en) 2000-01-12 2003-04-29 Microsoft Corporation Methods and apparatus for finding semantic information, such as usage logs, similar to a query using a pattern lattice data space
US6546388B1 (en) 2000-01-14 2003-04-08 International Business Machines Corporation Metadata search results ranking system
US6701294B1 (en) 2000-01-19 2004-03-02 Lucent Technologies, Inc. User interface for translating natural language inquiries into database queries and data presentations
US6829603B1 (en) 2000-02-02 2004-12-07 International Business Machines Corp. System, method and program product for interactive natural dialog
US6895558B1 (en) 2000-02-11 2005-05-17 Microsoft Corporation Multi-access mode electronic personal assistant
US6640098B1 (en) 2000-02-14 2003-10-28 Action Engine Corporation System for obtaining service-related information for local interactive wireless devices
WO2001063382A2 (en) 2000-02-25 2001-08-30 Synquiry Technologies, Ltd. Conceptual factoring and unification of graphs representing semantic models
US6895380B2 (en) 2000-03-02 2005-05-17 Electro Standards Laboratories Voice actuation with contextual learning for intelligent machine control
US6449620B1 (en) 2000-03-02 2002-09-10 Nimble Technology, Inc. Method and apparatus for generating information pages using semi-structured data stored in a structured manner
US6466654B1 (en) 2000-03-06 2002-10-15 Avaya Technology Corp. Personal virtual assistant with semantic tagging
US6757362B1 (en) 2000-03-06 2004-06-29 Avaya Technology Corp. Personal virtual assistant
EP1275042A2 (en) 2000-03-06 2003-01-15 Kanisa Inc. A system and method for providing an intelligent multi-step dialog with a user
US6477488B1 (en) 2000-03-10 2002-11-05 Apple Computer, Inc. Method for dynamic context scope selection in hybrid n-gram+LSA language modeling
US6615220B1 (en) 2000-03-14 2003-09-02 Oracle International Corporation Method and mechanism for data consolidation
US6510417B1 (en) 2000-03-21 2003-01-21 America Online, Inc. System and method for voice access to internet-based information
GB2366009B (en) 2000-03-22 2004-07-21 Canon Kk Natural language machine interface
US20020035474A1 (en) 2000-07-18 2002-03-21 Ahmet Alpdemir Voice-interactive marketplace providing time and money saving benefits and real-time promotion publishing and feedback
US6934684B2 (en) 2000-03-24 2005-08-23 Dialsurf, Inc. Voice-interactive marketplace providing promotion and promotion tracking, loyalty reward and redemption, and other features
JP3728172B2 (en) 2000-03-31 2005-12-21 Canon Inc Speech synthesis method and apparatus
US7177798B2 (en) 2000-04-07 2007-02-13 Rensselaer Polytechnic Institute Natural language interface using constrained intermediate dictionary of results
US6865533B2 (en) 2000-04-21 2005-03-08 Lessac Technology Inc. Text to speech
US7107204B1 (en) 2000-04-24 2006-09-12 Microsoft Corporation Computer-aided writing system and method with cross-language writing wizard
US6810379B1 (en) 2000-04-24 2004-10-26 Sensory, Inc. Client/server architecture for text-to-speech synthesis
WO2001084535A2 (en) 2000-05-02 2001-11-08 Dragon Systems, Inc. Error correction in speech recognition
AU2001263397A1 (en) 2000-05-24 2001-12-03 Stars 1-To-1 Interactive voice communication method and system for information and entertainment
US20020042707A1 (en) 2000-06-19 2002-04-11 Gang Zhao Grammar-packaged parsing
US6680675B1 (en) 2000-06-21 2004-01-20 Fujitsu Limited Interactive to-do list item notification system including GPS interface
US6691111B2 (en) 2000-06-30 2004-02-10 Research In Motion Limited System and method for implementing a natural language user interface
US6684187B1 (en) 2000-06-30 2004-01-27 At&T Corp. Method and system for preselection of suitable units for concatenative speech
US6505158B1 (en) 2000-07-05 2003-01-07 At&T Corp. Synthesis-based pre-selection of suitable units for concatenative speech
JP3949356B2 (en) 2000-07-12 2007-07-25 Mitsubishi Electric Corp Spoken dialogue system
TW521266B (en) 2000-07-13 2003-02-21 Verbaltek Inc Perceptual phonetic feature speech recognition system and method
US7139709B2 (en) 2000-07-20 2006-11-21 Microsoft Corporation Middleware layer between speech related applications and engines
US20060143007A1 (en) 2000-07-24 2006-06-29 Koh V E User interaction with voice information services
JP2002041276A (en) 2000-07-24 2002-02-08 Sony Corp Interactive operation-supporting system, interactive operation-supporting method and recording medium
US7092928B1 (en) 2000-07-31 2006-08-15 Quantum Leap Research, Inc. Intelligent portal engine
US7853664B1 (en) 2000-07-31 2010-12-14 Landmark Digital Services Llc Method and system for purchasing pre-recorded music
US6778951B1 (en) 2000-08-09 2004-08-17 Concerto Software, Inc. Information retrieval method with natural language interface
US6766320B1 (en) 2000-08-24 2004-07-20 Microsoft Corporation Search engine with natural language-based robust parsing for user query and relevance feedback learning
DE10042944C2 (en) 2000-08-31 2003-03-13 Siemens Ag Grapheme-phoneme conversion
US6799098B2 (en) 2000-09-01 2004-09-28 Beltpack Corporation Remote control system for a locomotive using voice commands
WO2002023796A1 (en) 2000-09-11 2002-03-21 Sentrycom Ltd. A biometric-based system and method for enabling authentication of electronic messages sent over a network
JP3784289B2 (en) 2000-09-12 2006-06-07 Matsushita Electric Ind Co Ltd Media editing method and apparatus
AU2001290882A1 (en) 2000-09-15 2002-03-26 Lernout And Hauspie Speech Products N.V. Fast waveform synchronization for concatenation and time-scale modification of speech
US6795806B1 (en) 2000-09-20 2004-09-21 International Business Machines Corporation Method for enhancing dictation and command discrimination
AU2001295080A1 (en) 2000-09-29 2002-04-08 Professorq, Inc. Natural-language voice-activated personal assistant
US7219058B1 (en) 2000-10-13 2007-05-15 At&T Corp. System and method for processing speech recognition results
US6832194B1 (en) 2000-10-26 2004-12-14 Sensory, Incorporated Audio recognition peripheral system
US7027974B1 (en) 2000-10-27 2006-04-11 Science Applications International Corporation Ontology-based parser for natural language processing
US7006969B2 (en) 2000-11-02 2006-02-28 At&T Corp. System and method of pattern recognition in very high-dimensional space
TW518482B (en) * 2000-11-10 2003-01-21 Future Display Systems Inc Method for taking notes on an article displayed by an electronic book
JP2002169588A (en) 2000-11-16 2002-06-14 International Business Machines Corp. Text display device, text display control method, storage medium, program transmission device, and reception supporting method
US6957076B2 (en) 2000-11-22 2005-10-18 Denso Corporation Location specific reminders for wireless mobiles
US20040085162A1 (en) 2000-11-29 2004-05-06 Rajeev Agarwal Method and apparatus for providing a mixed-initiative dialog between a user and a machine
US20020067308A1 (en) 2000-12-06 2002-06-06 Xerox Corporation Location/time-based reminder for personal electronic devices
WO2002050816A1 (en) 2000-12-18 2002-06-27 Koninklijke Philips Electronics N.V. Store speech, select vocabulary to recognize word
US20040190688A1 (en) 2003-03-31 2004-09-30 Timmins Timothy A. Communications methods and systems using voiceprints
TW490655B (en) 2000-12-27 2002-06-11 Winbond Electronics Corp Method and device for recognizing authorized users using voice spectrum information
US6937986B2 (en) 2000-12-28 2005-08-30 Comverse, Inc. Automatic dynamic speech recognition vocabulary based on external sources of information
US20020133347A1 (en) 2000-12-29 2002-09-19 Eberhard Schoneburg Method and apparatus for natural language dialog interface
MXPA02008345A (en) 2000-12-29 2002-12-13 Gen Electric Method and system for identifying repeatedly malfunctioning equipment.
US7257537B2 (en) 2001-01-12 2007-08-14 International Business Machines Corporation Method and apparatus for performing dialog management in a computer conversational interface
US7085723B2 (en) 2001-01-12 2006-08-01 International Business Machines Corporation System and method for determining utterance context in a multi-context speech application
JP2002229955A (en) 2001-02-02 2002-08-16 Matsushita Electric Ind Co Ltd Information terminal device and authentication system
US6964023B2 (en) 2001-02-05 2005-11-08 International Business Machines Corporation System and method for multi-modal focus detection, referential ambiguity resolution and mood classification using multi-modal input
US6885987B2 (en) 2001-02-09 2005-04-26 Fastmobile, Inc. Method and apparatus for encoding and decoding pause information
US7171365B2 (en) 2001-02-16 2007-01-30 International Business Machines Corporation Tracking time using portable recorders and speech recognition
US6622136B2 (en) 2001-02-16 2003-09-16 Motorola, Inc. Interactive tool for semi-automatic creation of a domain model
US7290039B1 (en) 2001-02-27 2007-10-30 Microsoft Corporation Intent based processing
GB2372864B (en) 2001-02-28 2005-09-07 Vox Generation Ltd Spoken language interface
US6721728B2 (en) 2001-03-02 2004-04-13 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration System, method and apparatus for discovering phrases in a database
US7366979B2 (en) 2001-03-09 2008-04-29 Copernicus Investments, Llc Method and apparatus for annotating a document
US7216073B2 (en) 2001-03-13 2007-05-08 Intelligate, Ltd. Dynamic natural language understanding
US6677929B2 (en) 2001-03-21 2004-01-13 Agilent Technologies, Inc. Optical pseudo trackball controls the operation of an appliance or machine
JP3925611B2 (en) 2001-03-22 2007-06-06 Seiko Epson Corp. Information providing system, information providing apparatus, program, information storage medium, and user interface setting method
US7058889B2 (en) * 2001-03-23 2006-06-06 Koninklijke Philips Electronics N.V. Synchronizing text/visual information with audio playback
US6738743B2 (en) 2001-03-28 2004-05-18 Intel Corporation Unified client-server distributed architectures for spoken dialogue systems
US6996531B2 (en) 2001-03-30 2006-02-07 Comverse Ltd. Automated database assistance using a telephone for a speech based or text based multimedia communication mode
GB0110326D0 (en) 2001-04-27 2001-06-20 Ibm Method and apparatus for interoperation between legacy software and screen reader programs
US6654740B2 (en) 2001-05-08 2003-11-25 Sunflare Co., Ltd. Probabilistic information retrieval based on differential latent semantic space
JP4369132B2 (en) 2001-05-10 2009-11-18 Koninklijke Philips Electronics N.V. Background learning of speaker voice
US7085722B2 (en) 2001-05-14 2006-08-01 Sony Computer Entertainment America Inc. System and method for menu-driven voice control of characters in a game environment
JP2002344880A (en) 2001-05-22 2002-11-29 Megafusion Corp Contents distribution system
US7020663B2 (en) * 2001-05-30 2006-03-28 George M. Hay System and method for the delivery of electronic books
US6944594B2 (en) 2001-05-30 2005-09-13 Bellsouth Intellectual Property Corporation Multi-context conversational environment system and method
US20020194003A1 (en) 2001-06-05 2002-12-19 Mozer Todd F. Client-server security system and method
US20020198714A1 (en) 2001-06-26 2002-12-26 Guojun Zhou Statistical spoken dialog system
US7139722B2 (en) 2001-06-27 2006-11-21 Bellsouth Intellectual Property Corporation Location and time sensitive wireless calendaring
US6604059B2 (en) 2001-07-10 2003-08-05 Koninklijke Philips Electronics N.V. Predictive calendar
US7987151B2 (en) 2001-08-10 2011-07-26 General Dynamics Advanced Info Systems, Inc. Apparatus and method for problem solving using intelligent agents
US6813491B1 (en) 2001-08-31 2004-11-02 Openwave Systems Inc. Method and apparatus for adapting settings of wireless communication devices in accordance with user proximity
US7313526B2 (en) 2001-09-05 2007-12-25 Voice Signal Technologies, Inc. Speech recognition using selectable recognition modes
US7953447B2 (en) 2001-09-05 2011-05-31 Vocera Communications, Inc. Voice-controlled communications system and method using a badge application
US7403938B2 (en) 2001-09-24 2008-07-22 Iac Search & Media, Inc. Natural language query processing
US20050196732A1 (en) 2001-09-26 2005-09-08 Scientific Learning Corporation Method and apparatus for automated training of language learning skills
US6985865B1 (en) 2001-09-26 2006-01-10 Sprint Spectrum L.P. Method and system for enhanced response to voice commands in a voice command platform
US6650735B2 (en) 2001-09-27 2003-11-18 Microsoft Corporation Integrated voice access to a variety of personal information services
US7324947B2 (en) 2001-10-03 2008-01-29 Promptu Systems Corporation Global speech user interface
US7167832B2 (en) 2001-10-15 2007-01-23 At&T Corp. Method for dialog management
GB2381409B (en) 2001-10-27 2004-04-28 Hewlett Packard Ltd Asynchronous access to synchronous voice services
NO316480B1 (en) 2001-11-15 2004-01-26 Forinnova As Method and system for textual examination and discovery
US7747655B2 (en) 2001-11-19 2010-06-29 Ricoh Co. Ltd. Printable representations for time-based media
US20030101054A1 (en) 2001-11-27 2003-05-29 Ncc, Llc Integrated system and method for electronic speech recognition and transcription
JP2003163745A (en) 2001-11-28 2003-06-06 Matsushita Electric Ind Co Ltd Telephone set, interactive responder, interactive responding terminal, and interactive response system
US7483832B2 (en) 2001-12-10 2009-01-27 At&T Intellectual Property I, L.P. Method and system for customizing voice translation of text to speech
US7490039B1 (en) 2001-12-13 2009-02-10 Cisco Technology, Inc. Text to speech system and method having interactive spelling capabilities
TW541517B (en) 2001-12-25 2003-07-11 Univ Nat Cheng Kung Speech recognition system
US20030167335A1 (en) 2002-03-04 2003-09-04 Vigilos, Inc. System and method for network-based communication
KR100760666B1 (en) 2002-03-27 2007-09-20 Nokia Corporation Pattern recognition
US7197460B1 (en) 2002-04-23 2007-03-27 At&T Corp. System for handling frequently asked questions in a natural language dialog service
US6847966B1 (en) 2002-04-24 2005-01-25 Engenium Corporation Method and system for optimally searching a document database using a representative semantic space
US7546382B2 (en) 2002-05-28 2009-06-09 International Business Machines Corporation Methods and systems for authoring of mixed-initiative multi-modal interactions and related browsing mechanisms
US7634532B2 (en) 2002-05-31 2009-12-15 Onkyo Corporation Network type content reproduction system
US7398209B2 (en) 2002-06-03 2008-07-08 Voicebox Technologies, Inc. Systems and methods for responding to natural language speech utterance
US20030233230A1 (en) 2002-06-12 2003-12-18 Lucent Technologies Inc. System and method for representing and resolving ambiguity in spoken dialogue systems
US6999066B2 (en) 2002-06-24 2006-02-14 Xerox Corporation System for audible feedback for touch screen displays
CN1663249A (en) 2002-06-24 2005-08-31 Matsushita Electric Industrial Co., Ltd. Metadata preparing device, preparing method therefor and retrieving device
US7299033B2 (en) 2002-06-28 2007-11-20 Openwave Systems Inc. Domain-based management of distribution of digital content from multiple suppliers to multiple wireless services subscribers
US7233790B2 (en) 2002-06-28 2007-06-19 Openwave Systems, Inc. Device capability based discovery, packaging and provisioning of content for wireless mobile devices
US7693720B2 (en) 2002-07-15 2010-04-06 Voicebox Technologies, Inc. Mobile systems and methods for responding to natural language speech utterance
US7567902B2 (en) 2002-09-18 2009-07-28 Nuance Communications, Inc. Generating speech recognition grammars from a large corpus of data
US7467087B1 (en) 2002-10-10 2008-12-16 Gillick Laurence S Training and using pronunciation guessers in speech recognition
JP2004152063A (en) 2002-10-31 2004-05-27 Nec Corp Structuring method, structuring device and structuring program of multimedia contents, and providing method thereof
AU2003293071A1 (en) 2002-11-22 2004-06-18 Roy Rosser Autonomous response engine
US7684985B2 (en) 2002-12-10 2010-03-23 Richard Dominach Techniques for disambiguating speech input using multimodal interfaces
US7386449B2 (en) 2002-12-11 2008-06-10 Voice Enabling Systems Technology Inc. Knowledge-based flexible natural speech dialogue system
US7956766B2 (en) 2003-01-06 2011-06-07 Panasonic Corporation Apparatus operating system
US20040152055A1 (en) * 2003-01-30 2004-08-05 Gliessner Michael J.G. Video based language learning system
US7529671B2 (en) 2003-03-04 2009-05-05 Microsoft Corporation Block synchronous decoding
US6980949B2 (en) 2003-03-14 2005-12-27 Sonum Technologies, Inc. Natural language processor
US20040186714A1 (en) 2003-03-18 2004-09-23 Aurilab, Llc Speech recognition improvement through post-processing
US20060217967A1 (en) 2003-03-20 2006-09-28 Doug Goertzen System and methods for storing and presenting personal information
US7496498B2 (en) 2003-03-24 2009-02-24 Microsoft Corporation Front-end architecture for a multi-lingual text-to-speech system
US20040220798A1 (en) 2003-05-01 2004-11-04 Visteon Global Technologies, Inc. Remote voice identification system
US7421393B1 (en) 2004-03-01 2008-09-02 At&T Corp. System for developing a dialog manager using modular spoken-dialog components
US20050045373A1 (en) 2003-05-27 2005-03-03 Joseph Born Portable media device with audio prompt menu
US7200559B2 (en) 2003-05-29 2007-04-03 Microsoft Corporation Semantic object synchronous understanding implemented with speech application language tags
US7720683B1 (en) 2003-06-13 2010-05-18 Sensory, Inc. Method and apparatus of specifying and performing speech recognition operations
US7580551B1 (en) 2003-06-30 2009-08-25 The Research Foundation Of State University Of Ny Method and apparatus for analyzing and/or comparing handwritten and/or biometric samples
US20050015772A1 (en) 2003-07-16 2005-01-20 Saare John E. Method and system for device specific application optimization via a portal server
JP4551635B2 (en) 2003-07-31 2010-09-29 Sony Corp. Pipeline processing system and information processing apparatus
JP2005070645A (en) 2003-08-27 2005-03-17 Casio Comput Co Ltd Text and voice synchronizing device and text and voice synchronization processing program
US7475010B2 (en) 2003-09-03 2009-01-06 Lingospot, Inc. Adaptive and scalable method for resolving natural language ambiguities
US7418392B1 (en) 2003-09-25 2008-08-26 Sensory, Inc. System and method for controlling the operation of a device by voice commands
US7460652B2 (en) 2003-09-26 2008-12-02 At&T Intellectual Property I, L.P. VoiceXML and rule engine based switchboard for interactive voice response (IVR) services
US7155706B2 (en) 2003-10-24 2006-12-26 Microsoft Corporation Administrative tool environment
US7292726B2 (en) 2003-11-10 2007-11-06 Microsoft Corporation Recognition of electronic ink with late strokes
US7302099B2 (en) 2003-11-10 2007-11-27 Microsoft Corporation Stroke segmentation for template-based cursive handwriting recognition
US7584092B2 (en) 2004-11-15 2009-09-01 Microsoft Corporation Unsupervised learning of paraphrase/translation alternations and selective application thereof
US7412385B2 (en) 2003-11-12 2008-08-12 Microsoft Corporation System for identifying paraphrases using machine translation
US20050108074A1 (en) 2003-11-14 2005-05-19 Bloechl Peter E. Method and system for prioritization of task items
US7447630B2 (en) 2003-11-26 2008-11-04 Microsoft Corporation Method and apparatus for multi-sensory speech enhancement
DE602004016681D1 (en) 2003-12-05 2008-10-30 Kenwood Corp Audio device control device, audio device control method and program
WO2005059895A1 (en) 2003-12-16 2005-06-30 Loquendo S.P.A. Text-to-speech method and system, computer program product therefor
US7427024B1 (en) 2003-12-17 2008-09-23 Gazdzinski Mark J Chattel management apparatus and methods
JP2005189454A (en) 2003-12-25 2005-07-14 Casio Comput Co Ltd Text synchronous speech reproduction controller and program
US7552055B2 (en) 2004-01-10 2009-06-23 Microsoft Corporation Dialog component re-use in recognition systems
US7298904B2 (en) 2004-01-14 2007-11-20 International Business Machines Corporation Method and apparatus for scaling handwritten character input for handwriting recognition
AU2005207606B2 (en) 2004-01-16 2010-11-11 Nuance Communications, Inc. Corpus-based speech synthesis based on segment recombination
US20050165607A1 (en) 2004-01-22 2005-07-28 At&T Corp. System and method to disambiguate and clarify user intention in a spoken dialog system
DE602004017955D1 (en) 2004-01-29 2009-01-08 Daimler Ag Method and system for voice dialogue interface
KR100612839B1 (en) 2004-02-18 2006-08-18 Samsung Electronics Co., Ltd. Method and apparatus for domain-based dialog speech recognition
KR100462292B1 (en) 2004-02-26 2004-12-17 NHN Corp. A method for providing search results list based on importance information and a system thereof
US7505906B2 (en) 2004-02-26 2009-03-17 At&T Intellectual Property, Ii System and method for augmenting spoken language understanding by correcting common errors in linguistic performance
US7693715B2 (en) 2004-03-10 2010-04-06 Microsoft Corporation Generating large units of graphonemes with mutual information criterion for letter to sound conversion
US7478033B2 (en) 2004-03-16 2009-01-13 Google Inc. Systems and methods for translating Chinese pinyin to Chinese characters
US7084758B1 (en) 2004-03-19 2006-08-01 Advanced Micro Devices, Inc. Location-based reminders
US7409337B1 (en) 2004-03-30 2008-08-05 Microsoft Corporation Natural language processing interface
US7496512B2 (en) 2004-04-13 2009-02-24 Microsoft Corporation Refining of segmental boundaries in speech waveforms using contextual-dependent models
US20050273626A1 (en) 2004-06-02 2005-12-08 Steven Pearson System and method for portable authentication
US8095364B2 (en) 2004-06-02 2012-01-10 Tegic Communications, Inc. Multimodal disambiguation of speech recognition
US20050289463A1 (en) 2004-06-23 2005-12-29 Google Inc., A Delaware Corporation Systems and methods for spell correction of non-roman characters and words
US7720674B2 (en) 2004-06-29 2010-05-18 Sap Ag Systems and methods for processing natural language queries
US7228278B2 (en) 2004-07-06 2007-06-05 Voxify, Inc. Multi-slot dialog systems and methods
JP2006023860A (en) 2004-07-06 2006-01-26 Sharp Corp Information browser, information browsing program, information browsing program recording medium, and information browsing system
JP4652737B2 (en) 2004-07-14 2011-03-16 International Business Machines Corporation Word boundary probability estimation device and method, probabilistic language model construction device and method, kana-kanji conversion device and method, and unknown word model construction method
US7559089B2 (en) 2004-07-23 2009-07-07 Findaway World, Inc. Personal media player apparatus and method
TWI252049B (en) 2004-07-23 2006-03-21 Inventec Corp Sound control system and method
US7725318B2 (en) 2004-07-30 2010-05-25 Nice Systems Inc. System and method for improving the accuracy of audio searching
AU2005273948B2 (en) * 2004-08-09 2010-02-04 The Nielsen Company (Us), Llc Methods and apparatus to monitor audio/visual content from various sources
US7853574B2 (en) 2004-08-26 2010-12-14 International Business Machines Corporation Method of generating a context-inferenced search query and of sorting a result of the query
KR20060022001A (en) 2004-09-06 2006-03-09 Hyundai Mobis Co., Ltd. Button mounting structure for a car audio
US20060061488A1 (en) 2004-09-17 2006-03-23 Dunton Randy R Location based task reminder
US7716056B2 (en) 2004-09-27 2010-05-11 Robert Bosch Corporation Method and system for interactive conversational dialogue for cognitively overloaded device users
US7603381B2 (en) 2004-09-30 2009-10-13 Microsoft Corporation Contextual action publishing
US8107401B2 (en) 2004-09-30 2012-01-31 Avaya Inc. Method and apparatus for providing a virtual assistant to a communication participant
US7693719B2 (en) 2004-10-29 2010-04-06 Microsoft Corporation Providing personalized voice font for text-to-speech applications
US7735012B2 (en) 2004-11-04 2010-06-08 Apple Inc. Audio user interface for computing devices
US7546235B2 (en) 2004-11-15 2009-06-09 Microsoft Corporation Unsupervised learning of paraphrase/translation alternations and selective application thereof
US7552046B2 (en) 2004-11-15 2009-06-23 Microsoft Corporation Unsupervised learning of paraphrase/translation alternations and selective application thereof
US7885844B1 (en) 2004-11-16 2011-02-08 Amazon Technologies, Inc. Automatically generating task recommendations for human task performers
US7702500B2 (en) 2004-11-24 2010-04-20 Blaedow Karen R Method and apparatus for determining the meaning of natural language
CN1609859A (en) 2004-11-26 2005-04-27 孙斌 Search result clustering method
US7376645B2 (en) 2004-11-29 2008-05-20 The Intellection Group, Inc. Multimodal natural language query system and architecture for processing voice and proximity-based queries
US20080255837A1 (en) 2004-11-30 2008-10-16 Jonathan Kahn Method for locating an audio segment within an audio file
US20060122834A1 (en) 2004-12-03 2006-06-08 Bennett Ian M Emotion detection device & method for use in distributed systems
US8214214B2 (en) 2004-12-03 2012-07-03 Phoenix Solutions, Inc. Emotion detection device and method for use in distributed systems
US7636657B2 (en) 2004-12-09 2009-12-22 Microsoft Corporation Method and apparatus for automatic grammar generation from data entries
US8478589B2 (en) 2005-01-05 2013-07-02 At&T Intellectual Property Ii, L.P. Library of existing spoken dialog data for use in generating new natural language spoken dialog systems
US7873654B2 (en) 2005-01-24 2011-01-18 The Intellection Group, Inc. Multimodal natural language query system for processing and analyzing voice and proximity-based queries
US7508373B2 (en) 2005-01-28 2009-03-24 Microsoft Corporation Form factor and input method for language input
GB0502259D0 (en) 2005-02-03 2005-03-09 British Telecomm Document searching tool and method
US7949533B2 (en) 2005-02-04 2011-05-24 Vococollect, Inc. Methods and systems for assessing and improving the performance of a speech recognition system
US20060194181A1 (en) * 2005-02-28 2006-08-31 Outland Research, Llc Method and apparatus for electronic books with enhanced educational features
US7676026B1 (en) 2005-03-08 2010-03-09 Baxtech Asia Pte Ltd Desktop telephony system
US7925525B2 (en) 2005-03-25 2011-04-12 Microsoft Corporation Smart reminders
US7721301B2 (en) 2005-03-31 2010-05-18 Microsoft Corporation Processing files from a mobile device using voice commands
US20080120342A1 (en) * 2005-04-07 2008-05-22 Iofy Corporation System and Method for Providing Data to be Used in a Presentation on a Device
US7684990B2 (en) 2005-04-29 2010-03-23 Nuance Communications, Inc. Method and apparatus for multiple value confirmation and correction in spoken dialog systems
WO2006129967A1 (en) 2005-05-30 2006-12-07 Daumsoft, Inc. Conversation system and method using conversational agent
US8041570B2 (en) 2005-05-31 2011-10-18 Robert Bosch Corporation Dialogue management using scripts
US8024195B2 (en) 2005-06-27 2011-09-20 Sensory, Inc. Systems and methods of performing speech recognition using historical information
US8396715B2 (en) 2005-06-28 2013-03-12 Microsoft Corporation Confidence threshold tuning
US7925995B2 (en) 2005-06-30 2011-04-12 Microsoft Corporation Integration of location logs, GPS signals, and spatial resources for identifying user activities, goals, and context
US7826945B2 (en) 2005-07-01 2010-11-02 You Zhang Automobile speech-recognition interface
US20070027732A1 (en) 2005-07-28 2007-02-01 Accu-Spatial, Llc Context-sensitive, location-dependent information delivery at a construction site
US20070073726A1 (en) 2005-08-05 2007-03-29 Klein Eric N Jr System and method for queuing purchase transactions
US7640160B2 (en) 2005-08-05 2009-12-29 Voicebox Technologies, Inc. Systems and methods for responding to natural language speech utterance
US7362738B2 (en) 2005-08-09 2008-04-22 Deere & Company Method and system for delivering information to a user
US7620549B2 (en) 2005-08-10 2009-11-17 Voicebox Technologies, Inc. System and method of supporting adaptive misrecognition in conversational speech
US20070041361A1 (en) 2005-08-15 2007-02-22 Nokia Corporation Apparatus and methods for implementing an in-call voice user interface using context information
US7949529B2 (en) 2005-08-29 2011-05-24 Voicebox Technologies, Inc. Mobile systems and methods of supporting natural language human-machine interactions
US8265939B2 (en) 2005-08-31 2012-09-11 Nuance Communications, Inc. Hierarchical methods and apparatus for extracting user intent from spoken utterances
US7634409B2 (en) 2005-08-31 2009-12-15 Voicebox Technologies, Inc. Dynamic speech sharpening
US8677377B2 (en) 2005-09-08 2014-03-18 Apple Inc. Method and apparatus for building an intelligent automated assistant
JP4908094B2 (en) 2005-09-30 2012-04-04 Ricoh Co., Ltd. Information processing system, information processing method, and information processing program
US7577522B2 (en) 2005-12-05 2009-08-18 Outland Research, Llc Spatially associated personal reminder system and method
US7930168B2 (en) 2005-10-04 2011-04-19 Robert Bosch Gmbh Natural language processing of disfluent sentences
US8620667B2 (en) 2005-10-17 2013-12-31 Microsoft Corporation Flexible speech-activated command and control
US7707032B2 (en) 2005-10-20 2010-04-27 National Cheng Kung University Method and system for matching speech data
US8229745B2 (en) 2005-10-21 2012-07-24 Nuance Communications, Inc. Creating a mixed-initiative grammar from directed dialog grammars
US20070106674A1 (en) 2005-11-10 2007-05-10 Purusharth Agrawal Field sales process facilitation systems and methods
US20070112572A1 (en) 2005-11-15 2007-05-17 Fail Keith W Method and apparatus for assisting vision impaired individuals with selecting items from a list
US8326629B2 (en) 2005-11-22 2012-12-04 Nuance Communications, Inc. Dynamically changing voice attributes during speech synthesis based upon parameter differentiation for dialog contexts
US20070185926A1 (en) 2005-11-28 2007-08-09 Anand Prahlad Systems and methods for classifying and transferring information in a storage network
KR20070057496A (en) 2005-12-02 2007-06-07 Samsung Electronics Co., Ltd. Liquid crystal display
KR100810500B1 (en) 2005-12-08 2008-03-07 Electronics and Telecommunications Research Institute Method for enhancing usability in a spoken dialog system
US20070156627A1 (en) 2005-12-15 2007-07-05 General Instrument Corporation Method and apparatus for creating and using electronic content bookmarks
GB2433403B (en) 2005-12-16 2009-06-24 Emil Ltd A text editing apparatus and method
US20070211071A1 (en) 2005-12-20 2007-09-13 Benjamin Slotznick Method and apparatus for interacting with a visually displayed document on a screen reader
DE102005061365A1 (en) 2005-12-21 2007-06-28 Siemens Ag Background applications e.g. home banking system, controlling method for use over e.g. user interface, involves associating transactions and transaction parameters over universal dialog specification, and universally operating applications
US7996228B2 (en) 2005-12-22 2011-08-09 Microsoft Corporation Voice initiated network operations
US7599918B2 (en) 2005-12-29 2009-10-06 Microsoft Corporation Dynamic search with implicit user intention mining
JP2007183864A (en) 2006-01-10 2007-07-19 Fujitsu Ltd File retrieval method and system therefor
US20070174188A1 (en) 2006-01-25 2007-07-26 Fish Robert D Electronic marketplace that facilitates transactions between consolidated buyers and/or sellers
IL174107A0 (en) 2006-02-01 2006-08-01 Grois Dan Method and system for advertising by means of a search engine over a data network
JP2007206317A (en) 2006-02-01 2007-08-16 Yamaha Corp Authoring method and apparatus, and program
US8595041B2 (en) 2006-02-07 2013-11-26 Sap Ag Task responsibility system
WO2007099529A1 (en) 2006-02-28 2007-09-07 Sandisk Il Ltd Bookmarked synchronization of files
US7983910B2 (en) 2006-03-03 2011-07-19 International Business Machines Corporation Communicating across voice and text channels with emotion preservation
KR100764174B1 (en) 2006-03-03 2007-10-08 Samsung Electronics Co., Ltd. Apparatus for providing voice dialogue service and method for operating the apparatus
US7752152B2 (en) 2006-03-17 2010-07-06 Microsoft Corporation Using predictive user models for language modeling on a personal device with user behavior models based on statistical modeling
JP4734155B2 (en) 2006-03-24 2011-07-27 Toshiba Corp. Speech recognition apparatus, speech recognition method, and speech recognition program
US7707027B2 (en) 2006-04-13 2010-04-27 Nuance Communications, Inc. Identification and rejection of meaningless input during natural language classification
US8214213B1 (en) 2006-04-27 2012-07-03 At&T Intellectual Property Ii, L.P. Speech recognition based on pronunciation modeling
US20070276714A1 (en) 2006-05-15 2007-11-29 Sap Ag Business process map management
EP1858005A1 (en) 2006-05-19 2007-11-21 Texthelp Systems Limited Streaming speech with synchronized highlighting generated by a server
US20070276651A1 (en) 2006-05-23 2007-11-29 Motorola, Inc. Grammar adaptation through cooperative client and server based speech recognition
US8423347B2 (en) 2006-06-06 2013-04-16 Microsoft Corporation Natural language personal information management
US7523108B2 (en) 2006-06-07 2009-04-21 Platformation, Inc. Methods and apparatus for searching with awareness of geography and languages
US7483894B2 (en) 2006-06-07 2009-01-27 Platformation Technologies, Inc. Methods and apparatus for entity search
US20100257160A1 (en) 2006-06-07 2010-10-07 Yu Cao Methods & apparatus for searching with awareness of different types of information
KR100776800B1 (en) 2006-06-16 2007-11-19 Electronics and Telecommunications Research Institute Method and system (apparatus) for user specific service using intelligent gadget
KR20080001227A (en) 2006-06-29 2008-01-03 LG.Philips LCD Co., Ltd. Apparatus for fixing a lamp of the back-light
US7548895B2 (en) 2006-06-30 2009-06-16 Microsoft Corporation Communication-prompted user assistance
US8050500B1 (en) 2006-07-06 2011-11-01 Senapps, LLC Recognition method and system
TWI312103B (en) 2006-07-17 2009-07-11 Asia Optical Co Inc Image pickup systems and methods
US20080027726A1 (en) * 2006-07-28 2008-01-31 Eric Louis Hansen Text to audio mapping, and animation of the text
US8170790B2 (en) 2006-09-05 2012-05-01 Garmin Switzerland Gmbh Apparatus for switching navigation device mode
US9318108B2 (en) 2010-01-18 2016-04-19 Apple Inc. Intelligent automated assistant
US20080077384A1 (en) 2006-09-22 2008-03-27 International Business Machines Corporation Dynamically translating a software application to a user selected target language that is not natively provided by the software application
US8214208B2 (en) 2006-09-28 2012-07-03 Reqall, Inc. Method and system for sharing portable voice profiles
US7930197B2 (en) 2006-09-28 2011-04-19 Microsoft Corporation Personal data mining
US7528713B2 (en) 2006-09-28 2009-05-05 Ektimisi Semiotics Holdings, Llc Apparatus and method for providing a task reminder based on travel history
US7649454B2 (en) 2006-09-28 2010-01-19 Ektimisi Semiotics Holdings, Llc System and method for providing a task reminder based on historical travel information
US20080082338A1 (en) 2006-09-29 2008-04-03 O'neil Michael P Systems and methods for secure voice identification and medical device interface
US8073681B2 (en) 2006-10-16 2011-12-06 Voicebox Technologies, Inc. System and method for a cooperative conversational voice user interface
US8600760B2 (en) 2006-11-28 2013-12-03 General Motors Llc Correcting substitution errors during automatic speech recognition by accepting a second best when first best is confusable
GB2457855B (en) 2006-11-30 2011-01-12 Nat Inst Of Advanced Ind Scien Speech recognition system and speech recognition system program
US20080129520A1 (en) 2006-12-01 2008-06-05 Apple Computer, Inc. Electronic device with enhanced audio feedback
US8045808B2 (en) 2006-12-04 2011-10-25 Trend Micro Incorporated Pure adversarial approach for identifying text content in images
US20080140413A1 (en) * 2006-12-07 2008-06-12 Jonathan Travis Millman Synchronization of audio to reading
US10185779B2 (en) 2008-03-03 2019-01-22 Oath Inc. Mechanisms for content aggregation, syndication, sharing, and updating
WO2008085742A2 (en) 2007-01-07 2008-07-17 Apple Inc. Portable multifunction device, method and graphical user interface for interacting with user input elements in displayed content
KR100883657B1 (en) 2007-01-26 2009-02-18 Samsung Electronics Co., Ltd. Method and apparatus for searching a music using speech recognition
US7818176B2 (en) 2007-02-06 2010-10-19 Voicebox Technologies, Inc. System and method for selecting and presenting advertisements based on natural language processing of voice-based input
US7822608B2 (en) 2007-02-27 2010-10-26 Nuance Communications, Inc. Disambiguating a speech recognition grammar in a multimodal application
US20080221900A1 (en) 2007-03-07 2008-09-11 Cerra Joseph P Mobile local search environment speech processing facility
US20080256613A1 (en) 2007-03-13 2008-10-16 Grover Noel J Voice print identification portal
US7801729B2 (en) 2007-03-13 2010-09-21 Sensory, Inc. Using multiple attributes to create a voice search playlist
US8219406B2 (en) 2007-03-15 2012-07-10 Microsoft Corporation Speech-centric multimodal user interface design in mobile technology
JP2008250375A (en) 2007-03-29 2008-10-16 Toshiba Corp Character input device, method, and program
US7809610B2 (en) 2007-04-09 2010-10-05 Platformation, Inc. Methods and apparatus for freshness and completeness of information
US8457946B2 (en) 2007-04-26 2013-06-04 Microsoft Corporation Recognition architecture for generating Asian characters
US7983915B2 (en) 2007-04-30 2011-07-19 Sonic Foundry, Inc. Audio content search engine
US8032383B1 (en) 2007-05-04 2011-10-04 Foneweb, Inc. Speech controlled services and devices using internet
US9292807B2 (en) 2007-05-10 2016-03-22 Microsoft Technology Licensing, Llc Recommending actions based on context
US8055708B2 (en) 2007-06-01 2011-11-08 Microsoft Corporation Multimedia spaces
US8204238B2 (en) 2007-06-08 2012-06-19 Sensory, Inc Systems and methods of sonic communication
KR20080109322A (en) 2007-06-12 2008-12-17 LG Electronics Inc. Method and apparatus for providing services by comprehended user's intuited intension
US20080313335A1 (en) 2007-06-15 2008-12-18 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Communicator establishing aspects with context identifying
US8190627B2 (en) 2007-06-28 2012-05-29 Microsoft Corporation Machine assisted query formulation
US8019606B2 (en) 2007-06-29 2011-09-13 Microsoft Corporation Identification and selection of a software application via speech
JP4424382B2 (en) 2007-07-04 2010-03-03 Sony Corp. Content reproduction apparatus and content automatic reception method
JP2009036999A (en) 2007-08-01 2009-02-19 Infocom Corp Interactive method using computer, interactive system, computer program and computer-readable storage medium
KR101359715B1 (en) 2007-08-24 2014-02-10 Samsung Electronics Co., Ltd. Method and apparatus for providing mobile voice web
US8190359B2 (en) 2007-08-31 2012-05-29 Proxpro, Inc. Situation-aware personal information management for a mobile device
US20090058823A1 (en) 2007-09-04 2009-03-05 Apple Inc. Virtual Keyboards in Multi-Language Environment
US8838760B2 (en) 2007-09-14 2014-09-16 Ricoh Co., Ltd. Workflow-enabled provider
KR100920267B1 (en) 2007-09-17 2009-10-05 Electronics and Telecommunications Research Institute System for voice communication analysis and method thereof
US8706476B2 (en) 2007-09-18 2014-04-22 Ariadne Genomics, Inc. Natural language processing method by analyzing primitive sentences, logical clauses, clause types and verbal blocks
US8165886B1 (en) 2007-10-04 2012-04-24 Great Northern Research LLC Speech interface system and method for control and interaction with applications on a computing system
US8036901B2 (en) 2007-10-05 2011-10-11 Sensory, Incorporated Systems and methods of performing speech recognition using sensory inputs of human position
US20090112677A1 (en) 2007-10-24 2009-04-30 Rhett Randolph L Method for automatically developing suggested optimal work schedules from unsorted group and individual task lists
US7840447B2 (en) 2007-10-30 2010-11-23 Leonard Kleinrock Pricing and auctioning of bundled items among multiple sellers and buyers
US7983997B2 (en) 2007-11-02 2011-07-19 Florida Institute For Human And Machine Cognition, Inc. Interactive complex task teaching system that allows for natural language input, recognizes a user's intent, and automatically performs tasks in document object model (DOM) nodes
JP4926004B2 (en) 2007-11-12 2012-05-09 Ricoh Co., Ltd. Document processing apparatus, document processing method, and document processing program
US7890525B2 (en) 2007-11-14 2011-02-15 International Business Machines Corporation Foreign language abbreviation translation in an instant messaging system
US8112280B2 (en) 2007-11-19 2012-02-07 Sensory, Inc. Systems and methods of performing speech recognition with barge-in for use in a bluetooth system
EP2226746B1 (en) * 2007-11-28 2012-01-11 Fujitsu Limited Metallic pipe managed by wireless ic tag, and the wireless ic tag
US8140335B2 (en) 2007-12-11 2012-03-20 Voicebox Technologies, Inc. System and method for providing a natural language voice user interface in an integrated voice navigation services environment
US10002189B2 (en) 2007-12-20 2018-06-19 Apple Inc. Method and apparatus for searching using an active ontology
US8675830B2 (en) 2007-12-21 2014-03-18 Bce Inc. Method and apparatus for interrupting an active telephony session to deliver information to a subscriber
US8219407B1 (en) 2007-12-27 2012-07-10 Great Northern Research, LLC Method for processing the output of a speech recognizer
US20090187577A1 (en) 2008-01-20 2009-07-23 Aviv Reznik System and Method Providing Audio-on-Demand to a User's Personal Online Device as Part of an Online Audio Community
KR101334066B1 (en) 2008-02-11 2013-11-29 이점식 Self-evolving Artificial Intelligent cyber robot system and offer method
US8099289B2 (en) 2008-02-13 2012-01-17 Sensory, Inc. Voice interface and search for electronic devices including bluetooth headsets and remote systems
US20090239552A1 (en) 2008-03-24 2009-09-24 Yahoo! Inc. Location-based opportunistic recommendations
US8958848B2 (en) 2008-04-08 2015-02-17 Lg Electronics Inc. Mobile terminal and menu control method thereof
US8666824B2 (en) 2008-04-23 2014-03-04 Dell Products L.P. Digital media content location and purchasing system
US8594995B2 (en) 2008-04-24 2013-11-26 Nuance Communications, Inc. Multilingual asynchronous communications of speech messages recorded in digital media files
US8249857B2 (en) 2008-04-24 2012-08-21 International Business Machines Corporation Multilingual administration of enterprise data with user selected target language translation
US8285344B2 (en) 2008-05-21 2012-10-09 DP Technlogies, Inc. Method and apparatus for adjusting audio for a user environment
US8589161B2 (en) 2008-05-27 2013-11-19 Voicebox Technologies, Inc. System and method for an integrated, multi-modal, multi-device natural language voice services environment
US8694355B2 (en) 2008-05-30 2014-04-08 Sri International Method and apparatus for automated assistance with task management
US8423288B2 (en) 2009-11-30 2013-04-16 Apple Inc. Dynamic alerts for calendar events
US8166019B1 (en) 2008-07-21 2012-04-24 Sprint Communications Company L.P. Providing suggested actions in response to textual communications
US8756519B2 (en) 2008-09-12 2014-06-17 Google Inc. Techniques for sharing content on a web page
KR101005074B1 (en) 2008-09-18 2010-12-30 주식회사 수현테크 Plastic pipe connection fixing device
US8676904B2 (en) 2008-10-02 2014-03-18 Apple Inc. Electronic devices with voice command and contextual data processing capabilities
US9200913B2 (en) 2008-10-07 2015-12-01 Telecommunication Systems, Inc. User interface for predictive traffic
US8832319B2 (en) * 2008-11-18 2014-09-09 Amazon Technologies, Inc. Synchronization of digital content
US8442824B2 (en) 2008-11-26 2013-05-14 Nuance Communications, Inc. Device, system, and method of liveness detection utilizing voice biometrics
US8140328B2 (en) 2008-12-01 2012-03-20 At&T Intellectual Property I, L.P. User intention based on N-best list of recognition hypotheses for utterances in a dialog
US8489599B2 (en) 2008-12-02 2013-07-16 Palo Alto Research Center Incorporated Context and activity-driven content delivery and interaction
JP5257311B2 (en) 2008-12-05 2013-08-07 Sony Corp. Information processing apparatus and information processing method
AU2009330073B2 (en) 2008-12-22 2012-11-15 Google Llc Asynchronous distributed de-duplication for replicated content addressable storage clusters
WO2010075623A1 (en) 2008-12-31 2010-07-08 Bce Inc. System and method for unlocking a device
US8326637B2 (en) 2009-02-20 2012-12-04 Voicebox Technologies, Inc. System and method for processing multi-modal device interactions in a natural language voice services environment
US20100225809A1 (en) * 2009-03-09 2010-09-09 Sony Corporation And Sony Electronics Inc. Electronic book with enhanced features
US8805823B2 (en) 2009-04-14 2014-08-12 Sri International Content processing systems and methods
JP5911796B2 (en) 2009-04-30 2016-04-27 Samsung Electronics Co., Ltd. User intention inference apparatus and method using multimodal information
KR101032792B1 (en) 2009-04-30 2011-05-06 Kolon Co., Ltd. Polyester fabric for airbag and manufacturing method thereof
KR101581883B1 (en) 2009-04-30 2016-01-11 Samsung Electronics Co., Ltd. Apparatus for detecting voice using motion information and method thereof
WO2010131911A2 (en) * 2009-05-13 2010-11-18 Lee Doohan Multimedia file playing method and multimedia player
US8498857B2 (en) 2009-05-19 2013-07-30 Tata Consultancy Services Limited System and method for rapid prototyping of existing speech recognition solutions in different languages
US9858925B2 (en) 2009-06-05 2018-01-02 Apple Inc. Using context information to facilitate processing of commands in a virtual assistant
US20120311585A1 (en) 2011-06-03 2012-12-06 Apple Inc. Organizing task items that represent tasks to perform
US10540976B2 (en) 2009-06-05 2020-01-21 Apple Inc. Contextual voice commands
KR101562792B1 (en) 2009-06-10 2015-10-23 Samsung Electronics Co., Ltd. Apparatus and method for providing goal predictive interface
US8290777B1 (en) * 2009-06-12 2012-10-16 Amazon Technologies, Inc. Synchronizing the playing and displaying of digital content
US8484027B1 (en) * 2009-06-12 2013-07-09 Skyreader Media Inc. Method for live remote narration of a digital book
US20100324709A1 (en) * 2009-06-22 2010-12-23 Tree Of Life Publishing E-book reader with voice annotation
US9754224B2 (en) 2009-06-26 2017-09-05 International Business Machines Corporation Action based to-do list
US8527278B2 (en) 2009-06-29 2013-09-03 Abraham Ben David Intelligent home automation
US20110047072A1 (en) 2009-08-07 2011-02-24 Visa U.S.A. Inc. Systems and Methods for Propensity Analysis and Validation
US8768313B2 (en) 2009-08-17 2014-07-01 Digimarc Corporation Methods and systems for image or audio recognition processing
EP2473916A4 (en) 2009-09-02 2013-07-10 Stanford Res Inst Int Method and apparatus for exploiting human feedback in an intelligent automated assistant
US8321527B2 (en) 2009-09-10 2012-11-27 Tribal Brands System and method for tracking user location and associated activity and responsively providing mobile device updates
US8768308B2 (en) 2009-09-29 2014-07-01 Deutsche Telekom Ag Apparatus and method for creating and managing personal schedules via context-sensing and actuation
KR20110036385A (en) 2009-10-01 2011-04-07 Samsung Electronics Co., Ltd. Apparatus for analyzing intention of user and method thereof
US9197736B2 (en) 2009-12-31 2015-11-24 Digimarc Corporation Intuitive computing methods and systems
US20110099507A1 (en) 2009-10-28 2011-04-28 Google Inc. Displaying a collection of interactive elements that trigger actions directed to an item
US20120137367A1 (en) 2009-11-06 2012-05-31 Cataphora, Inc. Continuous anomaly detection based on behavior modeling and heterogeneous information analysis
US9171541B2 (en) 2009-11-10 2015-10-27 Voicebox Technologies Corporation System and method for hybrid processing in a natural language voice services environment
WO2011059997A1 (en) 2009-11-10 2011-05-19 Voicebox Technologies, Inc. System and method for providing a natural language content dedication service
WO2011060106A1 (en) 2009-11-10 2011-05-19 Dulcetta, Inc. Dynamic audio playback of soundtracks for electronic visual works
US8712759B2 (en) 2009-11-13 2014-04-29 Clausal Computing Oy Specializing disambiguation of a natural language expression
KR101960835B1 (en) 2009-11-24 2019-03-21 Samsung Electronics Co., Ltd. Schedule Management System Using Interactive Robot and Method Thereof
US8396888B2 (en) 2009-12-04 2013-03-12 Google Inc. Location-based searching using a search area that corresponds to a geographical location of a computing device
KR101622111B1 (en) 2009-12-11 2016-05-18 Samsung Electronics Co., Ltd. Dialog system and conversational method thereof
US20110161309A1 (en) 2009-12-29 2011-06-30 Lx1 Technology Limited Method Of Sorting The Result Set Of A Search Engine
US8494852B2 (en) 2010-01-05 2013-07-23 Google Inc. Word-level correction of speech input
US8334842B2 (en) 2010-01-15 2012-12-18 Microsoft Corporation Recognizing user intent in motion capture system
US8626511B2 (en) 2010-01-22 2014-01-07 Google Inc. Multi-dimensional disambiguation of voice commands
US20110218855A1 (en) 2010-03-03 2011-09-08 Platformation, Inc. Offering Promotions Based on Query Analysis
US8521513B2 (en) 2010-03-12 2013-08-27 Microsoft Corporation Localization for interactive voice response systems
US8374864B2 (en) * 2010-03-17 2013-02-12 Cisco Technology, Inc. Correlation of transcribed text with corresponding audio
US9323756B2 (en) * 2010-03-22 2016-04-26 Lenovo (Singapore) Pte. Ltd. Audio book and e-book synchronization
KR101369810B1 (en) 2010-04-09 2014-03-05 이초강 Empirical Context Aware Computing Method For Robot
US8265928B2 (en) 2010-04-14 2012-09-11 Google Inc. Geotagged environmental audio for enhanced speech recognition accuracy
US20110279368A1 (en) 2010-05-12 2011-11-17 Microsoft Corporation Inferring user intent to engage a motion capture system
US8392186B2 (en) * 2010-05-18 2013-03-05 K-Nfb Reading Technology, Inc. Audio synchronization for document narration with user-selected playback
US8694313B2 (en) 2010-05-19 2014-04-08 Google Inc. Disambiguation of contact information using historical data
US8522283B2 (en) 2010-05-20 2013-08-27 Google Inc. Television remote control data transfer
US8468012B2 (en) 2010-05-26 2013-06-18 Google Inc. Acoustic model adaptation using geographic information
EP2397972B1 (en) 2010-06-08 2015-01-07 Vodafone Holding GmbH Smart card with microphone
US20110306426A1 (en) 2010-06-10 2011-12-15 Microsoft Corporation Activity Participation Based On User Intent
US8234111B2 (en) 2010-06-14 2012-07-31 Google Inc. Speech and noise models for speech recognition
US8411874B2 (en) 2010-06-30 2013-04-02 Google Inc. Removing noise from audio
US8861925B1 (en) 2010-07-28 2014-10-14 Intuit Inc. Methods and systems for audio-visual synchronization
US8775156B2 (en) 2010-08-05 2014-07-08 Google Inc. Translating languages in response to device motion
US8359020B2 (en) 2010-08-06 2013-01-22 Google Inc. Automatically monitoring for voice input based on context
US8473289B2 (en) 2010-08-06 2013-06-25 Google Inc. Disambiguating input based on context
US8700987B2 (en) * 2010-09-09 2014-04-15 Sony Corporation Annotating E-books / E-magazines with application results and function calls
US8812321B2 (en) 2010-09-30 2014-08-19 At&T Intellectual Property I, L.P. System and method for combining speech recognition outputs from a plurality of domain-specific speech recognizers via machine learning
US20120084634A1 (en) * 2010-10-05 2012-04-05 Sony Corporation Method and apparatus for annotating text
US8862255B2 (en) * 2011-03-23 2014-10-14 Audible, Inc. Managing playback of synchronized content
JP2014520297A (en) 2011-04-25 2014-08-21 ベベオ,インク. System and method for advanced personal timetable assistant
JP5463385B2 (en) * 2011-06-03 2014-04-09 Apple Inc. Automatic creation of mapping between text data and audio data
US20120310642A1 (en) 2011-06-03 2012-12-06 Apple Inc. Automatically creating a mapping between text data and audio data

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5649060A (en) * 1993-10-18 1997-07-15 International Business Machines Corporation Automatic indexing and aligning of audio and text using speech recognition
US6081780A (en) * 1998-04-28 2000-06-27 International Business Machines Corporation TTS and prosody based authoring system
US6442518B1 (en) * 1999-07-14 2002-08-27 Compaq Information Technologies Group, L.P. Method for refining time alignments of closed captions
US6260011B1 (en) * 2000-03-20 2001-07-10 Microsoft Corporation Methods and apparatus for automatically synchronizing electronic audio files with electronic text files
US20020099552A1 (en) * 2001-01-25 2002-07-25 Darryl Rubin Annotating electronic information with audio clips
US20070055514A1 (en) * 2005-09-08 2007-03-08 Beattie Valerie L Intelligent tutoring feedback
US20100278453A1 (en) * 2006-09-15 2010-11-04 King Martin T Capture and display of annotations in paper and electronic documents
US20080140652A1 (en) * 2006-12-07 2008-06-12 Jonathan Travis Millman Authoring tool
US20090112572A1 (en) * 2007-10-30 2009-04-30 Karl Ola Thorn System and method for input of text to an application operating on a device
US20100324905A1 (en) * 2009-01-15 2010-12-23 K-Nfb Reading Technology, Inc. Voice models for document narration
US20110054901A1 (en) * 2009-08-28 2011-03-03 International Business Machines Corporation Method and apparatus for aligning texts
US20110153330A1 (en) * 2009-11-27 2011-06-23 i-SCROLL System and method for rendering text synchronized audio

Cited By (402)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9646614B2 (en) 2000-03-16 2017-05-09 Apple Inc. Fast, language-independent method for user authentication by voice
US11928604B2 (en) 2005-09-08 2024-03-12 Apple Inc. Method and apparatus for building an intelligent automated assistant
US10318871B2 (en) 2005-09-08 2019-06-11 Apple Inc. Method and apparatus for building an intelligent automated assistant
US9117447B2 (en) 2006-09-08 2015-08-25 Apple Inc. Using event alert text as input to an automated assistant
US8930191B2 (en) 2006-09-08 2015-01-06 Apple Inc. Paraphrasing of user requests and results by automated digital assistant
US8942986B2 (en) 2006-09-08 2015-01-27 Apple Inc. Determining user intent based on ontologies of domains
US9785707B2 (en) * 2006-10-13 2017-10-10 Syscom, Inc. Method and system for converting audio text files originating from audio files to searchable text and for processing the searchable text
US20140059076A1 (en) * 2006-10-13 2014-02-27 Syscom Inc. Method and system for converting audio text files originating from audio files to searchable text and for processing the searchable text
US11012942B2 (en) 2007-04-03 2021-05-18 Apple Inc. Method and system for operating a multi-function portable electronic device using voice-activation
US10568032B2 (en) 2007-04-03 2020-02-18 Apple Inc. Method and system for operating a multi-function portable electronic device using voice-activation
US8977255B2 (en) 2007-04-03 2015-03-10 Apple Inc. Method and system for operating a multi-function portable electronic device using voice-activation
US11671920B2 (en) 2007-04-03 2023-06-06 Apple Inc. Method and system for operating a multifunction portable electronic device using voice-activation
US11023513B2 (en) 2007-12-20 2021-06-01 Apple Inc. Method and apparatus for searching using an active ontology
US10381016B2 (en) 2008-01-03 2019-08-13 Apple Inc. Methods and apparatus for altering audio output signals
US9330720B2 (en) 2008-01-03 2016-05-03 Apple Inc. Methods and apparatus for altering audio output signals
US9865248B2 (en) 2008-04-05 2018-01-09 Apple Inc. Intelligent text-to-speech conversion
US9626955B2 (en) 2008-04-05 2017-04-18 Apple Inc. Intelligent text-to-speech conversion
US9535906B2 (en) 2008-07-31 2017-01-03 Apple Inc. Mobile device having human language translation capability with positional feedback
US10108612B2 (en) 2008-07-31 2018-10-23 Apple Inc. Mobile device having human language translation capability with positional feedback
US11348582B2 (en) 2008-10-02 2022-05-31 Apple Inc. Electronic devices with voice command and contextual data processing capabilities
US10643611B2 (en) 2008-10-02 2020-05-05 Apple Inc. Electronic devices with voice command and contextual data processing capabilities
US11900936B2 (en) 2008-10-02 2024-02-13 Apple Inc. Electronic devices with voice command and contextual data processing capabilities
US9959870B2 (en) 2008-12-11 2018-05-01 Apple Inc. Speech recognition involving a mobile device
US10475446B2 (en) 2009-06-05 2019-11-12 Apple Inc. Using context information to facilitate processing of commands in a virtual assistant
US11080012B2 (en) 2009-06-05 2021-08-03 Apple Inc. Interface for a virtual digital assistant
US10795541B2 (en) 2009-06-05 2020-10-06 Apple Inc. Intelligent organization of tasks items
US9858925B2 (en) 2009-06-05 2018-01-02 Apple Inc. Using context information to facilitate processing of commands in a virtual assistant
US9431006B2 (en) 2009-07-02 2016-08-30 Apple Inc. Methods and apparatuses for automatic speech recognition
US10283110B2 (en) 2009-07-02 2019-05-07 Apple Inc. Methods and apparatuses for automatic speech recognition
US10276170B2 (en) 2010-01-18 2019-04-30 Apple Inc. Intelligent automated assistant
US8892446B2 (en) 2010-01-18 2014-11-18 Apple Inc. Service orchestration for intelligent automated assistant
US11423886B2 (en) 2010-01-18 2022-08-23 Apple Inc. Task flow identification based on user intent
US9318108B2 (en) 2010-01-18 2016-04-19 Apple Inc. Intelligent automated assistant
US10679605B2 (en) 2010-01-18 2020-06-09 Apple Inc. Hands-free list-reading by intelligent automated assistant
US9548050B2 (en) 2010-01-18 2017-01-17 Apple Inc. Intelligent automated assistant
US10741185B2 (en) 2010-01-18 2020-08-11 Apple Inc. Intelligent automated assistant
US10553209B2 (en) 2010-01-18 2020-02-04 Apple Inc. Systems and methods for hands-free notification summaries
US10496753B2 (en) 2010-01-18 2019-12-03 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US10705794B2 (en) 2010-01-18 2020-07-07 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US8903716B2 (en) 2010-01-18 2014-12-02 Apple Inc. Personalized vocabulary for digital assistant
US10706841B2 (en) 2010-01-18 2020-07-07 Apple Inc. Task flow identification based on user intent
US10692504B2 (en) 2010-02-25 2020-06-23 Apple Inc. User profiling for voice input processing
US9633660B2 (en) 2010-02-25 2017-04-25 Apple Inc. User profiling for voice input processing
US9190062B2 (en) 2010-02-25 2015-11-17 Apple Inc. User profiling for voice input processing
US10049675B2 (en) 2010-02-25 2018-08-14 Apple Inc. User profiling for voice input processing
US10762293B2 (en) 2010-12-22 2020-09-01 Apple Inc. Using parts-of-speech tagging and named entity recognition for spelling correction
US9063641B2 (en) 2011-02-24 2015-06-23 Google Inc. Systems and methods for remote collaborative studying using electronic books
US10067922B2 (en) 2011-02-24 2018-09-04 Google Llc Automated study guide generation for electronic books
US10019995B1 (en) 2011-03-01 2018-07-10 Alice J. Stiebel Methods and systems for language learning based on a series of pitch patterns
US10565997B1 (en) 2011-03-01 2020-02-18 Alice J. Stiebel Methods and systems for teaching a hebrew bible trope lesson
US11062615B1 (en) 2011-03-01 2021-07-13 Intelligibility Training LLC Methods and systems for remote language learning in a pandemic-aware world
US11380334B1 (en) 2011-03-01 2022-07-05 Intelligible English LLC Methods and systems for interactive online language learning in a pandemic-aware world
US9262612B2 (en) 2011-03-21 2016-02-16 Apple Inc. Device access using voice authentication
US10417405B2 (en) 2011-03-21 2019-09-17 Apple Inc. Device access using voice authentication
US10102359B2 (en) 2011-03-21 2018-10-16 Apple Inc. Device access using voice authentication
US8948892B2 (en) 2011-03-23 2015-02-03 Audible, Inc. Managing playback of synchronized content
US9734153B2 (en) 2011-03-23 2017-08-15 Audible, Inc. Managing related digital content
US9703781B2 (en) 2011-03-23 2017-07-11 Audible, Inc. Managing related digital content
US9792027B2 (en) 2011-03-23 2017-10-17 Audible, Inc. Managing playback of synchronized content
US10672399B2 (en) 2011-06-03 2020-06-02 Apple Inc. Switching between text data and audio data based on a mapping
US10706373B2 (en) 2011-06-03 2020-07-07 Apple Inc. Performing actions associated with task items that represent tasks to perform
US11350253B2 (en) 2011-06-03 2022-05-31 Apple Inc. Active transport based notifications
US10057736B2 (en) 2011-06-03 2018-08-21 Apple Inc. Active transport based notifications
US10241644B2 (en) 2011-06-03 2019-03-26 Apple Inc. Actionable reminder entries
US11120372B2 (en) 2011-06-03 2021-09-14 Apple Inc. Performing actions associated with task items that represent tasks to perform
US9798393B2 (en) 2011-08-29 2017-10-24 Apple Inc. Text correction processing
US8819012B2 (en) * 2011-08-30 2014-08-26 International Business Machines Corporation Accessing anchors in voice site content
US20130054609A1 (en) * 2011-08-30 2013-02-28 International Business Machines Corporation Accessing Anchors in Voice Site Content
US10241752B2 (en) 2011-09-30 2019-03-26 Apple Inc. Interface for a virtual digital assistant
US9678634B2 (en) 2011-10-24 2017-06-13 Google Inc. Extensible framework for ereader tools
US9141404B2 (en) 2011-10-24 2015-09-22 Google Inc. Extensible framework for ereader tools
US9031493B2 (en) 2011-11-18 2015-05-12 Google Inc. Custom narration of electronic books
US9213705B1 (en) * 2011-12-19 2015-12-15 Audible, Inc. Presenting content related to primary audio content
US10134385B2 (en) 2012-03-02 2018-11-20 Apple Inc. Systems and methods for name pronunciation
US11069336B2 (en) 2012-03-02 2021-07-20 Apple Inc. Systems and methods for name pronunciation
US9483461B2 (en) 2012-03-06 2016-11-01 Apple Inc. Handling speech synthesis of content for multiple languages
US20130268826A1 (en) * 2012-04-06 2013-10-10 Google Inc. Synchronizing progress in audio and text versions of electronic books
US9075760B2 (en) 2012-05-07 2015-07-07 Audible, Inc. Narration settings distribution for content customization
US9953088B2 (en) 2012-05-14 2018-04-24 Apple Inc. Crowd sourcing information to fulfill user requests
US11269678B2 (en) 2012-05-15 2022-03-08 Apple Inc. Systems and methods for integrating third party services with a digital assistant
US11321116B2 (en) 2012-05-15 2022-05-03 Apple Inc. Systems and methods for integrating third party services with a digital assistant
US9317500B2 (en) 2012-05-30 2016-04-19 Audible, Inc. Synchronizing translated digital content
US10079014B2 (en) 2012-06-08 2018-09-18 Apple Inc. Name recognition system
US9141257B1 (en) 2012-06-18 2015-09-22 Audible, Inc. Selecting and conveying supplemental content
US9536439B1 (en) 2012-06-27 2017-01-03 Audible, Inc. Conveying questions with content
US9679608B2 (en) 2012-06-28 2017-06-13 Audible, Inc. Pacing content
US9495129B2 (en) 2012-06-29 2016-11-15 Apple Inc. Device, method, and user interface for voice-activated navigation and browsing of a document
US20140013192A1 (en) * 2012-07-09 2014-01-09 Sas Institute Inc. Techniques for touch-based digital document audio and user interface enhancement
US10109278B2 (en) 2012-08-02 2018-10-23 Audible, Inc. Aligning body matter across content formats
US9099089B2 (en) 2012-08-02 2015-08-04 Audible, Inc. Identifying corresponding regions of content
US9799336B2 (en) 2012-08-02 2017-10-24 Audible, Inc. Identifying corresponding regions of content
US9047356B2 (en) 2012-09-05 2015-06-02 Google Inc. Synchronizing multiple reading positions in electronic books
US9576574B2 (en) 2012-09-10 2017-02-21 Apple Inc. Context-sensitive handling of interruptions by intelligent digital assistant
US9971774B2 (en) 2012-09-19 2018-05-15 Apple Inc. Voice-based media searching
US9684641B1 (en) * 2012-09-21 2017-06-20 Amazon Technologies, Inc. Presenting content in multiple languages
US9367196B1 (en) 2012-09-26 2016-06-14 Audible, Inc. Conveying branched content
US9632647B1 (en) * 2012-10-09 2017-04-25 Audible, Inc. Selecting presentation positions in dynamic content
US9223830B1 (en) 2012-10-26 2015-12-29 Audible, Inc. Content presentation analysis
US9280906B2 (en) * 2013-02-04 2016-03-08 Audible, Inc. Prompting a user for input during a synchronous presentation of audio content and textual content
US20140223272A1 (en) * 2013-02-04 2014-08-07 Audible, Inc. Selective synchronous presentation
US9472113B1 (en) 2013-02-05 2016-10-18 Audible, Inc. Synchronizing playback of digital content with physical content
US10199051B2 (en) 2013-02-07 2019-02-05 Apple Inc. Voice trigger for a digital assistant
US10714117B2 (en) 2013-02-07 2020-07-14 Apple Inc. Voice trigger for a digital assistant
US11636869B2 (en) 2013-02-07 2023-04-25 Apple Inc. Voice trigger for a digital assistant
US11557310B2 (en) 2013-02-07 2023-01-17 Apple Inc. Voice trigger for a digital assistant
US11862186B2 (en) 2013-02-07 2024-01-02 Apple Inc. Voice trigger for a digital assistant
US10978090B2 (en) 2013-02-07 2021-04-13 Apple Inc. Voice trigger for a digital assistant
KR20140109167A (en) * 2013-03-05 2014-09-15 LG Electronics Inc. Mobile terminal and control method for the mobile terminal
WO2014137074A1 (en) * 2013-03-05 2014-09-12 Lg Electronics Inc. Mobile terminal and method of controlling the mobile terminal
US10241743B2 (en) * 2013-03-05 2019-03-26 Lg Electronics Inc. Mobile terminal for matching displayed text with recorded external audio and method of controlling the mobile terminal
KR101952179B1 (en) 2013-03-05 2019-05-22 LG Electronics Inc. Mobile terminal and control method for the mobile terminal
US20160011847A1 (en) * 2013-03-05 2016-01-14 Lg Electronics Inc. Mobile terminal and method of controlling the mobile terminal
US11388291B2 (en) 2013-03-14 2022-07-12 Apple Inc. System and method for processing voicemail
US9368114B2 (en) 2013-03-14 2016-06-14 Apple Inc. Context-sensitive handling of interruptions
US10652394B2 (en) 2013-03-14 2020-05-12 Apple Inc. System and method for processing voicemail
US9697822B1 (en) 2013-03-15 2017-07-04 Apple Inc. System and method for updating an adaptive speech recognition model
US11798547B2 (en) 2013-03-15 2023-10-24 Apple Inc. Voice activated device for use with a voice-based digital assistant
US9922642B2 (en) 2013-03-15 2018-03-20 Apple Inc. Training an at least partial voice command system
US9582608B2 (en) 2013-06-07 2017-02-28 Apple Inc. Unified ranking with entropy-weighted information for phrase-based semantic auto-completion
US9620104B2 (en) 2013-06-07 2017-04-11 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
US9317486B1 (en) 2013-06-07 2016-04-19 Audible, Inc. Synchronizing playback of digital content with captured physical content
US9633674B2 (en) 2013-06-07 2017-04-25 Apple Inc. System and method for detecting errors in interactions with a voice-based digital assistant
US9966060B2 (en) 2013-06-07 2018-05-08 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
US9966068B2 (en) 2013-06-08 2018-05-08 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
US10657961B2 (en) 2013-06-08 2020-05-19 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
US10185542B2 (en) 2013-06-09 2019-01-22 Apple Inc. Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant
US10769385B2 (en) 2013-06-09 2020-09-08 Apple Inc. System and method for inferring user intent from speech inputs
US10176167B2 (en) 2013-06-09 2019-01-08 Apple Inc. System and method for inferring user intent from speech inputs
US11048473B2 (en) 2013-06-09 2021-06-29 Apple Inc. Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant
US11727219B2 (en) 2013-06-09 2023-08-15 Apple Inc. System and method for inferring user intent from speech inputs
US9300784B2 (en) 2013-06-13 2016-03-29 Apple Inc. System and method for emergency calls initiated by voice command
EP3017576A4 (en) * 2013-07-03 2016-06-29 Ericsson Telefon Ab L M Providing an electronic book to a user equipment
WO2015002585A1 (en) 2013-07-03 2015-01-08 Telefonaktiebolaget L M Ericsson (Publ) Providing an electronic book to a user equipment
US10791216B2 (en) 2013-08-06 2020-09-29 Apple Inc. Auto-activating smart responses based on activities from remote devices
US9489360B2 (en) 2013-09-05 2016-11-08 Audible, Inc. Identifying extra material in companion content
US10606940B2 (en) 2013-09-20 2020-03-31 Kabushiki Kaisha Toshiba Annotation sharing method, annotation sharing apparatus, and computer program product
US10108312B2 (en) 2013-10-17 2018-10-23 Samsung Electronics Co., Ltd. Apparatus and method for processing information list in terminal device
US11314370B2 (en) 2013-12-06 2022-04-26 Apple Inc. Method for extracting salient dialog usage from live data
US9620105B2 (en) 2014-05-15 2017-04-11 Apple Inc. Analyzing audio input for efficient speech and music recognition
US10592095B2 (en) 2014-05-23 2020-03-17 Apple Inc. Instantaneous speaking of content on touch devices
US9502031B2 (en) 2014-05-27 2016-11-22 Apple Inc. Method for supporting dynamic grammars in WFST-based ASR
US10289433B2 (en) 2014-05-30 2019-05-14 Apple Inc. Domain specific language for encoding assistant dialog
US10170123B2 (en) 2014-05-30 2019-01-01 Apple Inc. Intelligent assistant for home automation
US9760559B2 (en) 2014-05-30 2017-09-12 Apple Inc. Predictive text input
US11257504B2 (en) 2014-05-30 2022-02-22 Apple Inc. Intelligent assistant for home automation
US10078631B2 (en) 2014-05-30 2018-09-18 Apple Inc. Entropy-guided text prediction using combined word and character n-gram language models
US11133008B2 (en) 2014-05-30 2021-09-28 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US10083690B2 (en) 2014-05-30 2018-09-25 Apple Inc. Better resolution when referencing to concepts
US11810562B2 (en) 2014-05-30 2023-11-07 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US10417344B2 (en) 2014-05-30 2019-09-17 Apple Inc. Exemplar-based natural language processing
US9430463B2 (en) 2014-05-30 2016-08-30 Apple Inc. Exemplar-based natural language processing
US10714095B2 (en) 2014-05-30 2020-07-14 Apple Inc. Intelligent assistant for home automation
US10878809B2 (en) 2014-05-30 2020-12-29 Apple Inc. Multi-command single utterance input method
US9966065B2 (en) 2014-05-30 2018-05-08 Apple Inc. Multi-command single utterance input method
US10699717B2 (en) 2014-05-30 2020-06-30 Apple Inc. Intelligent assistant for home automation
US10169329B2 (en) 2014-05-30 2019-01-01 Apple Inc. Exemplar-based natural language processing
US9633004B2 (en) 2014-05-30 2017-04-25 Apple Inc. Better resolution when referencing to concepts
US11699448B2 (en) 2014-05-30 2023-07-11 Apple Inc. Intelligent assistant for home automation
US9715875B2 (en) 2014-05-30 2017-07-25 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US10657966B2 (en) 2014-05-30 2020-05-19 Apple Inc. Better resolution when referencing to concepts
US9785630B2 (en) 2014-05-30 2017-10-10 Apple Inc. Text prediction using combined word N-gram and unigram language models
US9842101B2 (en) 2014-05-30 2017-12-12 Apple Inc. Predictive conversion of language input
US10497365B2 (en) 2014-05-30 2019-12-03 Apple Inc. Multi-command single utterance input method
US11670289B2 (en) 2014-05-30 2023-06-06 Apple Inc. Multi-command single utterance input method
US9734193B2 (en) 2014-05-30 2017-08-15 Apple Inc. Determining domain salience ranking from ambiguous words in natural speech
US9668024B2 (en) 2014-06-30 2017-05-30 Apple Inc. Intelligent automated assistant for TV user interactions
US10659851B2 (en) 2014-06-30 2020-05-19 Apple Inc. Real-time digital assistant knowledge updates
US9338493B2 (en) 2014-06-30 2016-05-10 Apple Inc. Intelligent automated assistant for TV user interactions
US11838579B2 (en) 2014-06-30 2023-12-05 Apple Inc. Intelligent automated assistant for TV user interactions
US11516537B2 (en) 2014-06-30 2022-11-29 Apple Inc. Intelligent automated assistant for TV user interactions
US10904611B2 (en) 2014-06-30 2021-01-26 Apple Inc. Intelligent automated assistant for TV user interactions
US10446141B2 (en) 2014-08-28 2019-10-15 Apple Inc. Automatic speech recognition based on user feedback
US9818400B2 (en) 2014-09-11 2017-11-14 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US10431204B2 (en) 2014-09-11 2019-10-01 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US10789041B2 (en) 2014-09-12 2020-09-29 Apple Inc. Dynamic thresholds for always listening speech trigger
US9646609B2 (en) 2014-09-30 2017-05-09 Apple Inc. Caching apparatus for serving phonetic pronunciations
US10453443B2 (en) 2014-09-30 2019-10-22 Apple Inc. Providing an indication of the suitability of speech recognition
US9668121B2 (en) 2014-09-30 2017-05-30 Apple Inc. Social reminders
US10438595B2 (en) 2014-09-30 2019-10-08 Apple Inc. Speaker identification and unsupervised speaker adaptation techniques
US9986419B2 (en) 2014-09-30 2018-05-29 Apple Inc. Social reminders
US10390213B2 (en) 2014-09-30 2019-08-20 Apple Inc. Social reminders
US9886432B2 (en) 2014-09-30 2018-02-06 Apple Inc. Parsimonious handling of word inflection via categorical stem + suffix N-gram language models
US10127911B2 (en) 2014-09-30 2018-11-13 Apple Inc. Speaker identification and unsupervised speaker adaptation techniques
US10074360B2 (en) 2014-09-30 2018-09-11 Apple Inc. Providing an indication of the suitability of speech recognition
US10552013B2 (en) 2014-12-02 2020-02-04 Apple Inc. Data detection
US11556230B2 (en) 2014-12-02 2023-01-17 Apple Inc. Data detection
US9711141B2 (en) 2014-12-09 2017-07-18 Apple Inc. Disambiguating heteronyms in speech synthesis
US9865280B2 (en) 2015-03-06 2018-01-09 Apple Inc. Structured dictation using intelligent automated assistants
US11231904B2 (en) 2015-03-06 2022-01-25 Apple Inc. Reducing response latency of intelligent automated assistants
US10311871B2 (en) 2015-03-08 2019-06-04 Apple Inc. Competing devices responding to voice triggers
US9721566B2 (en) 2015-03-08 2017-08-01 Apple Inc. Competing devices responding to voice triggers
US11087759B2 (en) 2015-03-08 2021-08-10 Apple Inc. Virtual assistant activation
US10930282B2 (en) 2015-03-08 2021-02-23 Apple Inc. Competing devices responding to voice triggers
US9886953B2 (en) 2015-03-08 2018-02-06 Apple Inc. Virtual assistant activation
US11842734B2 (en) 2015-03-08 2023-12-12 Apple Inc. Virtual assistant activation
US10529332B2 (en) 2015-03-08 2020-01-07 Apple Inc. Virtual assistant activation
US10567477B2 (en) 2015-03-08 2020-02-18 Apple Inc. Virtual assistant continuity
US9899019B2 (en) 2015-03-18 2018-02-20 Apple Inc. Systems and methods for structured stem and suffix language models
US9842105B2 (en) 2015-04-16 2017-12-12 Apple Inc. Parsimonious continuous-space phrase representations for natural language processing
US10331304B2 (en) 2015-05-06 2019-06-25 Microsoft Technology Licensing, Llc Techniques to automatically generate bookmarks for media files
US11468282B2 (en) 2015-05-15 2022-10-11 Apple Inc. Virtual assistant in a communication session
US11127397B2 (en) 2015-05-27 2021-09-21 Apple Inc. Device voice control
US11070949B2 (en) 2015-05-27 2021-07-20 Apple Inc. Systems and methods for proactively identifying and surfacing relevant content on an electronic device with a touch-sensitive display
US10083688B2 (en) 2015-05-27 2018-09-25 Apple Inc. Device voice control for selecting a displayed affordance
US10127220B2 (en) 2015-06-04 2018-11-13 Apple Inc. Language identification from short strings
US10356243B2 (en) 2015-06-05 2019-07-16 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US10101822B2 (en) 2015-06-05 2018-10-16 Apple Inc. Language input correction
US10681212B2 (en) 2015-06-05 2020-06-09 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US10186254B2 (en) 2015-06-07 2019-01-22 Apple Inc. Context-based endpoint detection
US11025565B2 (en) 2015-06-07 2021-06-01 Apple Inc. Personalized prediction of responses for instant messaging
US10255907B2 (en) 2015-06-07 2019-04-09 Apple Inc. Automatic accent detection using acoustic models
US11947873B2 (en) 2015-06-29 2024-04-02 Apple Inc. Virtual assistant for media playback
US11010127B2 (en) 2015-06-29 2021-05-18 Apple Inc. Virtual assistant for media playback
US10387570B2 (en) * 2015-08-27 2019-08-20 Lenovo (Singapore) Pte Ltd Enhanced e-reader experience
US20170060365A1 (en) * 2015-08-27 2017-03-02 Lenovo (Singapore) Pte. Ltd. Enhanced e-reader experience
US10671428B2 (en) 2015-09-08 2020-06-02 Apple Inc. Distributed personal assistant
US11853536B2 (en) 2015-09-08 2023-12-26 Apple Inc. Intelligent automated assistant in a media environment
US11126400B2 (en) 2015-09-08 2021-09-21 Apple Inc. Zero latency digital assistant
US11809483B2 (en) 2015-09-08 2023-11-07 Apple Inc. Intelligent automated assistant for media search and playback
US11550542B2 (en) 2015-09-08 2023-01-10 Apple Inc. Zero latency digital assistant
US10747498B2 (en) 2015-09-08 2020-08-18 Apple Inc. Zero latency digital assistant
US11954405B2 (en) 2015-09-08 2024-04-09 Apple Inc. Zero latency digital assistant
US11500672B2 (en) 2015-09-08 2022-11-15 Apple Inc. Distributed personal assistant
US10681324B2 (en) 2015-09-18 2020-06-09 Microsoft Technology Licensing, Llc Communication session processing
US20170083214A1 (en) * 2015-09-18 2017-03-23 Microsoft Technology Licensing, Llc Keyword Zoom
US10038886B2 (en) 2015-09-18 2018-07-31 Microsoft Technology Licensing, Llc Inertia audio scrolling
US9697820B2 (en) 2015-09-24 2017-07-04 Apple Inc. Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks
US10366158B2 (en) 2015-09-29 2019-07-30 Apple Inc. Efficient word encoding for recurrent neural network language models
US11010550B2 (en) 2015-09-29 2021-05-18 Apple Inc. Unified language modeling framework for word prediction, auto-completion and auto-correction
US11587559B2 (en) 2015-09-30 2023-02-21 Apple Inc. Intelligent device identification
US11526368B2 (en) 2015-11-06 2022-12-13 Apple Inc. Intelligent automated assistant in a messaging environment
US10691473B2 (en) 2015-11-06 2020-06-23 Apple Inc. Intelligent automated assistant in a messaging environment
US11809886B2 (en) 2015-11-06 2023-11-07 Apple Inc. Intelligent automated assistant in a messaging environment
US11886805B2 (en) 2015-11-09 2024-01-30 Apple Inc. Unconventional virtual assistant interactions
US10049668B2 (en) 2015-12-02 2018-08-14 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10354652B2 (en) 2015-12-02 2019-07-16 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10942703B2 (en) 2015-12-23 2021-03-09 Apple Inc. Proactive assistance based on dialog communication between devices
US10223066B2 (en) 2015-12-23 2019-03-05 Apple Inc. Proactive assistance based on dialog communication between devices
US11853647B2 (en) 2015-12-23 2023-12-26 Apple Inc. Proactive assistance based on dialog communication between devices
CN108292322A (en) * 2016-01-11 2018-07-17 Microsoft Technology Licensing, Llc Organization, retrieval, annotation and presentation of media data files using signals captured from a viewing environment
WO2017123419A1 (en) * 2016-01-11 2017-07-20 Microsoft Technology Licensing, Llc Organization, retrieval, annotation and presentation of media data files using signals captured from a viewing environment
US10235367B2 (en) 2016-01-11 2019-03-19 Microsoft Technology Licensing, Llc Organization, retrieval, annotation and presentation of media data files using signals captured from a viewing environment
US10446143B2 (en) 2016-03-14 2019-10-15 Apple Inc. Identification of voice inputs providing credentials
US20170315976A1 (en) * 2016-04-29 2017-11-02 Seagate Technology Llc Annotations for digital media items post capture
US9934775B2 (en) 2016-05-26 2018-04-03 Apple Inc. Unit-selection text-to-speech synthesis based on predicted concatenation parameters
US9972304B2 (en) 2016-06-03 2018-05-15 Apple Inc. Privacy preserving distributed evaluation framework for embedded personalized systems
US11227589B2 (en) 2016-06-06 2022-01-18 Apple Inc. Intelligent list reading
US10249300B2 (en) 2016-06-06 2019-04-02 Apple Inc. Intelligent list reading
US10049663B2 (en) 2016-06-08 2018-08-14 Apple, Inc. Intelligent automated assistant for media exploration
US11069347B2 (en) 2016-06-08 2021-07-20 Apple Inc. Intelligent automated assistant for media exploration
US10354011B2 (en) 2016-06-09 2019-07-16 Apple Inc. Intelligent automated assistant in a home environment
US10067938B2 (en) 2016-06-10 2018-09-04 Apple Inc. Multilingual word prediction
US11657820B2 (en) 2016-06-10 2023-05-23 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US10192552B2 (en) 2016-06-10 2019-01-29 Apple Inc. Digital assistant providing whispered speech
US11037565B2 (en) 2016-06-10 2021-06-15 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US10733993B2 (en) 2016-06-10 2020-08-04 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US10490187B2 (en) 2016-06-10 2019-11-26 Apple Inc. Digital assistant providing automated status report
US10509862B2 (en) 2016-06-10 2019-12-17 Apple Inc. Dynamic phrase expansion of language input
US11152002B2 (en) 2016-06-11 2021-10-19 Apple Inc. Application integration with a digital assistant
US10269345B2 (en) 2016-06-11 2019-04-23 Apple Inc. Intelligent task discovery
US11749275B2 (en) 2016-06-11 2023-09-05 Apple Inc. Application integration with a digital assistant
US10942702B2 (en) 2016-06-11 2021-03-09 Apple Inc. Intelligent device arbitration and control
US10521466B2 (en) 2016-06-11 2019-12-31 Apple Inc. Data driven natural language event detection and classification
US10089072B2 (en) 2016-06-11 2018-10-02 Apple Inc. Intelligent device arbitration and control
US11809783B2 (en) 2016-06-11 2023-11-07 Apple Inc. Intelligent device arbitration and control
US10580409B2 (en) 2016-06-11 2020-03-03 Apple Inc. Application integration with a digital assistant
US10297253B2 (en) 2016-06-11 2019-05-21 Apple Inc. Application integration with a digital assistant
US10474753B2 (en) 2016-09-07 2019-11-12 Apple Inc. Language identification using recurrent neural networks
US10553215B2 (en) 2016-09-23 2020-02-04 Apple Inc. Intelligent automated assistant
US10043516B2 (en) 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
US10489110B2 (en) 2016-11-22 2019-11-26 Microsoft Technology Licensing, Llc Implicit narration for aural user interface
US10559297B2 (en) 2016-11-28 2020-02-11 Microsoft Technology Licensing, Llc Audio landmarking for aural user interface
CN110023898A (en) * 2016-11-28 2019-07-16 Microsoft Technology Licensing, Llc Audio calibration for an aural user interface
WO2018098093A1 (en) * 2016-11-28 2018-05-31 Microsoft Technology Licensing, Llc Audio landmarking for aural user interface
US11281993B2 (en) 2016-12-05 2022-03-22 Apple Inc. Model and ensemble compression for metric learning
US10593346B2 (en) 2016-12-22 2020-03-17 Apple Inc. Rank-reduced token representation for automatic speech recognition
US11204787B2 (en) 2017-01-09 2021-12-21 Apple Inc. Application integration with a digital assistant
US11656884B2 (en) 2017-01-09 2023-05-23 Apple Inc. Application integration with a digital assistant
WO2018187234A1 (en) * 2017-04-03 2018-10-11 Ex-Iq, Inc. Hands-free annotations of audio text
US10417266B2 (en) 2017-05-09 2019-09-17 Apple Inc. Context-aware ranking of intelligent response suggestions
US10741181B2 (en) 2017-05-09 2020-08-11 Apple Inc. User interface for correcting recognition errors
US10332518B2 (en) 2017-05-09 2019-06-25 Apple Inc. User interface for correcting recognition errors
US10847142B2 (en) 2017-05-11 2020-11-24 Apple Inc. Maintaining privacy of personal information
US11467802B2 (en) 2017-05-11 2022-10-11 Apple Inc. Maintaining privacy of personal information
US11599331B2 (en) 2017-05-11 2023-03-07 Apple Inc. Maintaining privacy of personal information
US10755703B2 (en) 2017-05-11 2020-08-25 Apple Inc. Offline personal assistant
US10726832B2 (en) 2017-05-11 2020-07-28 Apple Inc. Maintaining privacy of personal information
US10395654B2 (en) 2017-05-11 2019-08-27 Apple Inc. Text normalization based on a data-driven learning network
US11837237B2 (en) 2017-05-12 2023-12-05 Apple Inc. User-specific acoustic models
US10410637B2 (en) 2017-05-12 2019-09-10 Apple Inc. User-specific acoustic models
US11380310B2 (en) 2017-05-12 2022-07-05 Apple Inc. Low-latency intelligent automated assistant
US11405466B2 (en) 2017-05-12 2022-08-02 Apple Inc. Synchronization and task delegation of a digital assistant
US10791176B2 (en) 2017-05-12 2020-09-29 Apple Inc. Synchronization and task delegation of a digital assistant
US11538469B2 (en) 2017-05-12 2022-12-27 Apple Inc. Low-latency intelligent automated assistant
US11301477B2 (en) 2017-05-12 2022-04-12 Apple Inc. Feedback analysis of a digital assistant
US10789945B2 (en) 2017-05-12 2020-09-29 Apple Inc. Low-latency intelligent automated assistant
US11862151B2 (en) 2017-05-12 2024-01-02 Apple Inc. Low-latency intelligent automated assistant
US11580990B2 (en) 2017-05-12 2023-02-14 Apple Inc. User-specific acoustic models
US10810274B2 (en) 2017-05-15 2020-10-20 Apple Inc. Optimizing dialogue policy decisions for digital assistants using implicit feedback
US10482874B2 (en) 2017-05-15 2019-11-19 Apple Inc. Hierarchical belief states for digital assistants
US11532306B2 (en) 2017-05-16 2022-12-20 Apple Inc. Detecting a trigger of a digital assistant
US10311144B2 (en) 2017-05-16 2019-06-04 Apple Inc. Emoji word sense disambiguation
US10909171B2 (en) 2017-05-16 2021-02-02 Apple Inc. Intelligent automated assistant for media exploration
US10303715B2 (en) 2017-05-16 2019-05-28 Apple Inc. Intelligent automated assistant for media exploration
US10748546B2 (en) 2017-05-16 2020-08-18 Apple Inc. Digital assistant services based on device capabilities
US11217255B2 (en) 2017-05-16 2022-01-04 Apple Inc. Far-field extension for digital assistant services
US11675829B2 (en) 2017-05-16 2023-06-13 Apple Inc. Intelligent automated assistant for media exploration
US10403278B2 (en) 2017-05-16 2019-09-03 Apple Inc. Methods and systems for phonetic matching in digital assistant services
US10657328B2 (en) 2017-06-02 2020-05-19 Apple Inc. Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling
US10445429B2 (en) 2017-09-21 2019-10-15 Apple Inc. Natural language understanding using vocabularies with compressed serialized tries
US10755051B2 (en) 2017-09-29 2020-08-25 Apple Inc. Rule-based natural language processing
US10636424B2 (en) 2017-11-30 2020-04-28 Apple Inc. Multi-turn canned dialog
US10733982B2 (en) 2018-01-08 2020-08-04 Apple Inc. Multi-directional dialog
US10733375B2 (en) 2018-01-31 2020-08-04 Apple Inc. Knowledge-based framework for improving natural language understanding
US10789959B2 (en) 2018-03-02 2020-09-29 Apple Inc. Training speaker recognition models for digital assistants
US10592604B2 (en) 2018-03-12 2020-03-17 Apple Inc. Inverse text normalization for automatic speech recognition
US10818288B2 (en) 2018-03-26 2020-10-27 Apple Inc. Natural assistant interaction
US11710482B2 (en) 2018-03-26 2023-07-25 Apple Inc. Natural assistant interaction
US10909331B2 (en) 2018-03-30 2021-02-02 Apple Inc. Implicit identification of translation payload with neural machine translation
US10928918B2 (en) 2018-05-07 2021-02-23 Apple Inc. Raise to speak
US11900923B2 (en) 2018-05-07 2024-02-13 Apple Inc. Intelligent automated assistant for delivering content from user experiences
US11145294B2 (en) 2018-05-07 2021-10-12 Apple Inc. Intelligent automated assistant for delivering content from user experiences
US11854539B2 (en) 2018-05-07 2023-12-26 Apple Inc. Intelligent automated assistant for delivering content from user experiences
US11907436B2 (en) 2018-05-07 2024-02-20 Apple Inc. Raise to speak
US11169616B2 (en) 2018-05-07 2021-11-09 Apple Inc. Raise to speak
US11487364B2 (en) 2018-05-07 2022-11-01 Apple Inc. Raise to speak
US10984780B2 (en) 2018-05-21 2021-04-20 Apple Inc. Global semantic word embeddings using bi-directional recurrent neural networks
US10720160B2 (en) 2018-06-01 2020-07-21 Apple Inc. Voice interaction at a primary device to access call functionality of a companion device
US10684703B2 (en) 2018-06-01 2020-06-16 Apple Inc. Attention aware virtual assistant dismissal
US11431642B2 (en) 2018-06-01 2022-08-30 Apple Inc. Variable latency device coordination
US11386266B2 (en) 2018-06-01 2022-07-12 Apple Inc. Text correction
US10984798B2 (en) 2018-06-01 2021-04-20 Apple Inc. Voice interaction at a primary device to access call functionality of a companion device
US11009970B2 (en) 2018-06-01 2021-05-18 Apple Inc. Attention aware virtual assistant dismissal
US10892996B2 (en) 2018-06-01 2021-01-12 Apple Inc. Variable latency device coordination
US11630525B2 (en) 2018-06-01 2023-04-18 Apple Inc. Attention aware virtual assistant dismissal
US11360577B2 (en) 2018-06-01 2022-06-14 Apple Inc. Attention aware virtual assistant dismissal
US11495218B2 (en) 2018-06-01 2022-11-08 Apple Inc. Virtual assistant operation in multi-device environments
US10403283B1 (en) 2018-06-01 2019-09-03 Apple Inc. Voice interaction at a primary device to access call functionality of a companion device
US10944859B2 (en) 2018-06-03 2021-03-09 Apple Inc. Accelerated task performance
US10496705B1 (en) 2018-06-03 2019-12-03 Apple Inc. Accelerated task performance
US10504518B1 (en) 2018-06-03 2019-12-10 Apple Inc. Accelerated task performance
US11676579B2 (en) * 2018-07-27 2023-06-13 Deepgram, Inc. Deep learning internal state index-based search and classification
US11367433B2 (en) 2018-07-27 2022-06-21 Deepgram, Inc. End-to-end neural networks for speech recognition and classification
US10540959B1 (en) 2018-07-27 2020-01-21 Deepgram, Inc. Augmented generalized deep learning with special vocabulary
US10380997B1 (en) * 2018-07-27 2019-08-13 Deepgram, Inc. Deep learning internal state index-based search and classification
US10847138B2 (en) * 2018-07-27 2020-11-24 Deepgram, Inc. Deep learning internal state index-based search and classification
US10720151B2 (en) 2018-07-27 2020-07-21 Deepgram, Inc. End-to-end neural networks for speech recognition and classification
US20210035565A1 (en) * 2018-07-27 2021-02-04 Deepgram, Inc. Deep learning internal state index-based search and classification
US20200035224A1 (en) * 2018-07-27 2020-01-30 Deepgram, Inc. Deep learning internal state index-based search and classification
US10210860B1 (en) 2018-07-27 2019-02-19 Deepgram, Inc. Augmented generalized deep learning with special vocabulary
US10978049B2 (en) * 2018-07-31 2021-04-13 Korea Electronics Technology Institute Audio segmentation method based on attention mechanism
US20200043473A1 (en) * 2018-07-31 2020-02-06 Korea Electronics Technology Institute Audio segmentation method based on attention mechanism
CN112740327A (en) * 2018-08-27 2021-04-30 Google Llc Algorithmic determination of a story reader's discontinuation of reading
WO2020046269A1 (en) * 2018-08-27 2020-03-05 Google Llc Algorithmic determination of a story reader's discontinuation of reading
US20210225392A1 (en) * 2018-08-27 2021-07-22 Google Llc Algorithmic determination of a story reader's discontinuation of reading
EP4191561A1 (en) * 2018-08-27 2023-06-07 Google LLC Augmenting text story reading by physical effects
EP4191563A1 (en) * 2018-08-27 2023-06-07 Google LLC Determination of a story reader's current reading location
US11862192B2 (en) * 2018-08-27 2024-01-02 Google Llc Algorithmic determination of a story reader's discontinuation of reading
US11501769B2 (en) 2018-08-31 2022-11-15 Google Llc Dynamic adjustment of story time special effects based on contextual data
US11417325B2 (en) 2018-09-04 2022-08-16 Google Llc Detection of story reader progress for pre-caching special effects
US11526671B2 (en) 2018-09-04 2022-12-13 Google Llc Reading progress estimation based on phonetic fuzzy matching and confidence interval
US11749279B2 (en) 2018-09-04 2023-09-05 Google Llc Detection of story reader progress for pre-caching special effects
US11010561B2 (en) 2018-09-27 2021-05-18 Apple Inc. Sentiment prediction from textual data
US11893992B2 (en) 2018-09-28 2024-02-06 Apple Inc. Multi-modal inputs for voice commands
US11462215B2 (en) 2018-09-28 2022-10-04 Apple Inc. Multi-modal inputs for voice commands
US11170166B2 (en) 2018-09-28 2021-11-09 Apple Inc. Neural typographical error modeling via generative adversarial networks
US10839159B2 (en) 2018-09-28 2020-11-17 Apple Inc. Named entity normalization in a spoken dialog system
US11475898B2 (en) 2018-10-26 2022-10-18 Apple Inc. Low-latency multi-speaker speech recognition
WO2020095021A1 (en) * 2018-11-06 2020-05-14 Arm Ip Limited Resources and methods for tracking progression in a literary work
US11638059B2 (en) 2019-01-04 2023-04-25 Apple Inc. Content playback on multiple devices
US11783815B2 (en) 2019-03-18 2023-10-10 Apple Inc. Multimodality in digital assistant systems
US11348573B2 (en) 2019-03-18 2022-05-31 Apple Inc. Multimodality in digital assistant systems
US11158319B2 (en) 2019-04-11 2021-10-26 Advanced New Technologies Co., Ltd. Information processing system, method, device and equipment
US10930284B2 (en) 2019-04-11 2021-02-23 Advanced New Technologies Co., Ltd. Information processing system, method, device and equipment
US11705130B2 (en) 2019-05-06 2023-07-18 Apple Inc. Spoken notifications
US11423908B2 (en) 2019-05-06 2022-08-23 Apple Inc. Interpreting spoken requests
US11307752B2 (en) 2019-05-06 2022-04-19 Apple Inc. User configurable task triggers
US11475884B2 (en) 2019-05-06 2022-10-18 Apple Inc. Reducing digital assistant latency when a language is incorrectly determined
US11217251B2 (en) 2019-05-06 2022-01-04 Apple Inc. Spoken notifications
US11675491B2 (en) 2019-05-06 2023-06-13 Apple Inc. User configurable task triggers
US11888791B2 (en) 2019-05-21 2024-01-30 Apple Inc. Providing message response suggestions
US11140099B2 (en) 2019-05-21 2021-10-05 Apple Inc. Providing message response suggestions
US11237797B2 (en) 2019-05-31 2022-02-01 Apple Inc. User activity shortcut suggestions
US11496600B2 (en) 2019-05-31 2022-11-08 Apple Inc. Remote execution of machine-learned models
US11657813B2 (en) 2019-05-31 2023-05-23 Apple Inc. Voice identification in digital assistant systems
US11360739B2 (en) 2019-05-31 2022-06-14 Apple Inc. User activity shortcut suggestions
US11289073B2 (en) 2019-05-31 2022-03-29 Apple Inc. Device text to speech
US11360641B2 (en) 2019-06-01 2022-06-14 Apple Inc. Increasing the relevance of new available information
US11790914B2 (en) 2019-06-01 2023-10-17 Apple Inc. Methods and user interfaces for voice-based control of electronic devices
US11705106B2 (en) * 2019-07-09 2023-07-18 Google Llc On-device speech synthesis of textual segments for training of on-device speech recognition model
US11127392B2 (en) * 2019-07-09 2021-09-21 Google Llc On-device speech synthesis of textual segments for training of on-device speech recognition model
US20220005458A1 (en) * 2019-07-09 2022-01-06 Google Llc On-device speech synthesis of textual segments for training of on-device speech recognition model
US11488406B2 (en) 2019-09-25 2022-11-01 Apple Inc. Text detection using global geometry estimators
US11354920B2 (en) * 2019-10-12 2022-06-07 International Business Machines Corporation Updating and implementing a document from an audio proceeding
US11914848B2 (en) 2020-05-11 2024-02-27 Apple Inc. Providing relevant data items based on context
US11765209B2 (en) 2020-05-11 2023-09-19 Apple Inc. Digital assistant hardware abstraction
US11924254B2 (en) 2020-05-11 2024-03-05 Apple Inc. Digital assistant hardware abstraction
US11755276B2 (en) 2020-05-12 2023-09-12 Apple Inc. Reducing description length based on confidence
US11838734B2 (en) 2020-07-20 2023-12-05 Apple Inc. Multi-device audio adjustment coordination
US11750962B2 (en) 2020-07-21 2023-09-05 Apple Inc. User identification using headphones
US11696060B2 (en) 2020-07-21 2023-07-04 Apple Inc. User identification using headphones
WO2022047516A1 (en) * 2020-09-04 2022-03-10 The University Of Melbourne System and method for audio annotation

Also Published As

Publication number Publication date
EP2593846A1 (en) 2013-05-22
KR101674851B1 (en) 2016-11-09
KR101622015B1 (en) 2016-05-17
AU2012261818A1 (en) 2014-01-16
JP2014519058A (en) 2014-08-07
CN103703431B (en) 2018-02-09
US10672399B2 (en) 2020-06-02
KR101324910B1 (en) 2013-11-04
US20120310649A1 (en) 2012-12-06
AU2016202974B2 (en) 2018-04-05
KR101700076B1 (en) 2017-01-26
CN103703431A (en) 2014-04-02
AU2012261818B2 (en) 2016-02-25
AU2016202974A1 (en) 2016-06-02
KR20140027421A (en) 2014-03-06
EP2593846A4 (en) 2014-12-03
KR20120135137A (en) 2012-12-12
WO2012167276A1 (en) 2012-12-06
KR20150085115A (en) 2015-07-22
KR20160036077A (en) 2016-04-01

Similar Documents

Publication Publication Date Title
AU2016202974B2 (en) Automatically creating a mapping between text data and audio data
JP5463385B2 (en) Automatic creation of mapping between text data and audio data
US10671251B2 (en) Interactive eReader interface generation based on synchronization of textual and audial descriptors
US10885809B2 (en) Device for language teaching with time dependent data memory
US9786283B2 (en) Transcription of speech
US11657725B2 (en) E-reader interface system with audio and highlighting synchronization for digital books
US20140039871A1 (en) Synchronous Texts
US20150170648A1 (en) Ebook interaction using speech recognition
Öktem et al. Corpora compilation for prosody-informed speech processing
US20080243510A1 (en) Overlapping screen reading of non-sequential text
Luz et al. Interface design strategies for computer-assisted speech transcription
Krůza et al. Second-generation web interface to correcting ASR output
KR101030777B1 (en) Method and apparatus for producing script data
Masoodian et al. TRAED: Speech audio editing using imperfect transcripts
KR20100014031A (en) Device and method for making u-contents by easily, quickly and accurately extracting only wanted part from multimedia file
Creer et al. TEI mark-up of spoken language data: the BASE experience
Anderson et al. Internet delivery of time-synchronised multimedia: the SCOTS project
Wald et al. Benefiting disabled students by developing an application that uses captioning of multimedia to enhance learning for all students
JP2001014137A (en) Electronic document processing method, electronic document processor, and storage medium recording electronic document processing program

Legal Events

Date Code Title Description
AS Assignment

Owner name: APPLE INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CAO, XIANG;CANNISTRARO, ALAN C.;ROBBIN, GREGORY S.;AND OTHERS;SIGNING DATES FROM 20110926 TO 20111006;REEL/FRAME:027028/0464

STCV Information on status: appeal procedure

Free format text: BOARD OF APPEALS DECISION RENDERED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION